SIGNAL PROCESSING APPARATUS, SIGNAL PROCESSING METHOD, AND PROGRAM

Information

  • Patent Application
  • Publication Number
    20250061913
  • Date Filed
    December 23, 2021
  • Date Published
    February 20, 2025
Abstract
A signal processing device includes: a separation signal updating unit that solves a minimization problem of an upper bound function related to a separation matrix W in a majorization-minimization algorithm for a signal source separation technique (independent vector analysis, IVA) by dividing a mixed matrix A into submatrices A1, . . . , AL each having d columns (d is an integer equal to or larger than 2) and updating the sets (W, Al) (l=1, . . . , L) of the separation matrix W and the submatrices A1, . . . , AL one by one, and that updates a separation signal Y according to the updating of the separation matrix W.
Description
TECHNICAL FIELD

The present invention relates to a signal processing device, a signal processing method, and a program.


BACKGROUND ART

A signal source separation technique (or a sound source separation technique) for estimating a source signal before mixing from a mixed signal which is observed is a technique that is widely used for preprocessing of speech recognition and the like. As a method of performing signal source separation using a plurality of sensors, independent component analysis (ICA, Non Patent Literature 1) and independent vector analysis (IVA, Non Patent Literature 2) are known.


As an optimization algorithm for the ICA and the IVA, an algorithm called iterative projection (IP) has been developed. As the IP, IP1 (Non Patent Literature 2) and IP2 (Non Patent Literature 3) have been developed.


As another optimization algorithm for the ICA and the IVA, an algorithm called iterative source steering (ISS, Non Patent Literature 4) has also been developed. The ISS is referred to as ISS1 in the present specification.


CITATION LIST
Non Patent Literature





    • Non Patent Literature 1: P. Comon, “Independent component analysis, a new concept?” Signal processing 36.3 (1994), 287-314.

    • Non Patent Literature 2: N. Ono, “Stable and fast update rules for independent vector analysis based on auxiliary function technique,” IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2011, pp. 189-192.

    • Non Patent Literature 3: Nobutaka Ono, “San ongen ijo ni taisuru dokuritsu seibun bunseki dokuritsu bekutoru bunseki dokuritsu tei ranku gyoretsu bunseki no kosoku kaiho (in Japanese) (High-speed solution of independent component analysis, independent vector analysis, and independent low-rank matrix analysis for three or more sound sources)”, Proceedings of Acoustical Society of Japan, March 2018.

    • Non Patent Literature 4: R. Scheibler and N. Ono, “Fast and Stable Blind Source Separation with Rank-1 Updates,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 236-240.





SUMMARY OF INVENTION
Technical Problem

The IP2, which is an extension of the IP1, has fast convergence but has the problem that the amount of calculation per iteration is large. On the other hand, the ISS1 has a small amount of calculation per iteration but has the problem that its convergence is slow.


Therefore, an object of the present invention is to provide a signal processing device that achieves both fast convergence of the IP2 and a small calculation amount of the ISS1.


Solution to Problem

The signal processing device of the present invention includes a separation signal updating unit.


The separation signal updating unit solves a minimization problem of an upper bound function related to a separation matrix W in a majorization-minimization algorithm for a signal source separation technique (independent vector analysis, IVA) by dividing a mixed matrix A into submatrices A1, . . . , AL each having d columns (d is an integer equal to or larger than 2) and updating the sets (W, Al) (l=1, . . . , L) of the separation matrix W and the submatrices A1, . . . , AL one by one, and updates a separation signal Y according to the updating of the separation matrix W.


Advantageous Effects of Invention

According to the signal processing device of the present invention, it is possible to achieve both fast convergence of the IP2 and a small calculation amount of the ISS1.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram illustrating a functional configuration of a signal processing device of Example 1.



FIG. 2 is a flowchart illustrating an operation of the signal processing device of Example 1.



FIG. 3 is a graph showing a result of an experiment for comparing a method in the related art and a method of the present invention.



FIG. 4 is a diagram illustrating a functional configuration example of a computer.





DESCRIPTION OF EMBODIMENTS

Hereinafter, an embodiment of the present invention will be described in detail. Note that components having the same functions are denoted by the same reference numerals, and redundant description will be omitted.


Independent Vector Analysis (IVA)

A problem of signal source separation handled in the present invention is defined as follows. It is assumed that X is an observation signal and is a product of a mixed matrix A and a matrix S in which source signals are arranged (expression (1)).









[Math. 1]

    X^{[k]} = A^{[k]} S^{[k]} \in \mathbb{C}^{m \times n}, \quad k = 1, \ldots, K    (1)

Here, K≥1, m∈ℕ is the number of sensors, n∈ℕ is the number of sample points, X[k]∈ℂm×n is the observation signal, S[k]∈ℂm×n contains the m original source signals, and A[k]∈GL(m) is the mixed matrix.
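As a numerical orientation for the model above, here is a minimal NumPy sketch (K = 1; the sizes and random data are illustrative assumptions, not taken from the specification). Mixing sources with A and separating with the ideal W = A⁻¹ recovers S exactly; a practical method can approach this only up to scale and permutation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                      # m sensors, n sample points (illustrative sizes)

# Complex-valued sources and mixing matrix, expression (1) with K = 1.
S = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X = A @ S                        # observation X = A S

W = np.linalg.inv(A)             # ideal separation matrix W = A^{-1}
Y = W @ X                        # separation result Y = W X

print(np.allclose(Y, S))         # the ideal W recovers the sources
```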


In order to estimate the source signal S, instead of the mixed matrix A, a separation matrix W(=A−1) that is an inverse matrix of A may be estimated. A separation result is Y=WX. The separation matrix W[k]∈GL(m), k=1, . . . , K is defined by the following expression (2).









[Math. 2]

    W^{[k]} A^{[k]} = D^{[k]} \Pi, \quad k = 1, \ldots, K    (2)

D[k] and Π are a certain diagonal matrix and a certain permutation matrix, each of size m×m, and correspond, respectively, to the scale ambiguity and the permutation ambiguity of the separation signal represented as follows.








[Math. 3]

    Y^{[k]} = W^{[k]} X^{[k]}

A model of the signal source separation technique IVA handled in the present invention is defined as follows. In the IVA, it is assumed that a multivariate vector which has a length K and is given by the following expression (3) follows a probability density function having a correlation of a second order or higher.









[Math. 4]

    y_{ij} = [\, Y_{ij}^{[1]}, \ldots, Y_{ij}^{[K]} \,]^{\top} \in \mathbb{C}^{K}    (3)

It is assumed that the random variables {yij}ij are independent of each other. In this model, as a cost function for optimizing the separation matrix represented as follows,








[Math. 5]

    \mathcal{W} = (W^{[k]})_{k=1}^{K}

a negative log likelihood represented by the following expression (4) can be used.









[Math. 6]

    \mathcal{L}_1(\mathcal{W}) = -\frac{1}{n} \log p(X^{[1]}, \ldots, X^{[K]}; \mathcal{W})
                               = -\frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n} \log p(y_{ij}) - \sum_{k=1}^{K} \log \lvert \det W^{[k]} \rvert^{2}    (4)

The optimization of the separation matrix W is performed so as to minimize the negative log likelihood represented by expression (4). Both the related-art algorithms IP1, IP2, and ISS1 and the algorithm ISS2 belonging to the present invention are iterative algorithms for solving the optimization problem of minimizing expression (4) with respect to W.


The algorithms IP1, IP2, and ISS1 in the related art and the algorithm ISS2 belonging to the present invention are algorithms belonging to a framework called a majorization-minimization algorithm (MM algorithm). The MM algorithm for the IVA is as follows.


MM Algorithm for IVA

The MM algorithm for the ICA has been proposed in referenced Non Patent Literatures 1 to 3.

    • (Referenced Non Patent Literature 1: N. Ono and S. Miyabe, “Auxiliary-function-based independent component analysis for super-Gaussian sources,” in Proc. LVA/ICA, 2010, pp. 165-172.)
    • (Referenced Non Patent Literature 2: P. Ablin, A. Gramfort, J.-F. Cardoso, and F. Bach, “Stochastic algorithms with descent guarantees for ICA,” in Proc. AISTATS, 2019, pp. 1564-1573.)
    • (Referenced Non Patent Literature 3: N. Ono, “Stable and fast update rules for independent vector analysis based on auxiliary function technique,” in Proc. WASPAA, 2011, pp. 189-192.)


Here, assuming that p(y) is a symmetric probability density function,









[Math. 7]

    G : \mathbb{R}_{\geq 0} \to \mathbb{R}

is defined by the following expression.










[Math. 8]

    G(\lVert y \rVert_2) \overset{\Delta}{=} -\log p(y) \quad \text{with} \quad \lVert y \rVert_2 \overset{\Delta}{=} \sqrt{y^{\mathsf{H}} y}

When G′(r)/r is monotonically decreasing on r∈(0, ∞)=ℝ>0, p(y) is said to be a super-Gaussian distribution. Here, G′ is the first derivative of G (refer to referenced Non Patent Literatures 1, 2, 4 (pp. 60-61), and 5).

    • (Referenced Non Patent Literature 4: A. Benveniste, M. Metivier, and P. Priouret, Adaptive algorithms and stochastic approximations, 1st ed. Springer Science, 1990, vol. 22.)
    • (Referenced Non Patent Literature 5: J. Palmer, D. Wipf, K. Kreutz-Delgado, and B. Rao, “Variational EM algorithms for non-Gaussian latent variable models,” in Proc. NIPS, vol. 18, 2005, pp. 1059-1066.)
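As a quick numerical check of this definition (a NumPy sketch; the value of β and the grid are arbitrary illustrative choices), the contrast G(r) = r^β introduced below as expression (5) gives G′(r)/r = βr^(β−2), which is monotonically decreasing on (0, ∞) whenever β < 2:

```python
import numpy as np

beta = 1.2                        # any 0 < beta < 2 (illustrative choice)
r = np.linspace(0.1, 10.0, 1000)  # grid on (0, infinity)

# For G(r) = r**beta, G'(r) = beta * r**(beta - 1), so G'(r)/r = beta * r**(beta - 2).
ratio = beta * r ** (beta - 2)

# Super-Gaussianity: the ratio is strictly decreasing along the grid.
print(bool(np.all(np.diff(ratio) < 0)))
```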


For example, a generalized Gaussian distribution (GGD) given by the expression (5) is a super-Gaussian distribution.









[Math. 9]

    G(\lVert y \rVert_2) = \lVert y \rVert_2^{\beta} + \text{const.}, \quad 0 < \beta < 2    (5)

The GGD with β=1 is the Laplace distribution. When G(r) corresponds to a super-Gaussian distribution, it is known (referenced Non Patent Literatures 1, 2, and 5) that there is a function φ: ℝ≥0→ℝ satisfying expression (6).









[Math. 10]

    G(r) = \min_{\lambda \geq 0} \left\{ \frac{\lambda r^{2}}{2} + \varphi(\lambda) \right\}    (6)

The right side of expression (6) takes its minimum value at λ=G′(r)/r. When expression (6) is applied to −log p(yij)=G(‖yij‖2) in expression (4), a surrogate function (or upper bound function) L2(W, Λ) of L1(W) is obtained.
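For the Laplace case (β = 1, G(r) = r), one valid choice is φ(λ) = 1/(2λ): then λr²/2 + 1/(2λ) ≥ r with equality at λ = 1/r = G′(r)/r. The following NumPy sketch verifies expression (6) numerically for this case (the value of r and the grid are illustrative; this concrete φ is an assumption for illustration, not stated in the specification):

```python
import numpy as np

r = 2.0                                   # illustrative radius
lam = np.linspace(1e-3, 10.0, 200001)     # dense grid over lambda > 0

# Surrogate values for G(r) = r (Laplace), with the assumed phi(lambda) = 1/(2*lambda).
surrogate = lam * r**2 / 2 + 1 / (2 * lam)

print(surrogate.min())                    # minimum of the right side of (6): close to G(r) = r
print(lam[surrogate.argmin()])            # minimizer: close to G'(r)/r = 1/r
```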









[Math. 11]

    \mathcal{L}_1(\mathcal{W}) = \min_{\Lambda} \{\, \mathcal{L}_2(\mathcal{W}, \Lambda) \mid \Lambda \in \mathbb{R}_{\geq 0}^{m \times n} \,\}    (7)

    \mathcal{L}_2(\mathcal{W}, \Lambda) \overset{\Delta}{=} \sum_{k=1}^{K} \mathcal{L}_3^{[k]}(W^{[k]}, \Lambda) + \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{\varphi_{ij}(\Lambda_{ij})}{n}    (8)

    \mathcal{L}_3^{[k]}(W^{[k]}, \Lambda) \overset{\Delta}{=} \sum_{i=1}^{m} (w_i^{[k]})^{\mathsf{H}} V_i^{[k]} w_i^{[k]} - \log \lvert \det W^{[k]} \rvert^{2}    (9)

    w_i^{[k]} \overset{\Delta}{=} (W_{i,\bullet}^{[k]})^{\mathsf{H}} \qquad (\text{i.e., } W^{[k]} = [\, w_1^{[k]}, \ldots, w_m^{[k]} \,]^{\mathsf{H}})

    V_i^{[k]} \overset{\Delta}{=} \frac{1}{2n}\, X^{[k]} \operatorname{diag}(\Lambda_{i,\bullet})\, (X^{[k]})^{\mathsf{H}} \in \mathcal{S}_{+}^{m}    (10)

Under the surrogate function (expression (8)), the MM algorithm for the IVA (referenced Non Patent Literature 3) alternately updates Λ and W based on the expression (11) and the expression (12).









[Math. 12]

    \Lambda \leftarrow \arg\min_{\Lambda} \{\, \mathcal{L}_2(\mathcal{W}, \Lambda) \mid \Lambda \in \mathbb{R}_{\geq 0}^{m \times n} \,\}    (11)

    W^{[k]} \leftarrow \underset{W^{[k]} \in \mathrm{GL}(m)}{\arg\min}\; \mathcal{L}_3^{[k]}(W^{[k]}), \quad k = 1, \ldots, K    (12)

Hereinafter, when discussing L3[k], the factor Λ will be omitted. From the expression (7), the expression (11) is solved as the following expression (13).









[Math. 13]

    \Lambda_{ij} \leftarrow \frac{G'(\lVert y_{ij} \rVert_2)}{\lVert y_{ij} \rVert_2} \in \mathbb{R}_{>0}    (13)

For the expression (12), in a case of m=2, an analytical solution is obtained (referenced Non Patent Literatures 6 and 7).

    • (Referenced Non Patent Literature 6: “Determinant maximization of a nonsymmetric matrix with quadratic constraints,” SIAM J. Optim., vol. 17, no. 4, pp. 997-1014, 2007.)
    • (Referenced Non Patent Literature 7: N. Ono, “Fast stereo independent vector analysis and its implementation on mobile phone,” in Proc. IWAENC, 2012, pp. 1-4.)


In a case of m≥3, no algorithm for obtaining a globally optimal solution of expression (12) is known. For this reason, the IP1, the IP2, and the ISS1 have been developed as block coordinate descent (BCD) methods for solving expression (12). These algorithms are referred to as MM+BCD. In the present invention, ISSd is disclosed as a new MM+BCD method.
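To make the MM+BCD structure concrete, the following toy NumPy sketch alternates the Λ update of expression (13) with the related-art IP1 update for expression (12) (K = 1, real-valued data, Laplace contrast G(r) = r; the sizes, data, and iteration count are illustrative assumptions, and IP1 stands in for the inner solver that this specification replaces by ISSd). MM theory guarantees that the negative log likelihood of expression (4) is non-increasing, which the sketch checks:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, eps = 3, 400, 1e-10

S = rng.laplace(size=(m, n))          # super-Gaussian sources (real-valued for simplicity)
A = rng.standard_normal((m, m))
X = A @ S                             # observation, expression (1) with K = 1
W = np.eye(m)                         # initial separation matrix

def cost(W):
    # Expression (4) for K = 1 and G(r) = r (Laplace contrast).
    Y = W @ X
    return np.abs(Y).sum() / n - np.log(np.abs(np.linalg.det(W)) ** 2)

costs = [cost(W)]
for _ in range(20):
    Y = W @ X
    Lam = 1.0 / (np.abs(Y) + eps)     # expression (13): G'(r)/r = 1/r for G(r) = r
    for i in range(m):                # related-art IP1 update of each row of W
        V = (X * Lam[i]) @ X.T / (2 * n)           # expression (10)
        w = np.linalg.solve(W @ V, np.eye(m)[i])   # w_i <- (W V_i)^{-1} e_i
        w = w / np.sqrt(w @ V @ w)                 # normalize so w_i^H V_i w_i = 1
        W[i] = w
    costs.append(cost(W))

# MM guarantees a monotonically non-increasing cost.
print(all(c2 <= c1 + 1e-6 for c1, c2 in zip(costs, costs[1:])))
```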


Hereinafter, in order to simplify notations, the upper right index [k] will be omitted when describing the expression (12).


MM+BCD Algorithm Disclosed in Present Specification

A difference between the algorithms IP1, IP2, and ISS1 in the related art and the algorithm ISSd disclosed in the present specification is a difference in the method of solving the “optimization problem (expression (12)) related to the separation matrix W of the upper bound function (expression (9))” in the MM algorithm.


The ISS (referenced Non Patent Literature 8, ISS1) in the related art is an algorithm that updates A column by column in each iteration.

    • (Referenced Non Patent Literature 8: R. Scheibler and N. Ono, “Fast and stable blind source separation with rank-1 updates,” in Proc. ICASSP, 2020, pp. 236-240.)


The ISS2 disclosed in this specification is an algorithm that updates A two columns at a time in each iteration. To extend the ISS1 to the ISS2, a unified method is disclosed for developing, for a given number d≥1, an ISSd that updates A d columns at a time.


Definition of ISSd

Let d be a divisor of m. A is divided into L = m/d submatrices A1, . . . , AL, each having d columns.









[Math. 14]

    A = [\, A_1 \mid A_2 \mid \cdots \mid A_L \,] \in \mathbb{C}^{m \times m}, \quad A_l \in \mathbb{C}^{m \times d}    (14)

The ISSd is an MM+BCD method that, following the order shown in expression (15), updates Λ based on expression (11) and updates (W, Al) based on expression (16) for each l=1, . . . , L. In a case of d=1, the ISSd of the present invention coincides with the ISS1 in the related art (referenced Non Patent Literature 8).









[Math. 15]

    \Lambda \to (W, A_1) \to (W, A_2) \to \cdots \to (W, A_L)    (15)

    (W, A_l) \leftarrow \underset{(W, A_l)}{\arg\min} \{\, \mathcal{L}_3(W) \mid W A = I_m \,\}    (16)

Formulation as Multiplicative Updating (MU) Algorithm for ISSd

The ISSd can be described as a multiplicative updating algorithm for W (or Y=WX). When l=1, updating (W, A1) based on expression (16) is equivalent to the following multiplicative updating of W (and A).









[Math. 16]

    T \leftarrow \arg\min_{T} \{\, \mathcal{L}_3(T W) \mid T \in \mathcal{D}_{\mathrm{ISS}d} \,\}    (17)

    W \leftarrow T W \quad (\text{and } A \leftarrow A T^{-1})    (18)

Here, 𝒟_ISSd in expression (17) is defined as follows.










[Math. 17]

    \mathcal{D}_{\mathrm{ISS}d} \overset{\Delta}{=} \left\{ \begin{bmatrix} P & O_{d,\, m-d} \\ Q & I_{m-d} \end{bmatrix} \;\middle|\; P \in \mathrm{GL}(d), \; Q \in \mathbb{C}^{(m-d) \times d} \right\}
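The block structure of 𝒟_ISSd can be illustrated with a small NumPy sketch (m = 6, d = 2, and the random blocks are illustrative assumptions): a matrix T in 𝒟_ISSd differs from the identity only in its first d columns, and the update of expression (18) preserves the constraint WA = I_m appearing in expression (16):

```python
import numpy as np

rng = np.random.default_rng(2)
m, d = 6, 2                                   # illustrative sizes with d dividing m

P = rng.standard_normal((d, d))               # P in GL(d) (random, generically invertible)
Q = rng.standard_normal((m - d, d))           # Q arbitrary

# T = [[P, O], [Q, I_{m-d}]], an element of D_ISSd ([Math. 17]).
T = np.block([[P, np.zeros((d, m - d))],
              [Q, np.eye(m - d)]])

W = rng.standard_normal((m, m))               # current separation matrix
A = np.linalg.inv(W)                          # mixed matrix with W A = I_m

W_new = T @ W                                 # expression (18)
A_new = A @ np.linalg.inv(T)

print(np.allclose(W_new @ A_new, np.eye(m)))  # constraint W A = I_m is preserved
print(np.allclose(A_new[:, d:], A[:, d:]))    # only the first d columns of A change
```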


Also for general l=1, . . . , L, updating (W, Al) based on expression (16) can be realized as the multiplicative updating expressed by expressions (17) and (18). For this multiplicative updating, a permutation matrix defined by









[Math. 18]

    \Pi_d = \begin{bmatrix} & I_d & & \\ & & \ddots & \\ & & & I_d \\ I_d & & & \end{bmatrix} \in \{0, 1\}^{m \times m}    (19)

(the block cyclic-shift permutation matrix; blank blocks are zero)

is prepared. By updating the separation matrix W, the mixed matrix A, the separation signal Y, and auxiliary variable in advance according to









[Math. 19]

    W \leftarrow \Pi_d^{-1} W, \quad Y \leftarrow \Pi_d^{-1} Y, \quad \Lambda \leftarrow \Pi_d^{-1} \Lambda    (20)

    A \leftarrow A\, (\Pi_d^{-1})^{\top} = [\, A_l, \ldots, A_L, A_1, \ldots, A_{l-1} \,]    (21)
using the permutation matrix, updating (W, Al) based on the expression (16) is equivalent to updating (W, Al) based on the multiplicative updating expression represented by the expression (17) and the expression (18).
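The block cycling of expressions (19) to (21) can be sketched in NumPy as follows (illustrative sizes; building Π_d⁻¹ with np.roll is one concrete realization of the block cyclic-shift permutation, an implementation choice rather than something prescribed by the specification):

```python
import numpy as np

m, d = 6, 2
L = m // d

# Pi_inv moves each row block up by d: new block 1 = old block 2, ..., last = old block 1.
Pi_inv = np.roll(np.eye(m), -d, axis=0)

# A = [A_1 | A_2 | A_3], with each column block tagged by its index value.
A = np.hstack([np.full((m, d), l + 1.0) for l in range(L)])

# Expression (21): A <- A (Pi_d^{-1})^T cycles the column blocks left by d.
A_shift = A @ Pi_inv.T

print(np.allclose(A_shift, np.hstack([A[:, d:], A[:, :d]])))
```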


Analytical Updating Expression of ISS2

Since problem (17) can be solved analytically when d=2, the ISS2 disclosed in the present invention is an algorithm described using only the following analytical updating expressions.












[Math. 20]

Algorithm 1: IVA by ISS2

    Input:  X^{[k]} ∈ ℂ^{m×n} (k = 1, . . . , K)
    Output: Y^{[k]} ∈ ℂ^{m×n} (k = 1, . . . , K)
    1   Initialize W^{[k]} as a whitening matrix for k = 1, . . . , K.
    2   Y^{[k]} ← W^{[k]} X^{[k]} for each k = 1, . . . , K.
    3   for t = 1, 2, . . . , max. number of MM iterations do
    4       Λ_{ij} ← G′(‖y_{ij}‖₂ + ε) / (‖y_{ij}‖₂ + ε), where ε = 10⁻¹⁰ is added to improve numerical stability.
    5       for l = 1, . . . , m/2 do
    6           for k = 1, . . . , K do
    7               for i = 3, . . . , m do
    8                   G_i^{[k]} = (1/(2n)) Y_{1:2,•}^{[k]} diag(Λ_{i,•}) (Y_{1:2,•}^{[k]})^H
    9                   g_i^{[k]} = (1/(2n)) Y_{1:2,•}^{[k]} diag(Λ_{i,•}) (Y_{i,•}^{[k]})^H
    10                  Y_{i,•}^{[k]} ← Y_{i,•}^{[k]} − (g_i^{[k]})^H (G_i^{[k]})^{−1} Y_{1:2,•}^{[k]}
    11              for i = 1, 2 do
    12                  G_i^{[k]} = (1/(2n)) Y_{1:2,•}^{[k]} diag(Λ_{i,•}) (Y_{1:2,•}^{[k]})^H
    13              Update P^{[k]} ∈ ℂ^{2×2} using (22)-(24).
    14              Y_{1:2,•}^{[k]} ← P^{[k]} Y_{1:2,•}^{[k]} ∈ ℂ^{2×n}
    15              Y^{[k]} ← Π₂ Y^{[k]}   // permute rows
    16          Λ ← Π₂ Λ   // permute rows








[Math. 21]

    H = G_1^{-1} G_2 \in \mathbb{C}^{2 \times 2}    (22)

    \theta_1 = \frac{\operatorname{Tr}(H) + \sqrt{(\operatorname{Tr}(H))^2 - 4 \det(H)}}{2}, \quad \theta_2 = \frac{\det H}{\theta_1}

    u_1 = \begin{bmatrix} H_{22} - \theta_1 \\ -H_{21} \end{bmatrix}, \quad u_2 = \begin{bmatrix} -H_{12} \\ H_{11} - \theta_2 \end{bmatrix} \in \mathbb{C}^{2 \times 1}    (23)

    p_i = \frac{u_i}{(u_i^{\mathsf{H}} V_i u_i)^{1/2}} \in \mathbb{C}^{2 \times 1}, \quad i = 1, 2    (24)
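The closed-form quantities of expressions (22) to (24) are the eigenvalues and eigenvectors of the 2×2 matrix H = G₁⁻¹G₂, which the following NumPy sketch checks numerically (G₁ and G₂ are random Hermitian positive definite stand-ins for the matrices computed in Algorithm 1, and the normalization uses G_i in the role of V_i, an assumption made for this illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_hpd(k):
    # Random Hermitian positive definite matrix (stand-in for G_i of Algorithm 1).
    M = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return M @ M.conj().T + np.eye(k)

G1, G2 = random_hpd(2), random_hpd(2)

H = np.linalg.inv(G1) @ G2                        # expression (22)
tr, det = np.trace(H), np.linalg.det(H)
theta1 = (tr + np.sqrt(tr**2 - 4 * det)) / 2      # one root of the characteristic polynomial
theta2 = det / theta1                             # the other root
u1 = np.array([H[1, 1] - theta1, -H[1, 0]])       # expression (23)
u2 = np.array([-H[0, 1], H[0, 0] - theta2])
p1 = u1 / np.sqrt(u1.conj() @ G1 @ u1)            # expression (24), with G_i in the role of V_i
p2 = u2 / np.sqrt(u2.conj() @ G2 @ u2)

print(np.allclose(H @ u1, theta1 * u1))           # u_1 is an eigenvector of H
print(np.allclose(H @ u2, theta2 * u2))           # u_2 is an eigenvector of H
print(np.isclose(p1.conj() @ G1 @ p1, 1.0))       # normalization from (24)
```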
Example 1

Example 1 to be described below discloses a signal processing device 1 that implements an algorithm ISSd (d is a certain natural number) for solving the optimization problem (expression (12)) related to the separation matrix W by the method described in <Definition of ISSd>. As described above, the ISSd is an extension of the related-art technique ISS1.


Specifically, as shown in expression (15), the ISSd is an algorithm that updates the sets (W, Al) one by one according to the optimization problem (expression (16)). Since the update rule of expression (16) is equivalent to the update rule of expressions (17) and (18), (W, Al) is updated according to expressions (17) and (18). The policy of updating the separation matrix W by updating a part of the mixed matrix A is the characteristic feature of the ISS.


As described above, the algorithm ISSd with d=1 matches the ISS1 in the related art, and the algorithm ISSd with d=2 corresponds to the ISS2 disclosed in the present example.


Signal Processing Device 1

Hereinafter, a functional configuration of a signal processing device 1 according to the present example will be described with reference to FIG. 1. As illustrated in FIG. 1, the signal processing device 1 according to the present example includes an initial value setting unit 11, an auxiliary variable updating unit 12, a separation signal updating unit 13, and a control unit 14. Hereinafter, an operation of the signal processing device 1 will be described with reference to FIG. 2.


Initial Value Setting Unit 11

The initial value setting unit 11 sets an appropriate initial value to the separation matrix W, and calculates an initial value Y of the separation signal by Y=WX (S11).


Auxiliary Variable Updating Unit 12

The auxiliary variable updating unit 12 repeatedly updates the auxiliary variable Λ according to a control of the control unit 14 (S12).


Separation Signal Updating Unit 13

The separation signal updating unit 13 repeatedly updates the separation signal Y according to a control of the control unit 14 (S13). Specifically, for the optimization problem (expression (12)) related to the separation matrix W of the upper bound function in the majorization-minimization algorithm for the signal source separation technique IVA (independent vector analysis), the separation signal updating unit 13 divides the mixed matrix A into the submatrices A1, . . . , AL having d columns (d is an integer equal to or larger than 2) (expression (14)), and repeatedly updates the sets (W, Al) (l=1, . . . , L) of the separation matrix W and the submatrices A1, . . . , AL one by one according to the minimization problem (expression (16)) related to W of the upper bound function, thereby repeatedly updating the separation signal Y (=WX) (S13).


Since updating the separation matrix W is equivalent to updating the separation signal Y, it is sufficient to update only the separation signal Y without updating the separation matrix W.


Control Unit 14

The control unit 14 controls the auxiliary variable updating unit 12 and the separation signal updating unit 13 to alternately and repeatedly execute processing until a predetermined condition is satisfied.


As the predetermined condition, reaching a predetermined number of repetitions, or the update amount of each parameter becoming equal to or smaller than a predetermined threshold value, may be used.


Experiment Result


FIG. 3 illustrates the SDR improvements obtained by each method. It can be seen that the convergence of the proposed ISS2 is much faster than that of the ISS1 and the IP1 and is comparable to that of the IP2 (note that the SDR curves of the IP2 and the ISS2 almost overlap). This is clear evidence of the effectiveness of the approach of the present invention.


Supplement

The device according to the present invention as a single hardware entity includes, for example, an input unit to which a keyboard or the like can be connected, an output unit to which a liquid crystal display or the like can be connected, a communication unit to which a communication device (for example, a communication cable) that can communicate with the outside of the hardware entity can be connected, a central processing unit (CPU, which may include a cache memory, a register, or the like), a RAM or a ROM as a memory, an external storage device such as a hard disk, and a bus that connects the input unit, the output unit, the communication unit, the CPU, the RAM, the ROM, and the external storage device such that data can be exchanged therebetween. A device (drive) or the like that can write and read data in and from a recording medium such as a CD-ROM may be provided in the hardware entity as necessary. Examples of a physical entity including such hardware resources include a general-purpose computer.


The external storage device of the hardware entity stores a program required for implementing the above-described functions, data required for processing of the program, and the like (the program may be stored, for example, in a ROM as a read-only storage device instead of the external storage device). Further, data and the like obtained by processing of the program are appropriately stored in the RAM, the external storage device, or the like.


In the hardware entity, each program stored in the external storage device (or ROM or the like) and data required for processing of each program are read into a memory as necessary and are appropriately interpreted and processed by the CPU. Thereby, the CPU implements a predetermined function (each component represented as . . . unit, . . . means, or the like).


The present invention is not limited to the above-described embodiment and can be appropriately modified without departing from the gist of the present invention. Moreover, the processing described in the above embodiment may be executed not only in time-series according to the described order, but also in parallel or individually according to the processing capability of the device that executes the processing or as necessary.


As described above, in a case where the processing function of the hardware entity (the device according to the present invention) described in the above embodiment is implemented by a computer, processing content of the function of the hardware entity is described by a program. In addition, the computer executes the program, and thus, the processing functions of the hardware entity are implemented on the computer.


The above-described various types of processing can be performed by causing a recording unit 10020 of a computer 10000 illustrated in FIG. 4 to read a program for executing each step of the method and causing a control unit 10010, an input unit 10030, an output unit 10040, and the like to operate.


The program in which the processing content is described can be recorded in a computer-readable recording medium. The computer-readable recording medium may be, for example, any recording medium such as a magnetic recording device, an optical disk, a magneto-optical recording medium, or a semiconductor memory. Specifically, for example, a hard disk device, a flexible disk, a magnetic tape, or the like can be used as the magnetic recording device, a digital versatile disc (DVD), a DVD random access memory (DVD-RAM), a compact disc read only memory (CD-ROM), a CD recordable/rewritable (CD-R/RW), or the like can be used as the optical disc, a magneto-optical disc (MO) or the like can be used as the magneto-optical recording medium, and an electrically erasable and programmable-read only memory (EEP-ROM) or the like can be used as the semiconductor memory.


In addition, the program is distributed by, for example, selling, transferring, or renting a portable recording medium such as a DVD or a CD-ROM in which the program is recorded. Further, a configuration in which the program is stored in a storage device of a server computer and the program is distributed by transferring the program from the server computer to other computers via a network may also be employed.


For example, a computer that executes such a program first temporarily stores a program recorded on a portable recording medium or a program transferred from the server computer in a storage device of the computer. In addition, when executing processing, the computer reads the program stored in the recording medium of the computer and executes the processing according to the read program. Further, in other modes of execution of the program, the computer may read the program directly from a portable recording medium and execute processing according to the program, or alternatively, the computer may sequentially execute processing according to a received program every time a program is transferred from the server computer to the computer. In addition, the above-described processing may be executed by a so-called application service provider (ASP) type service that implements a processing function only by an execution instruction and result acquisition without transferring the program from the server computer to the computer. Note that the program in the present embodiment includes information that is used for processing by an electronic computer and is equivalent to the program (data or the like that is not a direct command to the computer but has property that defines processing performed by the computer).


In addition, although the hardware entity is configured by a predetermined program being executed on a computer in this mode, at least some of the processing contents may be implemented by hardware.

Claims
  • 1. A signal processing device comprising: a separation signal updating unit that solves a minimization problem of an upper bound function related to a separation matrix W in a majorization-minimization algorithm for a signal source separation technique (independent vector analysis, IVA) by dividing a mixed matrix A into sub matrixes A1, . . . , AL having d columns (d is an integer equal to or larger than 2) and updating sets (W, Al) (l=1, . . . , L) of the separation matrix W and the sub matrixes A1, . . . , AL one by one and updates a separation signal Y according to updating of the separation matrix W.
  • 2. The signal processing device according to claim 1, wherein d=2 is satisfied.
  • 3. A signal processing method executed by a signal processing device, the signal processing method comprising: a step of solving a minimization problem of an upper bound function related to a separation matrix W in a majorization-minimization algorithm for a signal source separation technique (independent vector analysis, IVA) by dividing a mixed matrix A into sub matrixes A1, . . . , AL having d columns (d is an integer equal to or larger than 2) and updating sets (W, Al) (l=1, . . . , L) of the separation matrix W and the sub matrixes A1, . . . , AL one by one and updating a separation signal Y according to updating of the separation matrix W.
  • 4. (canceled)
  • 5. The signal processing method according to claim 3, wherein d=2 is satisfied.
  • 6. A computer-readable non-transitory recording medium storing computer-executable program instructions that when executed by a processor cause a computer to execute a program for causing a computer to function as the signal processing device comprising: a separation signal updating unit that solves a minimization problem of an upper bound function related to a separation matrix W in a majorization-minimization algorithm for a signal source separation technique (independent vector analysis, IVA) by dividing a mixed matrix A into sub matrixes A1, . . . , AL having d columns (d is an integer equal to or larger than 2) and updating sets (W, Al) (l=1, . . . , L) of the separation matrix W and the sub matrixes A1, . . . , AL one by one and updates a separation signal Y according to updating of the separation matrix W.
  • 7. The signal processing device according to claim 6, wherein d=2 is satisfied.
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2021/047856 12/23/2021 WO