SOUND SOURCE SEPARATION METHOD, SOUND SOURCE SEPARATION APPARATUS, AND PROGRAM

Information

  • Patent Application
  • 20240233744
  • Publication Number
    20240233744
  • Date Filed
    February 08, 2021
  • Date Published
    July 11, 2024
Abstract
A mixed acoustic signal including sound emitted from a plurality of sound sources and sound source video signals representing at least one video of the plurality of sound sources are received as inputs, and at least a separated signal including a signal representing a target sound emitted from one sound source represented by the video is acquired. Here, at least the separated signal is acquired using properties of the sound source, acquired from the video, that affect the sound emitted by the sound source, and/or features of a structure used by the sound source to emit the sound.
Description
TECHNICAL FIELD

The present invention relates to sound source separation technology, and more particularly, to multimodal sound source separation.


BACKGROUND ART

In single-channel sound source separation technology, which infers the speech signal of each speaker before mixing from a mixed signal of the speech of a plurality of speakers measured by a single microphone, it is common to simultaneously infer all sound source signals included in the mixed signal using a neural network. An inferred sound source signal is called a separated signal. In this framework, since the output order of the signals corresponding to the respective speakers included in the separated signals is arbitrary, subsequent processing such as speaker identification is required when the speech of a specific speaker is to be extracted. Further, when learning the model parameters of a neural network, it is necessary to calculate errors between the separated signal and the pre-mixing sound source signal for each speaker and to evaluate the overall error from them. In this case, there is a problem that the errors cannot be determined unless each separated signal is associated with a sound source signal for each speaker. This problem is called the permutation problem.


On the other hand, permutation invariant training (PIT) has been proposed, in which errors are calculated for every association between the elements of the sound source signals and the separated signals corresponding to the respective speakers, and the model parameters of the network are optimized such that the overall error based on these errors is minimized (refer to Non Patent Literature 1, for example). Further, multimodal speech separation has been proposed in which face videos of speakers are input together with the mixed speech signal, and the output order of the signals corresponding to the speakers included in the separated signals is uniquely determined from the videos of the speakers (refer to Non Patent Literature 2 and 3, for example). In multimodal sound source separation, it has been confirmed that, by using a video of each speaker, the permutation problem is solved and the performance is higher than that of speech separation using only sound, because utterance timing and utterance content are taken into consideration at the time of separation.


CITATION LIST
Non Patent Literature





    • [NPL 1] D. Yu, M. Kolbak, Z. Tan, and J. Jensen, “Permutation invariant training of deep models for speaker-independent multitalker speech separation,” in Proc. ICASSP, 2017, pp. 241-245.

    • [NPL 2] R. Lu, Z. Duan, and C. Zhang, “Audio-visual deep clustering for speech separation,” IEEE/ACM Trans. ASLP, vol. 27, No. 11, pp. 1697-1712, 2019.

    • [NPL 3] A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, and M. Rubinstein, “Looking to listen at the cocktail party: a speaker-independent audio-visual model for speech separation,” ACM Trans. Graph., vol. 37, No. 4, pp. 112:1-112:11, 2018.





SUMMARY OF INVENTION
Technical Problem

However, in conventional PIT and multimodal sound source separation, model parameters are trained in consideration of only the distance between a sound source signal and a separated signal in the sound domain. Such a learning method cannot directly take into account the features of a speaker (for example, speaker characteristics and phoneme information) included in a separated signal. As a result, the speech of other speakers remains in the separated signal and the speech is distorted, so that the separation accuracy deteriorates.


Such a problem is not limited to cases where sound source separation of speech is performed, and is common to cases where sound source separation of arbitrary sound is performed.


In view of the aforementioned circumstances, an object of the present invention is to improve separation accuracy of sound source separation.


Solution to Problem

A mixed acoustic signal including sound emitted from a plurality of sound sources and sound source video signals representing at least one video of the plurality of sound sources are received as inputs, and at least a separated signal including a signal representing a target sound emitted from one sound source represented by the video is acquired. Here, at least the separated signal is acquired using properties of the sound source, acquired from the video, that affect the sound emitted by the sound source, and/or features of a structure used by the sound source to emit the sound.


Advantageous Effects of Invention

Accordingly, features of a sound source that are included in a separated signal and that appear in the features of a sound source video signal are taken into account in sound source separation, and thus the separation accuracy of sound source separation can be improved.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram illustrating a functional configuration of a sound source separation device of an embodiment.



FIG. 2 is a block diagram illustrating a functional configuration of a learning device of an embodiment.



FIG. 3 is a block diagram illustrating a hardware configuration of the device.





DESCRIPTION OF EMBODIMENTS

An embodiment of the present invention will be described below with reference to the drawings.


First Embodiment

In the present embodiment, a function for performing multimodal sound source separation in consideration of the features of a separated signal is introduced. Accordingly, distortion and residual disturbance sound included in a separated signal are reduced, and thus the separation accuracy of sound source separation can be improved. A key point of the present embodiment is that a model for inferring at least a separated signal is trained on the basis of differences between features of the separated signal and features of teaching sound source video signals, which are teaching data of sound source video signals representing videos of sound sources. The videos of the sound sources include elements closely related to the features of the sound emitted from the respective sound sources. For example, when a sound source is a speaker, a video of the speaker (for example, a video including a face video of the speaker) includes elements such as the utterance timing, phoneme information inferred from the mouth, and speaker information such as sex and age, which are closely related to the features of the sound source signal. Further, a video of a sound source is not affected by surrounding sound (for example, noise), and these elements are not degraded even under high noise. Therefore, in the present embodiment, features of a separated signal are associated with features of teaching sound source video signals, a model for inferring the separated signal is trained on the basis of differences between these features, and sound source separation is performed using the model. In other words, speaker information obtained from the image signal, the movement of the throat used for vocalization, and the likelihood of a speech signal being generated from the mouth are inferred and used for sound source separation. This can also be rephrased as acquiring, from the image signal, information about the speech signal generation process and using it for sound source separation.


That is, in the present embodiment, a mixed acoustic signal representing a mixed sound of sound emitted from a plurality of sound sources and sound source video signals representing videos of at least some of the plurality of sound sources are applied to a model, and a separated signal including a signal representing a target sound emitted from a certain sound source among the plurality of sound sources is inferred. Here, the model is obtained by learning based on differences between at least features of a separated signal obtained by applying a teaching mixed acoustic signal, which is teaching data of the mixed acoustic signal, and teaching sound source video signals, which are teaching data of the sound source video signals, to the model, and features of the teaching sound source video signals. In this manner, by explicitly incorporating the relationship between the features of the separated signal and the features of the videos of the sound sources into model training for sound source separation, sound source separation can be performed in consideration of elements such as utterance timings, phoneme information inferred from the mouth, and speaker information such as sex and age. Accordingly, it is possible to take into account features that have not been handled in conventional multimodal sound source separation, for example, to reduce distortion and residual disturbance sound in a separated signal and to improve the separation accuracy of sound source separation.


Hereinafter, the present embodiment will be described in detail. In the present embodiment, a case where a model is a neural network, a sound source is a speaker, and sound is a speech will be exemplified. However, the present invention is not limited thereto.


<Configuration>


As illustrated in FIG. 1, a sound source separation device 11 of the present embodiment includes a storage unit 110, a sound stream processing unit 111, a video stream processing unit 112, a fusion unit 113, a separated signal inference unit 114, and a control unit 116, and executes each type of processing which will be described later on the basis of control of the control unit 116. Although detailed description will be omitted, data obtained in sound source separation processing is stored in the storage unit 110, read and used as necessary. As illustrated in FIG. 2, a learning device 12 includes a storage unit 120, a sound stream processing unit 121, a video stream processing unit 122, a fusion unit 123, a separated signal inference unit 124, a separated signal feature inference unit 125, a control unit 126, and a parameter update unit 127, and executes each type of processing which will be described later on the basis of control of the control unit 126. Although detailed description will be omitted, data obtained in learning processing is stored in the storage unit 120, read and used as necessary.


<Sound Source Separation Processing (Multimodal Sound Source Separation Processing)>


Sound source separation processing of the present embodiment will be described with reference to FIG. 1.

    • Input: Mixed acoustic signal X={x1, . . . , xT}
      • Sound source video signal V={V1, . . . , VN}
      • Model parameters θa, θv, θf, and θs
    • Output: Separated signal Y={Y1, . . . , YN}


The sound source separation device 11 of the present embodiment receives, as inputs, a mixed acoustic signal X={x1, . . . , xT}, which is an acoustic signal representing a mixed sound of sound emitted from a plurality of sound sources (an acoustic signal obtained by mixing signals corresponding to the sound emitted from the plurality of sound sources), sound source video signals V={V1, . . . , VN} representing videos of at least some of the plurality of sound sources, and model parameters (a sound stream model parameter θa, a video stream model parameter θv, a fusion model parameter θf, and a separated signal inference model parameter θs), applies the mixed acoustic signal X and the sound source video signals V to a neural network (model) determined on the basis of the model parameters θa, θv, θf, and θs, infers a separated signal Y={Y1, . . . , YN} including a signal representing a target sound emitted from a certain sound source among the plurality of sound sources, and outputs it. The neural network is obtained by learning based on differences between at least features of a separated signal obtained by applying a teaching mixed acoustic signal X′, which is teaching data of the mixed acoustic signal X, and teaching sound source video signals V′, which are teaching data of the sound source video signals V, to the neural network, and features of the teaching sound source video signals V′. (That is, the neural network is determined on the basis of the model parameters θa, θv, θf, and θs obtained by this learning.) Details of this learning processing will be described later.


The plurality of sound sources include sound sources whose appearances are correlated with the sound they emit. Examples of such sound sources include speakers, animals, plants, natural objects, natural phenomena, machines, and the like. As an example, the present embodiment illustrates a case in which the plurality of sound sources include a plurality of different speakers. All of the plurality of sound sources may be speakers, or only some of them may be speakers. When the sound sources are speakers, the sound emitted from the speakers is speech, the mixed acoustic signal X includes speech signals representing the speech, and the sound source video signals V represent videos of the speakers.


The mixed acoustic signal X may be, for example, a time waveform signal (i.e., a time domain signal) obtained by digitally transforming an acoustic signal obtained by observing a mixed sound by an acoustic sensor such as a microphone, or may be a time-frequency domain signal obtained by transforming the time waveform signal into a frequency domain for each predetermined time interval (for example, an interval determined by a window function by which the time waveform signal is multiplied). Examples of the time-frequency domain signal include an amplitude spectrogram, a logarithmic mel filter bank output, and the like obtained by transforming a time waveform by short-time Fourier transform. Since the amplitude spectrogram and the logarithmic mel filter bank are well known, description thereof will be omitted. In the present embodiment, the mixed acoustic signal X is expressed as X={x1, . . . , xT}. Here, T is a positive integer representing a time frame length, xt is an element of the mixed acoustic signal X of a t-th frame, and t=1, . . . , T is a positive integer representing a frame index. That is, the mixed acoustic signal X is a time-series discrete acoustic signal.
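For reference, the following sketch (not part of the embodiment; PyTorch, a 16 kHz sampling rate, a 512-point FFT, and a 128-sample hop are assumed values chosen only for illustration) shows how an amplitude spectrogram of the kind mentioned above can be computed from a time waveform by a short-time Fourier transform.


```python
import torch

# Hypothetical parameters: 16 kHz audio, 32 ms window, 8 ms hop.
N_FFT = 512
HOP = 128

def amplitude_spectrogram(waveform: torch.Tensor) -> torch.Tensor:
    """Return |STFT| with shape (T, n_fft // 2 + 1) for a 1-D waveform."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(waveform, n_fft=N_FFT, hop_length=HOP,
                      window=window, return_complex=True)  # (freq, frames)
    return spec.abs().transpose(0, 1)                      # (T frames, freq bins)

# Example: 4 seconds of a synthetic signal stands in for the observed mixture.
x = torch.randn(4 * 16000)
X = amplitude_spectrogram(x)
print(X.shape)  # torch.Size([501, 257])
```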


The sound source video signals V are video signals obtained by photographing sound sources with a video sensor such as a web camera or a smartphone camera. For example, the sound source video signals V represent a video of each of the plurality of sound sources. The sound source video signals V may represent videos of all of the plurality of sound sources described above, or may represent videos of only some of the sound sources. For example, the sound source video signals V may represent videos of one or a plurality of sound sources emitting a target sound among the plurality of sound sources, or videos of sound sources emitting the target sound and other sound sources. For example, when the sound sources are speakers, the sound source video signals V may represent videos of one or a plurality of speakers emitting the target sound among the plurality of speakers, or may represent videos of speakers emitting the target sound and other speakers. The sound source video signals V represent videos of sound sources whose appearances are correlated with the sound they emit. For example, when the sound sources are speakers, the sound source video signals V represent videos including face videos of the speakers. In the present embodiment, the sound source video signals V are represented as V={V1, . . . , VN}. Here, Vn={Vn1, . . . , VnF} represents the video signal of the n-th sound source (for example, a speaker), and Vnf represents the video signal of the f-th frame of the n-th sound source. n=1, . . . , N is a positive integer representing a sound source index, N is an integer of 1 or more representing the number of sound sources (for example, practical sound source separation processing is performed when N is an integer of 2 or more), f=1, . . . , F is a positive integer representing a video frame index, and F is a positive integer representing the number of video frames. Although the number of channels, the number of pixels, and the frame rate of Vnf are arbitrary, Vnf is assumed here to be, for example, a grayscale video with one channel in which the entire face is captured at a resolution of 224 pixels×224 pixels and 25 fps. In this example, grayscale is used in order to reduce the computational resources, but an RGB image may also be used without any problem.
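As a concrete illustration of the assumed video format only (not part of the embodiment), the following sketch builds a tensor with the shape implied above; the random values simply stand in for decoded grayscale face frames, and the number of sources and frames are assumed.


```python
import torch

N = 2            # number of sound sources (speakers); assumed value
F_FRAMES = 100   # 4 seconds of video at the 25 fps assumed above
H = W = 224      # face-crop resolution assumed above

# V[n, f] is the f-th grayscale frame (1 channel) of the n-th speaker, in [0, 1].
V = torch.rand(N, F_FRAMES, 1, H, W)
print(V.shape)  # torch.Size([2, 100, 1, 224, 224])
```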


In the present embodiment, the acoustic signals representing the sound emitted from the plurality of sound sources before mixing are represented as S={S1, . . . , SN}. Here, Sn={sn1, . . . , snT} represents the acoustic signal of the sound emitted from the n-th sound source, and snt represents the acoustic signal of the t-th frame of the sound emitted from the n-th sound source. The separated signal Y is an inferred signal of the acoustic signal S. In the present embodiment, the separated signal Y is represented as Y={Y1, . . . , YN}. Here, Yn={yn1, . . . , ynT} is an inferred signal of the acoustic signal Sn={sn1, . . . , snT} of the sound emitted from the n-th sound source, and ynt is an inferred signal of the acoustic signal snt of the t-th frame of the sound emitted from the n-th sound source. Any or all of Y1, . . . , YN correspond to the aforementioned "signal representing the target sound (signal representing the target sound emitted from a certain sound source among the plurality of sound sources)." Which of Y1, . . . , YN is a signal representing the target sound depends on the use of the separated signal. The acoustic signal S and the separated signal Y may be time waveform signals (i.e., time domain signals) or time-frequency domain signals such as amplitude spectrograms or logarithmic mel filter bank outputs.


<<Overall Flow of Sound Source Separation Processing>>


As illustrated in FIG. 1, the storage unit 110 stores the model parameters θa, θv, θf, and θs obtained by learning processing which will be described later. The sound stream model parameter θa is input to the sound stream processing unit 111, the video stream model parameter θv is input to the video stream processing unit 112, the fusion model parameter θf is input to the fusion unit 113, and the separated signal inference model parameter θs is input to the separated signal inference unit 114. The sound stream processing unit 111 obtains an embedding vector Ca of the mixed acoustic signal from the input mixed acoustic signal X on the basis of the sound stream model parameter θa and outputs it. The video stream processing unit 112 obtains an embedding vector Cv of the sound source video signals from the input sound source video signals V on the basis of the video stream model parameter θv and outputs it. The fusion unit 113 obtains an embedding vector M of the sound source signal from an embedding vector Ca of the input mixed acoustic signal and the embedding vector Cv of the sound source video signals on the basis of the fusion model parameter θf and outputs it. The separated signal inference unit 114 obtains a separated signal Y from the embedding vector M of the input sound source signal and the mixed acoustic signal X on the basis of the separated signal inference model parameter θs and outputs it. This will be described in detail below.
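For orientation, the following minimal sketch (illustration only; the four stand-in layers, the dimensions, and the use of PyTorch are assumptions, not the networks of the embodiment) traces the data flow of FIG. 1: the mixture embedding Ca, the per-source video embeddings Cv, the fused embeddings M, and the separated signals Y.


```python
import torch
import torch.nn as nn

# Assumed dimensions for illustration only.
T, FEAT, N, KA, KV, KM = 501, 257, 2, 64, 64, 64

# Trivial stand-ins for the learned blocks described in steps S111-S114 below.
audio_block = nn.Linear(FEAT, KA)
video_block = nn.Linear(32, KV)     # pretend each video frame is already a 32-dim feature
fusion = nn.Linear(KA + KV, KM)
mask_head = nn.Linear(KM, FEAT)

X = torch.rand(T, FEAT)                    # amplitude spectrogram of the mixture
Cv = video_block(torch.rand(N, T, 32))     # per-source video embeddings, (N, T, KV)

Ca = audio_block(X)                                        # (T, KA)
M = fusion(torch.cat([Ca.expand(N, T, KA), Cv], dim=-1))   # (N, T, KM)
Y = torch.sigmoid(mask_head(M)) * X                        # masked mixture, (N, T, FEAT)
print(Y.shape)  # torch.Size([2, 501, 257])
```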


<<Processing of Sound Stream Processing Unit 111 (Step S111)>>

    • Input: Mixed acoustic signal X={x1, . . . , xT}
      • Sound stream model parameter θa
    • Output: Embedding vector Ca of mixed acoustic signal


The mixed acoustic signal X and the sound stream model parameter θa are input to the sound stream processing unit 111. The sound stream model parameter θa may be input and set in the sound stream processing unit 111 in advance, or may be input each time the mixed acoustic signal X is input. The sound stream processing unit 111 infers the embedding vector Ca of the mixed acoustic signal from the mixed acoustic signal X and the sound stream model parameter θa and outputs it. The embedding vector Ca represents a feature of the mixed acoustic signal X and is expressed, for example, as a series of vectors, each having an arbitrary, manually determined number ka of dimensions (one or more) and taking continuous or discrete values. The number ka of dimensions is, for example, 1792. The series length of the embedding vector Ca is T, which is the same as that of the mixed acoustic signal X. That is, the embedding vector Ca is expressed as a matrix of, for example, T×ka or ka×T. The sound stream processing unit 111 infers the embedding vector Ca according to, for example, the following formula (1).





[Math. 1]

Ca=AudioBlock(X;θa)  (1)


Here, AudioBlock( ) is a function for obtaining and outputting the embedding vector Ca of the mixed acoustic signal using the input mixed acoustic signal X and the sound stream model parameter θa. As this function, any neural network can be used as long as the learning method described later can be applied thereto, and for example, a feedforward neural network or a recurrent neural network can be used. How to obtain the sound stream model parameter θa will be described in the learning processing described later.
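As one possible (assumed, not prescribed) realization of AudioBlock( ) with a recurrent neural network, the following sketch maps the mixture spectrogram to an embedding Ca whose series length equals T; the layer sizes and the use of PyTorch are illustrative assumptions.


```python
import torch
import torch.nn as nn

class AudioBlock(nn.Module):
    """Illustrative AudioBlock: a bidirectional GRU over the frames of the
    mixture spectrogram X (T x feat), followed by a projection to ka dimensions."""

    def __init__(self, feat_dim: int = 257, ka: int = 1792):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 256, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * 256, ka)

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(X.unsqueeze(0))   # (1, T, 512)
        return self.proj(h).squeeze(0)    # (T, ka), same series length T as X

Ca = AudioBlock()(torch.rand(501, 257))
print(Ca.shape)  # torch.Size([501, 1792])
```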


<<Processing of Video Stream Processing Unit 112 (Step S112)>>

    • Input: Sound source video signal V={V1, . . . , VN}
      • Video stream model parameter θv
    • Output: Embedding vector Cv={Cv1, . . . , CvN} of sound source video signal


The sound source video signals V and the video stream model parameter θv are input to the video stream processing unit 112. The video stream model parameter θv may be input and set in the video stream processing unit 112 in advance, or may be input each time the sound source video signals V are input. The video stream processing unit 112 infers the embedding vector Cv={Cv1, . . . , CvN} of the sound source video signals from the sound source video signals V and the video stream model parameter θv and outputs it. This embedding vector Cv represents a feature of the sound source video signals V, and Cvn (n=1, . . . , N) represents a feature of the video of the n-th sound source (a speaker in the present embodiment). For example, Cvn has an arbitrary, manually determined number kvn of dimensions (one or more) and takes continuous or discrete values. kv1, . . . , kvN may be identical, or at least some thereof may be different from the others. The number kvn of dimensions is, for example, 1792. The series length of Cvn is T, which is the same as that of the mixed acoustic signal. That is, Cvn is expressed as a matrix of, for example, T×kvn or kvn×T. Note that, although a second-level subscript such as "γ" in "αβγ" should originally be placed immediately below the subscript "β," it may be written obliquely to the lower right of "β" due to notational restrictions in the present description. The video stream processing unit 112 infers the embedding vector Cv according to, for example, the following formula (2).





[Math. 2]

Cv=VideoBlock(V;θv)  (2)


Here, VideoBlock( ) is a function for obtaining and outputting the embedding vector Cv of the sound source video signal using the input sound source video signals V and the video stream model parameter θv. As this function, any neural network can be used as long as the learning method described later can be applied thereto, and for example, a three-dimensional CNN or a recurrent neural network can be used. How to obtain the video stream model parameter θv will be described in the learning processing described later.
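As one possible (assumed) realization of VideoBlock( ) with a three-dimensional CNN, the following sketch pools each speaker's grayscale face video spatially and interpolates it in time so that the series length of Cvn matches the audio frame length T, as stated above; all layer choices are illustrative.


```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoBlock(nn.Module):
    """Illustrative VideoBlock: a small 3-D CNN over each source's video,
    upsampled in time to the audio series length T."""

    def __init__(self, kv: int = 1792):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 4, 4), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool away the spatial dimensions
        )
        self.proj = nn.Linear(32, kv)

    def forward(self, V: torch.Tensor, T: int) -> torch.Tensor:
        # V: (N, F, 1, H, W) -> (N, 1, F, H, W) for Conv3d
        h = self.conv(V.transpose(1, 2))             # (N, 32, F, 1, 1)
        h = h.squeeze(-1).squeeze(-1)                # (N, 32, F)
        h = F.interpolate(h, size=T, mode="linear",  # 25 fps video -> T audio frames
                          align_corners=False)       # (N, 32, T)
        return self.proj(h.transpose(1, 2))          # (N, T, kv)

Cv = VideoBlock()(torch.rand(2, 100, 1, 224, 224), T=501)
print(Cv.shape)  # torch.Size([2, 501, 1792])
```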


<<Processing of Fusion Unit 113 (Step S113)>>

    • Input: Embedding vector Ca of mixed acoustic signal
      • Embedding vector Cv of sound source video signal
      • Fusion model parameter θf
    • Output: Embedding vector M={M1, . . . , MN} of sound source signal


The embedding vector Ca of the mixed acoustic signal, the embedding vector Cv of the sound source video signal, and the fusion model parameter θf are input to the fusion unit 113. The fusion model parameter θf may be input and set in the fusion unit 113 in advance, or may be input each time the embedding vector Ca of the mixed acoustic signal and the embedding vector Cv of the sound source video signal are input. The fusion unit 113 infers the embedding vector M={M1, . . . , MN} of the sound source signal from the embedding vector Ca of the mixed acoustic signal, the embedding vector Cv of the sound source video signal, and the fusion model parameter θf and outputs it. The embedding vector M represents features of the embedding vector Ca of the mixed acoustic signal and the embedding vector Cv of the sound source video signal. Here, Mn (n=1, . . . , N) represents the element of the embedding vector M of the sound source signal corresponding to the n-th sound source (a speaker in the present embodiment). For example, Mn has an arbitrary, manually determined number kmn of dimensions (one or more) and takes continuous or discrete values. km1, . . . , kmN may be identical, or at least some thereof may be different from the others. The number kmn of dimensions is, for example, 1792. The series length of Mn is T, which is the same as that of the mixed acoustic signal. That is, Mn is expressed as a matrix of, for example, T×kmn or kmn×T. The fusion unit 113 infers the embedding vector M of the sound source signal, for example, according to the following formula (3).





[Math. 3]

M=FusionBlock(Ca,Cv;θf)  (3)


Here, FusionBlock( ) is a function for obtaining and outputting the embedding vector M of the sound source signal from the embedding vector Ca of the input mixed acoustic signal, the embedding vector Cv of the sound source video signal, and the fusion model parameter θf. As this function, any neural network can be used as long as the learning method described later can be applied thereto, and for example, a feedforward neural network can be used. How to obtain the fusion model parameter θf will be described in the learning processing described later.
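One plausible (assumed) form of FusionBlock( ) is a feedforward network applied to the concatenation of the mixture embedding Ca with each per-source video embedding Cvn, as sketched below; the sizes are illustrative.


```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Illustrative FusionBlock: concatenates Ca with each Cvn and maps
    the result to the per-source embedding Mn."""

    def __init__(self, ka: int = 1792, kv: int = 1792, km: int = 1792):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(ka + kv, km), nn.ReLU(),
                                nn.Linear(km, km))

    def forward(self, Ca: torch.Tensor, Cv: torch.Tensor) -> torch.Tensor:
        # Ca: (T, ka), Cv: (N, T, kv)
        N, T, _ = Cv.shape
        Ca_rep = Ca.unsqueeze(0).expand(N, T, Ca.shape[-1])  # share Ca across sources
        return self.ff(torch.cat([Ca_rep, Cv], dim=-1))      # M: (N, T, km)

M = FusionBlock()(torch.rand(501, 1792), torch.rand(2, 501, 1792))
print(M.shape)  # torch.Size([2, 501, 1792])
```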


<<Processing of Separated Signal Inference Unit 114 (Step S114)>>

    • Input: Embedding vector M={M1, . . . , MN} of sound source signal
      • Mixed acoustic signal X={x1, . . . , xT}
      • Separated signal inference model parameter θs
    • Output: Separated signal Y={Y1, . . . , YN}


The embedding vector M of the sound source signal, the mixed acoustic signal X, and the separated signal inference model parameter θs are input to the separated signal inference unit 114. The separated signal inference model parameter θs may be input and set in the separated signal inference unit 114 in advance, or may be input each time the embedding vector M of the sound source signal and the mixed acoustic signal X are input. The separated signal inference unit 114 infers a separated signal Y={Y1, . . . , YN} from the embedding vector M of the sound source signal, the mixed acoustic signal X, and the separated signal inference model parameter θs and outputs it. The separated signal inference unit 114 infers this separated signal Y, for example, according to the following formula (4).





[Math. 4]






Y=Separation(M,X;θs)  (4)


Here, Separation( ) is a function for inferring the separated signal Y from the embedding vector M of the input sound source signal and the mixed acoustic signal X and outputting it. As this function, any neural network can be used as long as the learning method which will be described later can be applied thereto, and for example, a sigmoid function or the like can be used.
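A common realization consistent with the sigmoid mentioned above, shown here purely as an assumed sketch, is to predict a time-frequency mask per source with a sigmoid output and multiply it with the mixture amplitude spectrogram X; the dimensions and the use of PyTorch are assumptions.


```python
import torch
import torch.nn as nn

class Separation(nn.Module):
    """Illustrative Separation(): a sigmoid mask per source applied to the
    mixture amplitude spectrogram X to give the separated signals Y."""

    def __init__(self, km: int = 1792, feat_dim: int = 257):
        super().__init__()
        self.mask = nn.Linear(km, feat_dim)

    def forward(self, M: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        # M: (N, T, km), X: (T, feat)
        masks = torch.sigmoid(self.mask(M))  # (N, T, feat), values in (0, 1)
        return masks * X                     # broadcast over the source axis

Y = Separation()(torch.rand(2, 501, 1792), torch.rand(501, 257))
print(Y.shape)  # torch.Size([2, 501, 257])
```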


<Learning Processing (Multimodal Learning Processing)>


Learning processing of the present embodiment will be described using FIG. 2.

    • Input: Teaching mixed acoustic signals X′={x1′, . . . , xT′}
      • Teaching sound source video signal V′={V1′, . . . , VN′}
      • Teaching sound source signal S={S1, . . . , SN}
    • Output: Sound stream model parameter θa, video stream model parameter θv, fusion model parameter θf, separated signal inference model parameter θs, and separated signal feature inference model parameter θavc


The learning device 12 of the present embodiment obtains model parameters θa, θv, θf, θs, and θavc at least by learning based on differences between features of a separated signal Y′={Y1′, . . . , YN′} obtained by applying a teaching mixed acoustic signal X′={x1′, . . . , xT′}, which is teaching data of the mixed acoustic signal X, and teaching sound source video signals V′={V1′, . . . , VN′}, which are teaching data of the sound source video signals V, to a neural network (model), and features of the teaching sound source video signals V′={V1′, . . . , VN′}, and outputs them. For example, the learning device 12 performs learning at least such that the similarity between an element representing a feature of the teaching sound source video signal Vn′ corresponding to a first sound source among the plurality of sound sources and an element representing a feature of a separated signal Yn′ corresponding to a second sound source different from the first sound source decreases, and the similarity between the element representing the feature of the teaching sound source video signal Vn′ corresponding to the first sound source and an element representing a feature of a separated signal Yn corresponding to the first sound source increases. The present embodiment shows an example in which the model parameters θa, θv, θf, θs, and θavc are obtained and output by learning based on differences between the separated signal Y′, obtained by applying the teaching mixed acoustic signal X′ and the teaching sound source video signals V′ to the neural network (model), and teaching sound source signals S, which are teaching data of the separated signals corresponding to the teaching mixed acoustic signal X′ and the teaching sound source video signals V′, in addition to the differences between the features of the separated signal Y′ and the features of the teaching sound source video signals V′. However, the present invention is not limited thereto.


The teaching mixed acoustic signal X′={x′1, . . . , xT′} is teaching data of the mixed acoustic signal X={x1, . . . , xT}, and the data format of the teaching mixed acoustic signal X′={x1′, . . . , xT′} is the same as the data format of the mixed acoustic signal X={x1, . . . , xT}. There are a plurality of teaching mixed acoustic signals X′, and the plurality of teaching mixed acoustic signals X′ may include or may not include the mixed acoustic signal X to be an input of sound source separation processing.


The teaching sound source video signal V′={V1′, . . . , VN′} is teaching data of the sound source video signal V={V1, . . . , VN}, and the data format of the teaching sound source video signal V′={V1′ . . . , VN′} is the same as that of the above-mentioned sound source video signal V={V1, . . . , VN}. There are a plurality of teaching sound source video signals V′, and the plurality of teaching sound source video signals V′ may include or may not include the sound source video signal V to be an input of sound source separation processing.


The teaching sound source signals S={S1, . . . , SN} are acoustic signals representing sound before mixing emitted from a plurality of sound sources corresponding to the teaching mixed sound signals X′={x1′, . . . , xT′} and the teaching sound source video signals V′={V′1, . . . , V′N}. There are a plurality of teaching sound source signals S={S1, . . . , SN} corresponding to the teaching mixed acoustic signals X′={x1′, . . . , xT′} and the teaching sound source video signals V′={V1′, . . . , VN′}. In learning processing, the following processing is performed on sets of the teaching mixed acoustic signals X′, the teaching sound source video signals V′, and the teaching sound source signals S corresponding to each other.


<Overall Flow of Learning Processing>


As illustrated in FIG. 2, the teaching mixed acoustic signals X′={x1′, . . . , xT′}, the teaching sound source video signals V′={V1′, . . . , VN′}, and the teaching sound source signals S={S1, . . . , SN} corresponding to each other are input to the learning device 12. The teaching mixed acoustic signals X′ are input to the sound stream processing unit 121 and the separated signal inference unit 124, the teaching sound source video signals V′ are input to the video stream processing unit 122, and the teaching sound source signals S are input to the parameter update unit 127. The sound stream processing unit 121 obtains an embedding vector Ca′ of a mixed acoustic signal from the input teaching mixed acoustic signals X′ on the basis of a provisional model parameter θa′ of the sound stream model parameter θa and outputs it. The video stream processing unit 122 obtains an embedding vector Cv′ of a sound source video signal from the input teaching sound source video signals V′ on the basis of a provisional model parameter θv′ of the video stream model parameter θv and outputs it. The fusion unit 123 obtains an embedding vector M′ of a sound source signal from the embedding vector Ca′ of the input mixed acoustic signal and the embedding vector Cv′ of the sound source video signal on the basis of a provisional model parameter θf′ of the fusion model parameter θf and outputs it. The separated signal inference unit 124 obtains a separated signal Y′ from the embedding vector M′ of the input sound source signal and the teaching mixed acoustic signals X′ on the basis of a provisional model parameter θs′ of the separated signal inference model parameter θs and outputs it. The separated signal feature inference unit 125 obtains an embedding vector Cavc′ of a separated signal from the embedding vector M′ of the input sound source signal on the basis of a provisional model parameter θavc′ of the separated signal feature inference model parameter θavc and outputs it. The parameter update unit 127 updates the provisional model parameters θa′, θv′, θf′, θs′, and θavc′ by performing self-supervised learning based on errors between the teaching sound source signals S and the separated signal Y′ and inter-modal correspondence errors between the embedding vector Cv′ of the sound source video signal and the embedding vector Cavc′ of the separated signal. The provisional model parameters θa′, θv′, θf′, θs′, and θavc′ which satisfy predetermined end conditions by repeating the above-described processing become the model parameters θa, θv, θf, θs, and θavc. This will be described in detail below.
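For orientation, the following compact sketch runs one parameter update following the flow above; the tiny linear stand-ins, dimensions, optimizer, and the value of λ are all assumptions for illustration, and the loss terms correspond in spirit to formulas (6)-(8) described later.


```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Compact stand-ins for the five learned blocks (illustration only; the
# real networks are described in steps S121-S125 below).
T, FEAT, N, K = 501, 257, 2, 64
audio = nn.Linear(FEAT, K)
video = nn.Linear(32, K)
fusion = nn.Linear(2 * K, K)
sep = nn.Linear(K, FEAT)
avc = nn.Linear(K, K)       # stand-in for the separated-signal feature head

params = [p for m in (audio, video, fusion, sep, avc) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
lam = 0.1                   # hyperparameter lambda of formula (6); assumed value

# One teaching example (random placeholders for X', V' features, and S).
Xp = torch.rand(T, FEAT)
Vp_feat = torch.rand(N, T, 32)
S = torch.rand(N, T, FEAT)

Ca = audio(Xp)                                            # (T, K)
Cv = video(Vp_feat)                                       # (N, T, K)
M = fusion(torch.cat([Ca.expand(N, T, K), Cv], -1))       # (N, T, K)
Yp = torch.sigmoid(sep(M)) * Xp                           # separated signals (N, T, FEAT)
Cavc = avc(M)                                             # (N, T, K)

L1 = F.mse_loss(Yp, S)                                    # separation learning term
cos = F.cosine_similarity(Cv.unsqueeze(1), Cavc.unsqueeze(0), dim=-1)  # (N, N, T)
match = torch.eye(N).unsqueeze(-1)                        # 1 where video and signal share a source
L2 = (cos.abs() * (1 - match) - cos * match).sum()        # inter-modal correspondence term
loss = L1 + lam * L2

opt.zero_grad()
loss.backward()
opt.step()
```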


<<Initial Setting Processing of Parameter Update Unit 127 (step S1271)>>


The parameter update unit 127 stores initial values of the provisional model parameters θa′, θv′, θf′, θs′, and θavc′ of the model parameters θa, θv, θf, θs, and θavc in the storage unit 120. The initial values of the provisional model parameters θa′, θv′, θf′, θs′, and θavc′ may be any values.


<<Processing of Sound Stream Processing Unit 121 (Step S121)>>

    • Input: Teaching mixed acoustic signals X′={x1′, . . . , xT′}
      • Provisional sound stream model parameter θa′
    • Output: Embedding vector Ca′ of mixed acoustic signal


The input teaching mixed acoustic signals X′ and the provisional sound stream model parameter θa′ read from the storage unit 120 are input to the sound stream processing unit 121. The sound stream processing unit 121 infers the embedding vector Ca′ of the mixed sound signal from the teaching mixed sound signals X′ and the provisional sound stream model parameter θa′ and outputs it. This inference processing is the same as processing (step S111) (formula (1)) of the above-described sound stream processing unit 111 except that X, θa, and Ca are replaced by X′, θa′, and Ca′.


<<Processing of Video Stream Processing Unit 122 (Step S122)>>

    • Input: Teaching sound source video signal V′={V1′, . . . , VN′}
      • Provisional video stream model parameter θv′
    • Output: Embedding vector Cv′={Cv1′, . . . , CvN′ } of sound source video signal


The input teaching sound source video signals V′ and the provisional video stream model parameter θv′ read from the storage unit 120 are input to the video stream processing unit 122. The video stream processing unit 122 infers the embedding vector Cv′={Cv1′, . . . , CvN′} of the sound source video signal from the teaching sound source video signals V′ and the provisional video stream model parameter θv′ and outputs it. This inference processing is the same as the processing (step S112) (formula (2)) of the above-described video stream processing unit 112 except that V, θv, and Cv are replaced by V′, θv′, and Cv′.


<<Processing of Fusion Unit 123 (Step S123)>>

    • Input: Embedding vector Ca′ of mixed acoustic signal
      • Embedding vector Cv′ of sound source video signal
      • Provisional fusion model parameter θf′
    • Output: Embedding vector M′={M1′, . . . , MN′} of sound source signal


The embedding vector Ca′ of the mixed acoustic signal, the embedding vector Cv′ of the sound source video signal, and the provisional fusion model parameter θf′ read from the storage unit 120 are input to the fusion unit 123. The fusion unit 123 infers an embedding vector M′={M1′, . . . , MN′} of a sound source signal from the embedding vector Ca′ of the mixed acoustic signal, the embedding vector Cv′ of the sound source video signal, and the provisional fusion model parameter θf′ and outputs it. The data format of the embedding vector M′ of the sound source signal is the same as the data format of the embedding vector M of the sound source signal. Further, this inference processing is the same as processing (step S113) (formula (3)) of the above-described fusion unit 113 except that Ca, Cv, θf, and M={M1, . . . , MN} are replaced by Ca′, Cv′, θf′, and M′={M1′, . . . , MN′}.


<<Processing of Separated Signal Inference Unit 124 (Step S124)>>

    • Input: Embedding vector M′={M1′, . . . , MN′} of sound source signal
      • Teaching mixed acoustic signals X′={x1′, . . . , xT′}
      • Provisional separated signal inference model parameter θs′
    • Output: Separated signals Y′={Y1′, . . . , YN′}


The embedding vector M′ of the sound source signal, the teaching mixed acoustic signals X′, and the provisional separated signal inference model parameter θs′ read from the storage unit 120 are input to the separated signal inference unit 124. The separated signal inference unit 124 infers a separated signal Y′={Y1′, . . . , YN′} from the embedding vector M′ of the sound source signal, the teaching mixed acoustic signals X′, and the provisional separated signal inference model parameter θs′ and outputs it. The data format of the separated signal Y′={Y1′, . . . , YN′} is the same as the data format of the above-described separated signal Y={Y1, . . . , YN}. This inference processing is the same as the processing (step S114) (formula (4)) of the above-described separated signal inference unit 114 except that M={M1, . . . , MN}, X={x1, . . . , xT}, θs, and Y={Y1, . . . , YN} are replaced by M′={M1′, . . . , MN′}, X′={x1′, . . . , xT′}, θs′, and Y′={Y1′, . . . , YN′}.


<<Processing of Separated Signal Feature Inference Unit 125 (step S125)>>

    • Input: Embedding vector M′={M1′, . . . , MN′} of sound source signal
      • Provisional separated signal feature inference model parameter θavc′
    • Output: Embedding vector Cavc′={Cavc1′, . . . , CavcN′} of separated signal


The embedding vector M′ of the sound source signal and the provisional separated signal feature inference model parameter θavc′ read from the storage unit 120 are input to the separated signal feature inference unit 125. The separated signal feature inference unit 125 infers an embedding vector Cavc′={Cavc1′, . . . , CavcN′} of the separated signal from the embedding vector M′ of the sound source signal and the provisional separated signal feature inference model parameter θavc′ and outputs it. Here, the embedding vector Cavc′ represents a feature of the separated signal Y′, and Cavcn′ (n=1, . . . , N) represents a feature of the n-th separated signal Yn′. For example, Cavcn′ has an arbitrary, manually determined number kavcn of dimensions (one or more) and takes continuous or discrete values. kavc1, . . . , kavcN may be identical, or at least some thereof may be different from the others. The number kavcn of dimensions is, for example, 1792. The series length of Cavcn′ is T, which is the same as that of the mixed acoustic signal. That is, Cavcn′ is expressed as a matrix of, for example, T×kavcn or kavcn×T. The separated signal feature inference unit 125 infers the embedding vector Cavc′ according to, for example, the following formula (5).





[Math. 5]

Cavc′=AVCBlock(M′;θavc′)  (5)


Here, AVCBlock( ) is a function for obtaining and outputting the embedding vector Cavc′ of the separated signal from the embedding vector M′ of the input sound source signal and the provisional separated signal feature inference model parameter θavc′. As this function, an arbitrary neural network can be used as long as the learning method can be applied thereto, and for example, a feedforward neural network or the like can be used.
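As one possible (assumed) realization of AVCBlock( ) with a feedforward network, the following sketch maps the per-source fused embedding M′ to the separated-signal embedding Cavc′; the sizes and the use of PyTorch are illustrative.


```python
import torch
import torch.nn as nn

class AVCBlock(nn.Module):
    """Illustrative AVCBlock: a feedforward network mapping the fused embedding
    M' of each source to a separated-signal embedding Cavc'."""

    def __init__(self, km: int = 1792, kavc: int = 1792):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(km, kavc), nn.ReLU(),
                                nn.Linear(kavc, kavc))

    def forward(self, M: torch.Tensor) -> torch.Tensor:
        return self.ff(M)  # (N, T, kavc), same series length T as the mixture

Cavc = AVCBlock()(torch.rand(2, 501, 1792))
print(Cavc.shape)  # torch.Size([2, 501, 1792])
```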


<<Processing of Parameter Update Unit 127 (Step S1272)>>


The input teaching sound source signals S={S1, . . . , SN}, the separated signal Y′={Y1′, . . . , YN′} obtained in step S124, the embedding vector Cv′={Cv1′, . . . , CvN′} of the sound source video signal obtained in step S122, and the embedding vector Cavc′={Cavc1′, . . . , CavcN′} of the separated signal obtained in step S125 are input to the parameter update unit 127. The parameter update unit 127 updates the provisional model parameters Θ={θa′, θv′, θf′, θs′, θavc′} as follows on the basis of the errors (differences) between the teaching sound source signals S and the separated signal Y′ and the inter-modal error (difference) between the embedding vector Cv′ of the sound source video signal and the embedding vector Cavc′ of the separated signal.









[Math. 6]

Θ=argminΘ{L1(Y′,S)+λL2(Cv′,Cavc′)}  (6)







Here, formula (6) represents the provisional model parameters Θ that minimize {L1(Y′,S)+λL2(Cv′,Cavc′)}. λ is a hyperparameter that determines the weights of separation learning, which is based on the errors between the teaching sound source signals S and the separated signal Y′, and inter-modal correspondence learning, which is based on the error between the embedding vector Cv′ of the sound source video signal and the embedding vector Cavc′ of the separated signal. For example, λ is a real number of 0 or more. The first term L1(Y′,S) on the right side of formula (6) corresponds to separation learning, and in this separation learning, the errors between the teaching sound source signals S and the separated signal Y′ are minimized. In other words, in this separation learning, the differences between the features of the teaching sound source signals S and the features of the separated signal Y′ separated in the sound domain are reduced. The errors between the teaching sound source signals S and the separated signal Y′ may be defined, for example, as a squared error, a mean squared error, or a mean absolute error. For example, when the teaching sound source signals S and the separated signal Y′ are time-frequency domain signals composed of amplitude spectrograms, and L1(Y′,S) is a mean squared error between the teaching sound source signals S and the separated signal Y′, L1(Y′,S) is exemplified as follows.









[Math. 7]

L1(Y′,S)=(1/(IJN))Σn‖|Yn′|−|Sn|‖F²  (7)







Here, I and J represent the number of frequency bins and the total number of time frames of the amplitude spectrograms, |⋅| represents an absolute value, and ‖⋅‖F represents the Frobenius norm. The second term L2(Cv′,Cavc′) on the right side of formula (6) corresponds to inter-modal correspondence learning, which considers the correspondence between a video of a sound source (speaker) and the separated signal Y′, and this inter-modal correspondence learning minimizes the error between the embedding vector Cv′ of the sound source video signal and the embedding vector Cavc′ of the separated signal Y′. In other words, in this inter-modal correspondence learning, the differences between the video-domain features of a video corresponding to a given sound source (speaker) (the embedding vector Cv′ of the sound source video signal) and the sound-domain features of the separated signal Y′ of the same sound source (the embedding vector Cavc′ of the separated signal) are reduced. At the same time, the differences between the video-domain features of videos corresponding to different sound sources (speakers) (the embedding vector Cv′ of the sound source video signal) and the sound-domain features of the separated signal Y′ (the embedding vector Cavc′ of the separated signal) are increased. The error between the embedding vector Cv′ of the sound source video signal and the embedding vector Cavc′ of the separated signal may be represented using a similarity or a distance. For example, L2(Cv′,Cavc′) can be represented using the cosine similarity d(⋅,⋅) between the column vectors of Cvn′ and Cavcn′ as follows.









[Math. 8]

L2(Cv′,Cavc′)=Σn Σj [Σn′≠n |d(Cvnj′,Cavcn′j′)| − d(Cvnj′,Cavcnj′)]  (8)


[Math. 9]

d(a,b)=aTb/(‖a‖2‖b‖2)  (9)







Here, Cvnj′ and Cavcnj′ are the j-th column vectors of Cvn′ and Cavcn′, ⋅T is the transpose of ⋅, and ‖⋅‖2 is the L2 norm of a vector. n′=1, . . . , N, the numbers of dimensions of Cvn′ and Cavcn′ are both kn′, the series lengths of Cvn′ and Cavcn′ are both T, both Cvn′ and Cavcn′ are matrices of T×kn′ or kn′×T, and j=1, . . . , T. The first term d(Cvnj′,Cavcn′j′) on the right side of formula (8) acts to minimize the cosine similarity d(Cvnj′,Cavcn′j′) between an element Cvnj′ of the embedding vector Cv′ of the sound source video signal corresponding to the n-th sound source video (speaker video) and an element Cavcn′j′ of the embedding vector Cavc′ of a separated signal that does not correspond to the n-th sound source, and the second term on the right side acts to maximize the cosine similarity d(Cvnj′,Cavcnj′) between the element Cvnj′ of the embedding vector Cv′ of the sound source video signal corresponding to the n-th sound source video (speaker video) and an element Cavcnj′ of the embedding vector Cavc′ of the separated signal corresponding to the n-th sound source. That is, learning is performed such that the similarity between the element Cvnj′ representing a feature of the teaching sound source video signal Vn′ corresponding to the n-th sound source (first sound source) among the plurality of sound sources and the element Cavcn′j′ representing a feature of the separated signal Yn′ corresponding to the n′-th (n′≠n) sound source (a second sound source different from the first sound source) decreases, and the similarity between the element Cvnj′ representing the feature of the teaching sound source video signal Vn′ corresponding to the n-th sound source (first sound source) and the element Cavcnj′ representing a feature of the separated signal Yn corresponding to the n-th sound source (first sound source) increases. The inference problem of the provisional model parameters Θ={θa′, θv′, θf′, θs′, θavc′} can be solved by any method. For example, the provisional model parameters Θ can be obtained by optimization using the error backpropagation method. The updated provisional model parameters Θ={θa′, θv′, θf′, θs′, θavc′} are stored in the storage unit 120.
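To make formulas (6)-(9) concrete, the following sketch (illustration only; the shapes, the value of λ, and the use of PyTorch are assumed) computes the separation term of formula (7) and the inter-modal correspondence term of formulas (8) and (9) and combines them as in formula (6).


```python
import torch

def loss_l1(Y: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Formula (7): mean squared Frobenius error over sources, with Y and S
    given as amplitude spectrograms of shape (N, J, I)."""
    N, J, I = S.shape
    return ((Y.abs() - S.abs()) ** 2).sum() / (I * J * N)

def loss_l2(Cv: torch.Tensor, Cavc: torch.Tensor) -> torch.Tensor:
    """Formulas (8)-(9): cosine-similarity based inter-modal loss, with
    embeddings of shape (N, T, k) whose column vectors are compared frame-wise."""
    N = Cv.shape[0]
    # d(Cv[n, j], Cavc[n', j]) for every source pair (n, n') and frame j
    d = torch.nn.functional.cosine_similarity(
        Cv.unsqueeze(1), Cavc.unsqueeze(0), dim=-1)      # (N, N, T)
    same = torch.eye(N, dtype=torch.bool).unsqueeze(-1)  # video source == signal source
    mismatch = d.abs().masked_fill(same, 0.0).sum()      # first term: n' != n
    match = d.masked_fill(~same, 0.0).sum()              # second term: n' == n
    return mismatch - match

# Assumed shapes: 2 sources, 501 frames, 257 frequency bins, 1792-dim embeddings.
Y, S = torch.rand(2, 501, 257), torch.rand(2, 501, 257)
Cv, Cavc = torch.rand(2, 501, 1792), torch.rand(2, 501, 1792)
lam = 0.1                           # weight lambda of formula (6); assumed value
total = loss_l1(Y, S) + lam * loss_l2(Cv, Cavc)
print(total.item())
```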


<<End Condition Determination Processing (Step S126)>>


Next, the control unit 126 determines whether or not predetermined end conditions are satisfied. Although the end conditions are not limited, for example, the end conditions may be that the number of updates of the provisional model parameters Θ has reached a predetermined number, that the amount of update of the provisional model parameters Θ is within a predetermined range, or the like. If it is determined that the end conditions are not satisfied, the learning device 12 receives new teaching mixed acoustic signals X′={x1′, . . . , xT′}, teaching sound source video signals V′={V1′, . . . , VN′}, and teaching sound source signals S={S1, . . . , SN} corresponding to each other as inputs, and re-executes the processing of steps S121, S122, S123, S124, S125, and S1272. On the other hand, if it is determined that the end conditions are satisfied, θa′, θv′, θf′, and θs′ among the updated provisional model parameters Θ={θa′, θv′, θf′, θs′, θavc′} are output as the model parameters θa, θv, θf, and θs. The output model parameters θa, θv, θf, and θs are stored in the storage unit 110 of the above-described sound source separation device 11 (FIG. 1) and used for the above-described sound source separation processing. The separated signals Y={Y1, . . . , YN} obtained by sound source separation processing based on the model parameters θa, θv, θf, and θs obtained by this learning are associated with the videos of the respective sound sources (speakers) and reflect features of the sound sources (for example, utterance timings, phoneme information, and speaker information such as sex and age). Therefore, in the present embodiment, multimodal sound source separation with high separation accuracy can be realized.


[Hardware Configuration]


The sound source separation device 11 and the learning device 12 in each embodiment are devices configured by executing a predetermined program on a general-purpose or dedicated computer including a processor (a hardware processor) such as a central processing unit (CPU) and memories such as a random-access memory (RAM) and a read-only memory (ROM). That is, each of the sound source separation device 11 and the learning device 12 includes processing circuitry configured to implement each part included therein. This computer may include one processor and one memory, or may include a plurality of processors and memories. This program may be installed in the computer or may be recorded in a ROM or the like in advance. In addition, some or all of the processing units may be configured using electronic circuitry that realizes a processing function independently, instead of electronic circuitry that realizes a functional configuration by reading a program as a CPU does. Further, the electronic circuitry constituting a single device may include a plurality of CPUs.



FIG. 3 is a block diagram illustrating a hardware configuration of the sound source separation device 11 and the learning device 12 in each embodiment. As illustrated in FIG. 3, the sound source separation device 11 and the learning device 12 in this example include a central processing unit (CPU) 10a, an input unit 10b, an output unit 10c, a random access memory (RAM) 10d, a read only memory (ROM) 10e, an auxiliary storage device 10f, and a bus 10g. The CPU 10a in this example has a control unit 10aa, an arithmetic operation unit 10ab, and a register 10ac, and executes various types of arithmetic operation processing according to various programs read into the register 10ac. In addition, the input unit 10b is an input terminal to which data is input, a keyboard, a mouse, a touch panel, or the like. The output unit 10c is an output terminal from which data is output, a display, a LAN card controlled by the CPU 10a that has read a predetermined program, or the like. Further, the RAM 10d is a static random access memory (SRAM), a dynamic random access memory (DRAM), or the like, and has a program region 10da in which a predetermined program is stored and a data region 10db in which various types of data are stored. Further, the auxiliary storage device 10f is, for example, a hard disk, a magneto-optical (MO) disk, a semiconductor memory, or the like, and has a program region 10fa in which a predetermined program is stored and a data region 10fb in which various types of data are stored. In addition, the bus 10g connects the CPU 10a, the input unit 10b, the output unit 10c, the RAM 10d, the ROM 10e, and the auxiliary storage device 10f such that information can be exchanged among them. The CPU 10a writes a program stored in the program region 10fa of the auxiliary storage device 10f to the program region 10da of the RAM 10d according to a read operating system (OS) program. Likewise, the CPU 10a writes various types of data stored in the data region 10fb of the auxiliary storage device 10f to the data region 10db of the RAM 10d. The addresses on the RAM 10d at which this program and data are written are stored in the register 10ac of the CPU 10a. The control unit 10aa of the CPU 10a sequentially reads out these addresses stored in the register 10ac, reads a program or data from the region on the RAM 10d indicated by the read address, causes the arithmetic operation unit 10ab to sequentially execute the operations indicated by the program, and stores the operation results in the register 10ac. With such a configuration, the functional configurations of the sound source separation device 11 and the learning device 12 are realized.


The above program can be recorded on a computer-readable recording medium. An example of the computer-readable recording medium is a non-transitory recording medium. Examples of such a recording medium include a magnetic recording device, an optical disc, a magneto-optical recording medium, and a semiconductor memory.


The program is distributed by, for example, selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. Further, the program may be stored in a storage device of a server computer and transferred from the server computer to another computer via a network to distribute the program. As described above, a computer that executes such a program first stores, for example, the program recorded on the portable recording medium or the program transferred from the server computer in its own storage device. Then, at the time of executing processing, the computer reads the program stored in its own storage device and executes processing according to the read program. As another execution form of the program, the computer may directly read the program from the portable recording medium and execute processing according to the program, or every time the program is transferred from the server computer to the computer, processing according to the received program may be sequentially executed. In addition, instead of transferring the program from the server computer to the computer, the above-described processing may be executed by a so-called application service provider (ASP) type service, which realizes a processing function only by an execution instruction and result acquisition. It is assumed that the program in this embodiment includes information that is used for processing by a computer and is equivalent to a program (data that is not a direct command to a computer but has the property of defining the processing of the computer).


Although the device is configured by executing a predetermined program on a computer in each embodiment, at least a part of these processing contents may be realized using hardware.


The present invention is not limited to the above-described embodiment. For example, the sound source separation device 11 and the learning device 12 may be separately configured and connected via a network such as the Internet, and the model parameters may be provided from the learning device 12 to the sound source separation device 11 via the network, or the model parameters may be provided from the learning device 12 to the sound source separation device 11 via a portable recording medium such as a USB memory without going through the network. Alternatively, the sound source separation device 11 and the learning device 12 may be integrally configured, and the model parameters obtained by the learning device 12 may be provided to the sound source separation device 11.


In addition, in the present embodiment, a neural network is used as the model. However, the present invention is not limited thereto; a probability model such as a hidden Markov model may be used as the model, or other models may be used.
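

As a hypothetical illustration of this interchangeability, the sketch below defines an abstract separation interface so that either a neural network or a probability model such as a hidden Markov model can back the same separation step. All class and method names here are assumptions for illustration, not part of the embodiment.

    # Hypothetical sketch: the separation step depends only on an abstract
    # interface, so the underlying model can be swapped.
    from abc import ABC, abstractmethod
    import numpy as np

    class SeparationBackend(ABC):
        @abstractmethod
        def separate(self, mixed_signal: np.ndarray, video_features: np.ndarray) -> np.ndarray:
            """Return separated signals for the given mixture and video features."""

    class NeuralNetworkBackend(SeparationBackend):
        def separate(self, mixed_signal, video_features):
            # A forward pass through a trained network would go here (omitted).
            return np.zeros_like(mixed_signal)

    class HiddenMarkovModelBackend(SeparationBackend):
        def separate(self, mixed_signal, video_features):
            # Inference in a probability model such as an HMM would go here (omitted).
            return np.zeros_like(mixed_signal)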


Further, in the present embodiment, a case in which the sound sources are speakers and the sound is speech has been exemplified. However, the present invention is not limited thereto; the sound sources may include animals, plants, natural objects, natural phenomena, machines, and the like in addition to people, and the sound may include cries, friction sounds, vibration sounds, the sound of rain, the sound of thunder, engine sounds, and the like.


In addition, the above-described various kinds of processing may be performed not only in time series in accordance with the description but also in parallel or individually based on the processing capability of an apparatus performing the processing or as needed. In addition, as a matter of course, it is possible to variously modify the present invention as appropriate without departing from the spirit of the present invention.


REFERENCE SIGNS LIST






    • 11 Sound source separation device


    • 12 Learning device


    • 111, 121 Sound stream processing unit


    • 112, 122 Video stream processing unit


    • 113, 123 Fusion unit


    • 114, 124 Separated signal inference unit


    • 125 Separated signal feature inference unit




Claims
  • 1. A method for separating a sound source, comprising: receiving, as inputs, a mixed acoustic signal and sound source video signals, wherein the mixed acoustic signal represents sound emitted from a plurality of sound sources, and the sound source video signals represent at least one video of the plurality of sound sources; and acquiring a separated signal including a signal representing a target sound emitted from one sound source represented by the video, wherein the acquiring further comprises acquiring the separated signal using at least one of: properties of the sound source, wherein the properties are acquired from the video and affect the sound emitted by the sound source, or features of a structure used for the sound source to emit the sound.
  • 2. A method for separating a sound source, comprising: estimating a separated signal including a signal representing a target sound emitted from a sound source among a plurality of sound sources by applying a mixed acoustic signal and sound source video signals to a model, wherein the mixed acoustic signal represents a mixed sound of sounds emitted from the plurality of sound sources, the sound source video signals represent videos of at least some of the plurality of sound sources, the model is obtained at least by learning based on differences between features of the separated signal and features of teaching data of the sound source video signals, and the features of the separated signal are obtained by applying teaching mixed acoustic signals as teaching data of the mixed acoustic signal and teaching sound source video signals as teaching data of the sound source video signals to the model.
  • 3. The method according to claim 2, wherein the model is obtained at least by learning such that: a similarity between a first element representing the features of the teaching sound source video signals corresponding to a first sound source among the plurality of sound sources and a second element representing the features of the separated signal corresponding to a second sound source different from the first sound source decreases, and a similarity between the first element representing the features of the teaching sound source video signals corresponding to the first sound source and a third element representing the features of the separated signal corresponding to the first sound source increases.
  • 4. The method according to claim 2, wherein the model is further obtained by learning based on differences between the separated signal obtained by applying the teaching mixed acoustic signals and the teaching sound source video signals to the model and teaching sound source signals which are teaching data of the separated signal corresponding to the teaching mixed acoustic signals and the teaching sound source video signals.
  • 5. The method according to claim 1, wherein the sound source video signals represent a video of each of the plurality of sound sources.
  • 6. The method according to claim 1, wherein the plurality of sound sources includes a plurality of different speakers, the mixed acoustic signal includes a speech signal, and the sound source video signals represent videos of the speakers.
  • 7. The method according to claim 6, wherein the sound source video signals represent videos including face videos of the speakers.
  • 8. The method according to claim 1, wherein the separated signal includes a signal representing a target sound emitted from a certain sound source among the plurality of sound sources and a signal representing a sound emitted from another sound source.
  • 9. A sound source separation device comprising a processor configured to execute operations comprising: receiving, as inputs, a mixed acoustic signal and sound source video signals, wherein the mixed acoustic signal represents sound emitted from a plurality of sound sources, and the sound source video signals represent at least one video of the plurality of sound sources; and acquiring a separated signal including a signal representing a target sound emitted from a sound source represented by the at least one video, wherein the acquiring further comprises acquiring the separated signal using at least one of: properties of the sound source, wherein the properties are acquired from the at least one video and affect the sound emitted by the sound source, or features of a structure used for the sound source to emit the sound.
  • 10. (canceled)
  • 11. The method according to claim 2, wherein the sound source video signals represent a video of each of the plurality of sound sources.
  • 12. The method according to claim 2, wherein the plurality of sound sources includes a plurality of different speakers, the mixed acoustic signal includes a speech signal, and the sound source video signals represent videos of the speakers.
  • 13. The method according to claim 2, wherein the separated signal includes a signal representing a target sound emitted from a certain sound source among the plurality of sound sources and a signal representing a sound emitted from another sound source.
  • 14. The method according to claim 3, wherein the model is further obtained by learning based on differences between the separated signal obtained by applying the teaching mixed acoustic signals and the teaching sound source video signals to the model and teaching sound source signals which are teaching data of the separated signal corresponding to the teaching mixed acoustic signals and the teaching sound source video signals.
  • 15. The method according to claim 3, wherein the sound source video signals represent a video of each of the plurality of sound sources.
  • 16. The method according to claim 3, wherein the plurality of sound sources includes a plurality of different speakers, the mixed acoustic signal includes a speech signal, and the sound source video signals represent videos of the speakers.
  • 17. The sound source separation device according to claim 9, wherein the sound source video signals represent a video of each of the plurality of sound sources.
  • 18. The sound source separation device according to claim 9, wherein the plurality of sound sources include a plurality of different speakers, the mixed acoustic signal includes a speech signal, and the sound source video signals represent videos of the speakers.
  • 19. The sound source separation device according to claim 18, wherein the sound source video signals represent videos including face videos of the speakers.
  • 20. The sound source separation device according to claim 9, wherein the separated signal includes a signal representing a target sound emitted from a certain sound source among the plurality of sound sources and a signal representing a sound emitted from another sound source.
PCT Information
  Filing Document: PCT/JP2021/004540
  Filing Date: 2/8/2021
  Country: WO

Related Publications (1)
  Number: 20240135950 A1
  Date: Apr 2024
  Country: US