The present invention relates to a sound source extraction technique.
A sound source extraction technique that takes an observed mixed acoustic signal as an input and estimates source signals of sound sources with noise and reverberation suppressed is widely used, for example, as preprocessing for speech recognition. As a method of performing sound source extraction using a mixed acoustic signal observed with a plurality of microphones, independent vector analysis (IVA), which is a multivariate extension of independent component analysis, has been known.
It is known that, when IVA is used in a real environment, performance deteriorates because of the influence of background noise and reverberation. Concerning the background noise, making the number of microphones M larger than the number of target sound sources K improves the robustness of IVA, but the problem is that it also increases the processing time. As a method of suppressing this increase in processing time and performing sound source extraction at high speed even when the number of microphones M is larger than the number of sound sources K, Over IVA (see, for example, Non-Patent Literature 1) has been known.
With Over IVA, it is possible to perform sound source extraction robust against background noise. However, since Over IVA does not consider reverberation, the problem of performance deterioration caused by reverberation remains.
An object of the present invention, which has been made in view of such a point, is to provide a signal processing technique for performing, at high speed, sound source extraction robust against reverberation in addition to noise.
A signal processing device applies a convolutional separation filter, which is a combined filter of: a rear reverberation removal filter for suppressing a rear reverberation component of a mixed acoustic signal obtained by converting an observed mixed acoustic signal, obtained by observing source signals, into a time-frequency domain; and a sound source separation filter for emphasizing components corresponding to the source signals in the mixed acoustic signal, to a mixed acoustic signal string including the mixed acoustic signal and delay signals of the mixed acoustic signal, and estimates model parameters of a model for obtaining information corresponding to signals in which the rear reverberation component is suppressed and target signals emitted from target sound sources among the source signals are emphasized.
Since the convolutional separation filter is the combined filter of the rear reverberation removal filter and the sound source separation filter, in the present invention, it is possible to perform, at high speed, sound source extraction robust against reverberation in addition to noise.
An embodiment of the present invention is explained below.
First, a principle is explained.
First, a blind sound source extraction problem is defined. It is assumed that target signals (for example, speech signals) emitted from K target sound sources and noise signals emitted from M−K noise sources propagate in the air, are mixed, and are observed by M microphones. Signals obtained by observing, with the M microphones, the source signals emitted from the M sound sources (the target sound sources and the noise sources) are referred to as observed mixed acoustic signals. These source signals include the target signals emitted from the K target sound sources and the noise signals emitted from the M−K noise sources. M is an integer equal to or larger than 2, K is an integer equal to or larger than 1, and 1≤K≤M−1. It is assumed that the target signals are non-stationary and the noise signals are stationary Gaussian noise. Among the M-dimensional mixed acoustic signals obtained by converting the observed mixed acoustic signals observed by the M microphones into a time-frequency (TF) domain (for example, by short-time Fourier transform), the component corresponding to the k-th (k∈{1, . . . , K}) target signal is represented as xk(f, t)∈CM. C represents the set of all complex numbers, Cα represents the set of all α-dimensional vectors consisting of complex number elements, and α∈β represents that α belongs to β. That is, the components corresponding to the target signals among the M-dimensional mixed acoustic signals are x1(f, t), . . . , xK(f, t)∈CM. Among the M-dimensional mixed acoustic signals, the component corresponding to the noise signals emitted from the M−K noise sources is collectively represented as xz(f, t)∈CM, where z is an index denoting the noise sources. Then, the M-dimensional mixed acoustic signals are represented by the following Expression (1).
[Math. 1]
$$x(f,t) := \sum_{k=1}^{K} x_k(f,t) + x_z(f,t) \in \mathbb{C}^{M} \tag{1}$$
where, f∈{1, . . . , F} and t∈{1, . . . , T} are indexes of a frequency bin and a time frame, respectively (indexes of a discrete frequency and a discrete time). F and T are positive integers. α:=β means that α is defined as β.
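To make the time-frequency conversion concrete, the following sketch converts M-channel observed signals with a short-time Fourier transform. It is illustrative only: the sampling rate, the STFT settings, and the use of scipy are assumptions, not part of the embodiment.

```python
# Illustrative sketch: converting M-channel observed mixed acoustic
# signals into the time-frequency domain. Expression (1) operates on the
# resulting vectors x(f, t). All settings below are assumed values.
import numpy as np
from scipy.signal import stft

fs = 16000                        # assumed sampling rate
y = np.random.randn(4, 10 * fs)  # stand-in for M = 4 microphone signals

# X has shape (M, F, T): M channels, F frequency bins, T time frames,
# so x(f, t) is the M-dimensional column X[:, f, t].
_, _, X = stft(y, fs=fs, nperseg=512, noverlap=384)
M, F, T = X.shape
```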
In the following explanation, considering the influence of reverberation, the mixed acoustic signal component xi(f, t) of a sound source i∈{1, . . . , K, z} is decomposed into the sum of di(f, t)∈CM, which combines a direct sound component and an initial reflection component, and a rear reverberation component ri(f, t)∈CM. It is assumed that di(f, t) follows the spatial model described below.
$$x_i(f,t) = d_i(f,t) + r_i(f,t), \quad i \in \{1,\dots,K,z\} \tag{2}$$
$$d_k(f,t) = a_k(f)\,s_k(f,t) \in \mathbb{C}^{M}, \quad k \in \{1,\dots,K\} \tag{3}$$
$$d_z(f,t) = A_z(f)\,z(f,t) \in \mathbb{C}^{M} \tag{4}$$
$$a_k(f) \in \mathbb{C}^{M}, \quad s_k(f,t) \in \mathbb{C}, \quad k \in \{1,\dots,K\} \tag{5}$$
$$A_z(f) \in \mathbb{C}^{M \times (M-K)}, \quad z(f,t) \in \mathbb{C}^{M-K} \tag{6}$$
where, ak(f) and sk(f, t) are the transfer function and the source signal (target signal) of the target sound source k, respectively, and Az(f) and z(f, t) are the matrix representation of the transfer functions and the source signals of the M−K noise sources, respectively. The problem of estimating x1(f, t), . . . , xK(f, t) only from the observed signal under the assumption that the sound sources are independent of one another is known as the blind source separation problem. In contrast, the blind sound source extraction problem treated in this embodiment is defined as the problem of estimating d1(f, t), . . . , dK(f, t), to which reverberation removal is applied in addition to sound source separation. The number of target sound sources K is assumed to be known.
<Probability Model of IVEconv>
The sum of the source signals after removing the rear reverberation component from the mixed acoustic signal x(f, t) is written as indicated by Expression (7).
[Math. 2]
$$d(f,t) := \sum_{k=1}^{K} d_k(f,t) + d_z(f,t) \tag{7}$$
A probability model of IVEconv is defined below using a hyperparameter Δ⊂N. N represents the set of all natural numbers, and α⊂β represents that α is a subset of β.
[Math. 3]
$$d(f,t) = x(f,t) - \sum_{\tau \in \Delta} Q_\tau(f)\,x(f,t-\tau) \tag{8}$$
$$s_k(f,t) = w_k(f)^{\mathsf{H}}\,d(f,t) \in \mathbb{C}, \quad k \in \{1,\dots,K\} \tag{9}$$
$$z(f,t) = W_z(f)^{\mathsf{H}}\,d(f,t) \in \mathbb{C}^{M-K} \tag{10}$$
$$s_k(t) := [s_k(1,t), \dots, s_k(F,t)]^{\mathsf{T}} \in \mathbb{C}^{F} \tag{11}$$
$$s_k(t) \sim \mathcal{CN}(0_F, \lambda_k(t)\,I_F), \quad k \in \{1,\dots,K\} \tag{12}$$
$$z(f,t) \sim \mathcal{CN}(0_{M-K}, I_{M-K}) \tag{13}$$
[Math. 4]
$$p(\{s_k(t), z(f,t)\}_{k,f,t}) = \prod_{k,t} p(s_k(t)) \cdot \prod_{f,t} p(z(f,t)) \tag{14}$$
where, αT is the transpose of α, αH is the Hermitian transpose of α, λk(t) is the power spectrum of sk(t), CN(μ, Σ) is the complex normal distribution with mean vector μ and covariance matrix Σ, Iα is the α×α identity matrix, 0α is the α-dimensional vector all elements of which are 0, β∼CN(μ, Σ) represents that β follows the complex normal distribution CN(μ, Σ), p(α) is the probability of α, wk(f) is a sound source separation filter for emphasizing the component corresponding to the target signal emitted from the k-th target sound source, and Wz(f) is a sound source separation filter for emphasizing the components corresponding to the noise signals emitted from the noise sources.
Model parameters of the probability model of IVEconv are the following four:
Rear reverberation removal filter: Qδ(f) ∈CM×M, δ∈Δ
Sound source separation filter of a target signal: wk(f) ∈CM
Power spectrum of the target signal: λk(t)∈R≥0
Sound source separation filter of a noise signal: Wz(f) ∈CM×(M−K)
R≥0 represents the set of all real numbers equal to or larger than 0.
<Simplification of the Probability Model of IVEconv>
In the model described above, since the reverberation removal filter and the sound source separation filter are in general optimized alternately, the result of the optimization tends to fall into a local solution. Therefore, in this embodiment, the reverberation removal filter and the sound source separation filter, which are the model parameters of the probability model of IVEconv, are converted into one filter obtained by combining both filters, to rewrite the probability model of IVEconv into a simpler model. The elements of the hyperparameter Δ are written as Δ={τ1, . . . , τ|Δ|}, where |Δ| is a positive integer representing the number of elements of Δ. The following are defined.
[Math. 5]
$$\hat{x}(f,t) := [x(f,t)^{\mathsf{T}}, x(f,t-\tau_1)^{\mathsf{T}}, \dots, x(f,t-\tau_{|\Delta|})^{\mathsf{T}}]^{\mathsf{T}} \in \mathbb{C}^{M(|\Delta|+1)}$$
[Math. 6]
$$Q(f) := [I_M, -Q_{\tau_1}(f), \dots, -Q_{\tau_{|\Delta|}}(f)]^{\mathsf{H}} \in \mathbb{C}^{M(|\Delta|+1) \times M}$$
where, Qδ(f) is the rear reverberation removal filter, and x{circumflex over ( )}(f, t) is referred to as a mixed acoustic signal string. Note that the superscript "{circumflex over ( )}" of x{circumflex over ( )}(f, t) should originally be placed immediately above "x" but is sometimes written at the upper right of "x" like x{circumflex over ( )}(f, t) because of notational limitations. With these definitions, Q(f)Hx{circumflex over ( )}(f, t)=d(f, t) holds by Expression (8). At this time, the set of Q(f) and W(f)=[w1(f), . . . , wK(f), Wz(f)] is converted one to one into the following Expression (17) according to the following Expressions (15) and (16).
$$p_k(f) = Q(f)\,w_k(f) \in \mathbb{C}^{M(|\Delta|+1)} \tag{15}$$
$$P_z(f) = Q(f)\,W_z(f) \in \mathbb{C}^{M(|\Delta|+1) \times (M-K)} \tag{16}$$
$$P(f) = [p_1(f), \dots, p_K(f), P_z(f)] \tag{17}$$
where, Cα×β represents the set of all α×β matrices consisting of complex number elements, pk(f)=Q(f)wk(f) is the convolutional separation filter component corresponding to the target signal emitted from the k-th target sound source, and Pz(f)=Q(f)Wz(f) is the convolutional separation filter component corresponding to the noise signals emitted from the noise sources.
In this embodiment, a filter P(f) that simultaneously achieves rear reverberation removal and sound source separation is referred to as a convolutional separation filter. That is, the convolutional separation filter is a combined filter of the rear reverberation removal filter Q(f) for suppressing the rear reverberation component of the mixed acoustic signal x(f, t) and the sound source separation filter W(f) for emphasizing components corresponding to the source signals in the mixed acoustic signal x(f, t). According to this conversion, Expressions (8) to (10) are rewritten as the following Expressions (18) and (19).
[Math. 7]
$$s_k(f,t) = p_k(f)^{\mathsf{H}}\,\hat{x}(f,t) \in \mathbb{C}, \quad k \in \{1,\dots,K\} \tag{18}$$
[Math. 8]
$$z(f,t) = P_z(f)^{\mathsf{H}}\,\hat{x}(f,t) \in \mathbb{C}^{M-K} \tag{19}$$
Consequently, the probability model of IVEconv is organized as Expressions (11) to (14) and (18) to (19). This probability model is a model for applying the convolutional separation filter P(f) to the mixed acoustic signal string x{circumflex over ( )}(f, t) including the mixed acoustic signal x(f, t) and the delay signals x(f, t−τ1), . . . , x(f, t−τ|Δ|) of the mixed acoustic signal and obtaining information corresponding to signals in which the rear reverberation component is suppressed and the target signals sk(f, t) emitted from the target sound sources among the source signals are emphasized. The mixed acoustic signal x(f, t) is a signal obtained by converting an observed mixed acoustic signal obtained by observing source signals into a time-frequency domain. The convolutional separation filter P(f) is a combined filter of the rear reverberation removal filter Qδ(f) for suppressing the rear reverberation component of the mixed acoustic signal x(f, t) and the sound source separation filter W(f) for emphasizing components corresponding to the source signals in the mixed acoustic signal x(f, t). The model parameters of this model are the convolutional separation filter P(f) of Expression (17) and the power spectrum λk(t) of the target signal of Expression (12).
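To make the model concrete, the following sketch forms the mixed acoustic signal string and applies a convolutional separation filter as in Expressions (18) and (19). The array layouts and helper names are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def stack_delays(X, delays):
    """Form the mixed acoustic signal string x^(f, t) by stacking x(f, t)
    with its delayed copies x(f, t - tau), tau in Delta (zero-padded where
    t - tau is out of range). X: (M, F, T)."""
    M, F, T = X.shape
    parts = [X]
    for tau in delays:
        Xd = np.zeros_like(X)
        Xd[:, :, tau:] = X[:, :, :T - tau]       # x(f, t - tau)
        parts.append(Xd)
    return np.concatenate(parts, axis=0)          # (M*(|Delta|+1), F, T)

def apply_filter(P, Xhat):
    """Expressions (18)-(19): each output channel m is p_m(f)^H x^(f, t),
    where p_m(f) is the m-th column of P(f).
    P: (F, D, M), Xhat: (D, F, T) with D = M*(|Delta|+1)."""
    return np.einsum('fdm,dft->mft', P.conj(), Xhat)
```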
<Optimization of the Simplified Probability Model of IVEconv>
Model parameters of the simplified probability model of IVEconv can be estimated by the maximum likelihood method. This is achieved by minimizing the target function J, which is the negative log likelihood, represented by the following Expression (20).
[Math. 9]
$$J := \sum_{k,t} \left( F \log \lambda_k(t) + \frac{\lVert s_k(t) \rVert^2}{\lambda_k(t)} \right) + \sum_{f,t} \lVert z(f,t) \rVert^2 - 2T \sum_{f} \log \lvert \det W(f) \rvert + \text{const.} \tag{20}$$
where, |α| is the absolute value of α, ∥α∥ is the norm of α, det(α) is the determinant of α, and "const." is a constant not depending on the parameters. The matrix consisting of the first M rows of the convolutional separation filter P(f) is W(f)=[w1(f), . . . , wK(f), Wz(f)].
In this embodiment, the convolutional separation filter P(f) and the power spectrum λk(t) of the target signal sk(f, t) are alternately optimized. If the convolutional separation filter P(f) is fixed, the global optimal solution of the power spectrum λk(t) is as follows:
[Math. 10]
$$\lambda_k(t) = \frac{1}{F}\,\lVert s_k(t) \rVert^2 = \frac{1}{F} \sum_{f=1}^{F} \lvert s_k(f,t) \rvert^2 \tag{21}$$
Accordingly, in the power spectrum estimation, the power spectrum λk(t) of the target signals sk(f, t) is estimated according to Expression (21) with the convolutional separation filter P(f) fixed.
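A minimal sketch of this power spectrum update, assuming the target signals are arranged as a (K, F, T) array:

```python
import numpy as np

def update_power_spectrum(S):
    """Expression (21): lambda_k(t) = (1/F) * sum_f |s_k(f, t)|^2.
    S: (K, F, T) target signals; returns lambda of shape (K, T)."""
    return np.mean(np.abs(S) ** 2, axis=1)
```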
When the power spectrum λk(t) of the target signal sk(f, t) is fixed, the problem of optimizing the convolutional separation filter P(f) to minimize the target function J can be divided into F problems of minimizing the target function J with respect to the convolutional separation filters P(1), . . . , P(F) of the respective frequency bins. The problem of minimizing the target function J with respect to the convolutional separation filter P(f) is represented as follows:
[Math. 11]
$$P(f) = \underset{P(f)}{\operatorname{argmin}}\; J_{P(f)} \tag{22}$$
where, the following is satisfied.
[Math. 12]
$$J_{P(f)} = \sum_{k=1}^{K} p_k(f)^{\mathsf{H}} G_k(f)\,p_k(f) + \operatorname{tr}\!\big(P_z(f)^{\mathsf{H}} G_z(f)\,P_z(f)\big) - 2\log\lvert\det W(f)\rvert$$
where, tr(α) is the trace (diagonal partial sum) of α.
Here, Gk(f) and Gz(f) are defined by the following Expressions (23) and (24).
[Math. 13]
$$G_k(f) := \frac{1}{T} \sum_{t=1}^{T} \frac{\hat{x}(f,t)\,\hat{x}(f,t)^{\mathsf{H}}}{\lambda_k(t)} \tag{23}$$
[Math. 14]
$$G_z(f) := \frac{1}{T} \sum_{t=1}^{T} \hat{x}(f,t)\,\hat{x}(f,t)^{\mathsf{H}} \tag{24}$$
Gz(f) is the covariance matrix of the mixed acoustic signal string x{circumflex over ( )}(f, t). Gk(f) can be regarded as a noise covariance matrix when the signals other than the target signal sk(f, t) are regarded as noise. As explained above, in the convolutional separation filter estimation, the convolutional separation filter P(f) that optimizes the target function JP(f) for the mixed acoustic signal at each frequency is estimated for each of the frequencies with the power spectrum λk(t) of the target signals sk(f, t) fixed.
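A minimal sketch of these covariance computations, assuming the signal string is arranged as a (D, F, T) array; a small floor on λk(t) is added here purely for numerical safety:

```python
import numpy as np

def covariances(Xhat, lam, eps=1e-10):
    """Expression (23): G_k(f) = (1/T) sum_t x^ x^H / lambda_k(t);
    Expression (24): G_z(f) = (1/T) sum_t x^ x^H.
    Xhat: (D, F, T), lam: (K, T); returns Gk (K, F, D, D), Gz (F, D, D)."""
    T = Xhat.shape[2]
    Gz = np.einsum('dft,eft->fde', Xhat, Xhat.conj()) / T
    w = 1.0 / np.maximum(lam, eps)                 # (K, T), floored weights
    Gk = np.einsum('kt,dft,eft->kfde', w, Xhat, Xhat.conj()) / T
    return Gk, Gz
```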
The processing of the power spectrum estimation and the processing of the convolutional separation filter estimation explained above are alternately executed until a predetermined condition is satisfied.
A first embodiment is explained with reference to the drawings.
As shown in the drawings, the signal processing device 1 includes an initial setting unit 11, a power-spectrum estimation unit 12, a convolutional-separation-filter estimation unit 13, and a control unit 14.
<Processing>
As explained above, the signal processing device 1 estimates model parameters of a model for applying the convolutional separation filter P(f), which is a combined filter of: a rear reverberation removal filter Qδ(f) for suppressing a rear reverberation component of the mixed acoustic signal x(f, t) obtained by converting an observed mixed acoustic signal, obtained by observing source signals, into a time-frequency domain; and a sound source separation filter W(f) for emphasizing components corresponding to the source signals in the mixed acoustic signal x(f, t), to the mixed acoustic signal string x{circumflex over ( )}(f, t) including the mixed acoustic signal x(f, t) and the delay signals x(f, t−τ1), . . . , x(f, t−τ|Δ|) of the mixed acoustic signal, and obtaining information corresponding to signals in which the rear reverberation component is suppressed and the target signals sk(f, t) emitted from the target sound sources among the source signals are emphasized. The processing is explained in detail below.
<<Processing of the Initial Setting Unit 11 (Step S11)>>
As illustrated in the drawings, the initial setting unit 11 sets initial values, for example an initial value of the convolutional separation filter P(f), and outputs them to the power-spectrum estimation unit 12 (step S11).
<<Processing of the Power-Spectrum Estimation Unit 12 (Step S12)>>
The power-spectrum estimation unit 12 uses x{circumflex over ( )}(f, t) and P(f)=[p1(f), . . . , pK(f), Pz(f)], obtains, for all f and t, the target signal sk(f, t) according to Expression (18), and further obtains the power spectrum λk(t) of the target signal sk(f, t) according to Expressions (11) and (21). That is, the power-spectrum estimation unit 12 estimates the power spectrum λk(t) of the target signals sk(f, t) with the convolutional separation filter P(f) fixed. The power-spectrum estimation unit 12 outputs the power spectrum λk(t) to the convolutional-separation-filter estimation unit 13 (step S12).
<<Processing of the Convolutional-Separation-Filter Estimation Unit 13 (Step S13)>>
The convolutional-separation-filter estimation unit 13 estimates, with the power spectrum λk(t) of the target signals sk(f, t) fixed, for each of the frequencies f∈{1, . . . , F}, the convolutional separation filter P(f) that optimizes (minimizes) the target function JP(f) (Expression (22)) for the mixed acoustic signal at the frequency. This is equivalent to solving the problem of minimizing the target function J with respect to the convolutional separation filter P(f) in each of the frequency bins f=1, . . . , F. For example, this is performed by the update processing described below.
Update processing of P(f) (steps S131 to S135) is performed as follows.
First, the control unit 133 sets k=1 (step S133a).
Subsequently, the qk(f) operation unit 131 takes P(f) and Gk(f)−1 as inputs, obtains, for all f, qk(f) according to Expression (25), and outputs qk(f).
[Math. 15]
$$q_k(f) = G_k(f)^{-1} \begin{bmatrix} W(f)^{-\mathsf{H}} e_k \\ 0_{M|\Delta|} \end{bmatrix} \tag{25}$$
where, as explained above, the matrix consisting of the first M rows of P(f) is W(f)=[w1(f), . . . , wK(f), Wz(f)], ek is the M-dimensional unit vector whose k-th component is 1, 0M|Δ| is the M|Δ|-dimensional vector all elements of which are 0, and α−H is the Hermitian transpose of the inverse matrix of α (step S131).
The pk(f) operation unit 132 takes qk(f), x{circumflex over ( )}(f, t), and λk(t) as inputs, obtains, for all f, pk(f) according to Expressions (23) and (26), and outputs pk(f) (step S132).
[Math. 16]
$$p_k(f) = \frac{q_k(f)}{\sqrt{q_k(f)^{\mathsf{H}}\,G_k(f)\,q_k(f)}} \tag{26}$$
The control unit 133 determines whether k=K (step S133b). When k≠K, the control unit 133 sets k+1 as new k (step S133c) and returns the processing to step S131. On the other hand, when k=K, the Pz(f) operation unit 134 takes Gz(f)−1 and pk(f) as inputs, obtains, for all f, Pz(f) according to Expression (27), and outputs Pz(f).
where, ek is the M-dimensional unit vector whose k-th component is 1, Ez:=[eK+1, . . . , eM]∈CM×(M−K), Es:=[e1, . . . , eK]∈CM×K, Ws(f):=[w1(f), . . . , wK(f)]∈CM×K, and 0α×β is the α×β matrix all elements of which are 0. As explained above, the matrix consisting of the first M rows of P(f) is W(f)=[w1(f), . . . , wK(f), Wz(f)] (step S134).
The pk(f) operation unit 132 outputs pk(f) for all k and f. The Pz(f) operation unit 134 outputs Pz(f) for all f. That is, the convolutional-separation-filter estimation unit 13 outputs the optimized convolutional separation filter P(f)=[p1(f), . . . , pK(f), Pz(f)]. Further, the convolutional-separation-filter estimation unit 13 may normalize P(f) after the update and output P(f) after the normalization.
Consequently, it is possible to improve numerical stability. However, this normalization is not essential and may be omitted (step S135).
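The following sketch illustrates one pass of the target filter updates of steps S131 to S133, based on the iterative-projection-style Expressions (25) and (26). The array layouts are assumptions; the noise filter update of Expression (27) and the optional normalization of step S135 are omitted for brevity.

```python
import numpy as np

def update_target_filters(P, Gk, K, M):
    """One pass of steps S131-S133 over k = 1..K.
    P: (F, D, M) convolutional separation filters, D = M*(|Delta|+1);
    Gk: (K, F, D, D) weighted covariances from Expression (23)."""
    F, D, _ = P.shape
    for f in range(F):
        for k in range(K):
            W = P[f, :M, :]                        # first M rows: W(f)
            # W(f)^{-H} e_k, obtained by solving W(f)^H b = e_k
            b = np.linalg.solve(W.conj().T, np.eye(M, dtype=complex)[:, k])
            rhs = np.zeros(D, dtype=complex)
            rhs[:M] = b                            # [W^{-H} e_k; 0]
            q = np.linalg.solve(Gk[k, f], rhs)     # Expression (25)
            scale = np.sqrt(np.real(q.conj() @ Gk[k, f] @ q))
            P[f, :, k] = q / scale                 # Expression (26)
    return P
```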
As explained above, the convolutional-separation-filter estimation unit 13 solves the problem of Expression (22) by the update processing described above.
<<Processing of the Control Unit 14 (Step S14)>>
The control unit 14 determines whether a predetermined condition is satisfied. An example of the predetermined condition is that the number of times of repetition of the power spectrum estimation (step S12) and the convolutional separation filter estimation (step S13) reaches a predetermined number, or that the update amount of the model parameters is equal to or smaller than a predetermined threshold. When the predetermined condition is not satisfied, the control unit 14 returns the processing to step S12. On the other hand, when the predetermined condition is satisfied, the control unit 14 advances the processing to step S15. That is, the control unit 14 alternately executes the processing of the power-spectrum estimation unit 12 and the processing of the convolutional-separation-filter estimation unit 13 until the predetermined condition is satisfied (step S14).
In step S15, for all f and k, the power-spectrum estimation unit 12 outputs the target signal sk(f, t) optimized as explained above, and the convolutional-separation-filter estimation unit 13 outputs the convolutional separation filter P(f) optimized as explained above (step S15).
In this embodiment, since the model uses the convolutional separation filter, which combines the rear reverberation removal filter and the sound source separation filter into one filter, it is possible to perform, at high speed, sound source extraction robust against reverberation in addition to noise. The processing explained above can also be executed as real-time processing.
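A compact driver loop illustrating the alternation of steps S12 to S14, reusing the helper functions sketched above; as assumptions, a fixed iteration count stands in for the stopping condition of step S14, and the noise filter part of P(f) is left untouched for brevity.

```python
import numpy as np

def iveconv(Xhat, P0, K, M, n_iter=30):
    """Alternate power spectrum estimation (step S12) and convolutional
    separation filter estimation (step S13).
    Xhat: (D, F, T) mixed acoustic signal string, P0: (F, D, M)."""
    P = P0.copy()
    for _ in range(n_iter):
        # Expression (18): s_k(f, t) = p_k(f)^H x^(f, t)
        S = np.einsum('fdk,dft->kft', P[:, :, :K].conj(), Xhat)
        lam = update_power_spectrum(S)          # step S12, Expression (21)
        Gk, _ = covariances(Xhat, lam)          # Expression (23)
        P = update_target_filters(P, Gk, K, M)  # step S13
    return P
```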
Subsequently, a second embodiment is explained. When the number of target sound sources K is 1, the convolutional separation filter can be optimized at a higher speed. This scheme is explained in the second embodiment. The second embodiment is different from the first embodiment in the limitation to K=1 and in the optimization procedure of the convolutional separation filter. In the following, differences from the matters explained above are mainly explained; the matters explained above are denoted by the same reference numbers, and their explanation is simplified.
[Configuration]
As illustrated in the drawings, the signal processing device 2 includes an initial setting unit 21, the power-spectrum estimation unit 12, a convolutional-separation-filter estimation unit 23, and the control unit 14.
<Processing>
In this embodiment as well, the signal processing device 2 estimates model parameters of a model for applying the convolutional separation filter P(f) to the mixed acoustic signal string x{circumflex over ( )}(f, t) including the mixed acoustic signal x(f, t) and the delay signals x(f, t−τ1), . . . , x(f, t−τ|Δ|) of the mixed acoustic signal and obtaining information corresponding to signals in which the rear reverberation component is suppressed and the target signals sk(f, t) emitted from the target sound sources among the source signals are emphasized. The processing is explained in detail below.
<<Processing of the Initial Setting Unit 21 (Step S21)>>
As illustrated in the drawings, the initial setting unit 21 sets initial values, for example an initial value of the convolutional separation filter P(f), and outputs them to the power-spectrum estimation unit 12 (step S21).
<<Processing of the Power-Spectrum Estimation Unit 12 (Step S12)>>
As explained in the first embodiment, the power-spectrum estimation unit 12 estimates the power spectrum λk(t) of target signals sk(f, t) with the convolutional separation filter P(f) fixed. The power-spectrum estimation unit 12 outputs the power spectrum λk(t) to the convolutional-separation-filter estimation unit 23 (step S12).
<<Processing of the Convolutional-Separation-Filter Estimation Unit 23 (Step S23)>>
The convolutional-separation-filter estimation unit 23 estimates, with the power spectrum λk(t) of the target signals sk(f, t) fixed, for each of the frequencies f∈{1, . . . , F}, the convolutional separation filter P(f) that optimizes (minimizes) the target function JP(f) (Expression (22)) for the mixed acoustic signal at the frequency. For example, this is performed by the update processing described below.
Update processing of P(f) (steps S231 to S234) is performed as follows.
The equation solving unit 231 uses x{circumflex over ( )}(f, t) and λ1(t) and obtains, for all f, G1(f) according to Expression (23). Further, the equation solving unit 231 calculates, for all f, an M×M matrix V1(f)∈CM×M and an L×M matrix C(f)∈CL×M, where L:=M|Δ|, satisfying the equation of Expression (28), and outputs them.
[Math. 21]
$$G_1(f) \begin{bmatrix} V_1(f) \\ C(f) \end{bmatrix} = \begin{bmatrix} I_M \\ 0_{L \times M} \end{bmatrix} \tag{28}$$
The M×M matrix V1(f) is output to the eigenvalue-problem solving unit 232 and the p1(f) operation unit 234, and the L×M matrix C(f) is output to the p1(f) operation unit 234 (step S231).
The eigenvalue-problem solving unit 232 takes V1(f) and Vz(f) as inputs, solves, for all f, the generalized eigenvalue problem V1(f)q=λVz(f)q, obtains the eigenvector q=a1(f) corresponding to the maximum eigenvalue λ, and outputs it. The eigenvector a1(f) is output to the p1(f) operation unit 234 (step S232).
The p1(f) operation unit 234 takes V1(f), a1(f), and C(f) as inputs, calculates, for all f, the convolutional separation filter component p1(f) according to Expression (29), and outputs p1(f) (step S234).
[Math. 22]
$$p_1(f) = \begin{bmatrix} V_1(f) \\ C(f) \end{bmatrix} a_1(f) \Big/ \sqrt{a_1(f)^{\mathsf{H}}\,V_1(f)\,a_1(f)} \tag{29}$$
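The following sketch illustrates steps S231 to S234 for one frequency bin, based on Expressions (28) and (29): V1(f) and C(f) come from one linear solve against G1(f), the steering vector a1(f) is the principal generalized eigenvector of the pair (V1(f), Vz(f)), and p1(f) is the normalized back-projection. As assumptions, Vz(f) is taken to be supplied (computed from Gz(f) in the same manner as V1(f)), and the array layouts are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def update_p1(G1_f, Vz_f, M):
    """Steps S231-S234 for one frequency bin.
    G1_f: (D, D) weighted covariance; Vz_f: (M, M); returns p1 of size D."""
    D = G1_f.shape[0]                         # D = M*(|Delta|+1)
    E = np.zeros((D, M), dtype=complex)
    E[:M, :] = np.eye(M)
    VC = np.linalg.solve(G1_f, E)             # [V1; C], Expression (28)
    V1_f = VC[:M, :]
    V1_f = 0.5 * (V1_f + V1_f.conj().T)       # enforce Hermitian numerically
    # Generalized eigenproblem V1 q = lambda Vz q; eigh returns ascending
    # eigenvalues, so the last column is the principal eigenvector a1(f).
    _, vecs = eigh(V1_f, Vz_f)
    a1 = vecs[:, -1]
    scale = np.sqrt(np.real(a1.conj() @ V1_f @ a1))
    return VC @ a1 / scale                    # Expression (29)
```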
<<Processing of the Control Unit 14 (Step S14)>>
The control unit 14 determines whether a predetermined condition is satisfied. When the predetermined condition is not satisfied, the control unit 14 returns the processing to step S12. On the other hand, when the predetermined condition is satisfied, the control unit 14 advances the processing to step S25.
In step S25, first, the convolutional-separation-filter estimation unit 23 obtains Pz(f) for all f and outputs Pz(f) as explained in the first embodiment. Further, for all f, the power-spectrum estimation unit 12 outputs the target signal s1(f, t) optimized as explained above, and the convolutional-separation-filter estimation unit 23 outputs the convolutional separation filter P(f)=[p1(f), Pz(f)] optimized as explained above (step S25).
Note that the eigenvalue-problem solving unit 232 may obtain the eigenvector q=a1(f) corresponding to the maximum eigenvalue λ in step S232 according to the following Expression (30).
where, the inverse matrices Vz(f)−1 and V1(f)−1 of Vz(f) and V1(f) can be regarded as the covariance matrices of the mixed acoustic signal string and of a noise signal string, respectively, after removal of the influence of reverberation. Therefore, the processing of Expression (30) can be understood as steering vector estimation based on MaxSNR. Step S234 is equivalent to the calculation of a convolutional beamformer. Therefore, IVEconv by the convolutional-separation-filter estimation unit 23 can be considered equivalent to repeating the MaxSNR-based steering vector estimation and sound source extraction by the convolutional beamformer.
In a third embodiment, the sum dk(f, t) of the direct sound component and the initial reflection component of the target signal sk(f, t) is obtained from the target signal sk(f, t) and the convolutional separation filter P(f) optimized in the first embodiment, the second embodiment, or the modification of the second embodiment, and is output.
As illustrated in the drawings, the signal extraction device 3 takes, as inputs, the optimized target signal sk(f, t) and the optimized convolutional separation filter P(f), obtains, for all k, f, and t, dk(f, t) according to the following Expression (31), and outputs dk(f, t).
[Math. 24]
$$d_k(f,t) = \big(W(f)^{-\mathsf{H}} e_k\big)\,s_k(f,t) \tag{31}$$
Thereafter, the obtained dk(f, t) may be used for other processing in the time-frequency domain or may be converted back into the time domain.
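A minimal sketch of this back-projection, assuming the optimized filters and target signals are arranged as (F, D, M) and (K, F, T) arrays:

```python
import numpy as np

def extract_dk(P, S, M):
    """Expression (31): d_k(f, t) = (W(f)^{-H} e_k) s_k(f, t).
    P: (F, D, M) optimized filters, S: (K, F, T) optimized target signals;
    returns d of shape (K, M, F, T)."""
    K, F, T = S.shape
    d = np.empty((K, M, F, T), dtype=complex)
    for f in range(F):
        W = P[f, :M, :]                       # W(f): first M rows of P(f)
        A = np.linalg.inv(W.conj().T)         # W(f)^{-H}
        for k in range(K):
            d[k, :, f, :] = np.outer(A[:, k], S[k, f, :])
    return d
```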
[Experiment]
In an experiment, performance evaluation of the four methods listed in Table 1 was performed. In Table 1, (a) is a conventional method described in "N. Ono, Proc. WASPAA, pp. 189-192, 2011" (reference document 1), (b) is a conventional method described in "R. Scheibler and N. Ono, arXiv preprint arXiv:1910.10654, 2019" (reference document 2), and (c) is a conventional method based on "T. Yoshioka and T. Nakatani, IEEE Trans. ASLP, vol. 20, no. 10, pp. 2707-2720, 2012" (reference document 3). Here, (c) alternately optimizes WPE and IVA and is a sped-up version of the alternate optimization of WPE and ICA (IVA) proposed in reference document 3. The experiment conditions are as shown in Table 2. Note that RTF (real-time factor) represents processing speed. In (a) and (c), among the M (>K) outputs, the K outputs having the largest power were selected as the sound source extraction result, and the SDR/SIR was calculated. The effectiveness of the method of this embodiment was confirmed from Table 1.
[Hardware Configuration]
The signal processing devices 1 and 2 and the signal extraction device 3 in the embodiments are devices configured by a general-purpose or dedicated computer including a processor (a hardware processor) such as a CPU (central processing unit) and a memory such as a RAM (random-access memory) or a ROM (read-only memory) executing a predetermined program. The computer may include one processor and one memory or may include a plurality of processors and a plurality of memories. The program may be installed in the computer or may be recorded in the ROM or the like in advance. A part or all of the processing units may be configured using not electronic circuitry that realizes a functional configuration by reading a program, like a CPU, but electronic circuitry that independently realizes processing functions. Electronic circuitry configuring one device may include a plurality of CPUs.
The program explained above can be recorded in a computer-readable recording medium. An example of the computer-readable recording medium is a non-transitory recording medium. Examples of such a recording medium are a magnetic recording device, an optical disk, a magneto-optical recording medium, a semiconductor memory, and the like.
Distribution of the program is performed by, for example, selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM on which the program is recorded. Further, the program may be stored in a storage device of a server computer and distributed by being transferred from the server computer to other computers via a network. A computer that executes such a program first stores, for example, in its own storage device, the program recorded on the portable recording medium or the program transferred from the server computer. At the time of execution of processing, the computer reads the program stored in its own storage device and executes processing conforming to the read program. As another execution form of the program, the computer may directly read the program from the portable recording medium and execute processing conforming to the program. Further, every time the program is transferred to the computer from the server computer, the computer may sequentially execute processing conforming to the received program. Alternatively, without transferring the program from the server computer to the computer, the processing explained above may be executed by a service of a so-called ASP (Application Service Provider) type that realizes a processing function only through an execution instruction and result acquisition. Note that the program in this embodiment includes information that is used for processing by an electronic computer and is equivalent to a program (data or the like that is not a direct command to the computer but has a characteristic of specifying processing of the computer).
In the embodiments, the devices are configured by causing the computer to execute the predetermined programs. However, at least a part of the processing content of the devices may be realized by hardware.
Note that the present invention is not limited to the embodiments explained above. For example, the various kinds of processing explained above may be not only executed in time series according to the description but also executed in parallel or individually according to processing abilities of the devices that execute the processing or according to necessity. Besides, it goes without saying that changes are possible as appropriate in a range not departing from the gist of the present invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2020/007643 | 2/26/2020 | WO |