Self-attention-based speech quality measuring method and system for real-time air traffic control

Information

  • Patent Grant
  • 12051440
  • Patent Number
    12,051,440
  • Date Filed
    Thursday, February 29, 2024
  • Date Issued
    Tuesday, July 30, 2024
  • Inventors
  • Original Assignees
    • Civil Aviation Flight University of China
  • Examiners
    • Sirjani; Fariba
  • Agents
    • Pilloff Passino & Cosenza LLP
    • Pilloff; Rachel K.
    • Passino; Sean A.
Abstract
Disclosed are a self-attention-based speech quality measuring method and system for real-time air traffic control, including the following steps: acquiring real-time air traffic control speech data and generating speech information frames; detecting the speech information frames, discarding unvoiced information frames of the speech information frames, and generating a voiced long speech information frame; and performing mel spectrogram conversion, attention extraction and feature fusion on the long speech information frame to obtain a predicted mos value.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority of Chinese Patent Application No. 202310386970.9 filed on Apr. 12, 2023, the entire contents of which are incorporated herein by reference.


TECHNICAL FIELD

The application relates to the technical field of aviation air traffic management, and in particular to a self-attention-based speech quality measuring method and system for real-time air traffic control.


BACKGROUND

The quantitative evaluation of speech quality for air traffic control has always been one of the difficult problems in the aviation industry, and control speech is the most important means of communication between controllers and flight crews. At present, the main flow for processing control speech is as follows: firstly, the control speech data is obtained by automatic speech recognition (ASR) technology, and then the speech information is extracted from the control speech data and analyzed by Natural Language Processing (NLP). The correctness of the speech recognition result is therefore the most critical part of control speech processing, and the quality of the control speech itself is an important factor affecting that correctness.


At present, there are two main methods for evaluating speech quality: an objective evaluation method based on numerical operations, and a subjective evaluation method based on expert scoring. The subjective evaluation method is the most typical method in speech quality measurement and takes the Mean Opinion Score (MOS) value as the index of speech quality evaluation. Generally, the recommendations of ITU-T P.800 and P.830 are adopted for the MOS value. Different listeners compare their subjective impressions of the original corpus and the degraded corpus after system processing and assign MOS values, and the average of these values is then taken. The average value is distributed between 0 and 5, with 0 representing the worst quality and 5 representing the best quality.


Subjective speech quality measurement has the advantage of being intuitive, but it also has the following shortcomings: firstly, because of the nature of MOS scoring itself, evaluating a single speech sample takes a long time and is costly; secondly, the scoring can only be carried out offline and fails to process streaming control speech in real time; lastly, scoring is very sensitive to the unvoiced part of speech, so the unvoiced part must be removed before evaluation.


SUMMARY

The application aims at overcoming the problems that the scoring system in the prior art consumes a long time, fails to process streaming speech in real time, and is sensitive to the unvoiced part of speech, and provides a self-attention-based speech quality measuring method and system for real-time air traffic control.


In order to achieve the above objective, the present application provides the following technical scheme.


A self-attention-based speech quality measuring method for real-time air traffic control includes:


S1, acquiring real-time air traffic control speech data, time stamping and encapsulating, and then combining with control data for secondary encapsulating to generate speech information frames;


S2, detecting the speech information frames, dividing them into an unvoiced information frame queue and a voiced information frame queue, and predetermining a time length; when the length of either queue into which the speech information frames are inserted exceeds the predetermined time length, dequeueing the speech information frames in the unvoiced information frame queue and the voiced information frame queue at the same time, discarding the dequeued information frames of the unvoiced information frame queue, detecting the dequeued information frames of the voiced information frame queue, and merging information longer than 0.2 second to generate a long speech information frame; and


S3, processing the long speech information frame through a self-attention neural network and obtaining a predicted mos value, where the neural network includes a mel spectrum auditory filtering layer, an adaptive convolutional neural network layer, a transformer attention layer and a self-attention pooling layer.


Optionally, in the S2, the long speech information frame is generated with the start time of the speech information frame at the head of the voiced information frame queue as its start time and the end time of the speech information frame at the tail of the voiced information frame queue as its end time, and the control data is mergeable with the long speech information frame at a self-defined time.


Optionally, the mel spectrum auditory filtering layer converts the long speech information frame into a power spectrum, and the power spectrum is then dot-multiplied with mel filter banks to map the power into a mel frequency and distribute the mel frequency linearly. The following formula is used for the mapping:








$$
H_m(k)=
\begin{cases}
0, & k<f(m-1)\\[4pt]
\dfrac{k-f(m-1)}{f(m)-f(m-1)}, & f(m-1)\le k\le f(m)\\[4pt]
\dfrac{f(m+1)-k}{f(m+1)-f(m)}, & f(m)\le k\le f(m+1)\\[4pt]
0, & k>f(m+1)
\end{cases},
$$






where k represents an input frequency and is used to calculate the response Hm(k) of each mel filter, m represents the serial number of the filter, and f(m−1), f(m) and f(m+1) respectively correspond to the starting point, intermediate point and ending point of the m-th filter; a mel spectrogram is generated after the dot product.


Optionally, converting the long speech information frame into the power spectrum includes differentially enhancing high-frequency components in the long speech information frame to obtain an information frame, segmenting and windowing the information frame, and then converting a processed information frame into the power spectrum by using Fourier transform.


Optionally, the adaptive convolutional neural network layer includes a convolutional layer and an adaptive pool, resamples the mel spectrogram, merges data convolved by convolution kernels in the convolutional layer into a tensor, and then normalizes the tensor into a feature vector.


Optionally, the transformer attention layer applies a multi-head attention model to embed the feature vector for time-sequence processing, applies learning matrices to transform the processed vector, and applies a calculation formula to compute the attention weight of the transformed vector. The calculation formula is as follows:








$$
W_{\text{attention}}=\left(\frac{e^{QK^{T}/\sqrt{d}}}{\sum_{i=0}^{n}e^{QK^{T}/\sqrt{d}}}\right)V,
$$




where KT is the transpose of the K matrix, √d is the length of the feature vector and Wattention is the weight, and the attention vector Xattention is obtained by taking the dot product of the weight with the feature vector.


Optionally, after the extraction of the attention vector is completed, a multi-head attention vector Yattention′ is calculated by using the multi-head attention model; Yattention′ is normalized by layernorm to obtain Ylayernorm and then activated by gelu to obtain the final attention vector Yattention. The calculation formula is as follows:

Yattention′=concat[Xattention1,Xattention2, . . . ,Xattentionm]1*n*W0


where concat is a vector concatenation operation and W0 is a learnable multi-head attention weight matrix;


a gelu activation formula is as follows:







$$
Y_{\text{attention}}=0.5\,Y_{\text{layernorm}}\left(1+\tanh\!\left(\sqrt{\frac{2}{\pi}}\left(Y_{\text{layernorm}}+0.044715\,Y_{\text{layernorm}}^{3}\right)\right)\right).
$$






Optionally, the self-attention pooling layer compresses the length of the attention vector through a feed-forward network, codes and masks the vector part beyond the length, normalizes a coded masked vector, dot-products the coded masked vector with the final attention vector, and a dot-product vector passes through a fully connected layer to obtain a predicted mos value vector.


Optionally, the mos value is linked with the corresponding long speech information frame to generate real-time measurement data.


In order to achieve the above objective, the application also provides the following technical scheme.


A self-attention-based speech quality measuring system for real-time air traffic control includes a processor, a network interface and a memory. The processor, the network interface and the memory are connected with each other. The memory is used for storing a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the self-attention-based speech quality measuring method for real-time air traffic control.


Compared with the prior art, the application has the following beneficial effects.


According to the self-attention-based speech quality measuring method and system for real-time air traffic control provided by the application, the streaming speech data input in real time is sampled at a fixed interval, stored in the form of bits, and then encapsulated with the control data and merged into speech information frames, which solves the problem that real-time speech data fails to be processed and stored at the same time. The problem of long silences in the real-time speech data is solved through the cooperative processing of the voiced and unvoiced queues, so that the influence of unvoiced speech on the evaluation is avoided and the objectivity of speech evaluation is improved. Finally, based on the processing of the self-attention neural network and taking the mos scoring framework as a model, the real-time control speech data is scored by simulating an expert system, and machine scoring replaces manual labor, which solves the problem that speech evaluation takes a long time and can only be carried out offline, and realizes the real-time scoring of control speech.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart of generating real-time speech information frame according to the present application.



FIG. 2 is a flowchart of processing voiced and unvoiced information frame queues according to the present application.



FIG. 3 is a flow chart of processing by mel spectrogram auditory filtering layer according to the present application.



FIG. 4 is a schematic diagram of convolutional neural network processing according to the present application.



FIG. 5 is a flowchart of resampling mel spectrogram according to the present application.



FIG. 6 is a flowchart of processing by a transformer attention layer and an attention model according to the present application.



FIG. 7 is a flow chart of processing by a self-attention pooling layer according to the present application.



FIG. 8 is a flow chart of a self-attention-based speech quality measuring method for real-time air traffic control.





DETAILED DESCRIPTION OF THE EMBODIMENTS

In the following, the application will be further described in detail in combination with experimental examples and specific embodiments. However, it should not be understood that the scope of the above-mentioned subject matter of the present application is limited to the following embodiments, and all technologies achieved based on the contents of the present application belong to the scope of the present application.


Embodiment 1

As shown in FIG. 8, the self-attention-based speech quality measuring method for real-time air traffic control provided by the present application includes:


S1, acquiring real-time air traffic control speech data, time stamping and encapsulating, and then combining with control data for secondary encapsulating to generate speech information frames;


S2, detecting the speech information frames, dividing them into an unvoiced information frame queue and a voiced information frame queue, and predetermining a time length; when the length of either queue into which the speech information frames are inserted exceeds the predetermined time length, dequeueing the speech information frames in the unvoiced information frame queue and the voiced information frame queue at the same time, discarding the dequeued information frames of the unvoiced information frame queue, detecting the dequeued information frames of the voiced information frame queue, and merging information longer than 0.2 second in the dequeued information frames of the voiced information frame queue to generate a long speech information frame; and


S3, obtaining a predicted mos value through a self-attention neural network, where the neural network includes a mel spectrum auditory filtering layer, an adaptive convolutional neural network layer, a transformer attention layer and a self-attention pooling layer.


Specifically, the S3 includes:


S31, differentially enhancing the high-frequency components in the long speech information frame to obtain an information frame, segmenting and windowing the information frame, and then converting the processed information frame into a power spectrum by using Fast Fourier Transform (FFT), and dot-producting the power spectrum with mel filter banks to generate a mel spectrogram;


S32, resampling the mel spectrogram segment based on a convolutional neural network containing a convolutional layer and adaptive pooling to generate a feature vector;


S33, extracting attention from the feature vector and generating an attention vector based on the transformer attention layer and the multi-head attention model;


S34, performing feature fusion on the attention vector based on the self-attention pooling layer to obtain a predicted mos value; and


S35, linking the mos value and the corresponding long speech information frame to generate real-time measurement data.


Specifically, in the measuring method provided by the present application, the S1 is for processing and generating the real-time speech information frames. Referring to FIG. 1, the real-time analysis thread stores the speech data in the internal memory in the form of bits; at the same time, the real-time recording thread starts timing, takes the speech data out of the internal memory at a time interval of 0.1 second, and stamps the speech data with a time tag for the first encapsulation. After this encapsulation, the speech data is encapsulated with the control data a second time to form a speech information frame. The control data include the latitude and longitude of the aircraft, the wind speed, and other real-time air traffic control data. The generated speech information frame is the minimum processing unit for the subsequent steps.
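As a concrete, non-limiting illustration of this two-stage encapsulation, the sketch below builds a frame object every 0.1 second; the names SpeechFrame, read_pcm_chunk and control_snapshot are hypothetical and not part of the patent.

```python
import time
from dataclasses import dataclass, field

FRAME_INTERVAL_S = 0.1                      # the embodiment samples the stream every 0.1 s

@dataclass
class SpeechFrame:
    start_time: float                       # time tag added at the first encapsulation
    pcm_bits: bytes                         # speech data stored in bit/byte form
    control_data: dict = field(default_factory=dict)   # e.g. aircraft lat/lon, wind speed

def frame_stream(read_pcm_chunk, control_snapshot):
    """Yield speech information frames from a real-time speech source."""
    while True:
        t0 = time.time()
        chunk = read_pcm_chunk(FRAME_INTERVAL_S)        # 0.1 s of speech from the recording thread
        if chunk is None:                               # stream ended
            break
        # second encapsulation: attach the current control data to the timestamped speech
        yield SpeechFrame(start_time=t0, pcm_bits=chunk, control_data=control_snapshot())
```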


Specifically, in the measuring method provided by the present application, the S2 is for detecting and synthesizing voiced or unvoiced content in the speech information frames. Referring to FIG. 2, among the detected speech information frames, frames containing voice are added to the voiced information frame queue, and frames without voice are added to the unvoiced information frame queue. The length of both queues is fixed at 33; in other words, the maximum number of inserted speech frames is 33, and the total speech length is 3.3 seconds. When either the voiced information frame queue or the unvoiced information frame queue is full, the speech information frames in both queues are dequeued at the same time, the dequeued information in the unvoiced information frame queue is discarded, and the dequeued speech information frames in the voiced information frame queue are detected.


The dequeued speech information frames are checked to determine whether their count is greater than 2, in other words, whether the total speech duration in the dequeued speech frame queue exceeds 0.2 second, the shortest control speech instruction duration. If the dequeued speech information frame length is less than 2, the frames are discarded; otherwise the data is merged. The data merging step combines the bit-form speech into a long speech information frame and saves the long speech information frame in external memory.


In generating the long speech information frame, the starting time of the speech information frame at the head of the voiced information frame queue is taken as the starting time, the ending time of the speech information frame at the tail of the voiced information frame queue is taken as the ending time, and the control data encapsulated with the speech information frames may be merged with the long speech information frame at a self-defined time.
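A minimal sketch of this queue logic is given below, assuming a has_voice(frame) voice-activity detector and frame objects like the SpeechFrame of the previous sketch; the queue length of 33 and the 2-frame threshold follow the embodiment, everything else is illustrative.

```python
from collections import deque

QUEUE_LEN = 33          # 33 frames x 0.1 s = 3.3 s per queue
MIN_VOICED_FRAMES = 2   # 2 frames = 0.2 s, the shortest control instruction

class VoiceQueues:
    def __init__(self, has_voice):
        self.has_voice = has_voice          # assumed VAD: frame -> bool
        self.voiced, self.unvoiced = deque(), deque()

    def push(self, frame):
        """Insert one speech information frame; return a long frame when one is produced."""
        (self.voiced if self.has_voice(frame) else self.unvoiced).append(frame)
        if len(self.voiced) < QUEUE_LEN and len(self.unvoiced) < QUEUE_LEN:
            return None
        dequeued = list(self.voiced)        # dequeue both queues together
        self.voiced.clear()
        self.unvoiced.clear()               # dequeued unvoiced frames are simply discarded
        if len(dequeued) < MIN_VOICED_FRAMES:
            return None                     # shorter than 0.2 s: discard
        return {                            # merge the bit-form speech into one long frame
            "start": dequeued[0].start_time,
            "end": dequeued[-1].start_time + 0.1,
            "pcm_bits": b"".join(f.pcm_bits for f in dequeued),
        }
```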


Specifically, in the measuring method provided by the present application, the S31 is for emphasizing the long speech information frame, differentially enhancing it, converting it into a power spectrum, and generating a mel spectrogram, as shown in FIG. 3. Firstly, the input long speech information frame is denoted x[1 . . . n], and a first-order difference is applied in the time domain. The difference formula is:

y[n]=x[n]−αx[n−1],


where α is taken as 0.95 and y[n] is the long speech information frame after differential enhancement. This step then segments the long speech information frame: in this embodiment, 20 milliseconds is chosen as the segment length, and, in order to preserve the information between two frames, 10 milliseconds is taken as the interval between two adjacent frames.
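A small numpy sketch of the pre-emphasis and framing under these settings follows; the 8000 Hz sample rate is an assumption for illustration only.

```python
import numpy as np

def pre_emphasize(x, alpha=0.95):
    # y[n] = x[n] - alpha * x[n-1]
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_signal(y, sample_rate=8000, frame_ms=20, hop_ms=10):
    """Split into 20 ms frames with a 10 ms hop (assumes len(y) >= one frame)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    n_frames = 1 + (len(y) - frame_len) // hop_len
    idx = np.arange(frame_len)[None, :] + hop_len * np.arange(n_frames)[:, None]
    return y[idx]                           # shape: (n_frames, frame_len)
```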


The framed long speech information frame is windowed with a Hamming window to obtain better sidelobe attenuation, and the speech signal is then converted into a power spectrum by the fast Fourier transform (FFT), whose formulae are:








$$
X(2l)=\sum_{n=0}^{N/2-1}\Big[x(n)+x\!\big(n+\tfrac{N}{2}\big)\Big]W_N^{2ln},\qquad l=0,1,\ldots,\tfrac{N}{2}-1,
$$

$$
X(2l+1)=\sum_{n=0}^{N/2-1}\Big[x(n)-x\!\big(n+\tfrac{N}{2}\big)\Big]W_N^{n}\,W_N^{2ln},\qquad l=0,1,\ldots,\tfrac{N}{2}-1.
$$
1





The power spectrum is dot-multiplied with the mel filter banks to map the power spectrum to the mel frequency scale and distribute the mel frequencies linearly. In this embodiment, 48 mel filters are selected, and the mapping formula is as follows:








$$
H_m(k)=
\begin{cases}
0, & k<f(m-1)\\[4pt]
\dfrac{k-f(m-1)}{f(m)-f(m-1)}, & f(m-1)\le k\le f(m)\\[4pt]
\dfrac{f(m+1)-k}{f(m+1)-f(m)}, & f(m)\le k\le f(m+1)\\[4pt]
0, & k>f(m+1)
\end{cases},
$$






where k represents an input frequency and is used to calculate the response Hm(k) of each mel filter, m represents the serial number of the filter, and f(m−1), f(m) and f(m+1) respectively correspond to the starting point, intermediate point and ending point of the m-th filter. After the above steps are completed, one mel spectrogram segment with a length of 150 milliseconds and a height of 48 is generated for every 15 frames, with 40 milliseconds selected as the interval between segments.
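The sketch below assembles this pipeline (Hamming window, FFT power spectrum, 48 triangular filters, 150 ms segments with a 40 ms hop); the Hz/mel conversion constants follow the common HTK convention, which is an assumption rather than something the patent specifies.

```python
import numpy as np

def mel_filterbank(n_filters=48, n_fft=512, sample_rate=8000, fmin=0.0, fmax=None):
    """Triangular filters H_m(k) as in the formula above."""
    fmax = fmax or sample_rate / 2
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sample_rate).astype(int)
    H = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        f_prev, f_mid, f_next = bins[m - 1], bins[m], bins[m + 1]
        for k in range(f_prev, f_mid):                      # rising edge of H_m(k)
            H[m - 1, k] = (k - f_prev) / max(f_mid - f_prev, 1)
        for k in range(f_mid, f_next):                      # falling edge of H_m(k)
            H[m - 1, k] = (f_next - k) / max(f_next - f_mid, 1)
    return H

def mel_spectrogram(frames, n_fft=512):
    window = np.hamming(frames.shape[1])
    spectrum = np.fft.rfft(frames * window, n=n_fft, axis=1)
    power = np.abs(spectrum) ** 2 / n_fft                   # power spectrum per frame
    return power @ mel_filterbank(n_fft=n_fft).T            # shape: (n_frames, 48)

def segment(mel, seg_frames=15, hop_frames=4):
    # 15 frames x 10 ms hop = 150 ms segments, 40 ms apart, height 48
    return [mel[i:i + seg_frames].T for i in range(0, len(mel) - seg_frames + 1, hop_frames)]
```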


Specifically, in the measuring method provided by the present application, the S32 is for processing and normalizing the input mel spectrogram through the adaptive convolutional neural network layer. FIG. 4 is a schematic diagram of the convolutional neural network processing. First, a 48*15 picture Xij is input and processed by a 3*3 two-dimensional convolution. The formula is as follows:

Yconv=W*Xij+b,


where Xij is the input picture with i*j pixels, Yconv is the vector after convolution, W is the convolution kernel value and b is an offset value.


The convolved vector is normalized by two-dimensional batch normalization. Firstly, the sample mean and variance of the vector are calculated as follows:







$$
\mu_\beta=\frac{1}{m}\sum_{i=1}^{m}x_i,\qquad
\sigma_\beta^{2}=\frac{1}{m}\sum_{i=1}^{m}\left(x_i-\mu_\beta\right)^{2}.
$$







After obtaining μβ and σβ2, the normalization is carried out by the following formula:









$$
\hat{x}_i=\frac{x_i-\mu_\beta}{\sqrt{\sigma_\beta^{2}+\epsilon}},
$$




where ϵ is a small value added to the variance to prevent division by zero, and xi is the vector after convolution.


The two-dimensional batch normalization formula is as follows:

YbatchNorm2D = γ·x̂i + β,


where γ is a trainable proportional parameter, β is a trainable deviation parameter and YbatchNorm2D is a two-dimensional batch normalized value.


The two-dimensional batch normalized value is activated by using an activation function, where the activation function is as follows:

Yrelu=max(0,(W*Xij+b)),


where W is the convolution kernel value and b is the offset vector after convolution. In order to ensure a reasonable gradient when training the network, adaptive two-dimensional max pooling is selected, which is the core of the adaptive convolutional neural network layer.


The vector Yrelu obtained above is recorded as XW*H, with a height of H and a width of W, and then the following formulae are used for calculation:







$$
h_{\text{start}}=\Big\lfloor i\cdot\frac{H_{in}}{H_{out}}\Big\rfloor,\qquad
h_{\text{end}}=\Big\lceil (i+1)\cdot\frac{H_{in}}{H_{out}}\Big\rceil,
$$

$$
w_{\text{start}}=\Big\lfloor j\cdot\frac{W_{in}}{W_{out}}\Big\rfloor,\qquad
w_{\text{end}}=\Big\lceil (j+1)\cdot\frac{W_{in}}{W_{out}}\Big\rceil,
$$

$$
Y_{\text{AdaptiveMaxPool2D}}=\max\big(\text{input}[\,h_{\text{start}}{:}\,h_{\text{end}},\;w_{\text{start}}{:}\,w_{\text{end}}\,]\big),
$$


where floor rounds down to the nearest integer and ceil rounds up to the nearest integer.
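As a concrete check of these index formulae, the small sketch below reproduces adaptive two-dimensional max pooling for a single channel; it is an illustration, not the patent's implementation.

```python
import numpy as np
from math import floor, ceil

def adaptive_max_pool_2d(x, out_h, out_w):
    """Max-pool a 2-D map to a fixed (out_h, out_w) size using the index formulae above."""
    in_h, in_w = x.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        hs, he = floor(i * in_h / out_h), ceil((i + 1) * in_h / out_h)
        for j in range(out_w):
            ws, we = floor(j * in_w / out_w), ceil((j + 1) * in_w / out_w)
            out[i, j] = x[hs:he, ws:we].max()
    return out

pooled = adaptive_max_pool_2d(np.random.randn(48, 15), 6, 3)   # 48x15 map -> 6x3
```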


The above steps are carried out six times. Referring to FIG. 5, the input mel spectrogram segment of 48*15 is resampled to the size of 6*3. Then the data convolved by 64 convolution kernels in the convolutional layer are merged into a tensor of 64*6*1, and finally normalized into a feature vector Xcnn with a length of 384, where Xcnn=[X1, X2 . . . Xn]1*384.
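A plausible PyTorch sketch of the adaptive convolutional layer as a whole is given below; the per-block channel widths, the single final (6, 1) adaptive pooling and the use of LayerNorm for the final normalization are assumptions chosen only so that the output flattens to the 384-dimensional feature vector Xcnn described above.

```python
import torch
import torch.nn as nn

class AdaptiveCNN(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64]          # six conv blocks (assumed widths)
        blocks = []
        for i in range(6):
            blocks += [
                nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                nn.BatchNorm2d(chans[i + 1]),
                nn.ReLU(),
            ]
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveMaxPool2d((6, 1))     # adaptive pooling to a fixed 6 x 1 map
        self.norm = nn.LayerNorm(384)                # normalize the merged 64*6*1 tensor

    def forward(self, x):                            # x: (batch, 1, 48, 15) mel segment
        x = self.pool(self.features(x))              # -> (batch, 64, 6, 1)
        return self.norm(x.flatten(1))               # -> (batch, 384) feature vector X_cnn

x_cnn = AdaptiveCNN()(torch.randn(1, 1, 48, 15))     # usage sketch: one mel segment
```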


Specifically, in the measuring method provided by the present application, the S33 is for extracting features related to speech quality by using multi-head attention in the transformer model; the flowchart of this step is shown in FIG. 6. Each head in the multi-head attention model embeds its corresponding vector to obtain time-sequence information. The vector that has completed the time-sequence processing is first transformed by three learning matrices WQ, WK, WV, and the transformation formulae are:

Q=XWQ,
K=XWK, and
V=XWV.


The attention weight of the transformed matrices is calculated, and the formula is:








$$
W_{\text{attention}}=\left(\frac{e^{QK^{T}/\sqrt{d}}}{\sum_{i=0}^{n}e^{QK^{T}/\sqrt{d}}}\right)V,
$$




where KT is the transpose of the K matrix and √d is the length of Xcnn.


The weight is dot-multiplied with the feature vector to obtain the attention vector extracted by each head of the multi-head attention model. The calculation formula is as follows:

Xattention=Wattention*Xcnn,

where Xcnn is the feature vector.


The embodiment provided by the application selects an 8-head attention model, so that the result vector generated by attention is:

Yattention′=concat[Xattention1,Xattention2. . . Xattention8]1*8*W0,


where concat is a vector connection operation and Wo is a learnable multi-head attention weight matrix.
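The numpy sketch below follows the formulation above for one head and then the 8-head concatenation; the sequence length of 69 and the per-head dimension of 64 are illustrative shapes, and all weight matrices are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_head(X, W_q, W_k, W_v):
    """One head: W_attention = softmax(Q K^T / sqrt(d)) V, then X_attention = W_attention * X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = X.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability only
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return (weights @ V) * X                                # element-wise product with the feature vector

X_cnn = rng.normal(size=(69, 64))                           # 69 time steps, 64-dim features (assumed shape)
heads = [attention_head(X_cnn, *(rng.normal(size=(64, 64)) for _ in range(3)))
         for _ in range(8)]
W_0 = rng.normal(size=(8 * 64, 64))                         # learnable multi-head weight (random here)
Y_attention_prime = np.concatenate(heads, axis=-1) @ W_0    # concat[X_attention1 ... X_attention8] * W_0
```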


The generated multi-head attention vector passes through two fully connected layers with a dropout of 0.1 between them, and the output of the fully connected layers is normalized by layernorm as follows:









$$
\mu=\frac{1}{H}\sum_{i=1}^{H}x_i,\qquad
\sigma=\sqrt{\frac{1}{H}\sum_{i=1}^{H}\left(x_i-\mu\right)^{2}+\varepsilon},
$$

$$
Y_{\text{layernorm}}=f\!\left(\frac{g}{\sigma}\,(x-\mu)+b\right).
$$









The normalized vector Ylayernorm obtained is activated by gelu, and the calculation formula is as follows:








$$
Y_{\text{attention}}=0.5\,Y_{\text{layernorm}}\left(1+\tanh\!\left(\sqrt{\frac{2}{\pi}}\left(Y_{\text{layernorm}}+0.044715\,Y_{\text{layernorm}}^{3}\right)\right)\right),
$$




where Yattention is the final attention vector.
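A short numpy sketch of this normalization and the tanh-approximated gelu follows; the gain g and bias b stand in for the learnable layernorm parameters, and the input shape is illustrative.

```python
import numpy as np

def layer_norm(x, g=1.0, b=0.0, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    sigma = np.sqrt(((x - mu) ** 2).mean(axis=-1, keepdims=True) + eps)
    return g * (x - mu) / sigma + b

def gelu(y):
    # 0.5 * y * (1 + tanh(sqrt(2/pi) * (y + 0.044715 * y^3)))
    return 0.5 * y * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (y + 0.044715 * y ** 3)))

Y_layernorm = layer_norm(np.random.randn(69, 64))   # assumed shape
Y_attention = gelu(Y_layernorm)                      # final attention vector
```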


Specifically, in the measuring method provided by the present application, the S34 is for using self-attention pooling to carry out feature fusion and completing the evaluation of the quality of control speech. The processing flow chart of self-attention pooling is shown in FIG. 7.


The vector Xattention=[Xij]69*64 with attention generated in the S33 enters a feed-forward network, where the feed-forward network includes two fully connected layers activated by the relu activation function, followed by a dropout of 0.1 and one more fully connected layer. The formula is as follows:

Xfeedforward=linear2(relu(linear1(Xattention))),
Xfeedforward=A2(relu(A1(Xattention)+b1))+b2,


After the above steps are completed, the vector Xattention is compressed to a length of 1*69, and the parts beyond this length are coded and masked, with the formula as follows:








$$
X_{\text{mask}_i}=
\begin{cases}
X_{\text{feedforward}_i}, & i\le 69\\[4pt]
0, & i>69
\end{cases}.
$$






The coded vector is normalized by the softmax function as follows:







$$
X_{\text{softmax}}=\frac{e^{X_{\text{mask}}}}{\sum_{i=1}^{69}e^{X_{\text{mask}_i}}}.
$$





In order to avoid attention-score dissipation caused by the feed-forward processing, the final attention vector Yattention is dot-multiplied with the vector Xsoftmax, with the formula as follows:

Xdotplus=Yattention·Xsoftmax.


Finally, the obtained vector Xdotplus passes through the last fully connected layer, and the obtained vector is the predicted mos value of the current speech segment.
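Putting the S34 together, the PyTorch sketch below scores the attention sequence with a feed-forward network, masks positions beyond length 69, normalizes with softmax, pools by the dot product with the attention vector, and maps the pooled vector to a single predicted mos value; the hidden size, batch handling and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionPooling(nn.Module):
    def __init__(self, dim=64, max_len=69):
        super().__init__()
        self.max_len = max_len
        self.score = nn.Sequential(            # feed-forward network: two FC layers + relu + dropout
            nn.Linear(dim, dim), nn.ReLU(), nn.Dropout(0.1), nn.Linear(dim, 1))
        self.head = nn.Linear(dim, 1)          # last fully connected layer -> predicted mos

    def forward(self, y_attention):            # y_attention: (batch, seq_len, dim)
        b, t, _ = y_attention.shape
        scores = self.score(y_attention).squeeze(-1)           # (batch, seq_len)
        mask = torch.arange(t, device=scores.device) < self.max_len
        scores = scores.masked_fill(~mask, float("-inf"))      # code and mask beyond length 69
        weights = torch.softmax(scores, dim=-1)                # X_softmax
        pooled = (weights.unsqueeze(-1) * y_attention).sum(1)  # dot product with Y_attention
        return self.head(pooled).squeeze(-1)                   # predicted mos value

mos = SelfAttentionPooling()(torch.randn(1, 69, 64))           # usage sketch
```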


Specifically, in the measuring method provided by the application, the S35 links the mos value and the corresponding long speech information frame to generate real-time measurement data. For each acquired real-time speech, a series of mos score values may be obtained through the above steps, and each value corresponds to the speech quality in a time period.
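A trivial sketch of this linking step, assuming each long speech information frame carries its start and end timestamps as in the earlier queue sketch:

```python
def link_measurements(long_frames, mos_values):
    """Pair each predicted mos value with its long speech information frame."""
    return [{"start": f["start"], "end": f["end"], "mos": float(m)}
            for f, m in zip(long_frames, mos_values)]
```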


The above is only the preferred embodiment of the application, and it is not used to limit the application. Any modification, equivalent substitution and improvement made within the spirit and principle of the application should be included in the protection scope of the application.

Claims
  • 1. A self-attention-based speech quality measuring method for real-time control, comprising: S1, acquiring real-time air traffic control speech data, time stamping and encapsulating, and then combining with control data for secondary encapsulating to generate speech information frames;S2, detecting the speech information frames, dividing into an unvoiced information frame queue and a voiced information frame queue and predetermining a time length; when a length of any one queue inserting into the speech information frames exceeds a predetermined time length of 33 frames, wherein the duration of each frame is 0.1 second, dequeueing the speech information frames in the unvoiced information frame queue and the voiced information frame queue at a same time, wherein the voiced information frame queue includes frames including voice activity and the unvoiced information frame queue includes frame without voice activity, anddiscarding dequeued information frames of the unvoiced information frame queue, and detecting dequeued information frames of the voiced information frame queue,wherein in one subset of the dequeued information frames length of the dequeued information frames is less than 2 frames and the frames are discarded, and wherein in another subset of the dequeued information frames, the length of the dequeued information frames is greater than or equal to 2 frames and data is merged to generate a long speech information frame; andS3, processing the long speech information frame through a self-attention neural network and obtaining a predicted Mean Opinion Score (mos) value, wherein the neural network comprises a mel spectrum auditory filtering layer, an adaptive convolutional neural network layer,a transformer attention layer anda self-attention pooling layer.
  • 2. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein in the S2, the long speech information frame is generated, with a start time of a speech information frame at a head of the voiced information frame queue as a start time and an end time of a speech information frame at a tail of the voiced information frame queue as an end time,the control data is mergeable with the long speech information frame at a self-defined time.
  • 3. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein the mel spectrum auditory filtering layer converts the long speech information frame into a power spectrum, followed by dot-producting with mel filter banks to map a power into a mel frequency and linearly distribute, wherein a following formula is used to map:
  • 4. The self-attention-based speech quality measuring method for real-time control according to claim 3, wherein converting the long speech information frame into the power spectrum comprises differentially enhancing high-frequency components in the long speech information frame to obtain an information frame, segmenting and windowing the information frame, and then converting a processed information frame into the power spectrum by using Fourier transform.
  • 5. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein the adaptive convolutional neural network layer comprises a convolutional layer and an adaptive pool, resamples a mel spectrogram, then merges data convolved by convolution kernels in the convolutional layer into a tensor, followed by normalizing into a feature vector.
  • 6. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein the transformer attention layer applies a multi-head attention model to carry out embedding a feature vector for time sequence processing, and applies learning matrices to convert a processed vector, and applies a calculation formula to calculate an attention weight of a converted vector, wherein the calculation formula is as follows:
  • 7. The self-attention-based speech quality measuring method for real-time control according to claim 6, wherein after an extraction of the attention vector is completed, a multi-head attention vector Xattention′ is calculated by using a multi-head attention model, normalized by layernorm to obtain Ylayernorm and then activated by gelu to obtain a final attention vector Yattention, wherein a calculation formula is as follows: Yattention′=concat[Xattention1,Xattention2. . . Xattentionn]1*n*W0,
  • 8. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein the self-attention pooling layer compresses a length of the attention vector through a feed-forward network, codes and masks a vector part beyond the length, normalizes a coded masked vector, dot-products the coded masked vector with a final attention vector, and a dot-product vector passes through a fully connected layer to obtain a predicted mos value vector.
  • 9. The self-attention-based speech quality measuring method for real-time control according to claim 1, wherein the mos value is linked with a corresponding long speech information frame to generate real-time measurement data.
  • 10. The method of claim 1, wherein the neural network is trained using air traffic control speech data of the duration and characteristics used in S2.
  • 11. A self-attention-based speech quality measuring system for real-time control, comprising a processor, a network interface and a memory, wherein the processor, the network interface and the memory are connected with each other, the memory is used for storing a computer program, the computer program comprises program instructions, the processor is configured to call the program instructions to execute the self-attention-based speech quality measuring method for real-time control according to claim 1.
  • 12. The system of claim 11, wherein the neural network is trained using air traffic control speech data of the duration and characteristics used in S2.
Priority Claims (1)
Number Date Country Kind
202310386970.9 Apr 2023 CN national
US Referenced Citations (12)
Number Name Date Kind
20070203694 Chan Aug 2007 A1
20080219471 Sugiyama Sep 2008 A1
20190122651 Arik Apr 2019 A1
20200286504 Seetharaman Sep 2020 A1
20210233299 Zhou Jul 2021 A1
20220415027 Nie Dec 2022 A1
20230282201 Bissell Sep 2023 A1
20230317093 Xiao Oct 2023 A1
20230335114 Bakst Oct 2023 A1
20230343319 Hu Oct 2023 A1
20230409882 Ahmed Dec 2023 A1
20230420085 Mukherjee Dec 2023 A1
Foreign Referenced Citations (12)
Number Date Country
106531190 Mar 2017 CN
111968677 Nov 2020 CN
112562724 Mar 2021 CN
113782036 Dec 2021 CN
114187921 Mar 2022 CN
114242044 Mar 2022 CN
115457980 Dec 2022 CN
115547299 Dec 2022 CN
115691472 Feb 2023 CN
115798518 Mar 2023 CN
115985341 Apr 2023 CN
2014228691 Dec 2014 JP
Non-Patent Literature Citations (6)
Entry
Notification to Grant Patent Right for Invention from SIPO in 202310386970.9 dated May 24, 2023.
Office action from SIPO in 202310386970.9 dated May 17, 2023.
Search report from SIPO in 202310386970.9 dated Apr. 24, 2023.
Search report from SIPO in 202310386970.9 dated May 22, 2023.
Yuchen Liu, et al., CCATMos: Convolutional Context-aware Transformer Network for Non-intrusive Speech Quality Assessment, Internet publication arxiv.org/abs/2211.02577 dated Nov. 4, 2022.
Qin Mengmeng, Study on Speech Quality Assessment Based on Deep Learning, “China Excellent Master's Degree Thesis Literature database (information technology series)” 154 Jan. 2022 (Abstract on pp. 11-12).