Multi-turn human-machine conversation method and apparatus based on time-sequence feature screening encoding module

Information

  • Patent Grant
  • Patent Number
    12,014,149
  • Date Filed
    Thursday, August 24, 2023
  • Date Issued
    Tuesday, June 18, 2024
  • CPC
    • G06F40/56
    • G06F40/30
  • Field of Search
    • CPC
    • G06F40/211
    • G06F40/253
    • G06F40/268
    • G06F40/284
    • G06F40/30
    • G10L15/00
    • G06N3/08
  • International Classifications
    • G06F40/30
    • G06F40/56
    • Term Extension
      0
Abstract
Disclosed is a multi-turn human-machine conversation method and apparatus based on a time-sequence feature screening encoding module, belonging to the technical field of natural language processing and artificial intelligence. The technical problem to be solved by the disclosure is how to screen information for each utterance in a historical conversation so as to obtain semantic information only relevant to candidate responses, and how to retain and extract time-sequence features in the historical conversation, thus improving the prediction accuracy of a multi-turn human-machine conversation system. The adopted technical scheme is as follows: S1, acquiring a multi-turn human-machine conversation data set; S2, constructing a multi-turn human-machine conversation model: constructing a multi-turn human-machine conversation model based on the time-sequence feature screening encoding module; and S3, training the multi-turn human-machine conversation model: training the multi-turn human-machine conversation model constructed in S2 on the multi-turn human-machine conversation data set obtained in S1.
Description
CROSS REFERENCE TO RELATED APPLICATION

This application claims foreign priority of Chinese Patent Application No. 202310295715.3, filed on Mar. 24, 2023, in the China National Intellectual Property Administration, the disclosure of which is hereby incorporated by reference.


TECHNICAL FIELD

The disclosure relates to the technical field of artificial intelligence and natural language processing, and in particular to a multi-turn human-machine conversation method and apparatus based on a time-sequence feature screening encoding module.


BACKGROUND

With the continuous development of artificial intelligence technology, the mode of human-machine interaction has gradually shifted from the graphical user interface to the conversational user interface. Conversing with a machine has long been a pursued goal in the field of artificial intelligence. At present, human-machine conversation technology may be divided into single-turn human-machine conversation and multi-turn human-machine conversation, depending on the number of conversation rounds. Multi-turn human-machine conversation has more practical applications, such as intelligent customer service, mobile assistants and search engines, because it is closer to real human conversation scenes. As an important mode of human-machine interaction, multi-turn human-machine conversation technology has great research significance and application value, but it is also more challenging.


Specifically, the challenges currently faced by multi-turn human-machine conversation technology mainly include the following two points. First, not all of the information contained in each utterance of a historical conversation sequence is useful for final response selection. The most important difference between multi-turn and single-turn human-machine conversation is that a single-turn conversation has a unique subject, while a multi-turn human-machine conversation may involve more than one subject; how to identify and screen semantic information for each utterance in the historical conversation is therefore the primary challenge faced by multi-turn human-machine conversation technology. Second, for a multi-turn human-machine conversation system, the order of the utterances is important feature information that cannot be ignored; in the course of a conversation, two or more identical sentences may have different semantics and intentions depending on their order. Therefore, if the time-sequence features in the historical conversation can be effectively extracted, the performance of a multi-turn human-machine conversation method can be improved. So far, however, existing methods have not substantially solved these problems, and multi-turn human-machine conversation remains a very challenging task.


Aiming at the challenges faced by the multi-turn human-machine conversation, the disclosure provides a multi-turn human-machine conversation method and apparatus based on a time-sequence feature screening encoding module, which can screen information for each utterance in the historical conversation so as to obtain semantic information only relevant to candidate responses, and can retain and extract time-sequence features in the historical conversation, thus improving the prediction accuracy of a multi-turn human-machine conversation system.


SUMMARY

The technical problem to be solved by the disclosure is to provide a multi-turn human-machine conversation method and apparatus based on a time-sequence feature screening encoding module, which can screen information for each utterance in a historical conversation so as to obtain semantic information only relevant to candidate responses, and can retain and extract time-sequence features in the historical conversation, thus improving the prediction accuracy of a multi-turn human-machine conversation system.


The purpose of the disclosure is achieved by the following technical means.


A multi-turn human-machine conversation method based on a time-sequence feature screening encoding module includes following steps:

    • S1, acquiring a multi-turn human-machine conversation data set: downloading a disclosed multi-turn human-machine conversation data set from a network or automatically constructing a multi-turn human-machine conversation data set;
    • S2, constructing a multi-turn human-machine conversation model: constructing a multi-turn human-machine conversation model based on a time-sequence feature screening encoding module;
    • S3, training the multi-turn human-machine conversation model: training the multi-turn human-machine conversation model constructed in S2 using the multi-turn human-machine conversation data set obtained in S1.


As a further limitation to the present technical scheme, the constructing a multi-turn human-machine conversation model in S2 includes constructing an input module, constructing a pre-training model embedding module, constructing a time-sequence feature screening encoding module, and constructing a label prediction module.


As a further limitation to the present technical scheme, the input module is configured to, for each piece of data in the data set, record all sentences in the historical conversation as h1, h2, . . . hn respectively according to the sequence of the conversation; select a response from a plurality of responses as a current response, and formalize the response as r; determine a label of the data according to whether the response is a positive response, that is, if the response is a positive response, record the label as 1, otherwise, record the label as 0; and h1, h2, . . . hn, r and the label together form a piece of input data.


As a further limitation to the present technical scheme, the pre-training model embedding module is configured to perform embedding processing on the input data constructed by the input module by using the pre-training language model BERT, to obtain the embedding representation of each utterance in the historical conversation and the candidate response embedding representation, recorded as {right arrow over (E1h)}, {right arrow over (E2h)}, . . . {right arrow over (Enh)} and {right arrow over (Er)}; and for specific implementation, see the following formula:

{right arrow over (E1h)}=BERT(h1), {right arrow over (E2h)}=BERT(h2), . . . , {right arrow over (Enh)}=BERT(hn); {right arrow over (Er)}=BERT(r);


where h1, h2, . . . hn represent the first utterance, the second utterance, . . . , the nth utterance in the historical conversation, and r represents the candidate response.


As a further limitation to the present technical scheme, the time-sequence feature screening encoding module is configured to receive the embedding representation of each utterance in the historical conversation and the candidate response embedding representation output by the pre-training model embedding module, and then perform encoding operation on them respectively by using an encoder, thus completing the semantic information screening and time-sequence feature extraction process through an attention mechanism and fusion operations, so as to obtain the semantic feature representation of the conversation.


Specifically, the implementation process of the module is as follows:

    • first, performing encoding operation on the candidate response embedding representation by using the encoder, to obtain candidate response encoding representation, recorded as {right arrow over (Fr)}; and for specific implementation, see the following formula:

      {right arrow over (Fr)}=Encoder({right arrow over (Er)});
    • where {right arrow over (Er)} represents the candidate response embedding representation;
    • then performing encoding operation on embedding representation of the first utterance in the historical conversation by using the encoder, to obtain encoding representation 1, recorded as {right arrow over (F1h)}; then completing the information screening process of the candidate response encoding representation to the encoding representation 1 by using the attention mechanism, to obtain encoding alignment representation 1, recorded as {right arrow over (Z1h)}; and for specific implementation, see the following formula:

      {right arrow over (F1h)}=Encoder({right arrow over (E1h)});
      {right arrow over (Z1h)}=Attention({right arrow over (Fr)};{right arrow over (F1h)});
    • where {right arrow over (E1h )} represents embedding representation of the first utterance, namely, embedding representation 1; {right arrow over (Fr )} represents candidate response encoding representation;
    • then performing encoding operation on embedding representation of the second utterance in the historical conversation by using the encoder, to obtain encoding representation 2, recorded as {right arrow over (F2h)}; then completing the information fusion process for the encoding alignment representation 1 and the encoding representation 2 by using addition operation, to obtain encoding fusion representation 2, recorded as {right arrow over (T2h)}; finally, completing the information screening process of the candidate response encoding representation to the encoding fusion representation 2 by using the attention mechanism, to obtain encoding alignment representation 2, recorded as {right arrow over (Z2h)}; and for specific implementation, see the following formula:

      {right arrow over (F2h)}=Encoder({right arrow over (E2h)});
      {right arrow over (T2h)}={right arrow over (Z1h)}+{right arrow over (F2h)};
      {right arrow over (Z2h)}=Attention({right arrow over (Fr)};{right arrow over (T2h)});
    • where {right arrow over (E2h )} represents embedding representation of the second utterance, namely, embedding representation 2; {right arrow over (Z1h )} represents encoding alignment representation 1; and {right arrow over (Fr)} represents candidate response encoding representation;
    • then performing encoding operation on the embedding representation of the third utterance in the historical conversation by using the encoder, with an operation process the same as that performed on the embedding representation of the second utterance, to obtain encoding alignment representation 3, recorded as {right arrow over (Z3h)}, and so on, until the same operation is performed on the nth utterance in the historical conversation, finally obtaining encoding alignment representation n, recorded as {right arrow over (Znh)}; specifically, for the nth utterance in the historical conversation, performing encoding operation on the embedding representation of the nth utterance by using the encoder, to obtain encoding representation n, recorded as {right arrow over (Fnh)}; then completing the information fusion process for the encoding alignment representation n−1 and the encoding representation n by using addition operation, to obtain encoding fusion representation n, recorded as {right arrow over (Tnh)}; finally, completing the information screening process of the candidate response encoding representation to the encoding fusion representation n by using the attention mechanism, to obtain encoding alignment representation n, recorded as {right arrow over (Znh)}; and for specific implementation, see the following formula:

      {right arrow over (Fnh)}=Encoder({right arrow over (Enh)});
      {right arrow over (Tnh)}={right arrow over (Zn−1h)}+{right arrow over (Fnh)};
      {right arrow over (Znh)}=Attention({right arrow over (Fr)};{right arrow over (Tnh)});
    • where {right arrow over (Enh)} represents embedding representation of the nth utterance, namely, embedding representation n; {right arrow over (Zn−1h)} represents encoding alignment representation n−1; and {right arrow over (Fr)} represents candidate response encoding representation;
    • finally, completing the information fusion process for the encoding alignment representation n and the candidate response encoding representation by using addition operation, to obtain semantic feature representation of the conversation, recorded as {right arrow over (Q)}; and for specific implementation, see the following formula:

      {right arrow over (Q)}={right arrow over (Znh)}+{right arrow over (Fr)};


where {right arrow over (Znh )} represents the encoding alignment representation n; and {right arrow over (Fr)} represents the candidate response encoding representation.
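For example, the screening and fusion recurrence described above may be summarized by the following minimal Python sketch, in which encoder and attention stand in for the encoder and attention mechanism of the module, and all names are illustrative rather than part of the disclosure:

def time_sequence_screen_encode(E_h_list, E_r, encoder, attention):
    # E_h_list: embedding representations of the n utterances, in conversation order
    # E_r: candidate response embedding representation
    F_r = encoder(E_r)                     # candidate response encoding representation
    Z = None
    for E_h in E_h_list:
        F_h = encoder(E_h)                 # encoding representation i
        T = F_h if Z is None else Z + F_h  # fuse with encoding alignment representation i−1
        Z = attention(F_r, T)              # screen information relevant to the response
    return Z + F_r                         # semantic feature representation Q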


A multi-turn human-machine conversation apparatus applied in the above method, including:

    • a multi-turn human-machine conversation data set acquisition unit, configured to download a disclosed multi-turn human-machine conversation data set from a network or automatically construct a multi-turn human-machine conversation data set;
    • a multi-turn human-machine conversation model construction unit, configured to construct a pre-training model embedding module, construct a time-sequence feature screening encoding module, and construct a label prediction module, so as to construct a multi-turn human-machine conversation model; and
    • a multi-turn human-machine conversation model training unit, configured to construct a loss function and an optimization function, thus completing prediction of a candidate response.


Compared with the existing technology, the disclosure has the following beneficial effects:


(1) By utilizing the pre-training model embedding module, deep semantic embedding features in the historical conversation and the candidate response may be captured, thus obtaining richer and more accurate embedding representation.


(2) By utilizing the time-sequence feature screening encoding module, information screening may be performed on each utterance in the historical conversation so as to obtain semantic information only relevant to the candidate response, thus obtaining a more complete and accurate semantic feature representation.


(3) By utilizing the time-sequence feature screening encoding module, time-sequence features in the historical conversation may be retained and extracted, thus improving the prediction accuracy of the multi-turn human-machine conversation system.


(4) According to the method and apparatus provided by the disclosure, in conjunction with the time-sequence feature screening encoding module, the prediction accuracy of the multi-turn human-machine conversation model may be effectively improved.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a flowchart of a multi-turn human-machine conversation method based on a time-sequence feature screening encoding module;



FIG. 2 is a flowchart for constructing a multi-turn human-machine conversation model;



FIG. 3 is a flowchart for training a multi-turn human-machine conversation model;



FIG. 4 is a flowchart of a multi-turn human-machine conversation apparatus based on a time-sequence feature screening encoding module;



FIG. 5 is a structural schematic diagram of a time-sequence feature screening encoding module; and



FIG. 6 is a framework diagram of a multi-turn human-machine conversation model based on a time-sequence feature screening encoding module.





DETAILED DESCRIPTION OF THE EMBODIMENTS

The technical schemes in the embodiments of the disclosure are clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the disclosure. Apparently, the embodiments described are only a part rather than all of the embodiments of the disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the disclosure without creative efforts shall fall within the protection scope of the disclosure.


As shown in FIG. 1, a multi-turn human-machine conversation method based on a time-sequence feature screening encoding module includes following steps:


At S1, a multi-turn human-machine conversation data set is acquired: a disclosed multi-turn human-machine conversation data set is downloaded from a network or a multi-turn human-machine conversation data set is automatically constructed;


At S2, a multi-turn human-machine conversation model is constructed: a multi-turn human-machine conversation model is constructed by using a time-sequence feature screening encoding module.


At S3, the multi-turn human-machine conversation model is trained: the multi-turn human-machine conversation model constructed in S2 is trained using the multi-turn human-machine conversation data set obtained in S1.


At S1, a multi-turn human-machine conversation data set is acquired:

    • a disclosed multi-turn human-machine conversation data set is downloaded from a network.


For example: there are a number of disclosed multi-turn human-machine conversation data sets on the network, such as Ubuntu Dialogue Corpus. The data format in the data set is as follows:
















Historical conversation:
    S1: i need to remove all lines matching __number__ __number__ from access log can anyone tell me the piped command im noob
    S2: sed removes entire lines x1q??
    S3: sed −i __path__ __number__ __number__ __path__ myfile
    S4: that alters myfile
    S5: it can ys
    S6: with that pattern
Candidate responses:
    Positive (label: 1): oh wait uh matches any char sorry sed __path__ 0\ 0\ 1/d deletes all lines with __number__ __number__ in them
    Negative (label: 0): it s stable version if you don't mind paste whole output on paste ubuntu com









In the training set and the validation set, there are one positive response (Positive (label: 1)) and one negative response (Negative (label: 0)) for the same historical conversation sequence; in the test set, there are one positive response (Positive (label: 1)) and nine negative responses (Negative (label: 0)).
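For example, under the assumption that each record of such a data set is stored as a (context, response, label) triple in a CSV file, with the utterances of the context separated by a delimiter token (the exact file layout and separator depend on the release of the corpus and are assumptions here), the data may be loaded by a sketch like the following:

import csv

def load_examples(path, turn_sep="__eot__"):  # the separator token is an assumption
    examples = []
    with open(path, newline="", encoding="utf-8") as f:
        for context, response, label in csv.reader(f):
            # split the context into utterances h1, h2, . . . hn
            history = [u.strip() for u in context.split(turn_sep) if u.strip()]
            examples.append((history, response, int(label)))
    return examples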


At S2, a multi-turn human-machine conversation model is constructed.


The process of constructing the multi-turn human-machine conversation model is shown in FIG. 2; the main operations include constructing an input module, constructing a pre-training model embedding module, constructing a time-sequence feature screening encoding module, and constructing a label prediction module.


At S201, an input module is constructed.


For each piece of data in the data set, all sentences in the historical conversation are recorded as h1, h2, . . . hn respectively according to the sequence of the conversation; a response is selected from a plurality of responses as the current response and formalized as r; a label of the data is determined according to whether the response is a positive response, that is, if the response is a positive response, the label is recorded as 1; otherwise, the label is recorded as 0; and a piece of input data is formed by h1, h2, . . . hn, r and the label together.


For example, the data shown in S1 is taken to form a piece of input data. The result is as follows:

    • (h1: i need to remove all lines matching __number__ __number__ from access log can anyone tell me the piped command im noob, h2: sed removes entire lines x1q??, h3: sed −i __path__ __number__ __number__ __path__ myfile, h4: that alters myfile, h5: it can ys, h6: with that pattern, r: oh wait uh matches any char sorry sed __path__ 0\ 0\ 1/d deletes all lines with __number__ __number__ in them, 1).
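In code, the packing performed by the input module may be sketched as follows, where build_input is a hypothetical helper rather than part of the disclosure:

def build_input(history_utterances, response, is_positive):
    # history_utterances: list [h1, h2, . . . hn] in conversation order
    label = 1 if is_positive else 0  # positive response -> 1, otherwise 0
    return (*history_utterances, response, label)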


At S202, a pre-training model embedding module is constructed.


The pre-training model embedding module is configured to perform embedding processing on the input data constructed in S201 by using the pre-training language model BERT, to obtain the embedding representation of each utterance in the historical conversation and the candidate response embedding representation, recorded as {right arrow over (E1h)}, {right arrow over (E2h)}, . . . {right arrow over (Enh)} and {right arrow over (Er)}; and for specific implementation, see the following formula:

{right arrow over (E1h)}=BERT(h1), {right arrow over (E2h)}=BERT(h2), . . . , {right arrow over (Enh)}=BERT(hn); {right arrow over (Er)}=BERT(r);

    • where h1, h2, . . . , hn represent the first utterance, the second utterance, . . . the nth utterance in the historical conversation, and r represents the candidate response.


For example, when the disclosure is implemented on the Ubuntu Dialogue Corpus data set, the operation of the module is completed by using the pre-training language model BERT, and all settings follow the default settings of BERT in pytorch. In pytorch, the code described above is implemented as follows:


# Input data is embedded by using the pre-training language model BERT.

h_encoder_list = []

for i in h_embed_list:
    h_encoder_list.append(BERT(i)[1])  # [1] takes BERT's pooled output for the utterance

r_embed = BERT(r)[1]

where h_embed_list represents each utterance in the historical conversation, r is the candidate response, h_encoder_list represents the embedding representation of each utterance in the historical conversation, and r_embed represents the embedding representation of the candidate response.
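For example, assuming the HuggingFace transformers implementation of BERT is used (an assumption; any BERT implementation exposing a pooled output works), the BERT callable above may be instantiated as follows; indexing the output with [1] presumes the tuple-style return, hence return_dict=False:

from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
BERT = BertModel.from_pretrained("bert-base-uncased", return_dict=False)

inputs = tokenizer("with that pattern", return_tensors="pt")
pooled = BERT(**inputs)[1]  # pooled output used as the utterance embedding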


At S203, a time-sequence feature screening encoding module is constructed.


The time-sequence feature screening encoding module is shown in FIG. 5. The time-sequence feature screening encoding module is configured to receive the embedding representation of each utterance in the historical conversation and the candidate response embedding representation output by the pre-training model embedding module, then perform encoding operation on them by using an encoder, thus completing the semantic information screening and time-sequence feature extraction process through an attention mechanism and fusion operations, so as to obtain the semantic feature representation of the conversation, recorded as {right arrow over (Q)}, and transmit the semantic feature representation to a label prediction module.


At S204, a label prediction module is constructed.


The semantic feature representation of the conversation obtained in S203 is taken as the input of the module, and is processed by a dense network with an output dimension of 1 and a Sigmoid activation function, to obtain the probability that the current response is a positive response.
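A minimal pytorch sketch of such a label prediction module is given below; the class name and the hidden size are illustrative assumptions, not part of the disclosure:

import torch
from torch import nn

class LabelPrediction(nn.Module):
    def __init__(self, hidden_size=768):  # hidden size assumed to match the encoder output
        super().__init__()
        self.dense = nn.Linear(hidden_size, 1)  # dense network with output dimension 1

    def forward(self, q):
        # q: semantic feature representation of the conversation
        return torch.sigmoid(self.dense(q))  # probability of being a positive response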


When the model is not yet trained, S3 needs to be further performed to optimize the parameters of the model; after the model is trained, S204 may be performed to predict which of the candidate responses is the positive response.


At S3, a multi-turn human-machine conversation model is trained.


The multi-turn human-machine conversation model constructed in S2 is trained on the multi-turn human-machine conversation data set obtained in S1. The process is shown in FIG. 3.


At S301, a loss function is constructed.


In the disclosure, cross entropy is taken as the loss function; the formula is as follows:








L_loss = −Σ_{i=1}^{n} (y_true) log(y_pred),





where y_true is the real label, and y_pred is the probability, output by the model, that the response is correct.


For example, in pytorch, the code described above is implemented as follows:


# The error between the predicted value and the label is calculated by using a cross entropy loss function.


from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()

loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

    • where labels are the real labels, and logits are the prediction scores output by the model.


At S302, an optimization function is constructed.


Various optimization functions were tested for the model, and the AdamW optimization function was finally selected; except for the learning rate, which is set to 2e-5, the other hyperparameters of AdamW are set to their default values in pytorch.


For example, in pytorch, the code described above is implemented as follows:


# Model parameters are optimized by the AdamW optimizer.


optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)

where optimizer_grouped_parameters are the parameters to be optimized, defaulting to all parameters of the multi-turn human-machine conversation model.


When the model is not yet trained, S3 needs to be further performed for training to optimize parameters of the model; after the model is trained, S204 may be performed to predict which of the candidate responses is the positive response.


The apparatus mainly includes three units, namely, a multi-turn human-machine conversation data set acquisition unit, a multi-turn human-machine conversation model construction unit and a multi-turn human-machine conversation model training unit. The process is shown in FIG. 4, and the specific function of each unit is as follows.


The multi-turn human-machine conversation data set acquisition unit is configured to download a disclosed multi-turn human-machine conversation data set from a network or automatically construct a multi-turn human-machine conversation data set.


The multi-turn human-machine conversation model construction unit is configured to construct a pre-training model embedding module, construct a time-sequence feature screening encoding module, and construct a label prediction module, so as to construct a multi-turn human-machine conversation model.


The multi-turn human-machine conversation model training unit is configured to construct a loss function and an optimization function, thus completing prediction of a candidate response.


Furthermore, the multi-turn human-machine conversation model construction unit further includes:

    • a construction input module unit, which is in charge of preprocessing an original data set to construct input data;
    • a construction pre-training model embedding module unit, which is in charge of performing embedding processing on the input data by using the pre-training language model BERT, to obtain the embedding representation of each utterance in the historical conversation and the candidate response embedding representation;
    • a construction time-sequence feature screening encoding module unit, which is in charge of receiving the embedding representation of each utterance in the historical conversation and the candidate response embedding representation output by the pre-training model embedding module, and then performing encoding operation on them by using an encoder, thus completing the semantic information screening and time-sequence feature extraction process through an attention mechanism and fusion operations, so as to obtain the semantic feature representation of the conversation; and
    • a construction label prediction module unit, which is in charge of judging whether the current response is a positive response based on the semantic feature representation of the conversation.


The multi-turn human-machine conversation model training unit further includes:

    • a construction loss function unit, which is in charge of calculating the error between a predicted result and the real data by using the cross entropy loss function; and
    • a construction optimization function unit, which is in charge of training and adjusting parameters in model training, so as to reduce the prediction error.


The disclosure provides a storage medium storing a plurality of instructions, where the instructions are loaded by a processor to execute the steps of the above multi-turn human-machine conversation method.


The disclosure provides an electronic device, the electronic device including:

    • the above storage medium; and
    • a processor, configured to execute the instructions in the storage medium.

The model framework based on the time-sequence feature screening encoding module is described below.


The overall model framework structure of the disclosure is shown in FIG. 6. As shown in FIG. 6, the main framework structure of the disclosure includes a pre-training model embedding module, a time-sequence feature screening encoding module, and a label prediction module. The pre-training model embedding module is configured to perform embedding processing on each utterance in the historical conversation and the candidate response by using a pre-training language model, to obtain the embedding representation of each utterance in the historical conversation and the candidate response embedding representation, and transmit them to the time-sequence feature screening encoding module. The time-sequence feature screening encoding module performs screening and fusion processing on the utterances in the historical conversation respectively, thus finally obtaining the semantic feature representation of the conversation, and transmits the semantic feature representation to the label prediction module. The label prediction module maps the semantic feature representation of the conversation to a floating point value on a specified interval, which serves as the matching degree between the candidate response and the historical conversation; the matching degrees of the candidate responses are then compared, and the response with the highest matching degree is taken as the positive response.
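For example, the final comparison step may be sketched as follows, where select_response is a hypothetical helper and matching_degrees is assumed to hold one score per candidate response:

import torch

def select_response(matching_degrees):
    # matching_degrees: 1-D tensor of matching degrees, one per candidate response
    return torch.argmax(matching_degrees).item()  # index of the predicted positive response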


The time-sequence feature screening encoding module is shown in FIG. 5. The time-sequence feature screening encoding module is configured to receive the embedding representation of each utterance in the historical conversation and the candidate response embedding representation output by the pre-training model embedding module, and then perform encoding operation on them by using an encoder, thus completing the semantic information screening and time-sequence feature extraction process through an attention mechanism and fusion operations, so as to obtain the semantic feature representation of the conversation.


Specifically, the implementation process of the module is as follows:


first, performing encoding operation on the candidate response embedding representation by using the encoder, to obtain candidate response encoding representation, recorded as {right arrow over (Fr)}; and for specific implementation, see the following formula:

{right arrow over (Fr)}=Encoder({right arrow over (Er)});

    • where {right arrow over (Er)} represents the candidate response embedding representation.
    • then performing encoding operation on embedding representation of the first utterance in the historical conversation by using the encoder, to obtain encoding representation 1, recorded as {right arrow over (F1h)}; then completing the information screening process of the candidate response encoding representation to the encoding representation 1 by using the attention mechanism, to obtain encoding alignment representation 1, recorded as {right arrow over (Z1h)}; and for specific implementation, see the following formula:

      {right arrow over (F1h)}=Encoder({right arrow over (E1h)});
      {right arrow over (Z1h)}=Attention({right arrow over (Fr)}; {right arrow over (F1h)});
    • where {right arrow over (E1h )} represents embedding representation of the first utterance, namely, embedding representation 1; and {right arrow over (Fr)} represents candidate response encoding representation.
    • then performing encoding operation on embedding representation of the second utterance in the historical conversation by using the encoder, to obtain encoding representation 2, recorded as {right arrow over (F2h)}; then completing the information fusion process for the encoding alignment representation 1 and the encoding representation 2 by using addition operation, to obtain encoding fusion representation 2, recorded as {right arrow over (T2h)}; finally, completing the information screening process of the candidate response encoding representation to the encoding fusion representation 2 by using the attention mechanism, to obtain encoding alignment representation 2, recorded as {right arrow over (Z2h)}; and for specific implementation, see the following formula:

      {right arrow over (F2h)}=Encoder({right arrow over (E2h)});
      {right arrow over (T2h)}={right arrow over (Z1h)}+{right arrow over (F2h)};
      {right arrow over (Z2h)}=Attention({right arrow over (Fr)}; {right arrow over (T2h)});
    • where {right arrow over (E2h )} represents embedding representation of the second utterance, namely, embedding representation 2; {right arrow over (Z1h)} represents encoding alignment representation 1; and {right arrow over (Fr)} represents candidate response encoding representation.
    • then performing encoding operation on embedding representation of the third utterance in the historical conversation by using the encoder with an operation process being the same as the operation process of the encoding operation performed on the embedding representation of the second utterance in the historical conversation by using the encoder, to obtain encoding alignment representation 3, recorded as {right arrow over (Z3h)}, and so on, until the same operation is performed on the nth utterance in the historical conversation, finally, obtaining the encoding alignment representation n, recorded as {right arrow over (Znh)}; for the nth utterance in the historical conversation, performing encoding operation on an embedding representation of the nth utterance in the historical conversation by using the encoder, to obtain encoding representation n, recorded as {right arrow over (Fnh)}; then completing the information fusion process for the encoding alignment representation n−1 and the encoding representation n by using addition operation, to obtain encoding fusion representation n, recorded as {right arrow over (Tnh)}; finally, completing the information screening process of the candidate response encoding representation to the encoding fusion representation n by using the attention mechanism, to obtain encoding alignment representation n, recorded as {right arrow over (Znh)}; and for specific implementation, see the following formula:

      {right arrow over (Fnh)}=Encoder({right arrow over (Enh)});
      {right arrow over (Tnh)}={right arrow over (Zn−1h)}+{right arrow over (Fnh)};
      {right arrow over (Znh)}=Attention({right arrow over (Fr)};{right arrow over (Tnh)});
    • where {right arrow over (Enh )} represents embedding representation of the nth utterance, namely, embedding representation n; {right arrow over (Zn−1h )} represents encoding alignment representation n−1; and {right arrow over (Fr)} represents candidate response encoding representation.
    • finally, completing the information fusion process for the encoding alignment representation n and the candidate response encoding representation by using addition operation, to obtain semantic feature representation of the conversation, recorded as {right arrow over (Q)}; and for specific implementation, see the following formula:

      {right arrow over (Q)}={right arrow over (Znh)}+{right arrow over (Fr)};
    • where {right arrow over (Znh )} represents the encoding alignment representation n; and {right arrow over (Fr)} represents the candidate response encoding representation.


For example, when the disclosure is implemented on the Ubuntu Dialogue Corpus data set, a Transformer Encoder is selected as the encoding structure Encoder, with the encoding dimension set to 768 and the number of layers set to 2; a Dot-Product Attention calculation method is selected as the Attention mechanism. Taking the calculation of the encoding alignment representation 1 as an example, the calculation process is as follows:

F({right arrow over (Fr)}, {right arrow over (F1h)})={right arrow over (Fr)}⊗{right arrow over (F1h)};


the formula represents the interaction calculation between the candidate response encoding representation and the encoding representation 1 through dot product multiplication operation, where {right arrow over (Fr)} represents the candidate response encoding representation, {right arrow over (F1h)} represents the encoding representation 1, and ⊗ represents the dot product multiplication operation.








α_i = exp(F({right arrow over (Fr)}, {right arrow over (F1h)}_i)) / Σ_{i′=1}^{l} exp(F({right arrow over (Fr)}, {right arrow over (F1h)}_{i′})), i = 1, 2, . . . , l;

    • the formula represents acquiring the attention weight α through a normalization operation; i and i′ represent element subscripts in the corresponding input tensor, l represents the number of elements in the input tensor {right arrow over (F1h)}, and the meanings of the other signs are the same as in the above formula;











{right arrow over (Z1h)} = Σ_{i=1}^{l} α_i · {right arrow over (F1h)}_i;

the formula represents completing the feature screening of the encoding representation 1 by using the obtained attention weights, so as to obtain the encoding alignment representation 1; l represents the number of elements in {right arrow over (F1h)} and α.


In pytorch, the code described above is implemented as follows:


# The calculation process of attention is defined (dot-product attention).

import torch
from torch import nn

def dot_attention(s1, s2):
    # s1: query tensor; s2: key/value tensor, both of shape (batch, length, dim)
    scores = torch.bmm(s1, s2.transpose(1, 2))  # dot-product interaction
    ad = torch.softmax(scores, dim=-1)          # normalized attention weights
    qdq = torch.bmm(ad, s2)                     # weighted sum: screened representation
    return qdq

# The encoding structure is defined.

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
self.transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Codes

e_r = response_embed
f_r = self.transformer_encoder(e_r)
e_h_list = history_embed_list
lin = None
for i in range(n):
    f_h = self.transformer_encoder(e_h_list[i])
    if lin is not None:
        f_h = f_h + lin            # fuse the previous encoding alignment representation
    lin = dot_attention(f_r, f_h)  # screen information against the candidate response
z_r_final = lin

final_I = z_r_final + f_r

where history_embed_list represents a list of the embedding representations of all utterances in the historical conversation; response_embed represents the candidate response embedding representation; z_r_final represents the encoding alignment representation n; final_I represents the semantic feature representation of the conversation; d_model represents the size of the word vector required by the encoder, which is set to 512 here; nhead represents the number of heads in the multi-head attention model, which is set to 8 here; num_layers represents the number of layers of the encoding structure, which is set to 2 here.


Although the embodiments of the disclosure have been shown and described, those of ordinary skill in the art can understand that various changes, modifications, replacements, and variations can be made to these embodiments without departing from the principle and spirit of the disclosure, and the scope of the disclosure is defined by the appended claims and equivalents thereof.

Claims
  • 1. A multi-turn human-machine conversation method based on a time-sequence feature screening encoding module, comprising: S1, acquiring a multi-turn human-machine conversation data set; S2, constructing a multi-turn human-machine conversation model: constructing a multi-turn human-machine conversation model based on a time-sequence feature screening encoding module; S3, training the multi-turn human-machine conversation model: training the multi-turn human-machine conversation model constructed in S2 using the multi-turn human-machine conversation data set obtained in S1; the constructing a multi-turn human-machine conversation model in S2 comprises constructing an input module, constructing a pre-training model embedding module, constructing a time-sequence feature screening encoding module, and constructing a label prediction module, wherein the input module constructs input data; the input module is configured to, for each piece of data in the data set, record all sentences in a historical conversation as h1, h2, . . . hn respectively according to a sequence of the conversation; select a response from a plurality of responses as a current response and formalize the response as r; determine a label of the data according to whether the response is a positive response, that is, if the response is a positive response, record the label as 1, otherwise, record the label as 0; and h1, h2, . . . hn, r and the label form a piece of the input data together; the pre-training model embedding module is configured to perform embedding processing on the input data by using a pre-training language module BERT, to obtain embedding representation and candidate response embedding representation of each utterance in the historical conversation, recorded as {right arrow over (E1h)}, {right arrow over (E2h)}, . . . {right arrow over (Enh)} and {right arrow over (Er)}; {right arrow over (E1h)}=BERT(h1), {right arrow over (E2h)}=BERT(h2), . . . , {right arrow over (Enh)}=BERT(hn); {right arrow over (Er)}=BERT(r)
  • 2. The multi-turn human-machine conversation method based on a time-sequence feature screening encoding module according to claim 1, comprising a multi-turn human-machine conversation apparatus applied in the method, the apparatus comprising: a multi-turn human-machine conversation data set acquisition unit, configured to download a disclosed multi-turn human-machine conversation data set from a network or automatically construct a multi-turn human-machine conversation data set; a multi-turn human-machine conversation model construction unit, configured to construct a pre-training model embedding module, construct a time-sequence feature screening encoding module, and construct a label prediction module, so as to construct a multi-turn human-machine conversation model; and a multi-turn human-machine conversation model training unit, configured to construct a loss function and an optimization function, thus completing prediction of a candidate response.
Priority Claims (1)
Number Date Country Kind
202310295715.3 Mar 2023 CN national
US Referenced Citations (41)
Number Name Date Kind
10802937 Crosby Oct 2020 B2
11416688 Wu Aug 2022 B2
11431660 Leeds Aug 2022 B1
11803703 Gamon Oct 2023 B2
20160219048 Porras Jul 2016 A1
20180330723 Acero Nov 2018 A1
20190020606 Vasudeva Jan 2019 A1
20190138268 Andersen et al. May 2019 A1
20190163965 Yoo May 2019 A1
20190341036 Zhang Nov 2019 A1
20210099317 Hilleli Apr 2021 A1
20210118442 Poddar Apr 2021 A1
20210150118 Le et al. May 2021 A1
20210174026 Wu Jun 2021 A1
20210174798 Wu Jun 2021 A1
20210183484 Shaib Jun 2021 A1
20210256417 Kneller Aug 2021 A1
20210294781 Fernandez et al. Sep 2021 A1
20210294828 Tomkins Sep 2021 A1
20210294829 Bender Sep 2021 A1
20210294970 Bender Sep 2021 A1
20210295822 Tomkins Sep 2021 A1
20210327411 Wu Oct 2021 A1
20220019579 Meyerzon Jan 2022 A1
20220019740 Meyerzon Jan 2022 A1
20220019905 Meyerzon Jan 2022 A1
20220036890 Yuan Feb 2022 A1
20220068263 Roy Mar 2022 A1
20220139384 Wu May 2022 A1
20220164548 Tumuluri May 2022 A1
20220277031 Quamar Sep 2022 A1
20220358297 Ma Nov 2022 A1
20220405489 Radkoff Dec 2022 A1
20230014775 Dotan-Cohen Jan 2023 A1
20230085781 Zhuge Mar 2023 A1
20230237276 Lima Jul 2023 A1
20230244855 Attwater Aug 2023 A1
20230244968 Gurin Aug 2023 A1
20230306205 Maeder Sep 2023 A1
20230315999 Mohammed Oct 2023 A1
20240078264 Solis Mar 2024 A1
Foreign Referenced Citations (6)
Number Date Country
111125326 May 2020 CN
113537024 Oct 2021 CN
114281954 Apr 2022 CN
114722838 Jul 2022 CN
115129831 Sep 2022 CN
115544231 Dec 2022 CN