The present invention relates to an interactive responding method and a computer system using the same, and more particularly, to an interactive responding method and a computer system capable of enabling a Chatbot to respond more interactively.
With the advancement and development of technology, the demand for interaction between a computer system and a user has increased. Human-computer interaction technology, e.g. somatosensory games, virtual reality (VR) environments, online customer service and Chatbots, has become popular because of its convenience and efficiency. Such human-computer interaction technology may be utilized in gaming or on websites, and the Chatbot is one of the common human-computer interaction technologies, which conducts a conversation with the user via audio or text through a computer program or an artificial intelligence. For example, the Chatbot replies to simple text messages or text questions from the user. In this way, the Chatbot can only reply to simple questions or give machine responses in text, which limits the interaction between the Chatbot and the user. Therefore, an improvement to the prior art is necessary.
Therefore, the present invention provides an interactive responding method and a computer system to improve interactions between the Chatbot and the user and provide a better user experience.
An embodiment of the present invention discloses an interactive responding method, comprising receiving an input data from a user; generating an output data according to the input data; retrieving a plurality of attributes from the output data; determining a plurality of interactions corresponding to the plurality of attributes of the output data; and displaying the plurality of interactions via a non-player character; wherein the input data and the output data are related to a text.
An embodiment of the present invention further discloses a computer system, comprising a processing device; and a memory device coupled to the processing device, for storing a program code, wherein the program code instructs the processing device to perform an interactive responding method, and the interactive responding method comprises receiving an input data from a user; generating an output data according to the input data; retrieving a plurality of attributes from the output data; determining a plurality of interactions corresponding to the plurality of attributes of the output data; and displaying the plurality of interactions via a non-player character; wherein the input data and the output data are related to a text.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Please refer to
In detail, please refer to
Step 202: Start.
Step 204: Receive the input data from the user.
Step 206: Generate the output data according to the input data.
Step 208: Retrieve the attributes from the output data.
Step 210: Determine the interactions corresponding to the attributes of the output data.
Step 212: Display the interactions via the non-player character.
Step 214: End.
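For concreteness, the flow of steps 204 through 212 may be sketched as follows in Python; the objects and method names (e.g. chatbot, processing_unit, text_to_gesture_unit, npc) are hypothetical placeholders used only for illustration and are not elements disclosed by the embodiment.

```python
# Illustrative sketch of steps 204 through 212; all object and method names
# are hypothetical placeholders, not part of the disclosed embodiment.

def interactive_response_cycle(chatbot, processing_unit, text_to_gesture_unit, npc):
    input_text = chatbot.receive_input()                  # Step 204: receive the input data from the user
    output_text = chatbot.generate_output(input_text)     # Step 206: generate the output data
    attributes = processing_unit.retrieve_attributes(output_text)           # Step 208: retrieve the attributes
    interactions = text_to_gesture_unit.determine_interactions(attributes)  # Step 210: determine the interactions
    npc.display(interactions, output_text)                # Step 212: display the interactions via the NPC
```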
In step 204, the Chatbot 102 receives the input data from the user. The input data may be text, or text translated from audio or a speech generated by the user. In an embodiment, when the user is in a gaming environment, the user may input text messages to the Chatbot 102 and ask simple questions. Alternatively, when the user generates a speech, the speech is translated into text by a program and utilized as the input data for the Chatbot 102.
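A minimal sketch of the input handling in step 204 is given below; speech_to_text() is a hypothetical placeholder for the unspecified translation program, and all names are assumptions made for illustration only.

```python
# Hypothetical sketch of step 204: normalizing the user input to text.
# speech_to_text() stands in for an unspecified speech-recognition program.

def speech_to_text(audio_bytes: bytes) -> str:
    # Placeholder: a real system would call a speech-recognition engine here.
    raise NotImplementedError("plug in a speech-recognition engine")

def receive_input(user_input, is_speech: bool = False) -> str:
    if is_speech:
        # A speech or audio input is first translated into text.
        return speech_to_text(user_input)
    # Text messages are utilized as the input data directly.
    return user_input
```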
After receiving the input data (i.e. the text), in step 206, the Chatbot 102 may instantly generate the output data corresponding to the input data. In an example, when the input data inputted by the user is “How are you”, the Chatbot 102 may instantly generate the output data “I'm fine”, which may be utilized as a base for retrieving the attributes in step 208 accordingly. In step 208, the attributes, such as an emotion, an intention, a semantic role or a keyword of the output data, are retrieved from the output data. In an embodiment, the attributes are retrieved by the processing unit 104 of the computer system 10 or by a server. As such, the processing unit 104 may retrieve the emotions, intentions, semantic roles or keywords from the output data simultaneously. In an example, the processing unit 104 determines that the output data generated by the Chatbot 102 contains a sad emotion and consequently retrieves the sad emotion. Similarly, the processing unit 104 determines that the user is happy when the user sends a happy emoji. Notably, multiple emotions, intentions, semantic roles or keywords may be retrieved from the output data, and the attributes are not limited thereto. Moreover, the processing unit 104 may also be implemented in the Chatbot 102, the computer system 10 or the server, so as to process the text messages from the user and retrieve the attributes in real-time.
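One possible realization of the attribute retrieval in step 208 is a simple lexicon-based lookup, sketched below; the embodiment does not limit the processing unit 104 to this approach, and the lexicon entries and function names are assumptions made purely for illustration.

```python
# Hypothetical sketch of step 208: retrieving attributes from the output data.
# The lexicon below is illustrative; the embodiment does not specify how the
# processing unit detects emotions, intentions, semantic roles or keywords.

EMOTION_LEXICON = {
    "fine": "neutral",
    "sad": "sad",
    "happy": "happy",
    "hate": "dislike",
}

STOP_WORDS = {"i", "i'm", "you", "am", "how", "are"}

def retrieve_attributes(output_text: str) -> dict:
    tokens = output_text.lower().split()
    emotions = [EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON]
    keywords = [t for t in tokens if t not in STOP_WORDS]
    return {
        "emotions": emotions or ["neutral"],  # multiple emotions may be retrieved
        "keywords": keywords,
        # intentions and semantic roles would be retrieved analogously
    }
```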
In step 210, the interactions corresponding to the attributes of the output data are determined. In an embodiment, the interactions corresponding to the attributes of the output data are determined by the text-to-gesture unit 106. Each of the interactions is at least one of an action, a facial expression, a gaze, a text, a speech, a gesture, an emotion or a movement, and is displayed via the virtual reality avatar. The interactions may be determined by a machine learning process or a rule-based process adopted by the text-to-gesture unit 106, which collects a plurality of videos having a plurality of body languages and a plurality of transcripts for the machine learning process or the rule-based process. More specifically, the videos may be utilized for training the text-to-gesture unit 106 to determine and store the interactions corresponding to the transcripts or texts. Aside from that, the text-to-gesture unit 106 may learn the corresponding attributes from the body languages or transcripts presented in the videos. For example, when a man in a video waves his hand and laughs loudly, the text-to-gesture unit 106 may learn that a happy emotion corresponds to a laughing face. Alternatively, when a man says “I hate you” with a hateful facial expression, the text-to-gesture unit 106 may learn that “I hate you” corresponds to a dislike emotion. Therefore, the text-to-gesture unit 106 may automatically identify the corresponding attributes according to the output data when the user inputs related words or phrases.
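A rule-based mapping of the kind the text-to-gesture unit 106 might store after learning from the videos may be sketched as follows; the rule table and function names are hypothetical and serve only to illustrate how retrieved attributes could be mapped to interactions.

```python
# Hypothetical sketch of step 210: a rule-based mapping from retrieved
# attributes to interactions, as might be stored after training on videos
# of body languages and their transcripts.

RULES = {
    "happy": {"facial_expression": "laugh", "gesture": "wave_hand"},
    "sad": {"facial_expression": "frown", "gaze": "look_down"},
    "dislike": {"facial_expression": "hatred", "gesture": "shake_head"},
}

def determine_interactions(attributes: dict) -> list:
    interactions = []
    for emotion in attributes.get("emotions", []):
        # Fall back to a neutral idle interaction when no rule matches.
        interactions.append(RULES.get(emotion, {"facial_expression": "neutral"}))
    return interactions
```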
After the interactions corresponding to the attributes are determined, in step 212, the interactions are displayed via the NPC. In an embodiment, the NPC is a virtual reality avatar, which may display the interactions determined in step 210. That is, when the attribute is a sad emotion, the NPC may display the sad emotion through the facial expression of the virtual reality avatar. Under such a situation, the Chatbot 102 may interact with the user via the virtual reality avatar according to the interactions determined by the text-to-gesture unit 106, rather than merely answering the user with machine-generated text replies.
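The display of the interactions via the NPC in step 212 may be sketched as follows, assuming a hypothetical avatar interface; a real virtual reality avatar would drive facial expressions, gazes, gestures and movements through its rendering engine rather than printed output.

```python
# Hypothetical sketch of step 212: the NPC (a virtual reality avatar) applies
# each determined interaction; the avatar interface below is assumed.

class VirtualRealityAvatar:
    def display(self, interactions: list, reply_text: str) -> None:
        for interaction in interactions:
            for channel, value in interaction.items():
                # e.g. channel = "facial_expression", value = "frown"
                print(f"avatar performs {channel}: {value}")
        # The textual reply generated by the Chatbot is still delivered.
        print(f"avatar says: {reply_text}")
```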
In an embodiment, the computer system 10 may be utilized as a spokesman or an agent of a company. Since not every company may adopt or afford an artificially intelligent (AI) system to answer customers' questions, the computer system 10 of the present invention may perceive the emotion, the intention, the semantic role or the keyword from the questions asked by the customer. As such, the computer system 10 may understand the customer's interest and behavior by retrieving the attributes from the text inputted by the customer. In this way, not only is the response delivered by the Chatbot 102, but the determined interactions are also displayed via the NPC to interact with the customer. Therefore, the computer system 10 of the present invention may serve as the spokesman or the agent for the company, which helps to improve the image of the company.
Please refer to
Notably, the embodiments stated above illustrate the concept of the present invention, and those skilled in the art may make proper modifications accordingly, which are not limited thereto. For example, the determination for retrieving the attributes from the text-based output data is not limited to the machine learning method, and the machine learning method is not limited to a collection of videos; these may be realized by other methods, all of which belong to the scope of the present invention.
In summary, the present invention provides an interactive responding method and a computer system to improve the interactions between the Chatbot and the user, such that the NPC may interact with the user with the involvement of speeches, body gestures and emotions, thereby providing a better user experience.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.