EMOTION RECOGNIZER, ROBOT INCLUDING THE SAME, AND SERVER INCLUDING THE SAME

Information

  • Patent Application
    20200086496
  • Publication Number
    20200086496
  • Date Filed
    September 12, 2019
  • Date Published
    March 19, 2020
Abstract
An emotion recognizer includes: a uni-modal preprocessor configured to include a plurality of recognizers for each modal, each learned to recognize emotion information of a user contained in uni-modal input data; and a multi-modal recognizer configured to merge output data of the plurality of recognizers for each modal and learned to recognize the emotion information of the user contained in the merged data. The emotion recognizer may output a complex emotion recognition result including an emotion recognition result of each of the plurality of recognizers for each modal and an emotion recognition result of the multi-modal recognizer.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority under 35 U.S.C. § 119 to Korean Application No. 10-2018-0110500, filed in Korea on Sep. 14, 2018, the entire subject matter of which is hereby incorporated by reference.


BACKGROUND
1. Field

Embodiments may relate to an emotion recognizer (or emotion recognition processor), a robot including the same, and a server including the same. More particularly, embodiments may relate to an emotion recognizer capable of recognizing various emotions of a user, and a robot including the same, and a server including the same.


2. Background

Robots have been developed for industrial use and have been part of factory automation. As the application field of robots has further expanded, medical robots, aerospace robots, and/or the like have been developed, and household robots that can be used in ordinary homes have been manufactured.


As the use of robots has increased, there is a growing demand for robots that can provide various information, fun, and services while understanding and communicating with users, beyond performing simple functions.


In the robot field as well as various other fields, there is a growing interest in recognizing human emotions and in providing corresponding therapies and services. Research on methods of recognizing human emotion has been actively conducted.


A user may create and use a unique character by using his/her face, or the like. U.S. Pat. No. 9,262,688B1, the subject matter of which is incorporated herein by reference, may disclose a method and system for recognizing an emotion or expression from multimedia data according to a certain algorithm using a fuzzy set.


However, in this document, an analyzer module may finally select one emotion or expression from the candidate emotion or expression database, and output the result.


Outputting only one emotion value may be insufficient for providing an emotion-based service that requires accurate and varied emotion-related data. It may be difficult or impossible to determine how the recognized emotion differs for each input data, and even when data acquired from various sources is used, there may be a limitation in that the result is greatly influenced by the initially set weight for each source.





BRIEF DESCRIPTION OF THE DRAWINGS

Arrangements and embodiments may be described in detail with reference to the following drawings in which like reference numerals refer to like elements and wherein:



FIG. 1 is a block diagram of a robot system that includes a robot according to an embodiment of the present invention;



FIG. 2 is a front view showing an outer shape of a robot according to an embodiment of the present invention;



FIG. 3 is an example of an internal block diagram of a robot according to an embodiment of the present invention;



FIG. 4 is an example of an internal block diagram of a server according to an embodiment of the present invention;



FIG. 5 is an example of an internal block diagram of an emotion recognizer according to an embodiment of the present invention;



FIG. 6 is a diagram for explaining emotion recognition according to an embodiment of the present invention;



FIGS. 7 to 9 are diagrams for explaining uni-modal emotion recognition according to an embodiment of the present invention;



FIG. 10 is a diagram for explaining multi-modal emotion recognition according to an embodiment of the present invention;



FIG. 11 is a diagram illustrating emotion recognition result according to an embodiment of the present invention;



FIG. 12 is a diagram for explaining emotion recognition post-processing according to an example embodiment of the present invention;



FIG. 13 is a diagram for explaining an emotional interchange user experience of a robot according to an example embodiment of the present invention; and



FIG. 14 is a flowchart illustrating an operation method of an emotion recognizer according to an embodiment of the present invention.





DETAILED DESCRIPTION

Exemplary embodiments of the present invention may be described with reference to the accompanying drawings in detail. The same reference numbers may be used throughout the drawings to refer to the same or like parts. Detailed descriptions of well-known functions and structures incorporated herein may be omitted to avoid obscuring the subject matter of the present invention. Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The suffixes “module” and “unit” in elements used in description below are given only in consideration of ease in preparation of the specification and do not have specific meanings or functions. Therefore, the suffixes “module” and “unit” may be used interchangeably.



FIG. 1 is a block diagram of a robot system that includes a robot according to an embodiment of the present invention.


Referring to FIG. 1, the robot system may include at least one robot 100, and a home appliance 10 that has a communication module to communicate with other apparatuses, the robot 100, a server 70, and/or the like, and/or to be connected to a network.


For example, the home appliance 10 may include an air conditioner 11 having a communication module, a robot cleaner 12, a refrigerator 13, a washing machine 14, a cooking appliance 15, and/or the like.


The communication module included in the home appliance 10 may be a Wi-Fi communication module, but embodiments are not limited to this communication method.


Alternatively, the home appliance 10 may include other types of communication modules or a plurality of communication modules. For example, the home appliance 10 may include an NFC module, a Zigbee communication module, a Bluetooth communication module, and/or the like.


The home appliance 10 can be connected to a server 70 through the Wi-Fi communication module or the like, and can support smart functions such as remote monitoring, remote control, and/or the like.


The robot system may include a portable terminal such as a smart phone, a tablet PC, and/or the like.


The user may check information on the home appliance 10 in a robot system or control the home appliance 10 through the portable terminal.


It may be inconvenient for a user to use the portable terminal every time the user desires to control the home appliance 10 or check certain information in the home.


For example, it may be more efficient to have a means to control the home appliance 10 in other ways when the user does not know a current location of the portable terminal or when the portable terminal is in another place.


The robot 100 may receive a user's speech input (or audio input) and thus control the home appliance 10 directly or control the home appliance 10 via the server 70.


Accordingly, the user may control the home appliance 10 without operating any apparatus other than the robot 100 disposed in the room, living room, or the like.


The robot system may include a plurality of Internet of Things (IoT) apparatuses. Accordingly, the robot system may include the home appliance 10, the robot 100, and the Internet of Things (IoT) apparatuses.


The robot system is not limited to a communication method constituting a network.


For example, the home appliance 10, the robot 100, and the Internet of Things (IoT) apparatuses may be communicatively connected through a wired/wireless router (not shown).


Additionally, the apparatuses in the robot system may be configured in a mesh topology in which they are individually communicatively connected to one another.


The home appliance 10 in the robot system may communicate with the server 70 or the robot 100 via a wired/wireless router.


Further, the home appliance 10 in the robot system may communicate with the server 70 or the robot 100 by Ethernet.


The robot system may include a network apparatus such as a gateway. Alternatively, at least one of the robots 100 provided in the home may be configured to include the gateway function.


The home appliances 10 included in the robot system may be network-connected directly between apparatuses or via the gateway.


The home appliance 10 may be network-connected to be able to communicate with the server 70 directly or via the gateway.


The gateway may communicate with the server 70 or the mobile terminal 50 by Ethernet.


Additionally, the gateway may communicate with the server 70 or the robot 100 via the wired/wireless router.


The home appliance 10 may transmit apparatus operation state information, setting value information, and/or the like to the server 70 and/or the gateway.


The user may check information related to the home appliance 10 in the robot system or control the home appliance 10 through the portable terminal or the robot 100.


The server 70 and/or the gateway may transmit a signal for controlling the home appliances 10 to each apparatus in response to a user command input through the robot 100 or a specific event that occurred in the home appliance 10 in the robot system.


The gateway may include output means such as a display, a sound output unit, and/or the like.


The display and the sound output unit (or sound output device) may output image and audio stored in the gateway or based on a received signal. For example, a music file stored in the gateway may be played and outputted through the sound output unit.


The display and the sound output unit may output the image and audio information related to the operation of the gateway.


The server 70 may store and manage information transmitted from the home appliance 10, the robot 100, and other apparatuses. The server 70 may be a server operated by a manufacturer of the home appliance or a company entrusted by the manufacturer.


Information related to the home appliance 10 may be transmitted to the robot 100, and the robot 100 may display the information related to the home appliance 10.


The home appliance 10 may receive information or receive a command from the robot 100. The home appliance 10 may transmit various information to the server 70, and the server 70 may transmit part or all of the information received from the home appliance 10 to the robot 100.


The server 70 may transmit information itself received from the home appliance 10 or may process and transmit the received information to the robot 100.



FIG. 1 illustrates an example of a single server 70, but embodiments are not limited thereto, and the system according to the present invention may operate in association with two or more servers.


For example, the server 70 may include a first server for speech recognition and processing, and a second server for providing a home appliance related service such as home appliance control.


According to an embodiment, the first server and the second server may be configured by distributing information and functions to a plurality of servers, or may be constituted by a single integrated server.


For example, the first server for speech recognition and processing may be composed of a speech recognition server for recognizing words included in a speech signal and a natural language processing server for recognizing the meaning of a sentence including words included in the speech signal.


Alternatively, the server 70 may include a server for emotion recognition and processing, and a server for providing a home appliance related service, such as a home appliance control. The server for emotion recognition and processing may be configured by distributing information and functions to a plurality of servers, or may be constituted by a single integrated server.



FIG. 2 is a front view showing an outer shape of a robot according to an embodiment of the present invention. FIG. 3 is an example of an internal block diagram of a robot according to an embodiment of the present invention.


Referring to FIGS. 2 and 3, the robot 100 includes a main body that forms an outer shape and houses various components therein.


The main body includes a body 101 forming a space in which various components constituting the robot 100 are accommodated, and a support 102 that is disposed in the lower side of the body 101 and supports the body 101.


The robot 100 may include a head 110 disposed in the upper side of the main body. A display 182 for displaying an image may be disposed on the front surface of the head 110.


In this disclosure, the front direction means the +y axis direction, the up and down direction means the z axis direction, and the left and right direction means the x axis direction.


The head 110 may rotate within a certain angle range about the x-axis.


Accordingly, when viewed from the front, the head 110 can perform a nodding operation that moves in an up and down direction in a similar manner as a person nods his or her head in the up and down direction. For example, the head 110 may perform an original position return operation one or more times after rotating within a certain range in a similar manner as a person nods his/her head in the up and down direction.


At least a part of the front surface of the head 110 on which the display 182, corresponding to the face of a person, is disposed may be configured to nod.


Accordingly, in the present disclosure, an embodiment may allow the entire head 110 to move in the up and down direction. However, unless specifically described, the vertically nodding operation of the head 110 may be replaced with a nodding operation in the up and down direction of at least a part of the front surface on which the display 182 is disposed.


The body 101 may be configured to be rotatable in the left-right direction. That is, the body 101 may be configured to rotate 360 degrees about the z-axis.


The body 101 also may be configured to be rotatable within a certain angle range about the x-axis, so that it can move as if it nods in the up and down direction. In this example, as the body 101 rotates in the up and down direction, the head 110 may also rotate about the axis in which the body 101 rotates.


Accordingly, the operation of nodding the head 110 in the up and down direction may include both the example where the head 110 itself rotates in the up and down direction about a certain axis when viewed from the front, and the example where the head 110 connected to the body 101 rotates and nods together with the body 101 as the body 101 nods in the up and down direction.


The robot 100 may include a power supply unit (or power supply device) which is connected to an outlet in a home and supplies power to the robot 100.


The robot 100 may include a power supply unit provided with a rechargeable battery to supply power into the robot 100. Depending on an embodiment, a power supply unit may include a wireless power receiving unit for wirelessly charging the battery.


The robot 100 may include an image acquisition unit 120 (or image acquisition device) that can photograph a certain range around the main body or at least the front surface of the main body.


The image acquisition unit 120 may photograph the surroundings of the main body, the external environment, and/or the like, and may include a camera module. The camera module may include a digital camera. The digital camera may include an image sensor (e.g., a CMOS image sensor) configured to include at least one optical lens, a plurality of photodiodes (e.g., pixels) that form an image by light that has passed through the optical lens, and a digital signal processor (DSP) that forms an image based on a signal outputted from the photodiodes. The digital signal processor may generate a moving image composed of still images as well as a still image.


Several cameras may be installed for each part of the robot for photographing efficiency. The image acquisition unit 120 may include a front camera provided in the front surface of the head 110 to acquire an image of the front of the main body. However, the number, disposition, type, and photographing range of the cameras provided in the image acquisition unit 120 may not be limited thereto.


The image acquisition unit 120 may photograph the front direction of the robot 100, and may photograph an image for user recognition.


The image photographed and acquired by the image acquisition unit 120 may be stored in a storage unit 130 (or storage).


The robot 100 may include a speech input unit 125 (or voice input unit) for receiving a speech input of a user. The speech input unit may also be called an audio input unit or a voice/audio/speech input device.


The speech input unit 125 may include a processor for converting an analog speech into digital data, or may be connected to the processor to convert a speech signal inputted by a user into data to be recognized by the server 70 or a controller 140 (FIG. 3).


The speech input unit 125 may include a plurality of microphones to enhance accuracy of reception of user speech input, and to determine the position of the user.


For example, the speech input unit 125 may include at least two microphones.


The plurality of microphones (MICs) may be disposed at different positions, and may acquire an external audio signal including a speech signal to process the audio signal as an electrical signal.


At least two microphones serving as an input device may be used to estimate the direction of a user and of a sound source that generated a sound, and the angular resolution of the direction detection becomes higher as the physical distance between the microphones increases.
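
As a minimal illustration of this principle (not the patent's implementation; the sampling rate, microphone spacing, and function name below are assumptions), the direction of a sound source can be estimated from the time difference of arrival between two microphones, and a wider spacing gives a finer angular step per sample of delay:

    import numpy as np

    def estimate_direction(delay_samples: int, sample_rate: int = 16000,
                           mic_distance_m: float = 0.1,
                           speed_of_sound: float = 343.0) -> float:
        """Estimate a sound-source angle (degrees) from the time difference of
        arrival (TDOA) between two microphones; delay_samples would typically
        come from the peak of the cross-correlation of the two signals."""
        time_delay = delay_samples / sample_rate                        # seconds
        # The path-length difference cannot exceed the microphone spacing.
        ratio = np.clip(speed_of_sound * time_delay / mic_distance_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(ratio)))

    # A one-sample delay corresponds to a smaller angle when the microphones are
    # farther apart, i.e., the direction can be resolved more finely.
    print(estimate_direction(1, mic_distance_m=0.05))   # roughly 25 degrees per sample
    print(estimate_direction(1, mic_distance_m=0.20))   # roughly 6 degrees per sample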


Depending on the embodiment, two microphones may be disposed at the head 110.


The position of the user in three-dimensional space can be determined by further including two microphones in the rear surface of the head 110.


Referring to FIG. 3, the robot 100 may include the controller 140 for controlling the overall operation, the storage unit 130 (or storage device) for storing various data, and a communication unit 190 (or communication device) for transmitting and receiving data with other apparatuses such as the server 70.


The robot 100 may include a driving unit 160 (or driving device) that rotates the body 101 and the head 110. The driving unit 160 may include a plurality of driving motors for rotating and/or moving the body 101 and the head 110.


The controller 140 controls overall operation of the robot 100 by controlling the image acquisition unit 120, the driving unit 160, the display 182, and/or the like, which constitute the robot 100.


The storage unit 130 may record various types of information required for controlling the robot 100, and may include a volatile or nonvolatile recording medium. The recording medium stores data that can be read by a microprocessor, and may include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, a Magnetic tape, a floppy disk, an optical data storage device, and/or the like.


The controller 140 may transmit an operation state of the robot 100, user input, and/or the like to the server 70, or the like through the communication unit 190.


The communication unit 190 may include at least one communication module so that the robot 100 is connected to the Internet or a certain network.


The communication unit 190 may be connected to the communication module provided in the home appliance 10 and process data transmission/reception between the robot 100 and the home appliance 10.


The storage unit 130 may store data for speech recognition (or voice recognition), and the controller 140 may process the speech input signal of the user received through the speech input unit 125 and perform a speech recognition process.


Since various known speech recognition algorithms can be used for the speech recognition process, a detailed description of the speech recognition process may be omitted in this disclosure.


The controller 140 may control the robot 100 to perform a certain operation based on a speech recognition result.


For example, when a command included in the speech signal is a command for controlling operation of a certain home appliance, the controller 140 may control to transmit a control signal based on the command included in the speech signal to a control target home appliance.


When the command included in the speech signal is a command for controlling the operation of a certain home appliance, the controller 140 may control the body 101 of the robot to rotate in the direction toward the control target home appliance.


The speech recognition process may be performed in the server 70 without being performed in the robot 100.


The controller 140 may control the communication unit 190 so that the user input speech signal is transmitted to the server 70.


Alternatively, a speech recognition may be performed by the robot 100, and a high-level speech recognition (such as natural language processing) may be performed by the server 70.


For example, when a keyword speech input including a preset keyword is received, the robot may switch from a standby state to an operating state. In this example, the robot 100 may perform only the speech recognition process up to the input of the keyword speech, and the speech recognition for the subsequent user speech input may be performed through the server 70.
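
A minimal sketch of this keyword-gated flow is shown below; detect_keyword, record_utterance, and send_to_server are hypothetical placeholders standing in for the on-device keyword spotter and the server-side recognition, not functions from the disclosure or from any particular library.

    STANDBY, OPERATING = "standby", "operating"

    # Hypothetical placeholders for the on-device keyword spotter and server interface.
    def detect_keyword(audio_frame) -> bool: return False
    def record_utterance(audio_frame) -> bytes: return b""
    def send_to_server(utterance: bytes) -> None: pass

    def handle_audio(state: str, audio_frame) -> str:
        """Only the preset keyword is recognized on the robot; later speech goes to the server."""
        if state == STANDBY:
            return OPERATING if detect_keyword(audio_frame) else STANDBY
        send_to_server(record_utterance(audio_frame))  # server performs the remaining recognition
        return STANDBY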


Depending on an embodiment, the controller 140 may compare the user image acquired through the image acquisition unit 120 with information stored in the storage unit 130 in order to determine whether the user is a registered user.


The controller 140 may control to perform a specific operation only for the speech input of the registered user.


The controller 140 may control rotation of the body 101 and/or the head 110, based on user image information acquired through the image acquisition unit 120.


Accordingly, interaction and communication between the user and the robot 100 can be easily performed.


The robot 100 may include an output unit 180 (or output device) to display certain information as an image or to output certain information as a sound.


The output unit 180 may include a display 182 for displaying, as an image, information corresponding to a user's command input, a processing result corresponding to the user's command input, an operation mode, an operation state, an error state, and/or the like.


The display 182 may be disposed at the front surface of the head 110 as described above.


The display 182 may be a touch screen having a layered structure with a touch pad. In this example, the display 182 may be used as an input device for inputting information by a user's touch as well as an output device.


The output unit 180 may include a sound output unit 181 (or sound output device) for outputting an audio signal. The sound output unit 181 may output as sound, a notification message such as a warning sound, an operation mode, an operation state, and an error state, and/or the like, information corresponding to a command input by a user, a processing result corresponding to a command input by the user, and/or the like. The sound output unit 181 may convert an electric signal from the controller 140 into an audio signal and output the signal. For this purpose, a speaker, and/or the like may be provided.


Referring to FIG. 2, the sound output unit 181 may be disposed in the left and right sides of the head 110, and may output certain information as sound.


The outer shape and structure of the robot shown in FIG. 2 are illustrative, and embodiments are not limited thereto. For example, positions and numbers of the speech input unit 125, the image acquisition unit 120, and the sound output unit 181 may vary according to design specifications. Further, the rotation direction and the angle of each component may also vary. For example, unlike the rotation direction of the robot 100 shown in FIG. 2, the entire robot 100 may be inclined or shaken in a specific direction.


The robot 100 may access the Internet and a computer with the support of a wired or wireless Internet function.


The robot 100 can perform speech and video call functions, and such a call function may be performed by using an Internet network, a mobile communication network, or the like according to Voice over Internet Protocol (VoIP).


The controller 140 may control the display 182 to display the image of a video call counterpart and an image of the user in a video call according to setting of the user, and control the sound output unit 181 to output a speech (or audio) based on the received speech signal of the video call counterpart.


A robot system according to an example embodiment may include two or more robots that perform a video call.



FIG. 4 is an example of an internal block diagram of a server according to an embodiment of the present invention.


Referring to FIG. 4, the server 70 may include a communication unit 72 (or communication device), a storage unit 73 (or storage device), a recognizer 74, and a processor 71.


The processor 71 may control overall operation of the server 70.


The server 70 may be a server operated by a manufacturer of a home appliance such as the robot 100 or a server operated by a service provider, and/or may be a kind of cloud server.


The communication unit 72 may receive various data such as state information, operation information, handling information, and/or the like from a portable terminal, a home appliance such as the robot 100, a gateway, and/or the like.


The communication unit 72 can transmit data corresponding to the received various information to the portable terminal, the home appliance such as the robot 100, the gateway, and/or the like.


The communication unit 72 may include one or more communication modules such as an Internet module, a mobile communication module, and/or the like.


The storage unit 73 may store the received information, and may have data for generating corresponding result information.


The storage unit 73 may store data used for machine learning, result data, and/or the like.


The recognizer 74 (or recognition processor) may serve as a learning device of the home appliance such as the robot 100.


The recognizer 74 may include an artificial neural network, e.g., a deep neural network (DNN) such as a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Deep Belief Network (DBN), and/or the like, and may learn the deep neural network (DNN).


After learning according to the setting, the processor 71 may control the artificial neural network structure of the home appliance such as the robot 100 to be updated to the learned artificial neural network structure.


The recognizer 74 may receive input data for recognition, recognize attributes of object, space, and emotion contained in the input data, and output the result. The communication unit 72 may transmit the recognition result to the robot 100.


The recognizer 74 may analyze and learn usage-related data of the robot 100, recognize the usage pattern, the usage environment, and/or the like, and output the result. The communication unit 72 may transmit the recognition result to the robot 100.


Accordingly, the home appliance products such as the robot 100 may receive the recognition result from the server 70, and operate by using the received recognition result.


The server 70 may receive the speech input signal uttered by the user and perform speech recognition. The server 70 may include a speech recognizer, and the speech recognizer may include an artificial neural network that is learned to perform speech recognition on input data and to output a speech recognition result.


The server 70 may include a speech recognition server for speech recognition. The speech recognition server may include a plurality of servers that share and perform a certain process during speech recognition. For example, the speech recognition server may include an automatic speech recognition (ASR) server for receiving speech data and converting the received speech data into text data, and a natural language processing (NLP) server for receiving the text data from the automatic speech recognition server and analyzing the received text data to determine a speech command. The speech recognition server may include a text to speech (TTS) server for converting the text speech recognition result outputted by the natural language processing server into speech data and transmitting the speech data to another server or the home appliance.
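
A rough sketch of this server pipeline follows; the functions and their return values are invented placeholders for the ASR, NLP, and TTS servers, not APIs defined by the disclosure.

    def asr(speech_data: bytes) -> str:
        """Automatic speech recognition server: speech data -> text (placeholder result)."""
        return "turn on the air conditioner"

    def nlp(text: str) -> dict:
        """Natural language processing server: text -> speech command (placeholder result)."""
        return {"intent": "power_on", "target": "air conditioner"}

    def tts(text: str) -> bytes:
        """Text-to-speech server: text result -> speech data (placeholder payload)."""
        return text.encode("utf-8")

    def handle_speech(speech_data: bytes) -> bytes:
        command = nlp(asr(speech_data))
        return tts(f"OK, turning {command['intent'].split('_')[1]} the {command['target']}")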


The server 70 may perform emotion recognition on the input data. The server 70 may include an emotion recognizer, and the emotion recognizer may include an artificial neural network that is learned to output an emotion recognition result by performing emotion recognition for the input data.


The server 70 may include an emotion recognition server for emotion recognition. That is, at least one of the servers 70 may be an emotion recognition server having an emotion recognizer for performing emotion recognition.



FIG. 5 is an example of an internal block diagram of an emotion recognizer according to an embodiment of the present invention. The emotion recognizer may be an emotion recognition device.


Referring to FIG. 5, an emotion recognizer 74a provided in the robot 100 or the server 70 may perform deep learning by using emotion data as input data 590 (or learning data).


The emotion recognizer 74a may include a uni-modal preprocessor 520 including a plurality of recognizers (or recognition processor) for each modal 521, 522, and 523 that are learned to recognize emotion information of the user included in the uni-modal input data, and a multi-modal recognizer 510 that is learned to merge the output data of the plurality of recognizers for each modal 521, 522, and 523 and recognize the emotion information of the user included in the merged data.
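
A structural sketch of this arrangement, assuming PyTorch and invented layer sizes (the disclosure specifies neither), might look as follows: each uni-modal recognizer emits an emotion probability distribution together with a feature vector, and the multi-modal recognizer consumes the merged feature vectors.

    import torch
    import torch.nn as nn

    NUM_CLASSES = 7  # surprise, happiness, sadness, displeasure, anger, fear, neutrality

    class UniModalRecognizer(nn.Module):
        """One recognizer per modal: emits an emotion distribution and a feature vector."""
        def __init__(self, input_dim, feature_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(input_dim, feature_dim), nn.ReLU())
            self.classifier = nn.Linear(feature_dim, NUM_CLASSES)

        def forward(self, x):
            feature = self.encoder(x)                      # "hidden state" feature point vector
            probs = torch.softmax(self.classifier(feature), dim=-1)
            return probs, feature

    class MultiModalRecognizer(nn.Module):
        """Merges the per-modal feature vectors and recognizes emotion from the merged data."""
        def __init__(self, feature_dim=64, num_modals=3):
            super().__init__()
            self.classifier = nn.Linear(feature_dim * num_modals, NUM_CLASSES)

        def forward(self, features):
            merged = torch.cat(features, dim=-1)           # merge: vector concatenation
            return torch.softmax(self.classifier(merged), dim=-1)

    # Complex emotion recognition result: three uni-modal results plus one multi-modal result.
    text_rec, speech_rec, face_rec = (UniModalRecognizer(d) for d in (300, 40, 136))
    multi_rec = MultiModalRecognizer()
    inputs = [torch.randn(1, 300), torch.randn(1, 40), torch.randn(1, 136)]
    uni_results, features = zip(*(r(x) for r, x in zip((text_rec, speech_rec, face_rec), inputs)))
    multi_result = multi_rec(list(features))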


Emotion data is emotion information data having information on the emotion of the user, and may include emotion information, such as image, speech, and bio-signal data, which can be used for emotion recognition. The input data 590 may be image data including a user's face, and more preferably, the learning data 590 may include audio data including user's speech.


Emotion is the capacity to feel in response to a stimulus, and is the nature of the mind that accepts sensory stimulation or impressions. In emotion engineering, emotion is defined as a complex feeling, such as pleasantness or discomfort, that arises as a high-level psychological experience inside the human body due to changes in the environment or physical stimulation from the outside.


Emotion may mean feelings of pleasantness, discomfort, or the like that occur with respect to stimulation, and emotion may be recognized as any one of N representative emotional states. These N representative emotional states may be named emotion classes.


For example, the emotion recognizer 74a may recognize six representative emotion classes such as surprise, happiness, sadness, displeasure, anger, and fear, and may output one of the representative emotion classes as a result of the emotion recognition, and/or may output a probability value for each of six representative emotion classes.


Alternatively, in addition to the six emotion classes of surprise, happiness, sadness, displeasure, anger, and fear, the emotion recognizer 74a may include a neutrality emotion class indicating a default emotional state in which none of the six emotions occurs, as an emotion that can be recognized and outputted by the emotion recognizer 74a.


The emotion recognizer 74a may output, as an emotion recognition result, any one of the emotion classes selected from surprise, happiness, sadness, displeasure, anger, fear, and neutrality, and/or may, as an emotion recognition result, output a probability value for each emotion class such as surprise x %, happiness x %, sadness x %, displeasure x %, anger x %, fear x %, and neutrality x %.
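
For illustration only (the numbers below are made up), such a per-class result might be represented as a probability mapping whose values sum to 100%, from which a single representative class can also be derived:

    # Made-up example of a probability-valued emotion recognition result.
    emotion_result = {
        "surprise": 0.05, "happiness": 0.43, "sadness": 0.02, "displeasure": 0.30,
        "anger": 0.08, "fear": 0.02, "neutrality": 0.10,
    }
    assert abs(sum(emotion_result.values()) - 1.0) < 1e-9        # probabilities sum to 100%
    top_class = max(emotion_result, key=emotion_result.get)      # single-class result: "happiness"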


When the emotion of the user is recognized by an artificial intelligence model that has performed deep learning on emotions, the result is outputted as one of the tagging values of the data used in the deep learning.


In a real environment, there may be many examples where the user's emotion cannot be finally expressed as a single emotion. For example, although a user may express an emotion of joy in words, an unpleasant emotion may be expressed in a facial expression. People may often express a different emotion for each modal, such as speech, image, and text.


Accordingly, when the emotion of the user is recognized and outputted as a final single emotion value, or when the different, contradictory, or similar emotions of each of the speech, image, and text are ignored, an emotion different from the feeling actually felt by the user may be recognized.


In order to recognize and manage each emotion based on all the information outwardly exposed by the user, the emotion recognizer 74a can recognize the emotion for each uni-modal of speech, image, and text, and may have a structure capable of recognizing emotion in a multi-modal manner as well.


The emotion recognizer 74a may recognize, for each uni-modal, the emotion of the user inputted at a specific time point, and may simultaneously recognize the emotion complexly as a multi-modal.


The plurality of recognizers (or recognition processors) for each modal 521, 522, and 523 may each recognize and process a single type of uni-modal input data inputted thereto, and may also be named uni-modal recognizers.


The emotion recognizer 74a may generate the plurality of uni-modal input data by separating the input data 590 for each uni-modal. A modal separator 530 may separate the input data 590 into a plurality of uni-modal input data.


The plurality of uni-modal input data may include image uni-modal input data, sound uni-modal input data, and text uni-modal input data separated from the moving image data including the user.


For example, the input data 590 may be moving image data photographed by the user, and the moving image data may include image data in which the user's face or the like is photographed and audio data including a speech uttered by a user.


The modal separator 530 may separate the content of the audio data included in the input data 590 into text uni-modal input data 531 that is acquired by converting the audio data into text data, and sound uni-modal input data 532 of the audio data itself, such as speech tone, magnitude, and pitch.


The text uni-modal input data may be data acquired by converting a speech separated from the moving image data into text. The sound uni-modal input data may be a sound source file of the audio data itself, or a file for which preprocessing, such as removing noise from the sound source file, has been completed.


The modal separator 530 may separate image uni-modal input data 533 that includes one or more facial image data from the image data contained in the input data 590.
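
A sketch of the modal separator's role is given below; the demuxing, speech-to-text, and face-frame extraction helpers are hypothetical placeholders, since the disclosure does not name particular tools for these steps.

    from dataclasses import dataclass

    @dataclass
    class UniModalInputs:
        text: str          # 531: speech content converted to text (STT)
        sound: bytes       # 532: the audio itself (tone, magnitude, pitch)
        image: list        # 533: one or more facial image frames

    # Hypothetical placeholders for a demuxer, an STT engine, and a face-frame extractor.
    def extract_audio(moving_image: bytes) -> bytes: return b""
    def speech_to_text(audio: bytes) -> str: return ""
    def extract_face_frames(moving_image: bytes) -> list: return []

    def separate_modals(moving_image: bytes) -> UniModalInputs:
        """Split moving image data including the user into three uni-modal inputs."""
        audio = extract_audio(moving_image)
        return UniModalInputs(text=speech_to_text(audio),
                              sound=audio,
                              image=extract_face_frames(moving_image))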


The separated uni-modal input data 531, 532, and 533 may be inputted to the uni-modal preprocessor 520 including a plurality of modal recognizers (or recognition processors) for each modal 521, 522, and 523 that are learned to recognize emotion information of a user based on each uni-modal input data 531, 532, and 533.


For example, the text uni-modal input data 531 may be inputted to the text emotion recognizer 521 (or text emotion recognition processor) which performs deep learning by using text as learning data.


The sound uni-modal input data 532 may be inputted, while being used as the speech learning data, to a speech emotion recognizer 522 (or speech emotion recognition processor) that performs deep learning.


The image uni-modal input data 533 including one or more face image data may be inputted, while being used as the image learning data, to a face emotion recognizer 523 (or face emotion recognition processor) that performs deep learning.


The text emotion recognizer 521 may recognize the emotion of the user by recognizing vocabularies, sentence structures, and/or the like included in the speech-to-text (STT) data converted into text. For example, as more words related to happiness are used, or as a word expressing a strong degree of happiness is recognized, the probability value for the happiness emotion class may be recognized as higher than the probability values for the other emotion classes. Alternatively, the text emotion recognizer 521 may directly output happiness, which is the emotion class corresponding to the recognized text, as the emotion recognition result.


The text emotion recognizer 521 may also output a text feature point vector along with an emotion recognition result.


The speech emotion recognizer 522 may extract the feature points of the input speech data. The speech feature points may include tone, volume, waveform, etc. of speech. The speech emotion recognizer 522 may determine the emotion of the user by detecting a tone of speech or the like.
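
As a toy illustration of such speech feature points (hand-crafted stand-ins computed with numpy, not the learned features of the speech emotion recognizer 522):

    import numpy as np

    def speech_features(samples: np.ndarray, sample_rate: int = 16000) -> dict:
        """Hand-crafted stand-ins for speech feature points such as volume and tone."""
        volume = float(np.sqrt(np.mean(samples ** 2)))           # RMS energy
        zero_crossings = np.count_nonzero(np.diff(np.signbit(samples)))
        duration = len(samples) / sample_rate
        pitch_proxy = zero_crossings / (2 * duration)            # rough tone/pitch proxy (Hz)
        return {"volume": volume, "pitch_proxy_hz": pitch_proxy}

    # Example: a 440 Hz sine wave yields a pitch proxy of about 440 Hz.
    t = np.linspace(0, 1, 16000, endpoint=False)
    print(speech_features(np.sin(2 * np.pi * 440 * t)))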


The speech emotion recognizer 522 may also output the emotion recognition result and the detected speech feature point vectors.


The face emotion recognizer 523 may recognize the facial expression of the user by detecting the facial area of the user in the input image data and recognizing facial expression landmark point information which is the feature points constituting the facial expression. The face emotion recognizer 523 may output the emotion class corresponding to the recognized facial expression or the probability value for each emotion class, and also output the facial feature point (facial expression landmark point) vector.



FIG. 6 is a diagram for explaining emotion recognition according to an embodiment of the present invention, and illustrates components of a facial expression.


Referring to FIG. 6, a facial expression landmark point may be an eyebrow 61, an eye 62, a cheek 63, a forehead 64, a nose 65, a mouth 66, a jaw 67, and/or the like.


The landmark points (61-67) in FIG. 6 are exemplary and the types and numbers may be changed.


For example, only a small number of facial expression landmark points having a strong characteristic, such as the eyebrow 61, the eye 62, and the mouth 66, may be used, or, for each user, a facial expression landmark point having a large degree of change when a specific expression is created may be used.


The face emotion recognizer 523 (or face emotion recognition processor) may recognize the facial expression based on position and shape of the facial expression landmark points (61-67).


The face emotion recognizer 523 may include the artificial neural network that has achieved deep learning with image data containing at least a part of the facial expression landmark points (61-67), thereby recognizing the facial expression of the user.


For example, when the user opens the eyes 62 and opens the mouth 66 widely, the face emotion recognizer 523 may determine the emotion of the user as happiness among the emotion classes or may output the emotion recognition result having the highest probability of happiness.
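
A toy example of turning such landmark positions into expression features is given below; the coordinates, landmark names, and ratios are invented for illustration, whereas the disclosed face emotion recognizer learns these relationships with an artificial neural network.

    import numpy as np

    def expression_features(landmarks: dict) -> dict:
        """Toy geometric features from facial expression landmark points.

        landmarks maps names such as 'mouth_left', 'mouth_right', 'mouth_top',
        'mouth_bottom', 'eye_top', 'eye_bottom' to (x, y) coordinates.
        """
        def dist(a, b):
            return float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))

        mouth_open = dist("mouth_top", "mouth_bottom") / dist("mouth_left", "mouth_right")
        eye_open = dist("eye_top", "eye_bottom")
        return {"mouth_openness": mouth_open, "eye_openness": eye_open}

    # Widely opened eyes and mouth, as in the happiness example above, yield large values.
    print(expression_features({
        "mouth_left": (0, 0), "mouth_right": (40, 0),
        "mouth_top": (20, 15), "mouth_bottom": (20, -15),
        "eye_top": (10, 60), "eye_bottom": (10, 52),
    }))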


The plurality of recognizers (or plurality of recognition processors) for each modal may include an artificial neural network corresponding to input characteristics of the uni-modal input data that are inputted respectively. A multi-modal emotion recognizer 511 may include an artificial neural network corresponding to characteristics of the input data.


For example, the face emotion recognizer 523 for performing image-based learning and recognition may include a Convolutional Neural Network (CNN), the other emotion recognizers 521 and 522 may include a deep neural network (DNN), and the multi-modal emotion recognizer 511 may include an artificial neural network of a Recurrent Neural Network (RNN) type.


The emotion recognizer for each modal 521, 522, and 523 may recognize emotion information included in the uni-modal input data 531, 532, and 533 that are inputted respectively, and output emotion recognition results. For example, the emotion recognizer for each modal 521, 522, and 523 may output the emotion class having the highest probability among a certain number of preset emotion classes as the emotion recognition result, or output the probability for emotion class as emotion recognition results.


The emotion recognizer for each modal 521, 522, and 523 may learn and recognize text, speech, and image in each deep learning structure, and derive intermediate vector value composed of feature point vector for each uni-modal.


The multi-modal recognizer 510 may perform multi-modal deep learning with the intermediate vector value of each speech, image, and text.


As described above, since the input of the multi-modal recognizer 510 is generated based on the output of the emotion recognizer for each modal 521, 522, and 523, the emotion recognizer for each modal 521, 522 and 523 may operate as a kind of preprocessor.


The emotion recognizer 74a may use a total of four deep learning models including the deep learning model of three emotion recognizers for each modal 521, 522, 523 and the deep learning model of one multi-modal recognizer 510.


The multi-modal recognizer 510 may include a merger 512 (or hidden state merger) for combining the feature point vectors outputted from the plurality of recognizers for each modal 521, 522, and 523, and a multi-modal emotion recognizer 511 that is learned to recognize emotion information of the user included in the output data of the merger 512.


The merger 512 may synchronize the output data of the plurality of recognizers for each modal 521, 522, and 523, and may combine (vector concatenation) the feature point vectors to output to the multi-modal emotion recognizer 511.
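
A sketch of this merge-and-recognize step, assuming PyTorch, an LSTM-based multi-modal emotion recognizer, and invented dimensions (none of which are fixed by the disclosure):

    import torch
    import torch.nn as nn

    # Synchronized per-modal feature vectors from the text, speech, and face recognizers
    # (dimensions invented for illustration).
    text_feat, speech_feat, face_feat = torch.randn(1, 64), torch.randn(1, 64), torch.randn(1, 64)

    # Merger 512: vector concatenation of the three hidden-state feature vectors.
    merged = torch.cat([text_feat, speech_feat, face_feat], dim=-1)   # shape (1, 192)

    # Multi-modal emotion recognizer 511: an LSTM over a sequence of merged vectors
    # (here a single time step), followed by a classification layer over 7 emotion classes.
    lstm = nn.LSTM(input_size=192, hidden_size=128, batch_first=True)
    classifier = nn.Linear(128, 7)

    output, _ = lstm(merged.unsqueeze(1))                  # (batch, seq_len=1, hidden)
    multi_modal_probs = torch.softmax(classifier(output[:, -1]), dim=-1)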


The multi-modal emotion recognizer 511 may recognize the emotion information of the user from the input data and output the emotion recognition result.


For example, the multi-modal emotion recognizer 511 may output the emotion class having the highest probability among a certain number of preset emotion classes as the emotion recognition result, and/or may output a probability value for each emotion class as the emotion recognition result.


Accordingly, the emotion recognizer 74a may output a plurality of uni-modal emotion recognition results and one multi-modal emotion recognition result.


The emotion recognizer 74a may output the plurality of uni-modal emotion recognition results and one multi-modal emotion recognition result as a level (probability) for each emotion class.


For example, the emotion recognizer 74a may output a probability value for each of the emotion classes of surprise, happiness, neutrality, sadness, displeasure, anger, and fear, and an emotion class with a higher probability value is more likely to be the recognized emotion class. The sum of the probability values of the seven emotion classes may be 100%.


The emotion recognizer 74a may output the complex emotion recognition result including the respective emotion recognition results of the plurality of recognizers for each modal 521, 522, and 523 and the emotion recognition result of the multi-modal recognizer 511.


Accordingly, the robot 100 may provide emotional interchange user experience (UX) based on emotion recognition results of three uni-modals and one multi-modal.


According to the setting, the emotion recognizer 74a may output the recognition result occupying a majority of the complex emotion recognition results and the recognition result having the highest probability value as the final recognition result. Alternatively, the controller 140 (of the robot 100) that received (or produced) a plurality of emotion recognition results may determine the final recognition result according to a certain criteria.
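
One way such a selection criterion might look is sketched below; this majority-then-highest-probability rule is an illustrative assumption, not a rule fixed by the disclosure.

    from collections import Counter

    def final_emotion(uni_modal_results: list, multi_modal_result: dict) -> str:
        """Pick a final emotion from three uni-modal results and one multi-modal result.

        Each result is a mapping from emotion class to probability (illustrative rule only).
        """
        all_results = list(uni_modal_results) + [multi_modal_result]
        top_classes = [max(r, key=r.get) for r in all_results]
        winner, votes = Counter(top_classes).most_common(1)[0]
        if votes > len(all_results) // 2:      # a majority of the results agree on one class
            return winner
        # Otherwise fall back to the single class with the highest probability overall.
        return max(((c, p) for r in all_results for c, p in r.items()), key=lambda cp: cp[1])[0]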


The emotion recognizer 74a may recognize and manage the emotion of each of the speech (speech tone, etc.), the image (facial expression, etc.), and the text (the content of talk, etc.) as a level. Accordingly, the emotional interchange user experience (UX) may be handled differently for each modal.


The emotion recognition result for each uni-modal (speech, image, text) and the multi-modal emotion recognition result may be outputted simultaneously for a single time point. Emotion can be recognized complexly from the speech, image, and text inputted at a single time point, so that an emotion contradicting the multi-modal emotion can be identified for each uni-modal and the user's emotional tendency can be determined. Accordingly, even if a negative input is received from some modal, an emotional interchange user experience (UX) corresponding to a positive input reflecting the user's real emotional state can be provided by recognizing the overall emotion.


The robot 100 may be equipped with the emotion recognizer 74a, or may communicate with the server 70 having the emotion recognizer 74a, so as to determine the emotion for each uni-modal that is unique to the user.


The emotional pattern unique to the user can be analyzed, and emotion recognition for each modal can be utilized for emotional care (healing).


Existing emotion recognition methods may have difficulty in analyzing emotion by mapping it to a single emotion in the example of contradictory emotions, in which the recognition results differ for each modal of the input data.


However, according to an example embodiment of the present invention, various real-life situations may be handled through a plurality of inputs and outputs.


In order to complement an input recognizer having low performance, the present invention may constitute a recognizer structure in which a plurality of recognizers 511, 521, 522, and 523 complement each other by a plurality of inputs and outputs in a fusion manner.


The emotion recognizer 74a may separate the speech into sound and meaning, and make a total of three inputs including image, speech (sound), and STT from image and speech inputs.


In order to achieve optimum performance for each of the three inputs, the emotion recognizer 74a may have a different artificial neural network model for each input, such as Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). For example, the image-based recognizer 523 may have a CNN structure, and the multi-modal emotion recognizer 511 may have a long-short-term memory (LSTM) structure. Thus, a neural network customized for each input characteristic can be configured.


The output of the uni-modal recognizer 521, 522, 523 for each input may be the probability value for seven emotion classes and the vector value of feature points expressing the emotion well.


Rather than simply calculating an emotion value for the three inputs by a statistical method, the multi-modal recognizer 510 may combine the feature point vectors that express the emotion well through a fully connected (joint) layer and the LSTM, so that one recognizer helps with the difficult problems of another, which helps improve performance and cover various cases in real life.


For example, even when only a speech is heard from a place where face recognition is difficult, in the emotion recognizer 74a, the speech based recognizer 521, 522 and the multi-modal emotion recognizer 511 may recognize the emotion of the user.


Since the emotion recognizer 74a can recognize the complex emotion state of the user by merging the recognition results of the image, speech, and text data with the multi-modal recognition result, emotion recognition can be achieved for various situations in real life.


The uni-modal preprocessor 520 may include uni-modal recognizers 521, 522, 523 that recognize and process one uni-modal input data inputted respectively.


Referring to FIG. 5, the uni-modal preprocessor 520 may include a text emotion recognizer 521 (or text emotion recognition processor), a speech emotion recognizer 522 (or speech emotion recognition processor), and a face emotion recognizer 523 (or face emotion recognition processor).


These uni-modal recognizers 521, 522, 523 may be previously learned and secured.



FIGS. 7 to 9 are diagrams for explaining uni-modal emotion recognition according to an embodiment of the present invention.



FIG. 7 shows an example of a uni-modal learning process of the face emotion recognizer 523.


Referring to FIG. 7(a), an artificial neural network 720 for face emotion recognition may perform deep learning, based on an image-based input data 710.


The image-based input data 710 may be video data, and the artificial neural network 720 may learn by video data or may perform learning by a plurality of image data extracted from the video data.



FIG. 7(a) shows an example of learning by extracting five representative images 715, but embodiments are not limited thereto.


The artificial neural network 720 (for the face emotion recognizer 523) may be a Convolutional Neural Network (CNN) or the like which is frequently used for image-based learning and recognition.


As described above, the CNN artificial neural network 720 having an advantage in image processing may be learned to recognize emotion by receiving input data including a user's face.


Referring to FIG. 7(b), when a face image 725 is inputted, the artificial neural network 720 (for the face emotion recognizer 523) may extract a feature point such as a facial expression landmark point of the inputted face image 725, and may recognize emotion on the user's face.


The emotion recognition result 730 outputted by the artificial neural network 720 (for the face emotion recognizer 523) may be any one emotion class selected from among surprise, happiness, sadness, displeasure, anger, fear, and neutrality. Alternatively, the emotion recognition result 730 may include probability value for each emotion class such as surprise x %, happiness x %, sadness x %, displeasure x %, anger x %, fear x %, and neutrality x %.


As described with reference to FIG. 5, since the input of the multi-modal recognizer 510 is generated based on the output of the emotion recognizers for each modal 521, 522, and 523, the emotion recognizers for each modal 521, 522, and 523 may serve as preprocessors.


Referring to FIG. 7(c), the face emotion recognizer 523 may output not only the emotion recognition result 730, but also a hidden state 740, which is a feature point vector extracted based on the inputted face image.



FIG. 8 illustrates a uni-modal learning process of the text emotion recognizer 521.


Referring to FIG. 8(a), an artificial neural network 820 for text emotion recognition may perform deep learning based on a text-based input data 810.


The text-based input data 810 may be STT data that is acquired by converting speech uttered by the user into text, and the artificial neural network 820 may perform learning by using STT data or other text data.


The artificial neural network 820 (for the text emotion recognizer 521) may be one of the deep neural networks (DNN) that perform deep learning.


Referring to FIG. 8(b), when the text data 825 is inputted, the artificial neural network 820 (for the text emotion recognizer 521) may extract a feature point of the inputted text data 825, and recognize the emotion expressed in the text.


The emotion recognition result 830 outputted by the artificial neural network 820 (for the text emotion recognizer 521) may be any one emotion class selected from among surprise, happiness, sadness, displeasure, anger, fear, and neutrality. Alternatively, the emotion recognition result 830 may include probability value for each emotion class such as surprise x %, happiness x %, sadness x %, displeasure x %, anger x %, fear x %, and neutrality x %.


The text emotion recognizer 521 may also serve as a preprocessor for the multi-modal recognizer 510. Referring to FIG. 8(c), the text emotion recognizer 521 may output not only an emotion recognition result 830, but also a hidden state 840, which is the feature point vector extracted based on the inputted text data.



FIG. 9 shows an example of a uni-modal learning process of the speech emotion recognizer 522.


Referring to FIG. 9(a), an artificial neural network 920 for emotion recognition may perform deep learning based on a speech-based input data 910.


The speech-based input data 910 may be data including sound of a speech uttered by a user, or may be a sound file itself or a file in which a preprocess, such as noise removing from the sound file, has been completed.


The artificial neural network 920 may perform learning to recognize emotion from the speech-based input data 910.


The artificial neural network 920 (for the speech emotion recognizer 522) may be one of the deep neural networks (DNN) that perform deep learning.


Referring to FIG. 9(b), when sound data 925 is inputted, the artificial neural network 920 (for the speech emotion recognizer 522) may extract a feature point of the inputted sound data 925, and may recognize the emotion expressed in the sound.


An emotion recognition result 930 outputted by the artificial neural network 920 (for the speech emotion recognizer 522) may be any one emotion class selected from among surprise, happiness, sadness, displeasure, anger, fear, and neutrality. Alternatively, the emotion recognition result 930 may include a probability value for each emotion class such as surprise x %, happiness x %, sadness x %, displeasure x %, anger x %, fear x %, and neutrality x %.


The speech emotion recognizer 522 may also serve as a preprocessor of the multi-modal recognizer 510. Referring to FIG. 9(c), the speech emotion recognizer 522 may output not only the emotion recognition result 930, but also output a hidden state 940, which is a feature point vector extracted based on the inputted sound data.



FIG. 10 is a diagram for explaining multi-modal emotion recognition according to an embodiment of the present invention. FIG. 11 is a diagram illustrating emotion recognition result according to an embodiment of the present invention. Other embodiments and configurations may also be provided.


Referring to FIG. 10, the emotion recognizer 74a provided in the robot 100 or the server 70 may receive a text uni-modal input data 1011 including contents of a speech uttered by the user, a sound uni-modal input data 1012 including sound of the speech uttered by the user, and an image uni-modal input data 1013 including the face image of the user.


The emotion recognizer 74a may receive the moving image data (including the user), and the modal separator 530 may divide the content of the audio data included in the input data into the text uni-modal input data 1011 converted into text data, and the sound uni-modal input data 1012 of the audio data such as sound tone, magnitude, pitch, and/or the like, and may extract the image uni-modal input data 1013 including the user's face image from the moving image data.


Preprocessing of the uni-modal input data 1011, 1012, and 1013 may be performed.


For example, in the preprocessing operations 1051, 1052, and 1053, a process of removing noise included in the text, speech, and image uni-modal input data 1011, 1012, and 1013, or of extracting and converting the data to be suitable for emotion recognition, may be performed.


When the preprocess is completed, the uni-modal recognizers 521, 522, 523 may recognize the emotion from the uni-modal input data 1011, 1012, 1013 inputted respectively, and may output the emotion recognition result.


The uni-modal recognizers 521, 522, and 523 may output the feature point vector extracted based on the uni-modal input data 1011, 1012, and 1013 inputted respectively to the multi-modal recognizer 510.


The merger 512 of the multi-modal recognizer 510 may combine the feature point vectors (vector concatenation) and output the result to the multi-modal emotion recognizer 511 (or multi-modal engine).


The multi-modal emotion recognizer 511 may perform emotion recognition with respect to the multi-modal input data based on the three uni-modal input data 1011, 1012, and 1013.


The multi-modal emotion recognizer 511 may include an artificial neural network that has previously been trained by deep learning on multi-modal input data.


For example, the multi-modal emotion recognizer 511 may include a recurrent neural network having a recurrent structure in which the current hidden state is updated based on the previous hidden state. Since related data are inputted to the multi-modal recognizer, using a recurrent neural network may be advantageous in comparison with other artificial neural networks whose inputs and outputs are independent. In particular, the multi-modal emotion recognizer 511 may include a long short-term memory (LSTM), which improves the performance of the recurrent neural network, as sketched below.
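As a rough sketch, assuming PyTorch and arbitrary dimensions, the vector concatenation of the merger 512 followed by an LSTM and a fully connected classifier could look like the following; this is not the actual network of the embodiment:

```python
import torch
import torch.nn as nn

class MultiModalEmotionRecognizer(nn.Module):
    """Illustrative fusion: concatenate the three uni-modal feature point
    vectors, pass them through an LSTM, and classify with a fully
    connected layer. All dimensions are assumptions."""

    def __init__(self, text_dim: int = 128, sound_dim: int = 128,
                 image_dim: int = 128, hidden_dim: int = 256,
                 n_classes: int = 7):
        super().__init__()
        self.lstm = nn.LSTM(text_dim + sound_dim + image_dim,
                            hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, text_vec, sound_vec, image_vec):
        # Vector concatenation (cf. merger 512) -> (batch, fused_dim)
        fused = torch.cat([text_vec, sound_vec, image_vec], dim=-1)
        # Treat each fused vector as a length-1 sequence for the LSTM.
        _, (h_n, _) = self.lstm(fused.unsqueeze(1))
        return self.fc(h_n.squeeze(0)).softmax(dim=-1)
```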


The emotion recognizer 74a provided in the robot 100 or the server 70 may have a plurality of deep-learning structures.


In the emotion recognizer 74a, the three uni-modal recognizers 521, 522, and 523 and the one multi-modal emotion recognizer 511 may form a hierarchical neural network structure. The multi-modal emotion recognizer 511 may also include an artificial neural network other than the recurrent neural network, so as to constitute a hierarchical neural network on its own.


The emotion recognizer 74a may output an emotion recognition result 1090 (or emotion).


The emotion recognizer 74a may recognize the emotion of the user as a level (probability) for each of the seven emotion classes, for each uni-modal (speech/image/text) and for the multi-modal.


The emotion recognizer 74a may recognize emotion for each of four types of modal, namely the inputted speech, image, and text of the user and the combination of speech+image+text, and thus can help in accurate interaction with the user.


The output for each of the three uni-modal inputs (speech/image/text) may be an emotion recognition value for that input together with a feature point vector that expresses the emotion well.


The feature point vectors that express emotion well may be combined in a fusion scheme using a fully connected layer and a long short-term memory (LSTM). Thus, the three uni-modal inputs may be combined to recognize emotion.



FIG. 11 shows an example of recognized emotion result values.


Referring to FIG. 11, the uni-modal emotion recognition result for each uni-modal input data of speech, image, and text may be outputted as displeasure, neutrality, and happiness, respectively.


The multi-modal emotion recognition result obtained by performing emotion recognition after combining feature point vectors of speech, image, and text may be outputted as a probability value for each emotion class such as displeasure 50% and happiness 43%.


More preferably, the uni-modal emotion recognition result may also be outputted as the probability value for each emotion class.


The emotion recognizer 74a may improve recognition performance by using information on not only image and sound but also text.


Even if specific uni-modal input data is insufficient, the recognizers 511, 521, 522, and 523 may operate complementarily by recognizing emotion through the other uni-modal input data and the multi-modal input data.


Various emotions can be recognized by a combination of four types of information in total.



FIG. 11 shows the result of recognizing the inputs 1011, 1012, and 1013 shown in FIG. 10, namely an emotion recognition result for a case where the face of the user is smiling but the user speaks negative vocabulary.


As described above, human emotion is difficult to define as a single emotion, and contradictory or complex emotions, in which facial expression and words contradict each other, may occur frequently in a real-life environment.


In some research approaches, the emotion recognition result may be derived as only a single emotion. However, in a contradictory emotional state in which facial expression and words contradict each other, as shown in FIG. 11, mapping to only a single emotion may increase the possibility of false recognition.


However, in the emotion recognition method according to an example embodiment of the present invention, various combinations can be achieved through a total of four emotion probability outputs, including the outputs for the three uni-modal inputs and the finally combined output value.


The emotion recognizer 74a may recognize a complex emotion state of the user by complementarily using and integrating image, speech, and text data. Accordingly, the emotion recognizer 74a may recognize the emotion in various situations in real life.


Since the emotion recognizer 74a according to example embodiments of the present invention may determine a complex emotion state, there is a high possibility that the emotion recognizer 74a can be utilized in a psychotherapy robot for the user. For example, even if negative emotion is recognized from the user, the robot 100 including the emotion recognizer 74a may provide an emotion care (therapy) service with positive emotion expression.



FIG. 12 is a diagram for explaining an emotion recognition post-processing according to an example embodiment of the present invention. FIG. 13 is a diagram for explaining an emotional interchange user experience of a robot according to an example embodiment of the present invention. Other embodiments and configurations may also be provided.


Referring to FIG. 12, when a complex emotion recognition result 1210 includes two or more recognition results that do not match, the emotion recognizer 74a may include a post-processor 1220 for outputting a final emotion recognition result according to a certain criteria.


The robot 100 according to an example embodiment of the present invention may include the post-processor 1220.


The robot 100 may include the emotion recognizer 74a including the post-processor 1220 or may include only the post-processor 1220 without including the emotion recognizer 74a.


According to the setting, when the complex emotion recognition result 1210 includes two or more recognition results that do not match, the post-processor 1220 may output, as the final emotion recognition result, the emotion recognition result that matches the emotion recognition result of the multi-modal recognizer 511 from among the emotion recognition results of the recognizers for each modal 521, 522, and 523.


In the example of FIG. 12, since the output ‘displeasure’ of the text emotion recognizer matches the ‘displeasure’ having the highest probability value among the emotion recognition result of the multi-modal recognizer 511, the post-processor 1220 may output the ‘displeasure’ as the final emotion recognition result.


Alternatively, when the complex emotion recognition result 1210 includes two or more recognition results that do not match, the post-processor 1220 may output the contradictory emotion including two emotion classes among the complex emotion recognition result 1210 as the final emotion recognition result.


In this example, the post-processor 1220 may select two emotion classes having the highest probability among the emotion recognition results of the multi-modal recognizer 511 as the above mentioned contradictory emotion.


In the example of FIG. 12, the contradictory emotion including the ‘displeasure’ and ‘happiness’ emotion classes may be outputted as the final emotion recognition result.
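The two criteria described above could be prototyped as in the following Python sketch; the modality-to-result mapping and probability values are illustrative, and the fallback when no uni-modal result matches is an assumption of this sketch:

```python
def postprocess_1220(uni_modal_results: dict, multi_modal_probs: dict,
                     mode: str = "match"):
    """Select a final emotion recognition result from a complex result.

    uni_modal_results: e.g. {"speech": ..., "image": ..., "text": ...}
    multi_modal_probs: probability per emotion class from the multi-modal recognizer.
    mode "match": return the result that matches the multi-modal top class.
    mode "contradictory": return the two highest-probability classes together.
    """
    ranked = sorted(multi_modal_probs, key=multi_modal_probs.get, reverse=True)
    if mode == "match":
        top = ranked[0]
        for modal, emotion in uni_modal_results.items():
            if emotion == top:
                return emotion
        return top  # fallback: multi-modal top class (assumption of this sketch)
    # "contradictory" mode: the two highest-probability multi-modal classes.
    return tuple(ranked[:2])

# Hypothetical values loosely following FIG. 11.
uni = {"speech": "displeasure", "image": "neutrality", "text": "happiness"}
multi = {"displeasure": 0.50, "happiness": 0.43, "neutrality": 0.07}
print(postprocess_1220(uni, multi, "match"))          # -> "displeasure"
print(postprocess_1220(uni, multi, "contradictory"))  # -> ("displeasure", "happiness")
```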


The robot 100 according to an example embodiment of the present invention may include the emotion recognizer 74a to recognize emotion of the user. Alternatively, the robot 100 may communicate with the server 70 having the emotion recognizer 74a to receive the emotion recognition result of the user.


The robot 100 may include the communication unit 190 for transmitting moving image data including the user to the server 70 and for receiving, from the server 70, a complex emotion recognition result including a plurality of emotion recognition results of the user, and the sound output unit 181 for uttering a question for checking the emotion of the user by combining two or more recognition results that do not match, when the complex emotion recognition result includes such non-matching recognition results.


As in various examples of FIG. 13, if there is an emotion recognition result corresponding to the contradictory emotion including two or more contradictory recognition results, the robot 100 may ask (or utter) a question about the contradictory emotion to the user.


For example, the robot 100 may combine the two emotion classes and ask (or utter) a question for checking the emotion of the user.
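Purely as an illustration, combining two emotion classes into a checking question could be as simple as the following; the actual wording of the utterance is a design choice of the robot and is not specified here:

```python
def confirmation_question(class_a: str, class_b: str) -> str:
    """Combine two contradictory emotion classes into one checking question."""
    return (f"You seem {class_a}, but also a little {class_b}. "
            f"How are you feeling right now?")

print(confirmation_question("happy", "displeased"))
```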


Additionally, when the complex emotion recognition result received from the server 70 includes two or more recognition results that do not match, the robot 100 may include the post-processor 1220 for outputting a final emotion recognition result according to a certain criteria.


When the complex emotion recognition result includes two or more recognition results that do not match, the post-processor 1220 may output the contradictory emotion including two emotion classes among the complex emotion recognition result, as the final emotion recognition result.


The post-processor 1220 may select two emotion classes having the highest probability among the complex emotion recognition result, as the contradictory emotion.


The user may interact with the robot 100 while answering the question of the robot 100, and the satisfaction of the user with respect to the robot 100 that understands and interacts with his/her emotion may be increased.


Additionally, even if a negative emotion is recognized from the user, the robot 100 may provide an emotion care (therapy) service through a positive emotion expression.


According to an example embodiment, the user may perform a video call using the robot 100, and the emotion recognizer 74a may recognize emotion of the video call counterpart based on the received video call data.


That is, the emotion recognizer 74a may receive the video call data of the video call counterpart and may output the emotion recognition result of the video call counterpart.


Emotion recognition may be performed in the server 70 having the emotion recognizer 74a. For example, the user may perform a video call using the robot 100, and the server 70 may receive the video call data from the robot 100 and transmit the emotion recognition result of the user included in the received video call data.



FIG. 14 is a flowchart illustrating an operation method of an emotion recognizer according to an example embodiment of the present invention. Other embodiments and operations may also be provided.


Referring to FIG. 14, when data is inputted (S1410), the emotion recognizer 74 according to the embodiment of the present invention may generate a plurality of uni-modal input data based on the input data (S1420). For example, the modal separator 530 may separate input data to generate a plurality of uni-modal input data (S1420).


Each of the recognizers for each modal 521, 522, 523 may recognize the emotion of the user from a corresponding uni-modal input data (S1430).


The recognizers for each modal 521, 522, 523 may output the emotion recognition result and the feature point vector of the uni-modal input data.


The feature point vectors outputted by the recognizer for each modal 521, 522, 523 may be merged in the merger 512, and the multi-modal emotion recognizer 511 may perform emotion recognition with respect to the merged multi-modal data (S1450).


The emotion recognizer 74 according to an example embodiment may output a complex emotion recognition result including emotion recognition results of the recognizer for each modal 521, 522, 523 and an emotion recognition result of the multi-modal emotion recognizer 511.
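The overall flow of FIG. 14 could be outlined as in the following Python sketch, with the components passed in as callables whose internals correspond to the embodiments described above; the dictionary-based interfaces are assumptions of this sketch:

```python
from typing import Callable, Dict, List, Tuple

def recognize_emotion(
    input_data,
    modal_separator: Callable[[object], Dict[str, object]],
    uni_modal_recognizers: Dict[str, Callable[[object], Tuple[dict, list]]],
    merger: Callable[[List[list]], object],
    multi_modal_recognizer: Callable[[object], dict],
) -> dict:
    """Outline of the FIG. 14 flow (S1410 to S1450)."""
    # S1420: separate the input data into uni-modal input data.
    uni_inputs = modal_separator(input_data)
    # S1430: recognize emotion per modal; each recognizer returns
    # (emotion probabilities, feature point vector).
    uni_results, feature_vectors = {}, []
    for modal, recognizer in uni_modal_recognizers.items():
        probs, features = recognizer(uni_inputs[modal])
        uni_results[modal] = probs
        feature_vectors.append(features)
    # S1450: merge the feature vectors and recognize the multi-modal emotion.
    multi_result = multi_modal_recognizer(merger(feature_vectors))
    # Complex emotion recognition result: all four outputs together.
    return {"uni_modal": uni_results, "multi_modal": multi_result}
```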


As described above, the emotion recognizer 74 according to the example embodiment may constitute four deep learning-based emotion recognition models (uni-modal and multi-modal), thereby recognizing, in each uni-modal, the emotion of the user inputted at a single point of time, while recognizing the emotion in a complex manner in the multi-modal.


The emotion recognizer 74 may output the uni-modal emotion recognition result and the multi-modal emotion recognition result by the level (probability) for each emotion class.


Accordingly, emotion feedback specific to each of speech, image, and text may be achieved by recognizing emotion from each modal individually, and the multi-modal emotion recognition result that comprehensively synthesizes speech, image, and text may also be utilized.


According to at least one embodiment, a user emotion may be recognized and an emotion-based service may be provided.


According to at least one embodiment, the emotion of the user can be more accurately recognized by using artificial intelligence learned by deep learning.


According to at least one embodiment, a plurality of emotion recognition results may be outputted, and the emotion recognition results may be combined and used in various manners.


According to at least one embodiment, a conversation with the user may be conducted based on a plurality of emotion recognition results, so that the user's feeling can be shared and the emotion of the user can be recognized more accurately.


According to at least one embodiment, the emotion of the user can be recognized more accurately by performing the unimodal and multi-modal emotion recognition separately and complementarily using a plurality of emotion recognition results.


According to at least one embodiment, it is possible to recognize a complex emotion, thereby improving the satisfaction and convenience of the user.


The emotion recognizer, and the robot and the robot system including the emotion recognizer, are not limited to the configuration and the method of the above-described embodiments; rather, the embodiments may be variously modified in such a manner that all or some of the embodiments are selectively combined.


The method of operating the robot and the robot system according to an example embodiment of the present invention can be implemented as a code readable by a processor on a recording medium readable by the processor. The processor-readable recording medium includes all kinds of recording apparatuses in which data that can be read by the processor is stored. Examples of the recording medium that can be read by the processor include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage apparatus, and/or the like, and the method may also be implemented in the form of a carrier wave such as transmission over the Internet. In addition, the processor-readable recording medium may be distributed over network-connected computer systems so that code readable by the processor can be stored and executed in a distributed fashion.


Embodiments have been made in view of the above problems, and provide an emotion recognizer capable of recognizing user emotion, and a robot and a server including the same.


Embodiments may provide an emotion recognition method that can more accurately recognize a user's emotion by using artificial intelligence learned by deep learning.


Embodiments may provide an emotion recognizer capable of recognizing user emotion and providing an emotion based service, and a robot and a server including the same.


Embodiments may provide an emotion recognizer capable of outputting a plurality of emotion recognition results, and combining and using the emotion recognition results in various manners, and a robot and a server including the same.


Embodiments may provide an emotion recognizer capable of interacting with a user by communicating with a user based on a plurality of emotion recognition results, and a robot and a server including the same.


Embodiments may provide an emotion recognizer capable of performing uni-modal and multi-modal emotion recognition individually and of using a plurality of recognition results complementarily, and a robot and a server including the emotion recognizer.


Embodiments may provide an emotion recognizer capable of recognizing complex emotion, and a robot and a server including the same.


In order to achieve the above and other objects, an emotion recognizer, a robot including the same, and a server including the same according to an aspect of the present invention can acquire data related to the user, recognize emotion information based on the acquired data related to the user, and provide an emotion-based service.


In order to achieve the above or other objects, an emotion recognizer according to an aspect of the present invention may be provided in a server or a robot.


In order to achieve the above or other objects, an emotion recognizer according to an aspect of the present invention is learned to recognize emotion information by a plurality of unimodal inputs and a multimodal input based on the plurality of unimodal inputs, and outputs the complex emotion recognition result including the emotion recognition result for each of the plurality of unimodal inputs and the emotion recognition result for the multimodal input, thereby recognizing the user's emotion more accurately.


In order to achieve the above or other objects, an emotion recognizer according to an aspect of the present invention may further include a modal separator for separating input data by each uni-modal to generate the plurality of uni-modal input data, thereby generating a plurality of necessary input data from the input data.


The plurality of uni-modal input data may include image uni-modal input data, speech uni-modal input data, and text uni-modal input data that are separated from moving image data including the user, and the text uni-modal input data may be data acquired by converting a speech separated from the moving image data into text.


The plurality of recognizers for each modal may include an artificial neural network corresponding to input characteristic of uni-modal input data inputted respectively, thereby enhancing the accuracy of individual recognition results. In addition, the multimodal recognizer may include recurrent neural networks.


The multi-modal recognizer may include a merger for combining feature point vectors outputted by the plurality of recognizers for each modal, and a multi-modal emotion recognizer learned to recognize the emotion information of the user contained in output data of the merger.


The emotion recognition result of each of the plurality of recognizers for each modal and the emotion recognition result of multimodal recognizer may include a certain number of probabilities for each of preset emotion classes.


In order to achieve the above or other objects, an emotion recognizer or a robot according to an aspect of the present invention may further include a post-processor for outputting a final emotion recognition result according to a certain criteria, when the complex emotion recognition result includes two or more recognition results that do not match.


The post-processor outputs an emotion recognition result that matches the emotion recognition result of the multi-modal recognizer among the emotion recognition results of the recognizers for each modal, as the final emotion recognition result, when the complex emotion recognition result includes two or more recognition results that do not match.


The post-processor may output a contradictory emotion including two emotion classes among the complex emotion recognition result, as the final emotion recognition result, when the complex emotion recognition result includes two or more recognition results that do not match. In this case, the post-processor may select two emotion classes having a highest probability among the emotion recognition result of the multi-modal recognizer as the contradictory emotion.


In order to achieve the above or other objects, a robot according to an aspect of the present invention may include the above-described emotion recognizer.


In addition, in order to achieve the above and other objects, a robot according to an aspect of the present invention can recognize emotion of a video call counterpart.


In order to achieve the above or other objects, a robot according to an aspect of the present invention may include a communication unit configured to transmit moving image data including a user to a server, and receive a complex emotion recognition result including a plurality of emotion recognition results of the user from the server; and a sound output unit configured to utter a question for checking an emotion of user by combining two or more recognition results that do not match, when the complex emotion recognition result includes the two or more recognition results that do not match.


In the example where the emotion recognizer outputs the contradictory emotion including two emotion classes among the complex emotion recognition result as a final emotion recognition result, in order to achieve the above or other objects, the robot according to an aspect of the present invention can speak with the user by asking a question for checking the emotion of the user by combining the classes.


In order to achieve the above or other objects, a server according to an aspect of the present invention may include the above described emotion recognizer.


The server can receive the video call data from the robot and transmit the emotion recognition result of the user contained in the received video call data.


It will be understood that when an element or layer is referred to as being “on” another element or layer, the element or layer can be directly on another element or layer or intervening elements or layers. In contrast, when an element is referred to as being “directly on” another element or layer, there are no intervening elements or layers present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.


It will be understood that, although the terms first, second, third, etc., may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another region, layer or section. Thus, a first element, component, region, layer or section could be termed a second element, component, region, layer or section without departing from the teachings of the present invention.


Spatially relative terms, such as “lower”, “upper” and/or the like, may be used herein for ease of description to describe the relationship of one element or feature to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation, in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “lower” relative to other elements or features would then be oriented “upper” relative to the other elements or features. Thus, the exemplary term “lower” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


Embodiments of the disclosure are described herein with reference to cross-section illustrations that are schematic illustrations of idealized embodiments (and intermediate structures) of the disclosure. As such, variations from the shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments of the disclosure should not be construed as limited to the particular shapes of regions illustrated herein but are to include deviations in shapes that result, for example, from manufacturing.


Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments.


Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.

Claims
  • 1. An emotion recognition device comprising: an uni-modal preprocessor configured to include a plurality of recognition processors each corresponding to a different one of a plurality of modals, and learned to recognize emotion information of a user contained in uni-modal input data; and a multi-modal recognizer configured to merge output data from each of the plurality of recognition processors, and to be learned to recognize the emotion information of the user contained in the merged data, wherein the emotion recognition device is to output a complex emotion recognition result that includes a plurality of emotion recognition results each corresponding to a different one of the plurality of recognition processors and an emotion recognition result of the multi-modal recognizer.
  • 2. The emotion recognition device of claim 1, further comprising a modal separator for separating input data into a plurality of uni-modal input data each being uni-modal, and to provide the plurality of uni-modal input data to the uni-modal preprocessor.
  • 3. The emotion recognition device of claim 2, wherein the plurality of uni-modal input data comprises image uni-modal input data, speech uni-modal input data, and text uni-modal input data that are separated from moving image data that includes the user.
  • 4. The emotion recognition device of claim 3, wherein the text uni-modal input data is data obtained by converting a speech, separated from the moving image data, into text.
  • 5. The emotion recognition device of claim 1, wherein the plurality of recognition processors each separately include an artificial neural network corresponding to input characteristic of uni-modal input data inputted respectively.
  • 6. The emotion recognition device of claim 1, wherein the multi-modal recognizer comprises: a merger for combining feature point vectors separately outputted by the plurality of recognition processors based on the corresponding modal; and a multi-modal emotion recognizer learned to recognize the emotion information of the user based on output data of the merger.
  • 7. The emotion recognition device of claim 1, wherein the emotion recognition result of each separate one of the plurality of recognition processors includes a probability for each of preset emotion classes.
  • 8. The emotion recognition device of claim 1, further comprising a post-processor for outputting a final emotion recognition result according to a certain criteria, when the complex emotion recognition result is based on two or more of the emotion recognition results that do not match.
  • 9. The emotion recognition device of claim 8, wherein the post-processor outputs, as the final emotion recognition result, an emotion recognition result that matches the emotion recognition result of the multi-modal recognizer from among the emotion recognition results of the recognition processors, when the complex emotion recognition result is based on two or more of the emotion recognition results that do not match.
  • 10. The emotion recognition device of claim 8, wherein the post-processor outputs, as the final emotion recognition result, a contradictory emotion that includes two emotion classes among the complex emotion recognition result, when the complex emotion recognition result is based on two or more of the emotion recognition results that do not match.
  • 11. The emotion recognition device of claim 10, wherein the post-processor selects, as the contradictory emotion, two emotion classes having a highest probability among the emotion recognition result of the multi-modal recognizer.
  • 12. A robot comprising: a communication device configured to transmit, to a server, moving image data including a user, the server including an emotion recognition device that is learned to recognize emotion information of the user included in input data, and the communication device to receive, from the server, a complex emotion recognition result that includes a plurality of emotion recognition results of the user; and an output device configured to output an audio or visual display for determining an emotion of the user based on two or more of the emotion recognition results that do not match, when the complex emotion recognition result is based on the two or more of the emotion recognition results that do not match.
  • 13. The robot of claim 12, further comprising a post-processor for outputting a final emotion recognition result according to a certain criteria, when the received complex emotion recognition result is based on the two or more of the emotion recognition results that do not match.
  • 14. The robot of claim 13, wherein the post-processor outputs a contradictory emotion that includes two emotion classes among the complex emotion recognition result, as the final emotion recognition result, when the complex emotion recognition result is based on the two or more of the emotion recognition results that do not match.
  • 15. The robot of claim 14, wherein the post-processor selects, as the contradictory emotion, two emotion classes having a highest probability among the complex emotion recognition result.
  • 16. The robot of claim 12, wherein the server comprises: an uni-modal preprocessor configured to include a plurality of recognition processors each corresponding to a different one of a plurality of modals, and learned to recognize emotion information of a user contained in uni-modal input data; and a multi-modal recognizer configured to merge output data from each of the plurality of recognition processors, and to be learned to recognize the emotion information of the user contained in the merged data, wherein the server transmits, to the robot, a plurality of emotion recognition results each corresponding to a different one of the plurality of recognition processors and a complex emotion recognition result based on the emotion recognition result of the multi-modal recognizer.
  • 17. A server comprising: a communication device configured to receive, from a robot, moving image data including a user, and transmit, to the robot, a complex emotion recognition result that includes a plurality of emotion recognition results; and an emotion recognition device configured to include an uni-modal preprocessor and a multi-modal recognizer, the uni-modal preprocessor configured to include a plurality of recognition processors each corresponding to a different one of a plurality of modals, and learned to recognize emotion information of a user contained in uni-modal input data, and the multi-modal recognizer configured to merge output data from each of the plurality of recognition processors, and be learned to recognize the emotion information of the user contained in the merged data, and to output a complex emotion recognition result that includes a plurality of emotion recognition results each corresponding to a different one of the plurality of recognition processors and an emotion recognition result of the multi-modal recognizer.
  • 18. The server of claim 17, wherein, through the communication device, video call data is received from the robot and emotion recognition result of the user included in the received video call data is transmitted to the robot.
  • 19. The server of claim 17, wherein the emotion recognition device includes a modal separator for separating input data into a plurality of uni-modal input data each being uni-modal, and to provide the plurality of uni-modal input data to the uni-modal preprocessor.
  • 20. The server of claim 17, wherein the emotion recognition device includes a post-processor for outputting a final emotion recognition result according to a certain criteria, when the complex emotion recognition result is based on two or more of the emotion recognition results that do not match.
Priority Claims (1)
Number Date Country Kind
10-2018-0110500 Sep 2018 KR national