The present technology particularly relates to an information processing device, an information processing terminal, an information processing method, and a program capable of performing conversation with realistic feeling.
A so-called remote conference, in which a plurality of remote participants holds a conference using devices such as PCs, is performed. By starting a web browser or a dedicated application installed in the PC and accessing an access destination designated by a URL allocated to each conference, a user who knows the URL can participate in the conference as a participant.
The participant's voice collected by the microphone is transmitted via the server to a device used by another participant and output from a headphone or a speaker. Furthermore, a video showing the participant imaged by the camera is transmitted via the server to a device used by another participant and displayed on a display of the device.
As a result, each participant can have a conversation while looking at the faces of the other participants.
However, it is difficult to hear the voices when a plurality of participants speaks at the same time.
In addition, since the voice of the participant is only output in a planar manner, it is not possible to feel a sound image or the like, and it is difficult to obtain the sense that the participant exists from the voice.
The present technology has been made in view of such a situation, and an object thereof is to enable conversation with realistic feeling.
An information processing device according to one aspect of the present technology includes: a storage unit that stores HRTF data corresponding to a plurality of positions based on a listening position; and a sound image localization processing unit that performs a sound image localization process based on the HRTF data corresponding to a position, in a virtual space, of a participant participating in a conversation via a network and sound data of the participant.
An information processing terminal according to one aspect of the present technology includes: a sound reception unit that receives sound data of a participant who is an utterer obtained by performing a sound image localization process, the sound data being transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions based on a listening position and performs the sound image localization process based on the HRTF data corresponding to a position, in a virtual space, of the participant participating in a conversation via a network and sound data of the participant, and outputs a voice of the utterer.
In one aspect of this technology, HRTF data corresponding to a plurality of positions based on a listening position are stored; and a sound image localization process is performed based on the HRTF data corresponding to a position, in a virtual space, of a participant participating in a conversation via a network and sound data of the participant.
In one aspect of this technology, sound data of a participant who is an utterer obtained by performing a sound image localization process is received, the sound data being transmitted from an information processing device that stores HRTF data corresponding to a plurality of positions based on a listening position and performs the sound image localization process based on the HRTF data corresponding to a position, in a virtual space, of the participant participating in a conversation via a network and sound data of the participant, and a voice of the utterer is output.
Hereinafter, modes for carrying out the present technology will be described. The description will be given in the following order.
<<Configuration of Tele-communication System>>
The Tele-communication system in
Another device such as a smartphone or a tablet terminal including a sound input device such as a microphone and a sound output device such as a headphone or a speaker may be used as the client terminal. In a case where it is not necessary to distinguish between the client terminals 2A to 2D, the client terminal is appropriately referred to as a client terminal 2.
The users A to D are users who participate in the same conference. Note that the number of users participating in the conference is not limited to four.
The communication management server 1 manages a conference held by a plurality of users who have a conversation online. The communication management server 1 is an information processing device that controls transmission and reception of voices between the client terminals 2 and manages a so-called remote conference.
For example, as indicated by an arrow A1 in the upper part of
The communication management server 1 transmits the sound data of the user A to each of the client terminals 2B to 2D as indicated by arrows A11 to A13 in the lower part of
Similarly, in a case where another user has made an utterance, the sound data transmitted from the client terminal 2 used by the uttering user is transmitted to the client terminal 2 used by the listening user via the communication management server 1.
The communication management server 1 manages the position of each user in the virtual space. The virtual space is, for example, a three-dimensional space virtually set as a place where a conference is held. The position in the virtual space is represented by three-dimensional coordinates.
In the example of
During the conference, on the screen of the client terminal 2 used by each user, as illustrated in
In the example of
For example, the position of each user is automatically set by the communication management server 1 when the user participates in the conference. The position in the virtual space may be set by the user himself/herself by moving the participant icon on the screen of
The communication management server 1 has HRTF data that is data of a head-related transfer function (HRTF) representing sound transfer characteristics from a plurality of positions to a listening position when each position in the virtual space is set as the listening position. The HRTF data corresponding to a plurality of positions based on each listening position in the virtual space is prepared in the communication management server 1.
For each listening user, the communication management server 1 performs a sound image localization process using the HRTF data on the sound data so that the voice of the uttering user is heard from the position of the uttering user in the virtual space, and transmits the sound data obtained by performing the sound image localization process.
The sound data transmitted to the client terminal 2 as described above is sound data obtained by performing the sound image localization process in the communication management server 1. The sound image localization process includes rendering such as vector based amplitude panning (VBAP) based on positional information, and binaural processing using HRTF data.
That is, the voice of each uttering user is processed in the communication management server 1 as the sound data of the object audio. For example, L/R two-channel channel-based audio data generated by the sound image localization process in the communication management server 1 is transmitted from the communication management server 1 to each client terminal 2, and the voice of the uttering user is output from headphones or the like provided in the client terminal 2.
By performing the sound image localization process using the HRTF data according to the relative positional relationship between the position of the listening user and the position of the uttering user, each of the listening users feels that the voice of the uttering user is heard from the position of the uttering user.
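The binaural-processing half of the sound image localization process can be sketched as a convolution of the mono object signal with the left/right impulse responses measured for the uttering user's position. This is a minimal illustration only, assuming the HRTF data is held as time-domain head-related impulse responses (HRIRs); the function name and signature are not from the source.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono object signal with the left/right head-related
    impulse responses (the time-domain form of HRTF data) measured for
    the uttering user's position relative to the listening position."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # The result is L/R two-channel audio whose sound image is perceived
    # at the position for which the HRIR pair was measured.
    return np.stack([left, right])
```

Selecting the HRIR pair according to the relative positional relationship between the listening user and the uttering user produces the effect described above: each voice is perceived as arriving from the uttering user's position.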
When the user A whose position P1 is set as the position in the virtual space is focused on as the listening user, the voice of the user B is heard from the near right by performing the sound image localization process based on the HRTF data between the position P2 and the position P1 with the position P2 as the sound source position as indicated by the arrow in
Furthermore, the voice of the user C is heard from the front by performing the sound image localization process based on the HRTF data between the position P3 and the position P1 with the position P3 as the sound source position. The voice of the user D is heard from the far right by performing the sound image localization process based on the HRTF data between the position P4 and the position P1 with the position P4 as the sound source position.
The same applies to a case where another user is a listening user. For example, as illustrated in
As described above, in the communication management server 1, the sound data for each listening user is generated according to the positional relationship between the position of each listening user and the position of the uttering user, and is used for outputting the voice of the uttering user. The sound data transmitted to each of the listening users is sound data that is different in how the uttering user is heard according to the positional relationship between the position of each of the listening users and the position of the uttering user.
For example, the user A wearing the headphone and participating in the conference listens to the voices of the users B to D whose sound images are localized at the near right position, the front position, and the far right position, respectively, and has a conversation. As described with reference to
Note that, as will be described later, background sounds such as bird chirping and BGM are also output based on sound data obtained by the sound image localization process so that the sound image is localized at a predetermined position.
The sound to be processed by the communication management server 1 includes not only the utterance voice but also sounds such as an environmental sound and a background sound. Hereinafter, in a case where it is not necessary to distinguish the types of respective sounds, a sound to be processed by the communication management server 1 will be simply described as a sound. Actually, the sound to be processed by the communication management server 1 includes sound of a type other than a voice.
Since the voice of the uttering user is heard from the position corresponding to the position in the virtual space, the listening user can easily distinguish between the voices of the respective users even in a case where there is a plurality of participants. For example, even in a case where a plurality of users makes utterances at the same time, the listening user can distinguish between the respective voices.
Furthermore, since the voice of the uttering user can be felt stereoscopically, the listening user can obtain the feeling that the uttering user exists at the position of the sound image from the voice. The listening user can have a realistic conversation with another user.
<<Basic Operation>>
Here, a flow of basic operations of the communication management server 1 and the client terminal 2 will be described.
<Operation of Communication Management Server 1>
The basic process of the communication management server 1 will be described with reference to a flowchart of
In Step S1, the communication management server 1 determines whether the sound data has been transmitted from the client terminal 2, and waits until it is determined that the sound data has been transmitted.
In a case where it is determined in Step S1 that the sound data has been transmitted from the client terminal 2, in Step S2, the communication management server 1 receives the sound data transmitted from the client terminal 2.
In Step S3, the communication management server 1 performs a sound image localization process based on the positional information about each user and generates sound data for each listening user.
For example, the sound data for the user A is generated such that the sound image of the voice of the uttering user is localized at a position corresponding to the position of the uttering user when the position of the user A is used as a reference.
Furthermore, the sound data for the user B is generated such that the sound image of the voice of the uttering user is localized at a position corresponding to the position of the uttering user when the position of the user B is used as a reference.
Similarly, the sound data for another listening user is generated using the HRTF data according to the relative positional relationship with the uttering user with the position of the listening user as a reference. The sound data for respective listening users is different data.
In Step S4, the communication management server 1 transmits sound data to each listening user. The above processing is performed every time sound data is transmitted from the client terminal 2 used by the uttering user.
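The HRTF lookup in Step S3 depends only on the relative positional relationship between the listening user and each uttering user. A minimal sketch of how that relationship might be reduced to lookup keys follows; the coordinate convention (x to the right, y to the front, height omitted) and the function names are assumptions, not from the source.

```python
import math

def relative_azimuth_deg(listener_pos, source_pos):
    """Azimuth of the uttering user as seen from the listening user,
    measured clockwise from straight ahead, usable as an index into the
    stored HRTF data. Positions are (x, y) virtual-space coordinates."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def source_distance(listener_pos, source_pos):
    """Listener-to-utterer distance, usable to select near/far HRTF sets."""
    return math.hypot(source_pos[0] - listener_pos[0],
                      source_pos[1] - listener_pos[1])
```

Under this convention, a user directly to the right of the listening position yields an azimuth of 90 degrees and a user straight ahead yields 0 degrees, matching the near-right/front/far-right impressions described earlier.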
<Operation of Client Terminal 2>
The basic process of the client terminal 2 will be described with reference to the flowchart of
In Step S11, the client terminal 2 determines whether a microphone sound has been input. The microphone sound is a sound collected by a microphone provided in the client terminal 2.
In a case where it is determined in Step S11 that the microphone sound has been input, the client terminal 2 transmits the sound data to the communication management server 1 in Step S12. In a case where it is determined in Step S11 that the microphone sound has not been input, the process of Step S12 is skipped.
In Step S13, the client terminal 2 determines whether sound data has been transmitted from the communication management server 1.
In a case where it is determined in Step S13 that the sound data has been transmitted, the client terminal 2 receives the sound data and outputs the voice of the uttering user in Step S14.
After the voice of the uttering user has been output, or in a case where it is determined in Step S13 that the sound data has not been transmitted, the process returns to Step S11, and the above-described processing is repeatedly performed.
<<Configuration of Each Device>>
<Configuration of Communication Management Server 1>
The communication management server 1 includes a computer. The communication management server 1 may include one computer having the configuration illustrated in
A CPU 101, a ROM 102, and a RAM 103 are connected to one another by a bus 104. The CPU 101 executes a server program 101A and controls the overall operation of the communication management server 1. The server program 101A is a program for realizing a Tele-communication system.
An input/output interface 105 is further connected to the bus 104. An input unit 106 including a keyboard, a mouse, and the like, and an output unit 107 including a display, a speaker, and the like are connected to the input/output interface 105.
Furthermore, a storage unit 108 including a hard disk, a nonvolatile memory, or the like, a communication unit 109 including a network interface or the like, and a drive 110 that drives a removable medium 111 are connected to the input/output interface 105. For example, the communication unit 109 communicates with the client terminal 2 used by each user via the network 11.
In the communication management server 1, an information processing unit 121 is implemented. The information processing unit 121 includes a sound reception unit 131, a signal processing unit 132, a participant information management unit 133, a sound image localization processing unit 134, an HRTF data storage unit 135, a system sound management unit 136, a 2 ch mix processing unit 137, and a sound transmission unit 138.
The sound reception unit 131 causes the communication unit 109 to receive the sound data transmitted from the client terminal 2 used by the uttering user. The sound data received by the sound reception unit 131 is output to the signal processing unit 132.
The signal processing unit 132 appropriately performs predetermined signal processing on the sound data supplied from the sound reception unit 131 and outputs the sound data obtained by the signal processing to the sound image localization processing unit 134. The microphone sound includes, in addition to the voice of the uttering user, an environmental sound such as noise in the space where the uttering user is located. For example, the process of separating the voice of the uttering user from the environmental sound is performed by the signal processing unit 132.
The participant information management unit 133 causes the communication unit 109 to communicate with the client terminal 2 or the like, thereby managing the participant information that is information about the participant of the conference.
As illustrated in
The user information is information about a user who participates in a conference set by a certain user. For example, the user information includes a user ID and the like. Other information included in the participant information is managed in association with, for example, the user information.
The positional information is information representing the position of each user in the virtual space.
The setting information is information representing contents of setting related to the conference, such as setting of a background sound to be used in the conference.
The volume information is information representing a sound volume at the time of outputting a voice of each user.
The participant information managed by the participant information management unit 133 is supplied to the sound image localization processing unit 134. The participant information managed by the participant information management unit 133 is also supplied to the system sound management unit 136, the 2 ch mix processing unit 137, the sound transmission unit 138, and the like as appropriate. As described above, the participant information management unit 133 functions as a position management unit that manages the position of each user in the virtual space, and also functions as a background sound management unit that manages the setting of the background sound.
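The per-user record kept by the participant information management unit 133 might be sketched as follows. The field names, types, and default values are assumptions for illustration; the source names only the four kinds of information (user, positional, setting, and volume information).

```python
from dataclasses import dataclass

@dataclass
class ParticipantInfo:
    """Illustrative per-user record managed in association with the
    user information, as described for the participant information
    management unit 133."""
    user_id: str                   # user information
    position: tuple                # positional information: (x, y, z)
    background_sound: str = "off"  # setting information (off by default)
    volume: float = 1.0            # volume information
```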
The sound image localization processing unit 134 reads and acquires the HRTF data according to the positional relationship of each user from the HRTF data storage unit 135 based on the positional information supplied from the participant information management unit 133. The sound image localization processing unit 134 performs a sound image localization process using the HRTF data read from the HRTF data storage unit 135 on the sound data supplied from the signal processing unit 132 to generate sound data for each listening user.
Furthermore, the sound image localization processing unit 134 performs a sound image localization process using predetermined HRTF data on the data of the system sound supplied from the system sound management unit 136. The system sound is a sound generated by the communication management server 1 and heard by the listening user together with the voice of the uttering user. The system sound includes, for example, a background sound such as BGM and a sound effect. The system sound is a sound different from the user's voice.
That is, in the communication management server 1, a sound other than the voice of the uttering user, such as a background sound or a sound effect, is also processed as the object audio. A sound image localization process for localizing a sound image at a predetermined position in the virtual space is also performed on the sound data of the system sound. For example, the sound image localization process for localizing a sound image at a position farther than the position of the participant is performed on the sound data of the background sound.
The sound image localization processing unit 134 outputs sound data obtained by performing the sound image localization process to the 2 ch mix processing unit 137. The sound data of the uttering user and the sound data of the system sound as appropriate are output to the 2 ch mix processing unit 137.
The HRTF data storage unit 135 stores HRTF data corresponding to a plurality of positions based on respective listening positions in the virtual space.
The system sound management unit 136 manages a system sound. The system sound management unit 136 outputs the sound data of the system sound to the sound image localization processing unit 134.
The 2 ch mix processing unit 137 performs a 2 ch mix process on the sound data supplied from the sound image localization processing unit 134. By performing the 2 ch mix process, channel-based audio data including the components of an audio signal L and an audio signal R, in which the uttering user's voice and the system sound are mixed, is generated. The sound data obtained by performing the 2 ch mix process is output to the sound transmission unit 138.
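A minimal sketch of the 2 ch mix process, assuming each rendered signal is a 2 x N float array in the range [-1, 1]; the function name and the clipping policy are assumptions, not from the source.

```python
import numpy as np

def mix_2ch(rendered_signals):
    """Sum the binaurally rendered 2-channel signals (the voices of the
    uttering users and, as appropriate, the system sound) into a single
    channel-based L/R pair for transmission to one listening user."""
    mixed = np.sum(np.stack(rendered_signals), axis=0)
    # Clip to the usual [-1, 1] float range to avoid overflow on output.
    return np.clip(mixed, -1.0, 1.0)
```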
The sound transmission unit 138 causes the communication unit 109 to transmit the sound data supplied from the 2 ch mix processing unit 137 to the client terminal 2 used by each listening user.
<Configuration of Client Terminal 2>
The client terminal 2 is configured by connecting a memory 202, a sound input device 203, a sound output device 204, an operation unit 205, a communication unit 206, a display 207, and a sensor unit 208 to a control unit 201.
The control unit 201 includes a CPU, a ROM, a RAM, and the like. The control unit 201 controls the entire operation of the client terminal 2 by executing a client program 201A. The client program 201A is a program for using the Tele-communication system managed by the communication management server 1. The client program 201A includes a transmission-side module 201A-1 that executes a transmission-side process and a reception-side module 201A-2 that executes a reception-side process.
The memory 202 includes a flash memory or the like. The memory 202 stores various types of information such as the client program 201A executed by the control unit 201.
The sound input device 203 includes a microphone. The sound collected by the sound input device 203 is output to the control unit 201 as a microphone sound.
The sound output device 204 includes a device such as a headphone or a speaker. The sound output device 204 outputs the voice or the like of the conference participant based on the audio signal supplied from the control unit 201.
Hereinafter, a description will be given on the assumption that the sound input device 203 is a microphone as appropriate. Furthermore, a description will be given on the assumption that the sound output device 204 is a headphone.
The operation unit 205 includes various buttons and a touch panel provided to overlap the display 207. The operation unit 205 outputs information representing the content of the user's operation to the control unit 201.
The communication unit 206 is a communication module complying with wireless communication of a mobile communication system such as 5G communication, a communication module complying with a wireless LAN, or the like. The communication unit 206 receives radio waves output from the base station and communicates with various devices such as the communication management server 1 via the network 11. The communication unit 206 receives information transmitted from the communication management server 1 to output the information to the control unit 201. Furthermore, the communication unit 206 transmits the information supplied from the control unit 201 to the communication management server 1.
The display 207 includes an organic EL display, an LCD, or the like. Various screens such as a remote conference screen are displayed on the display 207.
The sensor unit 208 includes various sensors such as an RGB camera, a depth camera, a gyro sensor, and an acceleration sensor. The sensor unit 208 outputs sensor data obtained by performing measurement to the control unit 201. The user's situation is appropriately recognized based on the sensor data measured by the sensor unit 208.
In the client terminal 2, an information processing unit 211 is realized. The information processing unit 211 includes a sound processing unit 221, a setting information transmission unit 222, a user situation recognition unit 223, and a display control unit 224.
The sound processing unit 221 includes a sound reception unit 231, an output control unit 232, a microphone sound acquisition unit 233, and a sound transmission unit 234.
The sound reception unit 231 causes the communication unit 206 to receive the sound data transmitted from the communication management server 1. The sound data received by the sound reception unit 231 is supplied to the output control unit 232.
The output control unit 232 causes the sound output device 204 to output a sound corresponding to the sound data transmitted from the communication management server 1.
The microphone sound acquisition unit 233 acquires sound data of the microphone sound collected by the microphone constituting the sound input device 203. The sound data of the microphone sound acquired by the microphone sound acquisition unit 233 is supplied to the sound transmission unit 234.
The sound transmission unit 234 causes the communication unit 206 to transmit the sound data of the microphone sound supplied from the microphone sound acquisition unit 233 to the communication management server 1.
The setting information transmission unit 222 generates setting information representing contents of various settings according to a user's operation. The setting information transmission unit 222 causes the communication unit 206 to transmit the setting information to the communication management server 1.
The user situation recognition unit 223 recognizes the situation of the user based on the sensor data measured by the sensor unit 208. The user situation recognition unit 223 causes the communication unit 206 to transmit information representing the situation of the user to the communication management server 1.
The display control unit 224 causes the communication unit 206 to communicate with the communication management server 1, and causes the display 207 to display the remote conference screen based on the information transmitted from the communication management server 1.
<<Use Case of Sound Image Localization>>
A use case of sound image localization of various sounds including utterance voices by conference participants will be described.
<Grouping of Uttering Users>
In order to facilitate listening to a plurality of topics, each user can group uttering users. The grouping of the uttering users is performed at a predetermined timing, such as before a conference starts, using a setting screen displayed as a GUI on the display 207 of the client terminal 2.
The setting of the group on the group setting screen is performed, for example, by moving the participant icon by dragging and dropping.
In the example of
The uttering user whose participant icon has been moved to the rectangular region 301 is a user belonging to Group 1, and the uttering user whose participant icon has been moved to the rectangular region 302 is a user belonging to Group 2. A group of uttering users is set using such a screen. Instead of moving the participant icon to the region to which the group is allocated, the group may be formed by overlapping a plurality of participant icons.
The group setting information that is setting information representing the group set using the group setting screen of
In a case where a microphone sound is transmitted from the client terminal 2 as indicated by arrows A2 and A3, the communication management server 1 performs the sound image localization process using different HRTF data for respective groups. For example, the sound image localization process using the same HRTF data is performed on the sound data of the uttering users belonging to the same group so that the sounds of respective groups are heard from different positions.
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by each listening user as indicated by an arrow A4.
Note that, in
As illustrated on the left side of
The control process of the communication management server 1 related to grouping of uttering users will be described with reference to a flowchart of
In the control process of the communication management server 1, description of contents overlapping with the contents described with reference to
In Step S101, the participant information management unit 133 (
In Step S102, the sound reception unit 131 receives the sound data transmitted from the client terminal 2 used by the uttering user. The sound data received by the sound reception unit 131 is supplied to the sound image localization processing unit 134 via the signal processing unit 132.
In Step S103, the sound image localization processing unit 134 performs a sound image localization process using the same HRTF data on the sound data of the uttering users belonging to the same group.
In Step S104, the sound transmission unit 138 transmits the sound data obtained by the sound image localization process to the client terminal 2 used by the listening user.
In the case of the example of
For example, the user can easily hear each topic by setting a group for users having a conversation on the same topic.
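The server-side handling in Step S103 might be sketched as mapping each uttering user to a shared position, so that members of one group are rendered with the same HRTF data. All names, and the fallback behavior for ungrouped users, are assumptions for illustration.

```python
def group_position(user_id, user_groups, group_positions, own_position):
    """Virtual-space position to use for an uttering user's sound image:
    members of the same group share one position, so their voices are
    rendered with the same HRTF data and heard from one direction,
    while different groups are heard from different positions."""
    group = user_groups.get(user_id)
    if group is None:
        return own_position  # ungrouped users keep their own position
    return group_positions[group]
```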
For example, in the default state, no group is created, and participant icons representing all users are laid out at equal intervals. In this case, the sound image localization process is performed such that the sound images are localized at positions spaced apart at an equal distance according to the layout of the participant icons on the group setting screen.
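The equal-interval default layout can be sketched as placing the participant positions on a circle around the listening position. The circular shape, radius, and names are assumptions; the source states only that the positions are spaced apart at an equal distance.

```python
import math

def default_layout(n, radius=1.0):
    """Place n participant positions at equal angular intervals on a
    circle of the given radius, mirroring the equally spaced participant
    icons of the default (group-less) setting screen."""
    step = 2.0 * math.pi / n
    return [(radius * math.cos(k * step), radius * math.sin(k * step))
            for k in range(n)]
```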
<Sharing of Positional Information>
The information about the position in the virtual space may be shared among all the users. In the example described with reference to
In this case, each user sets his/her position at a predetermined timing, such as before the conference starts, using a setting screen displayed as a GUI on the display 207 of the client terminal 2.
The three-dimensional space displayed on the position setting screen of
For example, in the default state, a vacant position in the virtual space is automatically set as the position of each user. A plurality of listening positions may be set, and the position of the user may be selected from the listening positions, or any position in the virtual space may be selected.
The positional information representing the position in the virtual space set using the position setting screen in
In a case where the microphone sound is transmitted from the client terminal 2 as indicated by arrows A13 and A14, the communication management server 1 performs the sound image localization process using the HRTF data according to the positional relationship between the listening user and each uttering user based on the shared positional information.
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by the listening user as indicated by an arrow A15.
In a case where the position of the head of the listening user is estimated as indicated by an arrow A16 based on the image captured by the camera provided in the client terminal 2, head tracking of the positional information may be performed. The position of the head of the listening user may be estimated based on sensor data detected by another sensor such as a gyro sensor or an acceleration sensor constituting the sensor unit 208.
For example, in a case where the head of the listening user rotates rightward by 30 degrees, the positions of the respective users are corrected by rotating the positions of all the users leftward by 30 degrees, and the sound image localization process is performed using the HRTF data corresponding to the corrected position.
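The 30-degree correction described above amounts to counter-rotating every user's position about the listening user before the HRTF lookup. A sketch under the assumptions that positions are (x, y) with x to the right and y to the front, and that positive yaw means a rightward (clockwise, seen from above) head turn; the function name is illustrative.

```python
import math

def head_corrected_position(position, head_yaw_deg):
    """Rotate a user's position leftward by the head yaw reported by
    head tracking, so that a listening user whose head turns rightward
    by 30 degrees has all other users' positions rotated leftward by
    30 degrees before the HRTF data is selected."""
    a = math.radians(head_yaw_deg)  # leftward counter-rotation angle
    x, y = position
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```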
The control process of the communication management server 1 related to sharing of positional information will be described with reference to a flowchart of
In Step S111, the participant information management unit 133 receives the positional information representing the position set by each user. The positional information is transmitted from the client terminal 2 used by each user in response to the setting of the position in the virtual space. In the participant information management unit 133, the positional information transmitted from the client terminal 2 is managed in association with the information about each user.
In Step S112, the participant information management unit 133 manages the positional information about each user as sharing information.
In Step S113, the sound reception unit 131 receives the sound data transmitted from the client terminal 2 used by the uttering user.
In Step S114, the sound image localization processing unit 134 reads and acquires the HRTF data according to the positional relationship between the listening user and each uttering user from the HRTF data storage unit 135 based on the shared positional information. The sound image localization processing unit 134 performs a sound image localization process using the HRTF data on the sound data of the uttering user.
In Step S115, the sound transmission unit 138 transmits the sound data obtained by the sound image localization process to the client terminal 2 used by the listening user.
With the above processing, in the client terminal 2 used by the listening user, the sound image of the voice of the uttering user is localized and felt at the position set by each uttering user.
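Steps S111 to S115 can be sketched as the following server-side flow. The class shape, the quantized relative-position key, and the reduction of HRTF data to a per-ear gain pair are illustrative assumptions; a real implementation would convolve HRTF filters with the sound data rather than scale samples.

```python
class CommunicationManagementServer:
    """Hypothetical sketch of Steps S111-S115: share positions set by each
    user, then localize each utterance for a listener."""

    def __init__(self, hrtf_store):
        # hrtf_store maps a relative position (utterer minus listener,
        # assumed already quantized) to per-ear HRTF gains.
        self.hrtf_store = hrtf_store
        self.positions = {}  # user id -> (x, y) set on the setting screen

    def receive_position(self, user, position):
        # Steps S111/S112: manage positional information as shared info.
        self.positions[user] = position

    def relative_position(self, listener, utterer):
        lx, ly = self.positions[listener]
        ux, uy = self.positions[utterer]
        return (ux - lx, uy - ly)

    def localize(self, listener, utterer, sound):
        # Steps S113-S115: look up HRTF data for the positional relationship
        # and apply it to the uttering user's sound data.
        hrtf = self.hrtf_store[self.relative_position(listener, utterer)]
        # Placeholder for real binaural rendering: per-ear scaling.
        return [tuple(h * s for h in hrtf) for s in sound]
```

For instance, with positions A = (0, 0) and B = (1, 0), the server would fetch the HRTF entry for the relative position (1, 0) when rendering B's voice for listener A.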
<Setting of Background Sound>
In order to make it easy to hear the voice of the uttering user, each user can change the environmental sound included in the microphone sound to a background sound, which is another sound. The background sound is set at a predetermined timing, such as before a conference starts, using a screen displayed as a GUI on the display 207 of the client terminal 2.
The background sound is set using, for example, a menu displayed on the remote conference screen.
In the example of
Note that, in the default state, the background sound is set to OFF. In this case, the environmental sound from the space where the uttering user is located can be heard as it is.
The background sound setting information, which is the setting information representing the background sound set using the screen described above, is transmitted from the client terminal 2 to the communication management server 1.
When microphone sounds are transmitted from the client terminal 2 as indicated by arrows A22 and A23, the environmental sound is separated from each microphone sound in the communication management server 1.
As indicated by an arrow A24, a background sound is added (synthesized) to the sound data of the uttering user obtained by separating the environmental sound, and the sound image localization process using the HRTF data according to the positional relationship is performed on each of the sound data of the uttering user and the sound data of the background sound. For example, the sound image localization process for localizing a sound image to a position farther than the position of the uttering user is performed on the sound data of the background sound.
Different HRTF data may be used for respective types of background sound (respective titles). For example, in a case where a background sound of bird chirping is selected, HRTF data for localizing a sound image at a high position is used, and in a case where a background sound of waves is selected, HRTF data for localizing a sound image at a low position is used. In this manner, HRTF data is prepared for each type of background sound.
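Preparing an HRTF lookup position per background-sound type might look like the following minimal sketch; the type names, the position fields, and the fallback position are hypothetical and not from the source.

```python
# Hypothetical mapping from background-sound type to the position at which
# its sound image is localized; names and values are illustrative only.
BACKGROUND_LOCALIZATION = {
    "bird_chirping": {"azimuth_deg": 0, "elevation_deg": 60, "distance_m": 5.0},
    "wave_sound":    {"azimuth_deg": 0, "elevation_deg": -40, "distance_m": 5.0},
}

def hrtf_key_for_background(kind):
    """Return the HRTF lookup key for a background-sound type, falling back
    to a far frontal position when the type has no dedicated entry."""
    fallback = {"azimuth_deg": 0, "elevation_deg": 0, "distance_m": 5.0}
    pos = BACKGROUND_LOCALIZATION.get(kind, fallback)
    return (pos["azimuth_deg"], pos["elevation_deg"], pos["distance_m"])
```

Here the bird-chirping entry maps to a high elevation and the wave entry to a low one, mirroring the example in the text.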
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by the listening user who has set the background sound as indicated by an arrow A25.
The control process of the communication management server 1 related to setting of the background sound will be described with reference to a flowchart of
In Step S121, the participant information management unit 133 receives the background sound setting information representing the setting content of the background sound set by each user. The background sound setting information is transmitted from the client terminal 2 in response to the setting of the background sound. In the participant information management unit 133, the background sound setting information transmitted from the client terminal 2 is managed in association with the information about the user who has set the background sound.
In Step S122, the sound reception unit 131 receives the sound data transmitted from the client terminal 2 used by the uttering user. The sound data received by the sound reception unit 131 is supplied to the signal processing unit 132.
In Step S123, the signal processing unit 132 separates the sound data of the environmental sound from the sound data supplied from the sound reception unit 131. The sound data, of the uttering user, obtained by separating the sound data of the environmental sound is supplied to the sound image localization processing unit 134.
In Step S124, the system sound management unit 136 outputs the sound data of the background sound set by the listening user to the sound image localization processing unit 134, adding it to the sound data to be subjected to the sound image localization process.
In Step S125, the sound image localization processing unit 134 reads and acquires the HRTF data according to the positional relationship between the position of the listening user and the position of the uttering user and the HRTF data according to the positional relationship between the position of the listening user and the position of the background sound (the position where the sound image is localized) from the HRTF data storage unit 135. The sound image localization processing unit 134 performs a sound image localization process using the HRTF data for the utterance voice on the sound data of the uttering user, and performs a sound image localization process using the HRTF data for the background sound on the sound data of the background sound.
In Step S126, the sound transmission unit 138 transmits the sound data obtained by the sound image localization process to the client terminal 2 used by the listening user. The above processing is performed for each listening user.
Through the above processing, in the client terminal 2 used by the listening user, the sound image of the voice of the uttering user and the sound image of the background sound selected by the listening user are localized and felt at different positions.
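The combination of Steps S124 and S125 (localizing the utterance and the background sound with their respective HRTF data, then delivering both) can be sketched as follows; as an assumption for brevity, HRTF data is again reduced to a per-ear gain pair and the streams are mixed sample by sample.

```python
def render_for_listener(voice, background, voice_hrtf, background_hrtf):
    """Illustrative sketch: localize the uttering user's voice and the
    background sound with their own per-ear gains, then mix them into one
    binaural (left, right) stream for the listening user."""
    def apply(gains, samples):
        # Placeholder for HRTF filtering: scale each sample per ear.
        return [(gains[0] * s, gains[1] * s) for s in samples]

    voice_lr = apply(voice_hrtf, voice)
    bg_lr = apply(background_hrtf, background)
    # The two sound images end up at different positions because different
    # HRTF data was applied to each stream before mixing.
    return [(v[0] + b[0], v[1] + b[1]) for v, b in zip(voice_lr, bg_lr)]
```

Because each stream is rendered with its own HRTF data before mixing, the listener perceives the voice and the background sound at different positions, as described above.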
The listening user can easily hear the voice of the uttering user as compared with a case where the voice of the uttering user and an environmental sound such as noise from an environment where the uttering user is present are heard from the same position. Furthermore, the listening user can have a conversation using a favorite background sound.
The background sound may be added not by the communication management server 1 but by the reception-side module 201A-2 of the client terminal 2.
<Sharing of Background Sound>
The setting of the background sound such as the BGM may be shared among all the users. In the example described with reference to
In this case, any user sets the background sound at a predetermined timing, such as before the conference starts, using a setting screen displayed as a GUI on the display 207 of the client terminal 2. The background sound is set using a screen similar to the screen illustrated in
In the default state, the sharing of the background sound is turned off. In this case, the voice of the uttering user can be heard as it is without synthesizing the background sound.
The background sound setting information, which is setting information representing ON/OFF of sharing of the background sound and the background sound selected in a case where sharing is set to ON, is transmitted from the client terminal 2 to the communication management server 1 as indicated by an arrow A31.
When microphone sounds are transmitted from the client terminal 2 as indicated by arrows A32 and A33, the environmental sound is separated from each microphone sound in the communication management server 1. Note that the environmental sound does not necessarily have to be separated.
A background sound is added to the sound data of the uttering user obtained by separating the environmental sound, and the sound image localization process using the HRTF data according to the positional relationship is performed on each of the sound data of the uttering user and the sound data of the background sound. For example, the sound image localization process for localizing a sound image to a position farther than the position of the uttering user is performed on the sound data of the background sound.
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by each listening user as indicated by arrows A34 and A35. In the client terminal 2 used by each listening user, the common background sound is output together with the voice of the uttering user.
The control process of the communication management server 1 regarding sharing of a background sound will be described with reference to a flowchart of
The control process illustrated in
That is, in Step S131, the participant information management unit 133 receives the background sound setting information representing the setting content of the background sound set by any user. In the participant information management unit 133, the background sound setting information transmitted from the client terminal 2 is managed in association with the user information about all the users.
In Step S132, the sound reception unit 131 receives the sound data transmitted from the client terminal 2 used by the uttering user. The sound data received by the sound reception unit 131 is supplied to the signal processing unit 132.
In Step S133, the signal processing unit 132 separates the sound data of the environmental sound from the sound data supplied from the sound reception unit 131. The sound data, of the uttering user, obtained by separating the sound data of the environmental sound is supplied to the sound image localization processing unit 134.
In Step S134, the system sound management unit 136 outputs the sound data of the common background sound to the sound image localization processing unit 134 and adds it as the sound data to be subjected to the sound image localization process.
In Step S135, the sound image localization processing unit 134 reads and acquires the HRTF data according to the positional relationship between the position of the listening user and the position of the uttering user and the HRTF data according to the positional relationship between the position of the listening user and the position of the background sound from the HRTF data storage unit 135. The sound image localization processing unit 134 performs a sound image localization process using the HRTF data for the utterance voice on the sound data of the uttering user, and performs a sound image localization process using the HRTF data for the background sound on the sound data of the background sound.
In Step S136, the sound transmission unit 138 transmits the sound data obtained by the sound image localization process to the client terminal 2 used by the listening user.
Through the above processing, in the client terminal 2 used by the listening user, the sound image of the voice of the uttering user and the sound image of the background sound commonly used in the conference are localized and felt at different positions.
The background sound may be shared as follows.
(A) In a case where a plurality of people simultaneously listen to the same lecture in a virtual lecture hall, the sound image localization process is performed so as to localize the lecturer's voice, serving as a common background sound, at a distant position and to localize the voices of the users at close positions. A sound image localization process, such as rendering in consideration of the relationship between the positions of the respective users and the spatial sound effects, is performed on the voice of each uttering user.
(B) In a case where a plurality of people simultaneously watch movie content in a virtual movie theater, the sound image localization process is performed so as to localize the sound of the movie content, which is a common background sound, near the screen. A sound image localization process, such as rendering in consideration of the relationship between the position of the seat in the movie theater selected by each user and the position of the screen, as well as the sound effects of the movie theater, is performed on the sound of the movie content.
(C) An environmental sound from a space where a certain user is present is separated from the microphone sound and used as a common background sound. In this case, each user listens, together with the voice of the uttering user, to the same environmental sound from the space in which another user is present. As a result, the environmental sound from any space can be shared by all the users.
<Dynamic Switching of Sound Image Localization Process>
Whether the sound image localization process, which is processing of the object audio including rendering and the like, is performed by the communication management server 1 or by the client terminal 2 is dynamically switched.
In this case, of the configurations of the communication management server 1 illustrated in
In a case where the setting of a parameter used for the sound image localization process, such as the positional information about the listening user, is changed during the conference and the change is to be reflected in the sound image localization process in real time, the sound image localization process is performed by the client terminal 2. By performing the sound image localization process locally, the response to the parameter change can be made quick.
On the other hand, in a case where the parameter setting is not changed for a certain period of time or more, the sound image localization process is performed by the communication management server 1. By performing the sound image localization process by the server, the amount of data communication between the communication management server 1 and the client terminal 2 can be suppressed.
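The switching policy described in the two paragraphs above can be sketched as follows; the class interface, the idle threshold, and the injectable clock are assumptions introduced for illustration.

```python
import time

class LocalizationSwitch:
    """Hypothetical policy for the dynamic switching described above:
    render on the client terminal 2 while parameters are still changing,
    and fall back to rendering on the communication management server 1
    once no change has arrived for `idle_seconds`."""

    def __init__(self, idle_seconds=10.0, clock=time.monotonic):
        self.idle_seconds = idle_seconds
        self.clock = clock  # injectable for testing
        self.last_change = clock()

    def parameter_changed(self):
        # Called when the listening user changes a localization parameter
        # (position, background sound, and so on).
        self.last_change = self.clock()

    def render_on_server(self):
        # Server-side rendering suppresses the amount of data communication
        # once the settings have been stable for the idle period.
        return self.clock() - self.last_change >= self.idle_seconds
```

While `render_on_server()` is false, the server would forward the object audio data as-is (arrows A101 to A103); once it becomes true, the server performs the localization itself using the latest reported parameters.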
In a case where the sound image localization process is performed by the client terminal 2, the microphone sound transmitted from the client terminal 2 as indicated by arrows A101 and A102 is directly transmitted to the client terminal 2 as indicated by arrow A103. The client terminal 2 serving as the transmission source of the microphone sound is the client terminal 2 used by the uttering user, and the client terminal 2 serving as the transmission destination of the microphone sound is the client terminal 2 used by the listening user.
In a case where the setting of the parameter related to the localization of the sound image, such as the position of the listening user, is changed by the listening user as indicated by an arrow A104, the change in the setting is reflected in real time, and the sound image localization process is performed on the microphone sound transmitted from the communication management server 1.
A sound corresponding to the sound data generated by the sound image localization process by the client terminal 2 is output as indicated by an arrow A105.
In the client terminal 2, a change content of the parameter setting is saved, and information representing the change content is transmitted to the communication management server 1 as indicated by an arrow A106.
In a case where the sound image localization process is performed by the communication management server 1, as indicated by arrows A107 and A108, the sound image localization process is performed on the microphone sound transmitted from the client terminal 2 by reflecting the changed parameter.
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by the listening user as indicated by an arrow A109.
The control process of the communication management server 1 related to dynamic switching of the sound image localization process will be described with reference to a flowchart of
In Step S201, it is determined whether the parameter setting has remained unchanged for a certain period of time or more. This determination is made by the participant information management unit 133 based on, for example, information transmitted from the client terminal 2 used by the listening user.
In a case where it is determined in Step S201 that the parameter setting has been changed, in Step S202, the sound transmission unit 138 transmits the sound data of the uttering user received by the sound reception unit 131 as it is to the client terminal 2 used by the listening user. The transmitted sound data is object audio data.
In the client terminal 2, the sound image localization process is performed using the changed setting, and sound is output. Furthermore, information representing the content of the changed setting is transmitted to the communication management server 1.
In Step S203, the participant information management unit 133 receives the information, representing the content of the setting change, transmitted from the client terminal 2. After the positional information about the listening user is updated based on the information transmitted from the client terminal 2, the process returns to Step S201, and the subsequent processes are performed. The sound image localization process performed by the communication management server 1 is performed based on the updated positional information.
On the other hand, in a case where it is determined in Step S201 that the parameter setting has not been changed, the sound image localization process is performed by the communication management server 1 in Step S204. The processing performed in Step S204 is basically similar to the processing described with reference to
The above processing is performed not only in a case where the position is changed but also in a case where another parameter such as the setting of the background sound is changed.
<Management of Sound Effect Setting>
The sound effect setting suitable for the background sound may be stored in a database and managed by the communication management server 1. For example, a position suitable as a position at which a sound image is localized is set for each type of background sound, and the HRTF data corresponding to the set position is stored. Parameters related to another sound effect setting such as reverb may be stored.
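The database of sound effect settings might be sketched as follows; the background-sound identifiers, field names, and parameter values are hypothetical and chosen only to illustrate storing an HRTF localization position together with another effect parameter such as reverb.

```python
# Hypothetical per-background-sound effect settings managed on the
# communication management server 1; keys and values are illustrative.
EFFECT_SETTINGS = {
    # (azimuth_deg, elevation_deg, distance_m) suitable for each sound,
    # plus a reverb wet-mix parameter as an example of another effect.
    "bgm_cafe": {"hrtf_position": (30, 0, 3.0), "reverb_wet": 0.2},
    "bgm_rain": {"hrtf_position": (0, 45, 6.0), "reverb_wet": 0.35},
}

def effect_settings_for(background_id):
    """Look up the stored sound-effect setting for a background sound,
    returning a neutral far-frontal default for unknown entries."""
    return EFFECT_SETTINGS.get(
        background_id, {"hrtf_position": (0, 0, 5.0), "reverb_wet": 0.0})
```

When a background sound is synthesized with the uttering user's voice, the server would fetch this entry and apply the corresponding HRTF data and effect parameters during playback.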
In a case where the background sound is synthesized with the voice of the uttering user, in the communication management server 1, the background sound is played back, and as indicated by an arrow A121, the sound image localization process is performed using the sound effect setting such as HRTF data suitable for the background sound.
The sound data generated by the sound image localization process is transmitted to and output from the client terminal 2 used by the listening user as indicated by an arrow A122.
<<Modification>>
Although the conversation held by a plurality of users has been assumed to be a conversation in a remote conference, the above-described technology can be applied to various types of conversations in which a plurality of people participate online, such as a conversation over a meal or a conversation in a lecture.
<About Program>
The above-described series of processing can be executed by hardware or software. In a case where the series of processing is executed by software, a program constituting the software is installed in a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like.
The program to be installed is recorded in the removable medium 111 illustrated in
Note that the program executed by the computer may be a program in which processing is performed in time series in the order described in the present specification, or may be a program in which processing is performed in parallel or at necessary timing such as when a call is made.
Note that, in the present application, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in respective housings and connected via a network is a system, and one device in which a plurality of modules is housed in one housing is also a system.
The effects described in the present specification are merely examples and are not limiting, and other effects may be present.
The embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology. Although the headphone or the speaker is used as the sound output device, other devices may be used. For example, a normal earphone (inner ear headphone) or an open-type earphone capable of capturing an environmental sound can be used as the sound output device.
Furthermore, for example, the technique can adopt a configuration of cloud computing in which one function is shared and processed by a plurality of devices in cooperation via a network.
Furthermore, each step described in the above-described flowchart can be executed by one device or can be shared and executed by a plurality of devices.
Furthermore, in a case where a plurality of processes is included in one step, the plurality of processes included in the one step can be executed by one device or can be shared and executed by a plurality of devices.
<Example of Combination of Configurations>
The present technology can also have the following configurations.
Number | Date | Country | Kind |
---|---|---|---|
2020-152418 | Sep 2020 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/033279 | 9/10/2021 | WO |