The present disclosure relates to an information processing apparatus, an information processing method, and a program.
In recent years, with the development of communication technologies, video conference systems have been proposed that allow conversations between two places in each of which a plurality of participants are present. Specifically, for example, a display apparatus, a camera, a MIC, and a speaker are provided in each place. Video and sound data captured and picked up in one place are output in real time from the display apparatus and speaker installed in the other place.
Regarding such bi-directional communication technology, for example, Patent Literature 1 below proposes a system that, when content is shared and conversations are carried out with a communication partner, can prevent an invasion of the user's privacy and allow the content indicated by content data to be shared selectively.
In addition, Patent Literature 2 below proposes determining the degree of a request for communication between a user and a communication partner on the basis of state information of the user, thereby enabling comfortable communication that is not inconvenient for either party. This arrangement can prevent the user from receiving inconvenient calls, such as a call made by a partner who has missed the state information, or a compulsory call.
Patent Literature 1: JP 5707824B
Patent Literature 2: JP 4645355B
However, while Patent Literature 1 described above makes it possible to share content selectively with a communication partner, it gives no consideration to the distance between spaces, such as the distance or interval to the communication partner.
In addition, Patent Literature 2 described above takes proper measures regarding the timing for connecting spaces (i.e., call timing), but likewise does not mention the distance between spaces.
Accordingly, the present disclosure proposes an information processing apparatus, a control method, and a program capable of using a virtual three-dimensional space for the connection to a communication partner, aurally producing the distance between the connected spaces, and realizing more comfortable communication.
According to the present disclosure, there is proposed an information processing apparatus including: a reception unit configured to receive data from a communication destination; and a reproduction control unit configured to perform control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
According to the present disclosure, there is proposed an information processing method including, by a processor: receiving data from a communication destination; and performing control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
According to the present disclosure, there is proposed a program for causing a computer to function as: a reception unit configured to receive data from a communication destination; and a reproduction control unit configured to perform control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
As described above, according to the present disclosure, it is possible to use a virtual three-dimensional space for the connection to a communication partner, aurally produce the distance between the connected spaces, and realize more comfortable communication.
Note that the effects described above are not necessarily limitative. Along with or in place of the above effects, there may be achieved any one of the effects described in this specification or other effects that may be grasped from this specification.
Hereinafter, (a) preferred embodiment(s) of the present disclosure will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
In addition, description will be made in the following order.
1. Overview of Information Processing Apparatus according to Embodiment of the Present Disclosure
The overview of a telepresence system 1 according to an embodiment of the present disclosure will be described with reference to the drawings.
Here, general video chat technology is capable only of binary switching when performing remote communication through video and sound channels, such as displaying or hiding video and turning sound on or off (mute); it is incapable of finely adjusting the degree of connection. It therefore fails to meet the needs of a user who does not wish to be constantly connected to a partner with a realistic sense of presence, but still wishes to feel the partner's condition. In addition, the user has to switch connection states manually, which also discourages frequent use from the perspective of operation cost.
In addition, as a way for a telepresence communication apparatus to gradually adjust the connection degree, it is conceivable, for example, to apply two-dimensional planar filter processing such as blur processing (blurring) or mask processing (blocking) to the living-room video of a partner. With such processing, however, it is impossible to express a sense of distance, such as depth or direction, in terms of audio.
Accordingly, in the present embodiment, a virtual three-dimensional space is used to connect spaces, and the distance between the connected spaces is controlled, thereby making it possible to realize more comfortable communication and provide a connection degree that is pleasant for the user. Reproducing the audio as a three-dimensional space makes it possible to aurally produce the distance between the spaces. The telepresence system 1 according to the present embodiment disposes and reproduces sound data in virtual three-dimensional space coordinates for each sound source type, or reproduces sound data associated with the spaces, thereby enabling the “interval” (herein also referred to as “distance”) between the space on the user side and the space on the partner side to be felt aurally. For example, as the living-room space on the partner side comes closer in the virtual three-dimensional space, small noises and the partner user's voice in the room on the other side become audible, and it is then possible to carry out natural conversations. In contrast, as the living-room space on the partner side moves farther away, the sound volume of the noise and voice becomes lower; instead, given sound data is reproduced at a higher volume as the environment sound of the space between the user side and the partner's living-room space (herein referred to as the “courtyard space”). This allows the user to feel a pleasant aural interval.
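For illustration only, the following Python sketch computes per-sound-source-type output gains from the virtual separation distance, cross-fading partner-space speech and object sound against the courtyard environment sound. The linear cross-fade model and all parameter values are assumptions of this sketch, not part of the present embodiment.

```python
# Illustrative sketch of distance-dependent per-source output control.
# The cross-fade model and the d_near/d_far values are assumptions; the
# embodiment only specifies that a nearer partner space makes speech and
# indoor noise louder, while a farther one emphasizes the courtyard sound.

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def source_gains(distance, d_near=1.0, d_far=10.0):
    """Return per-sound-source-type gains for a virtual separation distance.

    distance: separation between the two spaces in the virtual 3D space.
    d_near / d_far: hypothetical distances at which the partner space is
    felt as fully "close" or fully "far".
    """
    # t is 0.0 when the partner space is close, 1.0 when it is far away.
    t = clamp((distance - d_near) / (d_far - d_near))
    return {
        "uttered_voice": 1.0 - t,    # partner's speech fades with distance
        "object_sound": 1.0 - t,     # indoor noises (furniture, tableware, ...)
        "courtyard_environment": t,  # courtyard ambience grows instead
    }

if __name__ == "__main__":
    for d in (1.0, 5.0, 10.0):
        print(d, source_gains(d))
```

At the near end, speech and object sound dominate; at the far end, only the courtyard environment sound remains, matching the behavior described above.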
In addition, it is also possible to make the distance between the spaces visually felt. For example, by displaying an image in which the video of the communication destination (here, video of a living-room space) is disposed in a virtual three-dimensional space, it is possible to make the user feel as if the partner were a given distance away.
In addition, the distance between the spaces is automatically and continuously optimized in accordance with the user state or the like, thereby making it possible to reduce the operational load on the user.
The telepresence system 1 according to the present embodiment includes, as illustrated in the drawings, communication control apparatuses 10A, 10B, and 10C, and a processing server 30, which are connected to one another via a network 20.
The communication control apparatuses 10A, 10B, and 10C each include an input unit. The communication control apparatuses 10A, 10B, and 10C respectively acquire information of the spaces in which a user A, a user B, and a user C are present, and transmit the information to another communication control apparatus 10 or the processing server 30. In addition, the communication control apparatuses 10A, 10B, and 10C each include an output unit, and output information received from another communication control apparatus 10 or the processing server 30.
The processing server 30 performs synchronization processing for bi-directional communication between any two or more of the communication control apparatuses 10A to 10C, and performs computation and control of the separation distance based on the connection request levels from both sides. Note that the synchronization processing and the computation and control of the separation distance may instead be performed in each of the communication control apparatuses 10A, 10B, and 10C, in which case the telepresence system 1 may be configured without the processing server 30.
Next, the configuration of a communication control apparatus 10 according to the present embodiment will be described with reference to the drawings.
As illustrated in the drawings, the communication control apparatus 10 includes an input unit 101, a space information processing unit 102, a state determination unit 103, a spatial distance control unit 104, an operation interface (I/F) 105, a 3D courtyard space generation unit 106, a communication unit 107, a space information generation unit 108, an output unit 109, a storage unit 110, a transmission information generation unit 111, a sound source determination DB 112, and a sound DB 113.
The input unit 101 has a function of receiving space information. For example, the input unit 101 is implemented by a camera 1011, a MIC (microphone) 1012, and a sensor 1013. A plurality of cameras 1011 may be included; the cameras 1011 image the inside of a space (e.g., a living room) and acquire captured images. Similarly, a plurality of MICs 1012 may be included; the MICs 1012 pick up sound in the space and the environment sound around the space (e.g., from the next room, a corridor, the outside of the house, or the like) to acquire sound data. In addition, the sensor 1013 has a function of sensing various kinds of information inside the space or in the area around the space. Examples of the sensor 1013 include a temperature sensor, a humidity sensor, an illuminance sensor, a motion sensor, a door opening and closing sensor, and the like.
The space information processing unit 102 acquires various kinds of space information from the input unit 101, prepares the data such that the state determination unit 103 can use it as material for state determination, and outputs the data. Preparing the data refers to, for example, noise processing, image analysis, object recognition, sound analysis, or the like. Further, the space information processing unit 102 recognizes a user on the basis of the acquired space information. Recognizing a user involves, for example, identifying the individual user in the space, or recognizing the user's position (where the user is in the room, or the like), posture (whether the user is standing, sitting, lying down, or the like), emotion (whether the user is happy, sad, or the like), action (cooking dinner, watching television, reading a book, or the like), and degree of busyness (whether or not the user is busy, or the like). In addition, the space information processing unit 102 recognizes the environment on the basis of the acquired space information. Recognizing the environment involves, for example, recognizing the current time (morning, noon, evening, or midnight), brightness (brightness of the room, or light from a window), temperature, audio (sound picked up in the space), region (place where the space exists), degree of order (to what extent the room is tidied up), and the like of the space.
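For illustration, the recognition results enumerated above could be carried in simple structures like the following Python sketch; all field names and value conventions are hypothetical and not prescribed by the embodiment.

```python
from dataclasses import dataclass

# Hypothetical containers for the user/environment recognition results
# described above; the embodiment does not specify a concrete data format.

@dataclass
class UserState:
    user_id: str
    position: str = "unknown"   # e.g., "sofa", "kitchen"
    posture: str = "unknown"    # standing / sitting / lying down
    emotion: str = "unknown"    # happy / sad / ...
    action: str = "unknown"     # cooking / watching TV / reading
    busy: bool = False

@dataclass
class EnvironmentState:
    time_of_day: str = "unknown"   # morning / noon / evening / midnight
    brightness: float = 0.0        # room illuminance
    temperature: float = 0.0
    region: str = "unknown"        # place where the space exists
    tidiness: float = 0.0          # degree of order of the room

state = UserState(user_id="user_a", posture="sitting", action="reading")
print(state.busy)  # False
```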
The sound analysis carried out by the space information processing unit 102 will now be described further. The space information processing unit 102 according to the present embodiment performs sound source separation for reproducing an audio space (sound image), and builds a sound database from the separated or generated audio. For example, the space information processing unit 102 separates, for each sound source, the sound data picked up by a MIC 1012 (e.g., an array MIC) provided inside or outside the user-side space (e.g., a living-room space). Examples of the per-source sound data include uttered sound data of each user, footstep data, object sound data of each object (sound of moving furniture, sound of a faucet, metallic sound of tableware, or the like), environment sound data (outdoor environment sound or the like), and the like. In addition to the sound source separation, the space information processing unit 102 also analyzes the sound source position (incoming direction or the like) of the separated sound data. A sound source determination can be made, for example, on the basis of the incoming direction and distance of the sound, the frequency or characteristics of the sound, sound data stored in the sound source determination DB 112, or a captured image taken by the camera 1011. In addition, the space information processing unit 102 stores the sound data subjected to sound source separation in the sound DB 113 in association with the speaker or event to build the database. The sound data stored in the sound DB 113 is not limited to sound data acquired in real time, and may be sound data generated, for example, with an audio generation algorithm or the like. In addition, characteristic indoor sounds picked up by a shotgun MIC (e.g., sound of moving furniture, sound of the front door opening or closing, sound of stepping up or down stairs, the chime of a clock, or the like) may be registered in the sound DB 113 in advance.
The space information processing unit 102 outputs the sound data picked up by the MIC 1012 and the sound data subjected to sound source separation to the transmission information generation unit 111 and the state determination unit 103. In addition, the space information processing unit 102 may replace the sound data picked up by the MIC 1012 with sound data registered in the sound DB 113 in advance, and output that sound data to the transmission information generation unit 111 and the state determination unit 103. Further, the space information processing unit 102 may extract from the sound DB 113 the sound data associated with an indoor event (e.g., an ON/OFF operation or state change of an IoT-enabled apparatus, stepping up or down stairs, opening or closing a door, or the like) sensed by the camera 1011, the MIC 1012, or the sensor 1013, or generate such sound data with a predetermined audio generation algorithm, and output the sound data to the transmission information generation unit 111 and the state determination unit 103.
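As a hedged illustration of this event-driven substitution, the following Python sketch keys pre-registered sound data by speaker or event label, standing in for the sound DB 113. The DB layout and function names are assumptions, and real sound source separation (e.g., beamforming on an array MIC) is outside its scope.

```python
# Hypothetical stand-in for the sound DB 113: label -> sound data bytes.
# A sensed indoor event can then be transmitted as a label only, with the
# sound itself looked up (or synthesized) on either side.

sound_db = {}

def register_sound(label, sound_data):
    """Register separated or pre-recorded sound under a speaker/event label."""
    sound_db[label] = sound_data

def sound_for_event(event_label):
    """Return pre-registered sound data for a sensed indoor event, if any;
    a real system might instead generate it with an audio generation
    algorithm, as the text notes."""
    return sound_db.get(event_label)

# Example: a door-opening event sensed by a camera or sensor.
register_sound("front_door_open", b"\x00\x01\x02")  # placeholder samples
assert sound_for_event("front_door_open") is not None
```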
The state determination unit 103 determines the state of a space or the state of a user (i.e., context of a space serving as a communication source) on the basis of the information acquired and output by the space information processing unit 102. For example, the state determination unit 103 determines the state of a space or the state of a user on the basis of a user recognition result and an environment recognition result of the space information processing unit 102. Note that the context of a space serving as a communication source can include the state of a user, the state of a real space where a user is present, time, a season, weather, a place, or the relationship with a partner user.
The spatial distance control unit 104 has a function of controlling the distance (separation distance in the depth direction) between the connected spaces in a three-dimensional space. In the present embodiment, information acquired from the space of the communication destination is disposed at an appropriate distance in the three-dimensional space, giving the connection a sense of depth and thereby realizing a pleasant connection state. Here, the distance between the connected spaces will be described with reference to the drawings.
The distance to the partner space is adjusted, for example, on the basis of the connection request level of the user and the connection request level of the communication destination user. First, the connection request level of the user is calculated, for example, by the spatial distance control unit 104 on the basis of the determination result (the context of the space of the communication source) output from the state determination unit 103.
The connection request level of the communication destination user is transmitted from the communication control apparatus 10 serving as a communication destination via the communication unit 107.
Then, the spatial distance control unit 104 calculates an optimum connection degree on the basis of the calculated connection request level of the user and the received connection request level of the communication destination user.
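The "formula 2" referenced later for this calculation is not reproduced in this excerpt. Purely as a stand-in consistent with the description (neither user should be connected more closely than requested), the following Python sketch takes the minimum of the two request levels and maps the resulting connection degree to a separation distance; both the formula and the distance range are assumptions.

```python
def connection_degree(level_src, level_dst):
    """Hypothetical stand-in for the formula mentioned in the text: take
    the more conservative of the two connection request levels, assumed
    here to be normalized to the range [0, 1]."""
    return min(level_src, level_dst)

def degree_to_distance(degree, d_min=1.0, d_max=10.0):
    """Map a connection degree to a virtual separation distance: a higher
    degree yields a smaller distance. d_min/d_max are illustrative."""
    return d_max - degree * (d_max - d_min)

print(degree_to_distance(connection_degree(0.8, 0.4)))  # 6.4
```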
The operation interface (I/F) 105 receives an operation input from the user, and outputs it to the spatial distance control unit 104 or the 3D courtyard space generation unit 106. This enables the user to optionally set, for example, the “connection request level of the user,” or to set a scene of the space, which will be described later. In addition, operation inputs from the user for the various objects disposed in the three-dimensional space are also possible.
The 3D courtyard space generation unit 106 generates a “3D courtyard space”: the 3D space between the video of the space of the communication destination, which is disposed in the three-dimensional space in accordance with the distance to the communication partner set by the spatial distance control unit 104, and the foreground on the communication source user side. Information from the communication source or the communication destination is reflected in the “3D courtyard space,” and it is also possible to dispose a designated object in it. This makes it possible to display a screen that appears to connect to the space of the communication partner through a virtual courtyard. In addition, reflecting the user state of the communication partner, the state of the partner space, surrounding information of the partner space, or the like in the courtyard space makes it possible to recognize the state of the partner indirectly.
The transmission information generation unit 111 is capable of adjusting the amount of data transmitted to the communication destination via the communication unit 107. In the present embodiment, the output value of sound data reproduced at the communication destination is controlled in accordance with the distance between the communication source and the communication destination in the three-dimensional space. Accordingly, for example, refraining from transmitting sound data that will not be reproduced at the communication destination makes it possible to reduce the communication cost and protect the privacy of the user. Specifically, for example, in the case where the distance corresponding to the optimum connection degree set by the spatial distance control unit 104 is long and the communication source is far from the communication destination in the three-dimensional space, the video displayed at the communication destination is small and the indoor sound data is not reproduced. Accordingly, the transmission information generation unit 111 generates video of low resolution and outputs it to the communication unit 107, and stops outputting the sound data to the communication unit 107.
In addition, in the case where the communication partner side has the sound data corresponding to an event in the space, the transmission information generation unit 111 is also capable of outputting only data indicating the event to the communication unit 107 and causing the communication unit 107 to transmit that data to the communication destination.
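A minimal Python sketch of this transmission-side adaptation follows; the distance threshold, payload layout, and event labels are illustrative assumptions, not the embodiment's actual protocol.

```python
# Sketch of the transmission adaptation described above: at a large virtual
# separation distance, send low-resolution video and omit indoor sound data
# (saving bandwidth and protecting privacy); send only event labels when the
# partner side can look up the corresponding sound in its own sound DB.

def build_payload(distance, video, sounds, events, partner_has_sound_db):
    payload = {}
    if distance > 8.0:  # assumed "far" threshold: partner shows a small image
        payload["video"] = ("low_res", video)
        # Indoor sound would not be reproduced at this distance: omit it.
    else:
        payload["video"] = ("full_res", video)
        payload["sounds"] = sounds
    if partner_has_sound_db:
        payload["events"] = events  # e.g., ["front_door_open", "stairs"]
    return payload

payload = build_payload(9.5, video=b"frame", sounds=[b"voice"],
                        events=["front_door_open"], partner_has_sound_db=True)
print(sorted(payload))  # ['events', 'video']
```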
The communication unit 107 connects to another communication control apparatus 10 and the processing server 30 via the network 20, and transmits and receives data. For example, the communication unit 107 transmits the space information output from the space information processing unit 102, the spatial distance output from the spatial distance control unit 104, and the information of the 3D courtyard space output from the 3D courtyard space generation unit 106 to the communication control apparatus 10 serving as the communication destination or to the processing server 30. In addition, the communication unit 107 receives space information, spatial distance, information of a 3D courtyard space, and the like from the communication control apparatus 10 serving as the communication destination or from the processing server 30. In the present embodiment, the 3D courtyard space displayed at the communication source and the communication destination, and the distance in the three-dimensional space, can thus be synchronized. In addition, the communication unit 107 is also capable of receiving information (weather information, news, schedule information, or the like) acquired by the processing server 30 from a related service server on a network, or of receiving such information directly from the related service server on the network.
The space information generation unit 108 generates space information on the basis of the 3D courtyard space generated by the 3D courtyard space generation unit 106 and the video of the space of the communication destination received via the communication unit 107, and sends the space information to the output unit 109. For example, the space information generation unit 108 generates space image information by combining the video of the space of the communication destination received via the communication unit 107 with the 3D courtyard space generated by the 3D courtyard space generation unit 106, and performs control such that the space image information is displayed on a display 1091.
In addition, the space information generation unit 108 generates space audio information for reproducing the audio space (sound image) corresponding to the spatial distance, and performs control for reproduction by a speaker 1092. For example, the space information generation unit 108 sets the sound volumes of the courtyard environment sound corresponding to the 3D courtyard space generated by the 3D courtyard space generation unit 106, and of the indoor speech and indoor noise of the space of the communication destination received via the communication unit 107, in accordance with the distance D between the communication source space and the communication destination space in the three-dimensional space. Here, the sound volume setting will be described with reference to the drawings.
In addition, the space information generation unit 108 also performs sound image localization processing on the sound data of each sound source, thereby making it possible to reproduce the three-dimensional audio space more effectively.
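The embodiment does not fix a localization method (the speaker configurations discussed below range from stereo to wavefront synthesis). As one minimal illustration, the following Python sketch computes constant-power stereo panning gains from the analyzed incoming direction of a separated source; this pan law is an assumption, not the embodiment's method.

```python
import math

def constant_power_pan(azimuth_deg):
    """Minimal illustration of placing a separated sound source in a stereo
    image. azimuth_deg: analyzed source direction, -90 (left) .. +90 (right).
    Real deployments could instead use 5.1ch surround or wavefront synthesis,
    as the text notes; this constant-power pan is only a stand-in."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)  # (left_gain, right_gain)

print(constant_power_pan(0.0))    # centered source: ~(0.707, 0.707)
print(constant_power_pan(-90.0))  # fully left: (1.0, 0.0)
```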
The output unit 109 has a function of presenting the space information generated by the space information generation unit 108 to the user of a communication source. For example, the output unit 109 is implemented by the display 1091, the speaker 1092, or an indicator 1093.
Here, a configuration example of the speaker 1092 according to the present embodiment will be described. In reproducing an audio space in the present embodiment, sound can be presented by virtual sound source localization technology or the like, using a monaural, stereo, or 5.1ch surround speaker configuration, for example. In addition, using a wavefront synthesis speaker or the like that employs a speaker array makes it possible to accurately localize the sound image of the partner user's voice or of noise in the living-room space of the communication partner, and also to reproduce environment sound as plane waves from the entire reference plane (e.g., the wall on which the display 1091 is installed).
In addition, in the present embodiment, the configuration may combine a speaker capable of localizing a sound image, such as a stereo speaker, with a flat speaker or the like capable of presenting non-localized plane sound from the entire reference plane.
The storage unit 110 stores data transmitted and received via the communication unit 107.
As described above, in the present embodiment, the partner space image 41 is disposed in the three-dimensional space at the distance (separation distance) corresponding to the connection degree based on the connection request levels of both the communication source and the communication destination, and the sound also changes in accordance with this distance. Here, an output example of the display 1091A and the speaker 1092 of the communication source as the separation distance gradually increases (distance D1 and distance D2 illustrated in the drawings) will be described.
Note that, although not illustrated, the MIC 1012 can also be installed around the display 1091A.
Next, operation processing according to the present embodiment will be specifically described with reference to the drawings.
As illustrated in the drawings, first, a connection is established between the communication control apparatus 10A and the communication control apparatus 10B, after which the following processing is performed.
Next, the communication control apparatus 10A acquires space information through the space information processing unit 102 (step S112), and determines the state of the user A and the state of the space A through the state determination unit 103 (step S115).
Next, the communication control apparatus 10A transmits the space information and the state information from the communication unit 107 to the communication control apparatus 10B (step S118).
Meanwhile, similarly, the communication control apparatus 10B side also acquires space information (step S121) and determines the state of the user B and the state of the space B (step S124). The communication control apparatus 10B side transmits the various kinds of information to the communication control apparatus 10A (step S127).
Next, the communication control apparatus 10A calculates the connection request level of the user A through the spatial distance control unit 104 (step S130), and transmits the connection request level to the processing server 30 (step S133). The connection request level of the user A may be a value optionally input by the user A, or may be calculated on the basis of the determination result of the user state or the space state.
Next, the communication control apparatus 10B side similarly calculates the connection request level of the user B through the spatial distance control unit 104 (step S136), and transmits the connection request level to the processing server 30 (step S139).
Next, the processing server 30 adjusts the distance on the basis of the connection request level of the user A and the connection request level of the user B (step S142). That is, the processing server 30 calculates an optimum connection degree on the basis of the connection request level of the user A and the connection request level of the user B. The connection degree can be calculated, for example, with the formula 2 described above.
Next, the processing server 30 transmits the calculated distance to each of the communication control apparatuses 10A and 10B (steps S145 and S148).
Next, the communication control apparatuses 10A and 10B use the spatial distance control units 104 to optimally control the spatial distance (steps S151 and S154). Specifically, the spatial distance control unit 104 sets the distance transmitted from the processing server 30 as spatial distance.
Next, the processing server 30 transmits the scene information to each of the communication control apparatuses 10A and 10B (steps S157 and S163). The transmitted scene information may be information of a scene selected by the user A or the user B, or information of a scene automatically decided by the processing server 30.
Next, the communication control apparatus 10A generates, through the 3D courtyard space generation unit 106, a 3D courtyard space by using the scene information transmitted from the processing server 30, the space information received in step S127 above, and the state determination information (step S160). In addition, in the case where relevant information (weather information, illuminance of the partner space, state of cookware, schedule information of the partner user, action history, and the like) is transmitted (step S169), the 3D courtyard space generation unit 106 also reflects the relevant information in the 3D courtyard space (step S172).
Meanwhile, similarly, the communication control apparatus 10B side also generates a 3D courtyard space (step S166), and reflects the received relevant information in the 3D courtyard space (steps S175 and S178).
Next, the communication control apparatus 10A presents, from the output unit (display and speaker), the space information generated by the space information generation unit 108, including the 3D courtyard space, the partner space image (video of the space B), and the audio information (speech and noise in the space B, and courtyard sound corresponding to the 3D courtyard space) (step S181). Meanwhile, the communication control apparatus 10B side similarly presents the 3D courtyard space, the partner space image (video of the space A), and the space information including audio information (speech and noise in the space A, and courtyard sound corresponding to the 3D courtyard space) from its output unit (step S184).
The 3D courtyard space and each partner space image described above can be synchronized by the processing server 30, and displayed on each display at the same timing with the same sense of distance. In addition, the courtyard sound corresponding to the 3D courtyard space can also be reproduced by each speaker at the same timing with the same sense of distance.
Next, in the case where some information is updated (step S187/Yes), the communication control apparatus 10A repeats the processing from step S112. In addition, in the case where some information is updated (step S190/Yes), the communication control apparatus 10B also repeats the processing from step S121.
Then, once the communication control apparatus 10A is instructed to finish the connection (step S193/Yes), the communication control apparatus 10A checks with the communication control apparatus 10B whether to finish the connection (step S196). Upon receiving permission to finish the connection from the communication control apparatus 10B (step S199), the communication control apparatus 10A disconnects the session (step S202).
The above describes the communication control processing according to the present embodiment. Note that, here, as an example, the processing server 30 performs the synchronization processing, calculates the optimum connection degree, transmits the scene information, and so on. However, the present embodiment is not limited thereto; the communication control apparatus 10 may also perform these kinds of processing.
Next, sound source separation processing for the sound picked up by the MIC 1012 will be described with reference to the drawings.
As illustrated in the drawings, the space information processing unit 102 first performs sound source separation on the sound data picked up by the MIC 1012, and determines the sound source and the sound source position of each piece of separated sound data.
Next, the space information processing unit 102 associates the sound source position with the sound data, and registers the sound source position and the sound data in the sound DB 113 (step S206). The sound DB 113 may be shared with a communication partner.
Next, the space information processing unit 102 transmits the sound data subjected to the sound source separation along with a determination result to the partner user side (communication destination) via the communication unit 107 (step S212).
Next, sound source reproduction processing performed by the speaker 1092 will be described with reference to the drawings.
As illustrated in the drawings, the space information generation unit 108 first generates the space information, including the space audio information corresponding to the spatial distance.
Next, the space information generation unit 108 instructs the output unit 109 to present the space information (step S306).
Next, the output unit 109 checks the attributes (sound volume, sound source position, effects (such as the presence or absence of directionality)) of a sound source group for reproduction and the corresponding speaker (step S309).
Next, in the case where the type of the corresponding speaker is stereo (step S312/stereo), the output unit 109 outputs the sound data with a predetermined attribute corresponding to the stereo speaker (step S318). Meanwhile, in the case where the type of the corresponding speaker is flat (step S312/flat), the output unit 109 outputs the sound data with a predetermined attribute corresponding to the flat speaker (step S315).
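As a rough Python sketch of this routing step, sources flagged as directional (speech, object sounds) can be sent to the localizable stereo path, while non-directional environment sound goes to the flat (plane-wave) path. The attribute names and data layout are assumptions for illustration.

```python
# Sketch of the reproduction-side routing in the flow above: each sound
# source carries attributes (volume, position, directionality), and the
# output unit dispatches it to the speaker type that can render it.

def route(sources):
    stereo, flat = [], []
    for s in sources:
        # Example source: {"type": "voice", "volume": 0.8,
        #                  "position": (-30, 2.0), "directional": True}
        (stereo if s.get("directional") else flat).append(s)
    return {"stereo": stereo, "flat": flat}

mix = route([
    {"type": "voice", "volume": 0.8, "position": (-30, 2.0), "directional": True},
    {"type": "environment", "volume": 0.5, "directional": False},
])
print(len(mix["stereo"]), len(mix["flat"]))  # 1 1
```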
As described above, in the information processing system according to an embodiment of the present disclosure, it is possible to use a virtual three-dimensional space for the connection to a communication partner, aurally produce the distance between the connected spaces, and realize more comfortable communication.
In addition, the display installed in the space of the communication source is regarded as a window (or a door), and a space image in which the video of the space of the communication partner is disposed at a predetermined distance in the three-dimensional space is displayed, thereby making it possible to visually express the distance to the communication partner. Note that the aspect ratio of the display installed on the wall may be set in accordance with the dimensions of an actual window or door. In addition, in the case where the display is regarded as a door, the display is disposed such that its lower side is positioned near the floor, thereby making it possible to express the presence of the space beyond the door more realistically.
In addition, control may be performed such that the aspect ratio of the display area of the video of the space of a communication partner which is disposed in a three-dimensional space is the same as the aspect ratio of the display.
The preferred embodiment(s) of the present disclosure has/have been described above with reference to the accompanying drawings, whilst the present disclosure is not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present disclosure.
For example, it is also possible to create a computer program for causing hardware such as a central processing unit (CPU), a read only memory (ROM), and a random access memory (RAM) built in the above-described communication control apparatus 10 or the processing server 30 to execute the functions of the communication control apparatus 10 or the processing server 30. In addition, there is also provided a computer-readable storage medium having the computer program stored therein.
Further, the effects described in this specification are merely illustrative or exemplified effects, and are not limitative. That is, along with or in place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art from the description of this specification.
Additionally, the present technology may also be configured as below.
(1)
An information processing apparatus including:
a reception unit configured to receive data from a communication destination; and
a reproduction control unit configured to perform control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
(2)
The information processing apparatus according to (1), further including:
a distance control unit configured to control virtual separation distance between the communication source and the communication destination in the three-dimensional space, in which
the distance control unit controls the separation distance in accordance with a connection degree adjusted on a basis of a connection request level from the communication source and a connection request level from the communication destination.
(3)
The information processing apparatus according to (2), in which
the connection request level is calculated in accordance with a context of a user, the context being determined on a basis of space information.
(4)
The information processing apparatus according to any one of (1) to (3), in which the sound data of the space of the communication destination is sound data received by the reception unit from the communication destination, or sound data extracted from a predetermined database on a basis of the data received by the reception unit from the communication destination.
(5)
The information processing apparatus according to any one of (1) to (4), in which
the reproduction control unit performs control such that uttered sound data and object sound data of the space of the communication destination increase and environment sound in the space of the communication destination decreases as the space of the communication destination is closer to the space of the communication source disposed in the three-dimensional space, and performs control such that the uttered sound data and the object sound data decrease and the environment sound increases as the space of the communication destination is more distant from the space of the communication source disposed in the three-dimensional space.
(6)
The information processing apparatus according to (5), in which
the reproduction control unit performs sound image localization control such that the uttered sound data and the object sound data of the sound data are reproduced at corresponding sound image positions, and performs control such that the environment sound is reproduced from a whole of a reference plane of the space of the communication source.
(7)
The information processing apparatus according to any one of (1) to (6), further including:
a sound source separation unit configured to perform sound source separation for sound data acquired from the space of the communication source; and
a transmission unit configured to transmit data including the sound data subjected to the sound source separation to the communication destination, the data being acquired from the space of the communication source.
(8)
The information processing apparatus according to any one of (1) to (7), further including:
a generation unit configured to generate space image information in which an image corresponding to the space of the communication destination is disposed at a predetermined position corresponding to the separation distance in the three-dimensional space; and
a display control unit configured to perform control such that the generated space image information is displayed on a display unit in the space of the communication source.
(9)
An information processing method including, by a processor:
receiving data from a communication destination; and
performing control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
(10)
A program for causing a computer to function as:
a reception unit configured to receive data from a communication destination; and
a reproduction control unit configured to perform control such that sound data of a space of the communication destination is reproduced from a sound output unit in a space of a communication source with an output value in accordance with separation distance between the communication destination and the communication source disposed in a virtual three-dimensional space, the output value being different for each sound source type.
Foreign Application Priority Data: Application No. 2015-242438, filed Dec. 2015, JP (national).
Related U.S. Application Data: parent application No. 15778721, filed May 2018 (US); child application No. 16676905 (US).