The invention relates to a method and system for facilitating multiuser communication in a Virtual Reality [VR] environment. The invention further relates to a computer program comprising instructions for causing a processor system to perform the method, to a VR device, to a server for hosting the VR environment, to a communication device, and to signalling information for the communication device.
Virtual Reality (VR) involves the use of computer technology to simulate a user's physical presence in a virtual environment. Typically, VR rendering devices, also in the following simply referred to as VR devices, make use of Head Mounted Displays (HMD) to render the virtual environment to the user, although other types of VR displays and rendering techniques may be used as well, including but not limited to holography and Cave Automatic Virtual Environments (recursive acronym CAVE).
It is known to use a VR environment, which is in the context of VR also simply referred to as ‘virtual environment’, for multiuser communication. In such multiuser communication, users may be represented by avatars within the virtual environment, while communicating via voice, e.g., using a microphone and speakers, and/or nonverbal communication. Examples of the latter include, but are not limited to, text-based communication, gesture-based communication, etc. Here, the term ‘avatar’ refers to a graphical representation of the user within the virtual environment, which may include representations as real or imaginary persons, real or abstract objects, etc.
Such VR environment-based multiuser communication is known per se, e.g., from AltspaceVR (http://altvr.com/), Improov (http://www.middlevr.com/improov/), 3D ICC (http://www.3dicc.com/), etc. It is also known to combine a VR environment with video-based communication. For example, it is known from Improov, which is said to be a ‘platform for collaboration in virtual reality’, to use a live camera recording of a user as an avatar in the virtual environment.
The inventors have also considered multiuser communication scenarios in which a local user accesses the virtual environment with a VR device and is recorded via a camera, with the video from the camera being provided to communication devices of remote users which themselves may or may not be VR devices. In the latter case, the remote users may not have direct access to the virtual environment, but instead may be shown the video of the local user while communicating with the local user via voice, text, etc. Here and in the following, the terms ‘local’ and ‘remote’ are used to indicate that the communication takes place between different users who communicate electronically, e.g., via communication data. As such, the terms may, but do not need to, indicate a degree of physical separation of the users, e.g., by being located in different rooms, buildings or places.
When considering the above scenarios, the inventors have recognized that a problem of multiuser communication which combines VR and video is that a remote user, to whom the video of the local user is shown, may not know that he/she is addressed by the communication of the local user. Namely, the same video may be provided to several remote users in parallel.
It would be advantageous to obtain multiuser communication which combines VR and video and addresses the abovementioned problem.
The following aspects of the invention may involve detecting communication, or an intent of communication, from the local user to a remote user, and differently generating the communication data for the communication device of the remote user than for the communication devices of other remote users so as to signal whether a particular remote communication device is addressed by the communication.
In accordance with a first aspect of the invention, a method may be provided for facilitating multiuser communication in a Virtual Reality [VR] environment, wherein the multiuser communication may be based on:
In accordance with a further aspect of the invention, a transitory or non-transitory computer-readable medium may be provided comprising a computer program comprising instructions to cause a processor system to perform the method.
In accordance with a further aspect of the invention, a transitory or non-transitory computer-readable medium may be provided comprising signalling information for use by a communication device, wherein the communication device may be configured to render video associated with multiuser communication in a Virtual Reality [VR] environment based on the signalling information and the signalling information may be indicative of whether the communication device is addressed by the multiuser communication in the VR environment.
In accordance with a further aspect of the invention, a system may be provided for facilitating multiuser communication in a Virtual Reality [VR] environment, wherein the multiuser communication may be based on:
In accordance with a further aspect of the invention, a server may be configured as host of a Virtual Reality [VR] environment, wherein the server may comprise at least one of: the first processor and the second processor, of the system.
In accordance with a further aspect of the invention, a Virtual Reality [VR] device may be configured to render a VR environment, wherein the VR device may comprise at least one of: the first processor and the second processor, of the system.
In accordance with a further aspect of the invention, a communication device may be provided which may comprise:
The above measures involve a VR device and a plurality of remote communication devices which may be, but do not need to be, VR devices themselves. These devices may be engaged in a communication session, which may involve the exchange of communication data between devices. The communication session may be associated with the VR environment in that it may represent communication which occurs within the VR environment, such as nonverbal communication between avatars. In this case, the communication data may be an integral part of data which is exchanged between the devices for the purpose of participating in the VR environment, and may possibly be routed via one or more servers hosting the VR environment. However, communication data may also be separately transmitted, e.g., in case of voice data which may be directly exchanged between the respective devices.
A camera may be provided which may record the local user when participating in the communication session. For example, the camera may be directed at a face of the local user. The resulting video may, in a conventional scenario, be transmitted to each of the plurality of remote communication devices as part of the communication data between the VR device and a respective remote communication device. Here, the term ‘part of’ may refer to the video being sent in packets which include other types of data exchanged during the communication session, but also to the video being sent separately, e.g., in the form of a separate video stream. In this respect, it is noted that the video may be modified before or after transmittal by image and/or video processing, e.g., to replace, in the recorded video, a HMD worn by the local user by synthesized images of his/her eyes, facial expressions, etc. As such, the rendered video may differ from the video originally recorded by the camera.
Communication, or an intent of communication, may be detected between the local user and at least one of the plurality of remote users. Thereby, a target user may be identified of the communication as well as a target communication device, namely the remote communication device of the target user. Such communication, or an intent of communication, may be identified on the basis of the communication data which is exchanged during the communication session. It will be appreciated that many techniques are known and may be advantageously used for identifying communication, or the intent of communication, from communication data. For example, a plurality of microphones may be used to determine the direction of the voice of the local user, which may indicate who is being addressed. Yet another example is that, if all users are represented by avatars within the VR environment, the relative position and/or relative orientation of the avatars may be used to detect such communication, or the intent of communication, between users. In addition or alternatively, voice recognition may be used to detect if a particular user is addressed by name, e.g., “Hey Alex, . . . ”.
Having identified the target communication device, the communication data which is sent to the target communication device is differently generated than the communication data which is sent to the other remote communication devices. Thereby, it is signalled that the target communication device, rather than the other remote communication devices, is addressed by the communication. It is noted that while conceptually the remote user is addressed by the communication of the local user, his/her communication device receives different communication data and is thus also considered to be ‘addressed by the communication’.
The above measures have the effect that the target user, to whom the video of the local user is shown, may know that he/she is addressed by the communication of the local user, and/or that other remote users may know that they are not addressed by the communication of the local user. Thereby, one of the drawbacks of electronic communication is addressed, namely that various cues, which may allow a person to detect whether he/she is addressed, or is to be addressed, by communication, are obfuscated or not available. Such cues may include gaze, posture, relative position and/or relative orientation in real-life three-dimensional space, etc., and may relate to communication already taking place, e.g., in the form of verbal communication, or may be known to be indicative of the intent of communication, e.g., an establishing of eye contact. In particular, such cues may be obfuscated or not available in case the local user wears a HMD as the HMD may obfuscate parts of his/her face. Moreover, in case head tracking and/or motion tracking is used by the VR device, the local user may be positioned and/or oriented away from the camera, which may further obfuscate such cues. By signalling whether a remote communication device is addressed, or is to be addressed, by the communication, these cues may be replaced, e.g., by an explicit signal or by other means. As such, the communication between users participating in the communication session may be more intuitive, less tiring, etc.
It will be appreciated that the target communication device may change during a communication session, and that the local user may address different ones of the remote users during the communication session. In an embodiment, such a change of target user and thus target communication device may be automatically detected, e.g., by periodically detecting communication, or the intent of communication, between the local user and any of the remote users. Thereby, different target communication devices may be identified during the course of a communication session.
In an embodiment, the communication data may be differently generated to effect a different visual rendering by the target remote communication device than by the other remote communication devices. As such, it may be signalled visually by the target communication device that the target user is addressed by the communication of the local user. Additionally or alternatively, it may be signalled visually by the other remote communication devices that the other remote users are not addressed by the communication of the local user. An advantage of such visual signalling may be that it is noticeable while not being considered bothersome, e.g., as audio signalling may in some instances be. Also, it may give users a more prolonged or even continuous view of who is or is not addressed than a momentary audio signalling may give. However, this is not a limitation in that the visual signalling may also be presented or signalled discontinuously, e.g., be present only for a limited time when a change of target user occurs, or be presented at time intervals, e.g., every 10 seconds.
In an embodiment, the different visual rendering may comprise:
A graphical indicator may be well suited for visually signalling whether a particular user is addressed by the communication of the local user.
In an embodiment, the graphical indicator may be included as an overlay over the video:
An advantage of including the graphical indicator in the video before transmission is that no separate signalling information is needed, nor does any need to be interpreted by the respective remote communication device. An advantage of separately signalling the graphical indicator, or the fact that the graphical indicator is to be overlaid over the video, is that the signalling information may be transmitted separately from the video, e.g., by a separate device or in a separate stream. Another advantage of the latter is that control over the overlay of the graphical indicator over the video is provided to the respective remote communication device.
In an embodiment, the communication system may comprise a further camera configured to record further video of the local user, and the method may further comprise:
It has been recognized by the inventors that one of the reasons that remote users are unable to determine whether they are addressed by the communication of the local user is that they are all provided with the same video feed of the local user, namely one which typically shows the local user oriented towards (or away from) the camera. Each of the remote users is thereby given the same impression, namely that the local user is oriented towards (or away from) him/her and is thus addressing (or not addressing) him/her.
By providing a further camera which may be physically displaced from the first camera, the local user may be recorded from a different angle. By detecting which of the cameras is more aligned with a face direction of the local user, e.g., by using known techniques for detecting face direction, it may be determined which of the recorded videos provides the impression that the local user is facing the viewer, and which of the videos provides the impression that the local user is facing away from the viewer. By providing the former to the target communication device, and providing the latter to the remote communication devices of the other remote users, this problem may be addressed. Namely, the target user may be provided with the impression that the local user faces him/her, while the other remote users may be provided with the impression that the local user faces away. As such, a natural way of signalling that the target user is addressed by the communication of the local user may be established.
In an embodiment, the video of the local user may be post-processed after recording. Such post-processing may include the reconstruction of at least part of the face of the local user in the video, which may be hidden or obfuscated by a head mounted display worn by the local user or by another device before such post-processing. Such post-processing may also be performed differently for the target device than for the other remote communication devices. For example, the video for the target device may be modified to align, or more closely align, the eyes (gaze) and/or face of the local user with the camera direction, e.g., to create the appearance that the local user is looking into the camera. Additionally or alternatively, the video for the other remote communication devices may be modified to misalign, or further misalign, the eyes (gaze) and/or face of the local user with the camera, e.g., to create the appearance that the local user is looking away from the camera. As such, a natural way of signalling that the target user is addressed by the communication of the local user may be established.
In an embodiment, at least the target user may be represented in the VR environment by an avatar, and the method may further comprise:
A user of a VR device may be immersed in the virtual experience, and may not consider that he/she may face away from the camera. In particular, the camera may be obfuscated from view, e.g., by a HMD being worn by the user. As such, a video may be recorded by the camera which shows the local user at an angle. This may convey to a viewer of the video that he/she is not addressed by the local user. By determining the relative orientation between the camera and a face direction of the local user, e.g., using known techniques for face detection, the VR environment, or its display to the local user, may be adjusted such that the avatar of the target user in the virtual environment is more closely aligned with the camera. It has been found that the user will naturally face the avatar of the remote user he/she is addressing. As such, the local user may naturally bring his/her face more closely into alignment with the camera, without a need for explicit and obtrusive feedback, e.g., messages such as "please face the camera". It is noted that, additionally or alternatively to adjusting the VR environment, or the rendering of the VR environment by the VR device, the camera may be a movable camera, e.g., mounted on a rail or attached to a drone, and the camera may be moved to more closely align it with the avatar of the target user in the VR environment, thereby more closely aligning the camera with the face direction of the local user when facing the target user. In general, the static or movable camera may be a pan/zoom/tilt camera.
In an embodiment, the adjusting the VR environment, or the rendering of the VR environment by the VR device, may comprise:
Both options, and the combination of both options, are well suited for more closely aligning the avatar in the virtual environment with the camera in the physical world.
In an embodiment, each of the plurality of remote users may be represented in the VR environment by a respective one of a plurality of avatars, and the identifying the target user may be performed in the VR environment on the basis of the avatars of the remote users. It has been found that, similarly to the physical world, there exist various cues within the VR environment which indicate with which one of the remote users the local user communicates, or intends to communicate. These cues may relate to the virtual representations of the users in the virtual environment, e.g., their avatars. As previously indicated, such avatars may take any suitable form, including but not limited to a rendering in the virtual environment of a video recording of the respective user. By detecting these cues, it may be more reliably determined with which one of the remote users the local user communicates, or intends to communicate.
In an embodiment, the identifying the target user may comprise at least one of:
The relative positions and/or relative orientations of the avatars in the VR environment may be indicative of which one of the remote users the local user is communicating with, or intends to communicate with. For example, if the avatar or virtual viewpoint of the local user is positioned nearby and/or oriented towards another avatar, it is likely that the local user is communicating with, or intends to communicate with, the remote user of that other avatar. Here, the term ‘virtual viewpoint’ refers to a viewpoint in the virtual environment which is rendered to the local user by the VR device, and may also be referred to as a ‘virtual camera’ recording the view of the local user. Additionally or alternatively, the local user may manually select at least one of the avatars, e.g., for the explicit purpose of indicating which one of the remote users he/she communicates with, or intends to communicate with, or for another purpose.
In an embodiment, the identifying the avatar representing the target user may comprise at least one of:
In an embodiment, the receiving the selection of at least one of the avatars from the local user may comprise:
It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
Modifications and variations of the system, the VR device, the server, the communication device, the signalling information and/or the computer program, which correspond to the described modifications and variations of the method, can be carried out by a person skilled in the art on the basis of the present description.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings,
It should be noted that items which have the same reference numbers in different figures have the same structural features and the same functions, or are the same signals. Where the function and/or structure of such an item has been explained, there is no necessity for repeated explanation thereof in the detailed description.
The following list of references and abbreviations is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
The following embodiments may involve detecting communication, or an intent of communication, from the local user to a remote user, and differently generating the communication data for the communication device of the remote user than for the communication devices of other remote users so as to signal whether a particular remote communication device is addressed by the communication.
In the example of
The camera 120 may record the local user 5 in physical space. The resulting video may be transmitted to the remote communication devices of the remote users. As such, the remote users may each be presented with a video of the local user, shown schematically in
This type of illustration is maintained in
As a result of the local user facing the camera 120 in the example of
To address the above situation, it may be detected that the local user 5 communicates, or intends to communicate, with one of the plurality of remote users or a particular subset of the plurality of remote users. For example, it may be detected that the local user 5 is communicating with the second remote user 2, which is shown in
In general, the differently generating of the communication data may involve the following steps. Firstly, it may be detected with whom the local user communicates, or intends to communicate. Examples of such detection will be given with reference to
The VR device 100 and the head mounted display 110 may communicate via data communication 112. For example, the VR device 100 may provide display data to the head mounted display 110, which may cause the head mounted display 110 to display a rendering of the VR environment to the local user 5. Moreover, the VR device 100 may receive sensor data from the head mounted display 110 to enable the VR device 100 to perform head tracking, e.g., on the basis of a measured head rotation or head movement of a user. It is noted that measuring the head rotation or head movement of a user is known per se in the art, e.g., using gyroscopes, cameras, etc. The head rotation or head movement may be measured by the head mounted display 110, e.g., on the basis of the head mounted display 110 comprising a gyroscope. Additionally or alternatively, the head rotation or head movement may be measured by the VR device 100, e.g., by the VR device 100 comprising a camera or camera input connected to an external camera such as the camera 120 recording the user, e.g., using so-termed ‘outside-in’ tracking, or a combination of such approaches.
By way of example,
In the example of
Alternatively, the VR device 100 may directly transmit such different communication data to each of the remote communication devices 160, 162. This is shown in
Examples of signalling information include, but are not limited to, the following. For example, a broadcast message may be transmitted in JSON format, e.g., by the VR device or the server, to all remote communication devices, e.g., via Websockets. The message may provide an ‘orchestrationUpdate’ which may notify all participants of the communication session of the target user by user name:
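(In the following non-limiting illustration, the field names and values other than ‘orchestrationUpdate’ are merely exemplary.)

{
  "orchestrationUpdate": {
    "targetUser": "Alex"
  }
}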
Alternatively, the target user may be identified by a user identifier:
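(In the following non-limiting illustration, the field names and values other than ‘orchestrationUpdate’ are merely exemplary.)

{
  "orchestrationUpdate": {
    "targetUserId": "user-0002"
  }
}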
Another example is a unicast message in JSON format, which may be transmitted, e.g., by the VR device or the server, to a specific remote communication device indicating whether it is being addressed. The example also shows whether an icon should be shown, and if so, which icon.
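A non-limiting illustration of such a unicast message may be the following, in which the icon-related field names and values are merely exemplary:

{
  "beingAddressed": true,
  "showIcon": true,
  "icon": "addressed-icon"
}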
Alternatively to ‘beingAddressed’, ‘intendedUser: false/true’ may be used.
Yet another example is a unicast message in JSON format, which may be transmitted, e.g., by the VR device or the server, to a specific remote communication device, indicating whether it is being addressed and comprising an instruction to switch streams, e.g., to switch the video provided to the target device to a camera which provides a more aligned view of the local user.
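A non-limiting illustration of such a message may be the following, in which the field names and values other than ‘beingAddressed’ are merely exemplary:

{
  "beingAddressed": true,
  "switchStream": {
    "streamId": "camera-2"
  }
}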
Yet another example is a Session Description Protocol (SDP) message update, which may be transmitted, e.g., from the VR device or the server, to a target communication device, with a new SDP offer in an ongoing session. For example, the target user may be signalled via a new SDP attribute ‘intendedUser’:
v=0
o=alice 2890844526 2890844527 IN IP4 host.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 51372 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 0 RTP/AVP 31
a=rtpmap:31 H261/90000
a=intendedUser:false
Alternatively, the existing “inactive” SDP attribute may be used, e.g., as defined by the SDP definition (https://tools.ietf.org/html/rfc4566#section-5.14):
v=0
o=alice 2890844526 2890844527 IN IP4 host.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 51372 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 0 RTP/AVP 31
a=rtpmap:31 H261/90000
a=inactive
The room/device detector 250 may be configured to discover the physical location and orientation of actuators and sensors in a room, e.g., cameras, microphones, VR headsets, eligible for usage in an A/V communications session. Such detection may be provided by, e.g., network-based discovery, e.g., using network protocols such as DLNA, multicast DNS, SAP, to establish the availability of devices. Additionally or alternatively, the environment may be scanned, e.g., using one or more cameras 120 to detect devices using content analysis algorithms. The cameras may be stationary, e.g., part of a laptop or TV, or mobile, e.g., a camera comprised in a smartphone or a VR headset. Additionally or alternatively, a combination of network-based discovery and scanning may be used, e.g., using the sensory input from a discovered device, e.g., a camera or microphone, to analyze its location and orientation in the physical environment, for example using pose estimation. Additionally or alternatively, the physical location and orientations may be manually configured by the user. Besides establishing their position and orientation, the room/device detector 250 may be configured to determine the device capabilities, e.g., in the form of supported media features, and their settings, e.g., whether the devices in the room are eligible for use in the A/V communications session. The room/device detector 250 may output the result of the above discovery or detection to the session orchestrator 200, e.g., in the form of detection data 252, which may comprise any of the above information encoded in a structured format, such as but not limited to a JSON message or XML description. Examples of detection data include, but are not limited to, the following JSON message:
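(In the following non-limiting illustration, all field names and values are merely exemplary; positions may be expressed, e.g., in meters and orientations, e.g., in degrees.)

{
  "devices": [
    {
      "deviceId": "camera-1",
      "type": "camera",
      "position": [0.0, 1.5, 2.0],
      "orientation": [0.0, 180.0, 0.0],
      "capabilities": ["video", "pan", "tilt", "zoom"],
      "eligible": true
    },
    {
      "deviceId": "microphone-1",
      "type": "microphone",
      "position": [0.5, 1.0, 2.0],
      "orientation": [0.0, 0.0, 0.0],
      "capabilities": ["audio"],
      "eligible": true
    }
  ]
}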
The user tracker 240 may be configured to track the position and/or viewing direction of the user in the physical space so as to adjust his/her viewpoint in the virtual environment, and may output the tracked position and/or viewing direction in the form of tracking data 244 to the session orchestrator 200. The tracking data 244 may comprise the position and/or viewing direction of the user, e.g., in the form of an encoding of the position and/or viewing direction in a structured format. Examples of tracking data include, but are not limited to, the following JSON message:
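(In the following non-limiting illustration, all field names and values are merely exemplary; the position may be expressed, e.g., in meters and the viewing direction, e.g., in degrees of yaw, pitch and roll.)

{
  "userId": "user-0001",
  "position": [1.2, 0.4, 1.7],
  "viewingDirection": [90.0, 0.0, 0.0]
}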
Such tracking may involve an external device, e.g., the camera 120, or one or more sensors integrated into a user device, e.g., a smart phone or the VR device 100 itself, or a combination thereof. In the example of
The session orchestrator 200 may be configured to analyze the input provided by the aforementioned modules to detect whom the VR user is addressing, and to signal this to the remote communication devices 160 of the remote users. The output of the session orchestrator 200 may be a configuration 212 or stream to a renderer 210, e.g., to cause the renderer 210 to render the VR environment to the local user. The renderer 210 may be configured to render and/or populate the virtual environment with graphical representations of the other users, possibly using virtual objects such as displays which show a video feed of the respective user, etc. Other output of the session orchestrator 200 may be signalling included in communication data 150, 152 provided to the remote communication devices 160, 162.
Although not shown explicitly, the embodiments of
In addition to the examples of
Another example is that if all communication devices transmit video of their respective users, and all of these videos are displayed to the respective users, e.g., in respective windows arranged side-by-side or on virtual displays in the VR environment, the text or graphical indicator may be overlaid over the video of the target user to indicate to the other remote users who the target user is. Yet another example is that if a video of the local user is obtained showing the local user sideways, e.g., using multiple cameras as described with reference to
To address this problem, a further camera 124 may be provided which may record a further video of the local user, as shown in
Having identified the more aligned video and the less aligned video, the more aligned video may be included in the communication data for the target remote communication device, and the less aligned video may be included in the communication data for the other remote communication devices. This is illustrated in
It is noted that
In particular,
It will be appreciated that the mechanism shown in
As an alternative to enabling the local user 5 to manually rotate the VR environment 10, or the avatars contained therein, such rotation may also be performed automatically, namely in order to align the target user in the VR environment with the camera in physical space. Namely, as shown in
It will be appreciated that the local user 5 may move in the VR environment in multiple ways. For example, as also illustrated in
In general, the system 300 and the communication device 400 may each be embodied as, or in, a device or apparatus. The device or apparatus may comprise one or more (micro)processors which execute appropriate software. The processors of the system and the communication device may be embodied by one or more of these (micro)processors. Software implementing the functionality of the system or the communication device may have been downloaded and/or stored in a corresponding memory or memories, e.g., in volatile memory such as RAM or in non-volatile memory such as Flash. Alternatively, the processors of the system or the communication device may be implemented in the device or apparatus in the form of programmable logic, e.g., as a Field-Programmable Gate Array (FPGA). Any input and/or output interfaces may be implemented by respective interfaces of the device or apparatus, such as a network interface. In general, each unit of the system or the communication device may be implemented in the form of a circuit. It is noted that the system or the communication device may also be implemented in a distributed manner, e.g., involving different devices or apparatuses. For example, the distribution of the system or the communication device may be in accordance with a client-server model.
In general, it will be appreciated that the method or system may be configured to dynamically detect which remote user the local user is communicating with, or intends to communicate with. As such, the described differently generating of the communication data may be adjusted over time, e.g., in response to the local user addressing another remote user. For example, the signalling information may be sent to different ones of the remote communication devices in response to such a change, and/or different signalling information may be generated, etc. Moreover, although the embodiments have been described with reference to the local user addressing a single remote user, the local user may also address a subset of the plurality of remote users. The communication data may thus be differently generated for the subset of remote users than for those remote users which do not belong to the subset.
In general, the video of the local user may be post-processed after recording but before transmission to the remote communication devices, e.g., by the camera, the VR device, a server, etc. Such post-processing may include the reconstruction of at least part of the face of the local user in the video, which may be hidden or obfuscated by a head mounted display worn by the local user or by another device before such post-processing. For that purpose, techniques known per se in the art of video processing may be used, e.g., as described in the paper ‘Real-time expression-sensitive HMD face reconstruction’ by Burgos-Artizzu et al, Siggraph Asia 2015. Such post-processing may also be performed differently for the target device than for the other remote communication devices. For example, the video for the target device may be modified to align, or more closely align, the eyes (gaze) and/or face of the local user with the camera direction, e.g., to create the appearance that the local user is looking into the camera. Additionally or alternatively, the video for the other remote communication devices may be modified to misalign, or further misalign, the eyes (gaze) and/or face of the local user with the camera, e.g., to create the appearance that the local user is looking away from the camera. For that purpose, techniques known per se in the art of video processing may be used, e.g., as described in the paper ‘Eye Gaze Correction with a Single Webcam Based on Eye-Replacement’ by Yalun Qin et al, ISVC 2015. Additionally or alternatively, correction data representing or being indicative of such a correction may be signalled to the remote communication devices so as to enable the remote communication devices to effect the correction. For example, video data of a ‘corrected’ face of the local user, e.g., having more aligned eyes, may be signalled to the target device to enable the target device to overlay the corrected face over the video of the local user. Instead of being video data, this correction data may also have a different form, e.g., static image data, or the correction data may specify parameters for video processing to be performed by a remote communication device so as to locally effect the ‘correction’ of the local user's eyes (gaze) and/or face.
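By way of a non-limiting illustration, such parameter-type correction data may be signalled as a JSON message in which all field names and values are merely exemplary, e.g., specifying angular offsets by which a remote communication device may rotate the rendered eyes (gaze) of the local user:

{
  "correctionData": {
    "type": "gazeCorrection",
    "gazeYawOffsetDegrees": -12.5,
    "gazePitchOffsetDegrees": 3.0
  }
}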
Although the embodiments have been described with respect to the video of one user (e.g., a ‘local’ user), the techniques may also be applied to the video of other, or even all users involved in the multiuser communication (e.g., the ‘remote’ users).
The data processing system 1000 may include at least one processor 1002 coupled to memory elements 1004 through a system bus 1006. As such, the data processing system may store program code within memory elements 1004. Further, processor 1002 may execute the program code accessed from memory elements 1004 via system bus 1006. In one aspect, data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that data processing system 1000 may be implemented in the form of any system including a processor and memory that is capable of performing the functions described within this specification.
Memory elements 1004 may include one or more physical memory devices such as, for example, local memory 1008 and one or more bulk storage devices 1010. Local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive, solid state disk or other persistent data storage device. The processing system 1000 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from bulk storage device 1010 during execution.
Input/output (I/O) devices depicted as input device 1012 and output device 1014 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, for example, a microphone, a keyboard, a pointing device such as a mouse, or the like. Examples of output devices may include, but are not limited to, for example, a monitor or display, speakers, or the like. Input device and/or output device may be coupled to data processing system either directly or through intervening I/O controllers. A network adapter 1016 may also be coupled to data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to said data processing system and a data transmitter for transmitting data to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with data processing system 1000.
As shown in
In one aspect, for example, data processing system 1000 may represent a system for facilitating multiuser communication. In that case, application 1018 may represent an application that, when executed, configures data processing system 1000 to perform the various functions described herein with reference to ‘system for facilitating multiuser communication’. In another aspect, data processing system 1000 may represent the server, the VR device or the remote communication device. In that case, application 1018 may represent an application that, when executed, configures data processing system 1000 to perform the various functions described herein with reference to ‘server’, ‘VR device’ and ‘remote communication device’.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.