Example embodiments relate to outputting virtual content, for example outputting virtual content representing a virtual scene which may comprise one or more audio sources.
The term extended reality (XR) is sometimes used to refer to a range of technologies and methods involving virtual content which may be visual and/or aural content. Common examples are virtual reality (VR), augmented reality (AR) and mixed reality (MR). VR may refer to rendering a virtual scene in terms of video and/or audio content through a user device such as a VR headset or a handheld device, wherein the virtual scene may be updated based on user movement. AR is similar, but involves output of overlaid virtual content to augment a view of a real-world space seen through a user device such as glasses, goggles or even the camera of a handheld device. Thus, a user may be able to view the real-world environment around them, augmented or supplemented with virtual content that may be provided based on their position. The virtual content may comprise multimedia content such as pictures, photographs, video, diagrams, textual information and aural content, to give some examples. MR is similar to AR, but may be considered different in that some content is inserted into the real-world space at anchor points to give the illusion that the content is part of the real environment.
In some cases, a user may explore virtual content, e.g. a virtual scene, using six-degrees-of-freedom (6DoF) in which both rotational and translational movement of the user or user device allows the user to move around, e.g. behind, virtual objects in the scene.
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
According to a first aspect, there is described an apparatus, comprising means for: providing data representing one or more acoustic properties of a virtual scene, the virtual scene being for output to a user device associated with a user and comprising one or more audio sources at respective locations; identifying, based on a position of the user with respect to the one or more audio sources and the one or more acoustic properties of the virtual scene, one or more audio sources not meeting a predetermined criterion; providing, via a user interface associated with the user device, one or more indicators respectively corresponding to the one or more identified audio sources; and responsive to selection of one of the one or more indicators, changing the user position in the virtual scene so that the user is closer to the corresponding audio source.
The identifying means may be configured to identify an audio source of the one or more audio sources not meeting the predetermined criterion by: estimating or measuring one or more acoustic effects at the user position from sounds emitted from the audio source; and identifying one or more adverse acoustic effects that are greater than, or are above a predetermined threshold with respect to, sounds received directly at the user position from the audio source.
The one or more adverse acoustic effects may comprise one or more of reverberation, reflection, diffusion and attenuation.
The data representing the one or more acoustic properties of the virtual scene may comprise data representing one or more geometric elements in the virtual scene and, associated with the one or more geometric elements, a respective set of one or more acoustic parameters.
The one or more geometric elements may comprise one or more of: size of the virtual scene, shape of the virtual scene, boundaries of the virtual scene and objects within the virtual scene.
The set of one or more acoustic parameters may comprise one or more of: reverberation parameters, dry and wet ratio parameters and material parameters.
The apparatus may further comprise means for receiving data indicating a subset of the one or more audio sources in the virtual scene to prioritize, and wherein the identifying means is configured to identify one or more audio sources not meeting the predetermined criterion from said subset.
The apparatus may further comprise means for providing data representing one or more acoustic properties of a real-world space in which the user consumes or will consume the virtual scene, and wherein the identifying means is configured to identify the one or more audio sources not meeting the predetermined criterion based also on the one or more acoustic properties of the real-world space.
The data representing the one or more acoustic properties of the real-world space may comprise a listener space description format (LSDF) file.
The data representing the one or more acoustic properties of the virtual scene may comprise an encoder input format (EIF) file.
The user interface may comprise a graphical user interface (GUI) displaying the one or more indicators on a display screen of the user device.
The one or more indicators may comprise a graphical representation of the corresponding audio source and/or a direction of the corresponding audio source with respect to the user position.
The GUI may display a plurality of indicators on the display screen of the user device and respective directions of the corresponding audio sources with respect to the user position, wherein responsive to selection of one of the plurality of indicators, the GUI may update to display the other indicator(s) and their updated respective position(s) and/or an option to return to the previous position in the virtual scene.
The means for changing the position of the user may be configured such that the user moves to be adjacent the corresponding audio source.
The virtual scene may comprise an extended reality, XR, virtual scene comprising visual content corresponding to the audio sources.
The user device may comprise an XR headset including a display screen and a set of headphones.
According to a second aspect, there is described a method comprising: providing data representing one or more acoustic properties of a virtual scene, the virtual scene being for output to a user device associated with a user and comprising one or more audio sources at respective locations; identifying, based on a position of the user with respect to the one or more audio sources and the one or more acoustic properties of the virtual scene, one or more audio sources not meeting a predetermined criterion; providing, via a user interface associated with the user device, one or more indicators respectively corresponding to the one or more identified audio sources; and responsive to selection of one of the one or more indicators, changing the user position in the virtual scene so that the user is closer to the corresponding audio source.
Identifying an audio source of the one or more audio sources not meeting the predetermined criterion may comprise: estimating or measuring one or more acoustic effects at the user position from sounds emitted from the audio source; and identifying one or more adverse acoustic effects that are greater than, or are above a predetermined threshold with respect to, sounds received directly at the user position from the audio source.
The one or more adverse acoustic effects may comprise one or more of reverberation, reflection, diffusion and attenuation.
The data representing the one or more acoustic properties of the virtual scene may comprise data representing one or more geometric elements in the virtual scene and, associated with the one or more geometric elements, a respective set of one or more acoustic parameters.
The one or more geometric elements may comprise one or more of: size of the virtual scene, shape of the virtual scene, boundaries of the virtual scene and objects within the virtual scene.
The set of one or more acoustic parameters may comprise one or more of: reverberation parameters, dry and wet ratio parameters and material parameters.
The method may further comprise receiving data indicating a subset of the one or more audio sources in the virtual scene to prioritize, wherein the identifying comprises identifying one or more audio sources not meeting the predetermined criterion from said subset.
The method may further comprise providing data representing one or more acoustic properties of a real-world space in which the user consumes or will consume the virtual scene, and wherein identifying the one or more audio sources not meeting the predetermined criterion may be based also on the one or more acoustic properties of the real-world space.
The data representing the one or more acoustic properties of the real-world space may comprise a listener space description format (LSDF) file.
The data representing the one or more acoustic properties of the virtual scene may comprise an encoder input format (EIF) file.
The user interface may comprise a graphical user interface (GUI) displaying the one or more indicators on a display screen of the user device.
The one or more indicators may comprise a graphical representation of the corresponding audio source and/or a direction of the corresponding audio source with respect to the user position.
The GUI may display a plurality of indicators on the display screen of the user device and respective directions of the corresponding audio sources with respect to the user position, wherein responsive to selection of one of the plurality of indicators, the GUI may update to display the other indicator(s) and their updated respective position(s) and/or an option to return to the previous position in the virtual scene.
Changing the position of the user may comprise moving the user to be adjacent the corresponding audio source.
The virtual scene may comprise an extended reality, XR, virtual scene comprising visual content corresponding to the audio sources.
The user device may comprise an XR headset including a display screen and a set of headphones.
According to a third aspect, there is provided a computer program product comprising a set of instructions which, when executed on an apparatus, causes the apparatus to carry out the method of any preceding method definition.
According to a fourth aspect, there is provided a non-transitory computer readable medium comprising program instructions stored thereon for performing a method, comprising: providing data representing one or more acoustic properties of a virtual scene, the virtual scene being for output to a user device associated with a user and comprising one or more audio sources at respective locations; identifying, based on a position of the user with respect to the one or more audio sources and the one or more acoustic properties of the virtual scene, one or more audio sources not meeting a predetermined criterion; providing, via a user interface associated with the user device, one or more indicators respectively corresponding to the one or more identified audio sources; and responsive to selection of one of the one or more indicators, changing the user position in the virtual scene so that the user is closer to the corresponding audio source.
The program instructions of the fourth aspect may also perform operations according to any preceding method definition of the second aspect.
According to a fifth aspect, there is provided an apparatus comprising: at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to: provide data representing one or more acoustic properties of a virtual scene, the virtual scene being for output to a user device associated with a user and comprising one or more audio sources at respective locations; identify, based on a position of the user with respect to the one or more audio sources and the one or more acoustic properties of the virtual scene, one or more audio sources not meeting a predetermined criterion; provide, via a user interface associated with the user device, one or more indicators respectively corresponding to the one or more identified audio sources; and responsive to selection of one of the one or more indicators, change the user position in the virtual scene so that the user is closer to the corresponding audio source.
The computer program code of the fifth aspect may also perform operations according to any preceding method definition of the second aspect.
Example embodiments will now be described, by way of example only, with reference to the accompanying drawings.
In the description and drawings, like reference numerals refer to like elements throughout.
Example embodiments relate to an apparatus, method and computer program for outputting (alternatively “rendering”) virtual content. Virtual content may represent a virtual scene which may comprise one or more audio sources as well as, optionally, one or more video objects which correspond to the one or more audio sources. For example, a virtual scene may comprise a group of musicians wherein each musician may be represented by respective video content and audio content corresponding to sounds emitted by a particular musician or musicians at a given time.
Example embodiments therefore relate to the field of extended reality (XR) and focus on, but are not limited to, virtual reality (VR) and augmented reality (AR) applications. AR applications may also cover mixed reality (MR) applications, although the former term will be used herein.
Users may consume virtual content by means of a user device. For VR, the user device may comprise a VR headset which may usually comprise a set of headphones, or equivalents such as earphones, earbuds or other forms of audio output transducers, and a set of video screens for output of the audio and video content respectively.
For AR, the user device may comprise a set of glasses, goggles or even use a camera of a handheld device to enable overlay of virtual content onto a real-world space that the user perceives at a given time. Other forms of user device may be used. In use, a user of the user device may explore the virtual scene represented by the virtual content by various means, including by changing position in the real-world space, which may also be referred to as a consumption space. The position of the user, which may include the orientation and/or spatial position of the user, may be used by a rendering engine of an XR system to output a particular portion of the virtual scene in terms of audio and/or video, which will change as the user changes position. As such, the user can choose which parts of the virtual scene they wish to hear and/or see by physical movement.
In some cases, a user may explore a virtual scene using six-degrees-of-freedom (6DoF) in which both rotational and translational movement of the user or user device allows the user to move around, e.g. behind, virtual objects in the scene. In terms of audio content, it will be appreciated that what the user hears will likely change as the user moves within the virtual content, e.g. due to some audio sources getting closer and other audio sources getting further away.
An audio source as described herein may be considered any virtualised entity, e.g. a virtual object, which emits sound perceivable by a user.
Audio sources within a virtual scene may not be localizable, or even discoverable, due to acoustic effects within the virtual scene and possibly also due to acoustic effects within the real-world space, i.e. the consumption space. By “localizable” it is meant that the user can perceive where in the virtual scene a particular audio source is, e.g. based on where the audio source's emitted sounds come from and possibly the volume. By “discoverable” it is meant that the user can be aware that the audio source exists.
Example acoustic effects include, but are not limited to, those of reverberation, early and late reflection, diffraction and occlusion. These acoustic effects may not affect video content, but where a sound is emitted from an audio source that, for example, cannot be seen, yet which the user may nevertheless wish to locate in the virtual scene, some acoustic effects may prevent or hinder the user from properly knowing that the audio source exists and/or where its sound is coming from. Such effects may be termed adverse acoustic effects; they detract from the immersive user experience and may require the user to laboriously explore the virtual scene in order to find particular audio sources. The issue can be particularly problematic in complex virtual scenes, e.g. involving many audio sources and/or complex scenery.
Example embodiments may avoid or alleviate such issues by changing the user's position with respect to the virtual scene so that the user is moved closer to an audio source identified as being hard to localize or discover. In this way, the user may find the audio source more quickly and efficiently because the user is moved closer to it, which may mean the user is moved adjacent to it with no other audio source(s) in-between. This may be referred to as “teleporting.” Where there is video content corresponding to the audio source, the change in user position may be reflected in the video content, i.e. so that the user also sees a visual representation of the audio source at a closer distance.
As described herein, audio sources within a virtual scene may be encoded in any suitable format, e.g. stereophonic or spatial, with example formats for the latter including multi-channel mixes such as 5.1 or 7.1+4, Ambisonics, parametric spatial audio (e.g. metadata-assisted spatial audio (MASA)), object-based audio or any combination thereof.
Determining a user's position when exploring a virtual space in 6DoF can be performed in real-time or near real-time using one or more known methods.
For example, at least part of a user's position may correspond to the position of the user's head. In this sense, the term “position” may refer to orientation, i.e. a first orientation of the user's head is a first position and a second, different orientation of the user's head is a second position. The term may also refer to spatial position within a real-world space to account for translational movements of the user. The position of the user may be determined using one or more known tracking methods, such as by use of one or more cameras which identify facial features in real-time, or by use of inertial sensors (gyroscopes/accelerometers) within a user device, satellite positioning systems such as by use of a Global Navigation Satellite System (GNSS) receiver, positioning receivers based on signals from WiFi access points and/or other methods of position determination.
The system 100 may comprise a server 110, a media player 120, a network 130 and a user device 140 associated with a user 150.
The server 110 may be connected to the media player 120 by means of the network 130 for sending data, e.g., a bitstream of data representing a virtual scene comprising audio data and video data, to the media player 120. As will be explained below, other data such as metadata associated with the audio data may also be provided. The metadata may, for example, represent one or more acoustic properties of the virtual scene at a given time or over a period of time. The server 110 may send audio data, video data and possibly the other data to the media player 120 responsive to one or more data requests sent by the media player 120. For example, the media player 120 may transmit to the server 110 an indication of a position associated with a user of the media player 120, and the server may process and transmit back to the media player audio data and video data responsive to the received position, which may be performed in real-time or near real-time. This may be by means of any suitable streaming data protocol. Alternatively, or additionally, the server 110 may provide one or more files representing audio data, video data and the other data to the media player 120 for storage and processing thereat. At the media player 120, the audio data and video data may be processed, rendered and output to the user device 140 associated with the user 150.
The user device 140 may comprise any suitable device and, in this example, comprises a head-worn VR device which includes a set of headphones 160 and a display system 170 comprised of first and second display screens. In example embodiments, the user device 140 may comprise head tracking sensors for indicating to the media player 120, using any suitable method, a current position of the user 150, e.g., one or both of the orientation and spatial position of the user, in order to determine how the virtual scene represented by the audio and video data is to be rendered to the set of headphones 160 and the display system 170.
In some embodiments, the media player 120 may comprise part of the user device 140. In other embodiments, the media player 120 may comprise a physically separate system such as a personal computer (PC), a laptop, a games console or a wearable computing device.
The network 130 may be any suitable data communications network including, for example, one or more of a radio access network (RAN) whereby communication is via one or more base stations, a WiFi network whereby communication is via one or more access points, or a short-range network such as one using the Bluetooth® or Zigbee protocol.
As mentioned above, the server 110 may transmit other data, e.g. metadata, which may represent one or more acoustic properties of the virtual scene. This other data may be considered an audio scene description which gives a content creator freedom in terms of how the audio content will be rendered at a rendering engine, such as at the media player 120 shown in
A position of a user 210 is indicated in relation to first to fourth audio sources 220, 230, 240, 250 within the virtual scene 200 to indicate respective perceived directions of said audio sources with respect to the user at a given time. A first audio source 220 may comprise a first instrumentalist, a second audio source 230 may comprise a second instrumentalist, a third audio source 240 may comprise a first singer and a fourth audio source 250 may comprise a second singer. A virtual partition 260 may also be in the virtual scene 200.
EIF data or similar may accompany the audio and video data representing the virtual scene 200. The EIF data may be processed by a rendering engine, such as the media player 120, to determine how the audio data of the virtual content is or will be rendered. The EIF data may define, for example, one or more geometric elements in the virtual scene and, associated with the one or more geometric elements, a respective set of one or more acoustic parameters. The geometric elements may comprise one or more of the size of the virtual scene, the shape of the virtual scene, one or more boundaries of the virtual scene and one or more objects within the virtual scene, including the first to fourth audio sources 220, 230, 240, 250. The list is non-exhaustive. The set of one or more acoustic parameters may comprise one or more of reverberation parameters (e.g. RT60 parameters), dry and wet ratio parameters and material parameters. This list is non-exhaustive.
Excerpts of example EIF data including geometric and acoustic parameters are given in Appendices A and B. Appendix A is an excerpt that indicates that the virtual scene is cube-shaped with certain dimensions and also reverberation parameters for different frequencies at two positions. Appendix B is an excerpt that indicates that the virtual scene comprises both a frequency independent material, e.g. marble, and a frequency-dependent material, e.g. upholstery, with frequency dependent characteristics thereof. Other geometric and acoustic parameters may also be defined.
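Purely by way of illustration, and not reflecting the actual EIF schema, the following is a minimal sketch of how such parsed scene data might be represented in software; all class and field names are hypothetical, and the numeric values only loosely mirror the kinds of parameters described in the Appendix A and B excerpts.

```python
from dataclasses import dataclass, field

@dataclass
class AcousticMaterial:
    # Absorption coefficients per frequency band (Hz -> coefficient).
    # A single entry can stand in for a frequency-independent material.
    name: str
    absorption: dict[int, float] = field(default_factory=dict)

@dataclass
class AcousticEnvironment:
    # Simplified geometric and acoustic description of a virtual scene.
    shape: str                                   # e.g. "cube"
    dimensions: tuple[float, float, float]       # metres
    rt60: dict[int, float] = field(default_factory=dict)  # reverberation time per band
    dry_wet_ratio: float = 0.5                   # balance of direct vs. reverberant energy
    materials: list[AcousticMaterial] = field(default_factory=list)

# Hypothetical values loosely mirroring the Appendix A/B excerpts described above.
scene = AcousticEnvironment(
    shape="cube",
    dimensions=(10.0, 10.0, 10.0),
    rt60={250: 0.8, 1000: 0.6, 4000: 0.4},
    materials=[
        AcousticMaterial("marble", {1000: 0.01}),                             # frequency independent
        AcousticMaterial("upholstery", {250: 0.35, 1000: 0.55, 4000: 0.7}),   # frequency dependent
    ],
)
```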
Note that, other than audible volume, content creators face few or no barriers in terms of how the audio content can be defined, and it may be that the virtual scene, when rendered based on the EIF data, is an exaggerated, amplified or even attenuated version of the original audio content prior to encoding at the server 110.
Returning to
To illustrate generally,
In
In
In
A first operation 402 may comprise providing data representing one or more acoustic properties of a virtual scene.
The term “providing” may mean “receiving”, “generating”, “rendering” or a combination thereof.
EIF data is an example of data representing one or more acoustic properties of a virtual scene. Another example, appropriate for AR applications, is mentioned below.
Additionally, or alternatively to the providing of EIF data (or similar), data representing one or more acoustic properties of the virtual scene may comprise a rendered version of the virtual scene, which rendering may be performed at, for example, a receiving device such as the media player 120 referenced above. As such, the term “providing” may in this context mean “rendering.”
A second operation 404 may comprise identifying, based on a position of the user with respect to the one or more audio sources and the one or more acoustic properties of the virtual scene, one or more audio sources not meeting a predetermined criterion, e.g. a localization or discoverability criterion. In some embodiments, there may be more than one criterion, i.e. a set of criteria, although the former term will be used herein.
In embodiments where a rendered version of the virtual scene is provided as part of the first operation 402, one or more measurement operations may be performed using the rendered version of the virtual scene. That is, acoustic measurements may be performed or estimated at the user position in relation to the rendered version of the virtual scene, effectively to understand what the user will or is hearing at a given time and whether or not this meets the predetermined criterion. This may be performed by placement of one or more virtual microphones corresponding to the position of the user.
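Purely as a sketch of how such a virtual-microphone measurement might be approximated, the following compares the early (direct-ish) energy of an impulse response rendered at the user position with its later reverberant energy; the 50 ms split and the function name are illustrative assumptions rather than part of any standard.

```python
import numpy as np

def direct_to_reverberant_ratio(impulse_response: np.ndarray,
                                sample_rate: int,
                                early_window_ms: float = 50.0) -> float:
    """Estimate how dominant the direct/early sound is at a virtual microphone.

    The impulse response is assumed to be measured (or simulated) at the user
    position for a single audio source. Energy arriving within the early window
    is treated as direct/early; the remainder as reverberant.
    """
    split = int(sample_rate * early_window_ms / 1000.0)
    early_energy = float(np.sum(impulse_response[:split] ** 2))
    late_energy = float(np.sum(impulse_response[split:] ** 2)) + 1e-12  # avoid divide-by-zero
    return early_energy / late_energy

# A ratio below 1.0 would suggest reverberant energy dominates, which could be
# one way of deciding that the localization criterion is not met.
```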
A third operation 406 may comprise providing, via a user interface associated with the user device, one or more indicators respectively corresponding to the one or more identified audio sources.
A fourth operation 408 may comprise, responsive to selection of one of the one or more indicators, changing the user position in the virtual scene so that the user is closer to the corresponding audio source.
In some example embodiments, in respect of the fourth operation 408, the user position may be changed such that, as well as being closer to the corresponding audio source, there are no virtual objects, e.g. occluding objects, between the user position and the corresponding audio source.
In some example embodiments, for example where one or more of the audio sources are dynamic, i.e. moving, the second operation 404 may further take into account the position of the dynamic object at the current time.
Example embodiments therefore enable a user effectively to be teleported closer to an audio source of interest, which audio source has been flagged to them because it is difficult to localize or discover from the user's current position.
In some example embodiments, identifying an audio source not meeting the predetermined criterion may comprise, for each audio source: estimating one or more acoustic effects that will be experienced at the user position from sounds emitted from the audio source and then identifying one or more adverse acoustic effects that are greater than, or are above a predetermined threshold with respect to, sounds received directly from that audio source.
Adverse acoustic effects may be any acoustic effect, such as one or more of reverberation, reflection, diffusion and attenuation, which individually or collectively reduce the user's ability to perceive the direction from where a sound originates or the existence of the audio source.
With reference again to
In some embodiments, it may be that the predetermined criterion is not met even if one or more adverse acoustic effects do not dominate over directly-received sounds. For example, if one or more adverse acoustic effects are greater than a predetermined threshold, e.g. if levels of reflected sounds are above 75% of the level of directly-received sounds, then the audio source may not meet the predetermined criterion. In such cases, the reflected sounds may still reduce the user's ability to localize the relevant audio source. Different respective predetermined thresholds may be used for different adverse acoustic effects.
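A minimal sketch of the threshold comparison described above, assuming that per-effect level estimates are available from a preceding rendering or measurement step, might look as follows; the 0.75 default mirrors the 75% example given above, while the function and parameter names are hypothetical.

```python
from typing import Optional

def fails_localization_criterion(direct_level: float,
                                 adverse_levels: dict[str, float],
                                 thresholds: Optional[dict[str, float]] = None) -> bool:
    """Return True if any adverse acoustic effect breaches its threshold.

    direct_level   -- estimated level of sound received directly from the source
    adverse_levels -- estimated levels per effect, e.g. {"reverberation": 0.4, "reflection": 0.8}
    thresholds     -- per-effect fraction of the direct level above which the
                      effect is considered to hinder localization
    """
    thresholds = thresholds or {"reverberation": 0.75, "reflection": 0.75,
                                "diffusion": 0.75, "attenuation": 0.75}
    for effect, level in adverse_levels.items():
        limit = thresholds.get(effect, 0.75) * direct_level
        if level > limit:
            return True
    return False

# Example: reflected sound at 80% of the direct level fails the criterion.
# fails_localization_criterion(1.0, {"reflection": 0.8})  -> True
```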
Acoustic properties of the virtual scene may comprise one or more geometric elements in the virtual scene and, associated with the one or more geometric elements, a respective set of one or more acoustic parameters. Geometric elements may comprise one or more of: size of the virtual scene, shape of the virtual scene, boundaries of the virtual scene and objects within the virtual scene. The set of one or more acoustic parameters may comprise one or more of: reverberation parameters, dry and wet ratio parameters and material parameters.
In some example embodiments, data may be received by the identifying means indicating a subset of audio sources in the virtual scene which are to be prioritized. These may be audio sources of particular interest to the user or ones which may be considered important by the content creator. In the latter case, the one or more prioritized audio sources may be signalled in the transmitted bitstream. Which audio sources are prioritized may also be determined dynamically at the rendering engine based on prior interactions of the user in the virtual scene, or even in other virtual scenes, for example which audio sources or types of audio sources the user has moved close to and/or looked at for longer than a predetermined period. It may be that the second operation 404 is performed only in relation to such prioritized audio sources and not others.
In relation to AR applications, another operation may comprise providing data representing one or more acoustic properties of the real-world space in which the user consumes the virtual scene. This may comprise part of the first operation 402, be performed in parallel to the first operation or may be performed prior or subsequent to the first operation.
In this case, the second operation 404 may involve identifying the one or more audio sources based also on the one or more acoustic properties of the real-world space. This form of “other data” or metadata may be referred to as real-world description data and a proposed form is known as the Listener Space Description Format (LSDF), which also forms a proposal for the MPEG audio encoding working group. Similar to EIF data, LSDF data may define one or more geometric elements in the real-world space and, associated with the one or more geometric elements, a respective set of one or more acoustic parameters. Geometric elements may comprise one or more of: size of the real-world space, shape of the real-world space, boundaries of the real-world space and objects within the real-world space. The set of one or more acoustic parameters may comprise one or more of: reverberation parameters, dry and wet ratio parameters and material parameters. LSDF data, or similar, is usually not known to the content creator and hence may be provided to the rendering engine based on where the user intends to use, or is currently using, their user device, e.g. in a particular room for which LSDF data has been generated based on observations and/or measurements. For example, the user device or rendering engine associated with the user device may comprise one or more sensors, e.g. cameras, for measuring or estimating the one or more geometric elements and acoustic parameters.
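By way of a hedged illustration only, one simple way the renderer might combine virtual-scene (EIF-like) and real-world (LSDF-like) acoustic parameters before the identification step is a per-band combination such as the following; this heuristic is an assumption for illustration and not how any standardized renderer is specified to behave.

```python
def combined_rt60(virtual_rt60: dict[int, float],
                  real_rt60: dict[int, float]) -> dict[int, float]:
    """Crude per-band combination of virtual and real reverberation times.

    Taking the maximum per frequency band is just one plausible heuristic:
    the listener is affected by whichever environment is more reverberant.
    """
    bands = set(virtual_rt60) | set(real_rt60)
    return {band: max(virtual_rt60.get(band, 0.0), real_rt60.get(band, 0.0))
            for band in bands}
```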
The server 502 may provide a set of audio content 510 representing a virtual scene comprising one or more audio sources. The server may also provide metadata 512 which may be data representing one or more acoustic properties of the virtual scene, e.g. EIF data, which may or may not be in compressed form. The server may also provide a set of video content 514 corresponding to the audio content 510. The audio content 510, the metadata 512 and the video content 514 may be encoded by an encoder 516 to produce a content bitstream 520 for sending to the rendering engine 504 over the network 506. The content bitstream 520 may be an MPEG-I 6DoF content bitstream although other protocols may be suitable.
The rendering engine 504 may comprise one or more renderers 530 which may receive the content bitstream 520 and decode it for output to the user device 508 which may be based on the user position data at a given time.
Based on the decoded metadata 512 and the user position (and possibly based also on real-world acoustic properties data 550, e.g. LSDF data, if provided for an AR application) an analysis and audio source identification module 534 may be configured to identify one or more audio sources that do not meet the predetermined criterion mentioned above.
If one or more audio sources are so identified, the ability to move or teleport the user closer to the one or more identified audio sources may be signalled to the user device 508 by an indication module 536. This signalling may first involve the indication module 536 presenting an option to the user via the user device 508 to determine if the user wishes to receive indication(s) of the identified audio sources. If so, or alternatively, the signalling may involve the indication module 536 presenting one or more indicators respectively corresponding to the one or more identified audio sources.
The one or more indicators may, for example, be presented on a graphical user interface (GUI) which displays the indicators on a display screen of the user device 508, possibly overlaying the current view of video content corresponding to the virtual scene. The indicators may be displayed close to one corner or any suitable place that does not occlude interesting or significant video content. For example, the indication module 536 may display on the GUI a graphical representation of the corresponding audio source and/or a direction of the corresponding audio source relative to the user's position, e.g. to the left, centre or right hand side of the user position. Examples will be described below.
Responsive to a selection of one of the indicators, indicated by reference numeral 538, a position change module 540 may cause a change in user position in the virtual scene so that the user is closer to the corresponding audio source, i.e. the user perceives the effect of being teleported closer to the corresponding audio source. In cases where the GUI displays a plurality of indicators and respective directions of the corresponding audio sources relative to the user position, the GUI may then update after teleporting the user to display other ones of the plurality of indicators and their updated respective position(s). Alternatively, or additionally, an option to return to the previous position may be provided. These indicators/options may be similarly selectable to cause a corresponding position change by the position change module 540.
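As a minimal sketch of the position change itself, assuming simple 3-D position vectors and ignoring orientation, the new user position might be computed a fixed stand-off away from the selected audio source, on the side facing the user's previous position; the function and its parameters are hypothetical.

```python
import numpy as np

def teleport_towards(source_pos: np.ndarray,
                     user_pos: np.ndarray,
                     standoff: float = 1.0) -> np.ndarray:
    """Return a new user position 'standoff' metres from the selected source,
    on the side facing the user's previous position (so the source stays in view).
    """
    direction = user_pos - source_pos
    norm = np.linalg.norm(direction)
    if norm < 1e-6:                      # user is already at the source
        return source_pos.copy()
    return source_pos + (direction / norm) * standoff

# The renderer could additionally verify that no occluding geometry lies on the
# segment between the returned position and the source before committing the move.
```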
Selection may be by any suitable method, e.g. a voice command, a gesture or a touch input.
Referring to
The virtual scene 1000 may comprise first to fourth acoustic environments (AEs) 1002, 1004, 1006, 1008 separated as shown by acoustic couplings (ACs) which may represent virtual walls that attenuate, to some extent, sounds from one AE reaching an adjacent AE. The first to fourth AEs 1002, 1004, 1006, 1008 may represent different “rooms” of the virtual scene 1000 which may, for example, reflect different types or genres of music based on the types of audio source within the different rooms. A user may explore the different rooms in 6DoF in the normal way, i.e. by movement, but may also wish to teleport between rooms by means of example embodiments described herein. In the shown example, a first AC 1010 separates the first and second AEs, a second AC 1012 separates the second and third AEs, a third AC 1014 separates the third and fourth AEs and a fourth AC 1016 separates the fourth and first AEs. In the shown example, the first AE 1002 comprises one audio source, the second AE 1004 comprises zero audio sources, the third AE 1006 comprises three audio sources 1018, 1020, 1022 and the fourth AE 1008 comprises one audio source. A user position 1030 is indicated within the virtual scene 1000 to indicate respective perceived directions of said audio sources with respect to the user at a given time. In accordance with example embodiments, it will be seen that the three audio sources 1018, 1020, 1022 in the third AE 1006 are identified as not meeting the predetermined criterion, e.g. because there are two ACs 1010, 1012 between the user position 1030 and said audio sources. As such, and with reference to
Example embodiments therefore enable users to quickly and efficiently know when certain audio sources cannot be localized or discovered and provides a convenient way to teleport to a position closer to a selected one of said audio sources.
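Purely as an illustration of the acoustic environment and coupling arrangement described above, the number of acoustic couplings separating the user's environment from that of a given audio source could be estimated with a shortest-path search over a small room graph; the graph structure and the idea of flagging sources two or more couplings away are assumptions for illustration.

```python
from collections import deque

def couplings_between(adjacency: dict[str, list[str]], user_ae: str, source_ae: str) -> int:
    """Breadth-first search over acoustic environments (AEs); each hop crosses one
    acoustic coupling (AC). Returns the minimum number of ACs between the two AEs."""
    queue, seen = deque([(user_ae, 0)]), {user_ae}
    while queue:
        ae, hops = queue.popleft()
        if ae == source_ae:
            return hops
        for neighbour in adjacency.get(ae, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops + 1))
    return -1  # not reachable

# Ring of four rooms as in the example above: AE1-AE2-AE3-AE4-AE1.
rooms = {"AE1": ["AE2", "AE4"], "AE2": ["AE1", "AE3"],
         "AE3": ["AE2", "AE4"], "AE4": ["AE3", "AE1"]}
# Sources in AE3 are two couplings away from a user in AE1, so they might be flagged.
assert couplings_between(rooms, "AE1", "AE3") == 2
```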
Names of network elements, protocols, and methods are based on current standards. In other versions or other technologies, the names of these network elements and/or protocols and/or methods may be different, as long as they provide a corresponding functionality. For example, embodiments may be deployed in 2G/3G/4G/5G networks and further generations of 3GPP but also in non-3GPP radio networks such as WiFi.
A memory may be volatile or non-volatile. It may be, for example, a RAM, an SRAM, a flash memory, an FPGA block RAM, a DVD, a CD, a USB stick or a Blu-ray disc.
As disclosed herein, a reference to “means” may comprise at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the performance of one or more operations of the means. A means may also comprise circuitry, hardware, software, firmware or a combination thereof.
If not otherwise stated or otherwise made clear from the context, the statement that two entities are different means that they perform different functions. It does not necessarily mean that they are based on different hardware. That is, each of the entities described in the present description may be based on a different hardware, or some or all of the entities may be based on the same hardware. It does not necessarily mean that they are based on different software. That is, each of the entities described in the present description may be based on different software, or some or all of the entities may be based on the same software. Each of the entities described in the present description may be embodied in the cloud. Implementations of any of the above described blocks, apparatuses, systems, techniques or methods include, as non-limiting examples, implementations as hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. Some embodiments may be implemented in the cloud.
It is to be understood that what is described above is what is presently considered the preferred embodiments. However, it should be noted that the description of the preferred embodiments is given by way of example only and that various modifications may be made without departing from the scope as defined by the appended claims.
Priority application: 22154919.9, filed February 2022, EP (regional).
International filing: PCT/EP2023/050686, filed 13 January 2023 (WO).