The present application relates to a method and apparatus for augmented reality rendering adaptation, but not exclusively to a method and apparatus for augmented reality rendering adaptation for 6-degrees-of-freedom rendering.
Augmented Reality (AR) applications (and other similar virtual scene creation applications such as Mixed Reality (MR) and Virtual Reality (VR)), in which a virtual scene is presented to a user wearing a head mounted device (HMD), have become more complex and sophisticated over time. The application may comprise data which comprises a visual component (or overlay) and an audio component (or overlay) which is presented to the user. These components may be provided to the user depending on the position and orientation of the user (for a 6-degree-of-freedom application) within an Augmented Reality (AR) scene.
Scene information for rendering an AR scene typically comprises two parts. One part is the virtual scene information, which may be described during content creation (or by a suitable capture apparatus or device) and represents the scene as captured (or initially generated). The virtual scene may be provided in the encoder input format (EIF). The EIF and the (captured or generated) audio data are used by an encoder to generate the scene description and spatial audio metadata (and audio signals), which can be delivered via the bitstream to the rendering (playback) device or apparatus. The scene description for an AR or VR scene is thus specified by the content creator during a content creation phase. In the case of VR, the scene is specified in its entirety and it is rendered exactly as specified in the content creator bitstream.
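By way of illustration only, the following Python sketch shows how a content creation stage might package audio elements, their anchor references and guidance metadata into a bitstream structure; the class and field names (AudioElement, ContentBitstream, encode_scene and so on) are hypothetical simplifications and not part of the EIF or any MPEG-I bitstream syntax.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified stand-ins for EIF scene content and the
# content creator bitstream; real EIF/MPEG-I structures are far richer.

@dataclass
class AudioElement:
    element_id: str
    anchor_label: str   # anchor this element attaches to at render time
    pcm_track: str      # reference to the coded audio signal

@dataclass
class ContentBitstream:
    scene_description: List[AudioElement] = field(default_factory=list)
    guidance_metadata: dict = field(default_factory=dict)

def encode_scene(elements: List[AudioElement], guidance: dict) -> ContentBitstream:
    """Package the (captured or generated) scene for delivery to a renderer."""
    return ContentBitstream(scene_description=list(elements),
                            guidance_metadata=dict(guidance))

bitstream = encode_scene(
    [AudioElement("ambience", anchor_label="ceiling", pcm_track="track0")],
    guidance={"spatial_filtering": "nearest"},
)
```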
The second part of the AR audio scene rendering is related to the physical listening space (or physical space) of the listener (or end user). The scene or listener space information may be obtained during the AR rendering (when the listener is consuming the content). Thus there is a fundamental difference between AR and VR: the acoustic properties of the audio scene are known (for AR) only during content consumption and cannot be known or optimized during content creation.
Thus for AR scenes, the content creator bitstream carries information about which audio elements and scene geometry elements correspond to which anchors in the listening space. Consequently, the positions of the audio elements, reflecting elements, occluding elements, etc. are known only during rendering. Furthermore, the acoustic modeling parameters are known only during rendering.
The position of the audio elements and scene geometry elements is known at rendering time with the help of the “anchors” which are embedded within the description of the listening space, which is obtained during content consumption. The expectation is that the anchors referred to in the content creator bitstream find a corresponding match in the listening space description. This description has been specified by the MPEG Audio group as the LSDF (Listener Space Description Format) file, the means for providing listener space information to the renderer. The LSDF file is available only during rendering. Consequently, the derivation of acoustic scene information and acoustic modelling parameters is done in the renderer in the case of AR scenarios.
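As a rough, non-normative illustration, the renderer-side matching of a content creator anchor label against the anchors available in the listening space description might proceed as in the sketch below; the lsdf_anchors dictionary is an assumed, simplified stand-in for a parsed LSDF file.

```python
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

# Hypothetical listening-space anchors as parsed from an LSDF file:
# anchor label -> position in listener-space coordinates.
lsdf_anchors: Dict[str, Vec3] = {
    "ceiling": (0.0, 2.4, 0.0),
    "table": (1.2, 0.8, -0.5),
}

def match_anchor(content_label: str,
                 listening_space: Dict[str, Vec3]) -> Optional[Vec3]:
    """Return the listener-space position for a content creator anchor label,
    or None when the listening space offers no corresponding anchor."""
    return listening_space.get(content_label)

position = match_anchor("ceiling", lsdf_anchors)  # (0.0, 2.4, 0.0)
```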
There is provided according to a first aspect an apparatus comprising means configured to: obtain at least one audio signal; obtain at least one anchor parameter associated with the at least one audio signal; obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
The means configured to obtain information configured to assist in the adaptation may be configured to obtain at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
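A minimal sketch of the distance-based selection modes is given below, assuming Euclidean distances over hypothetical three-dimensional anchor positions; the function names and the exhaustive maximal-spread search are illustrative choices, not a prescribed implementation.

```python
import math
from itertools import combinations
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def _dist(a: Vec3, b: Vec3) -> float:
    # Euclidean distance between two anchor positions.
    return math.dist(a, b)

def select_anchor(element_pos: Vec3, scene_anchors: Dict[str, Vec3], mode: str) -> str:
    """Map one audio element anchor to one listening-space anchor."""
    if mode == "nearest":
        return min(scene_anchors, key=lambda k: _dist(element_pos, scene_anchors[k]))
    if mode == "farthest":
        return max(scene_anchors, key=lambda k: _dist(element_pos, scene_anchors[k]))
    raise ValueError(f"unsupported selection mode: {mode}")

def select_max_spread(scene_anchors: Dict[str, Vec3], count: int) -> List[str]:
    """Choose `count` listening-space anchors whose mutual distances are as
    large as possible (exhaustive search; acceptable for the handful of
    anchors typically present in a room)."""
    best = max(
        combinations(scene_anchors, count),
        key=lambda combo: sum(_dist(scene_anchors[a], scene_anchors[b])
                              for a, b in combinations(combo, 2)),
    )
    return list(best)
```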
The temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
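Correspondingly, temporal filtering might order candidate anchors by the time at which they were detected in the listening space, as in the following sketch; the detection_time field is an assumed stand-in for whatever timestamp the listening space description or tracking system provides.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SceneAnchor:
    label: str
    detection_time: float  # hypothetical: seconds since the AR session started

def select_earliest(anchors: List[SceneAnchor]) -> SceneAnchor:
    """Earliest-anchor selection: map to the anchor detected first."""
    return min(anchors, key=lambda a: a.detection_time)

anchors = [SceneAnchor("door", 4.2), SceneAnchor("window", 1.7)]
assert select_earliest(anchors).label == "window"
```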
The means configured to obtain information configured to assist in the adaptation may be configured to obtain a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The means configured to obtain information configured to assist in the adaptation may be configured to obtain at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
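These fallback behaviours could, for example, be combined into a single resolution order (exact label, then listed candidate anchors, then an alternative anchor, then a default position), as sketched below; all identifiers are illustrative assumptions.

```python
from typing import Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def resolve_anchor(label: str,
                   candidate_labels: List[str],
                   alternative_label: Optional[str],
                   default_position: Optional[Vec3],
                   listening_space: Dict[str, Vec3]) -> Optional[Vec3]:
    """Resolve an audio element anchor to a listener-space position,
    falling back when no matching label exists in the listening space."""
    if label in listening_space:               # direct label match
        return listening_space[label]
    for candidate in candidate_labels:         # multiple anchors parameter
        if candidate in listening_space:
            return listening_space[candidate]
    if alternative_label in listening_space:   # alternative anchor filtering
        return listening_space[alternative_label]
    return default_position                    # default position parameter
```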
The means configured to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be configured to obtain an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
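For example, the three behaviours could be signalled with a simple enumeration, as in the sketch below; the enumeration values and the plan_processing helper are illustrative assumptions rather than normative bitstream codes.

```python
from enum import Enum
from typing import List

class InstanceProcessing(Enum):
    FULL_ALL = 0       # every mapped instance undergoes full auralization
    NEAREST_FULL = 1   # nearest instance full; others are clustering candidates
    SINGLE_EXTENT = 2  # only one instance rendered, with extent processing

def plan_processing(mode: InstanceProcessing, instance_count: int) -> List[str]:
    if mode is InstanceProcessing.FULL_ALL:
        return ["full"] * instance_count
    if mode is InstanceProcessing.NEAREST_FULL:
        return ["full"] + ["cluster"] * max(instance_count - 1, 0)
    return ["extent"]
```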
The means configured to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be configured to obtain a mapping modification processing parameter configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
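One conceivable, purely illustrative encoding of these permissions is a pair of boolean flags, as sketched below; the field names are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MappingModificationPolicy:
    allow_instance_count_change: bool  # may the number of mapping instances change?
    allow_auralization_change: bool    # may auralization of mapped elements change?

# e.g. a content creator forbidding both kinds of renderer-side modification:
locked = MappingModificationPolicy(allow_instance_count_change=False,
                                   allow_auralization_change=False)
```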
The means configured to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be configured to obtain a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
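As one possible (assumed) interpretation of S-curve filtering, a renderer could ease an anchor from its previous position to an updated position along a sigmoid-shaped ramp, avoiding abrupt jumps; the steepness value below is an arbitrary illustrative choice.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def s_curve(t: float, duration: float, steepness: float = 8.0) -> float:
    """Sigmoid ramp rising from ~0 at t = 0 to ~1 at t = duration."""
    x = steepness * (t / duration - 0.5)
    return 1.0 / (1.0 + math.exp(-x))

def filtered_position(old: Vec3, new: Vec3, t: float, duration: float) -> Vec3:
    """Ease an anchor from its previous to its updated position."""
    w = s_curve(min(t, duration), duration)
    return tuple(o + w * (n - o) for o, n in zip(old, new))
```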
According to a second aspect there is provided an apparatus comprising means configured to: generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a third aspect there is provided an apparatus for rendering at least one audio signal within an audio scene, the apparatus comprising means configured to: determine, for the audio scene, at least one audio scene anchor parameter; obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The at least one audio scene anchor parameter may be configured to define at least one of: a position within the audio scene; and a number of instances within the audio scene.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter may define a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter, wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter, wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter, wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The apparatus configured to control the mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control a mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter, wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter is configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter, wherein the means configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a fourth aspect there is provided a method comprising: obtaining at least one audio signal; obtaining at least one anchor parameter associated with the at least one audio signal; obtaining information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
Obtaining information configured to assist in the adaptation may comprise obtaining at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
Obtaining information configured to assist in the adaptation may comprise obtaining a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
Obtaining information configured to assist in the adaptation may comprise obtaining at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
Obtaining information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may comprise obtaining an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
Obtaining information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may comprise obtaining a mapping modification processing parameter configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
Obtaining information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may comprise obtaining a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a fifth aspect there is provided a method comprising: generating at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter configured to control whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a sixth aspect there is provided a method for rendering at least one audio signal within an audio scene, the method comprising: determining, for the audio scene, at least one audio scene anchor parameter; obtaining, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associating the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and rendering the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The at least one audio scene anchor parameter may be configured to define at least one of: a position within the audio scene; and a number of instances within the audio scene.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter may define a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may comprise controlling the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
Controlling the mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may comprise controlling a mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter, wherein associating the at least one anchor parameter with the at least one audio scene anchor parameter may comprise controlling whether a mapping modification or processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is permitted.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a seventh aspect there is provided an apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: obtain at least one audio signal; obtain at least one anchor parameter associated with the at least one audio signal; obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
The apparatus caused to obtain information configured to assist in the adaptation may be caused to obtain at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The apparatus caused to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be caused to control a mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The apparatus caused to obtain information configured to assist in the adaptation may be caused to obtain a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The apparatus caused to obtain information configured to assist in the adaptation may be caused to obtain at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The apparatus caused to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be caused to obtain an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The apparatus caused to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be caused to obtain a mapping modification processing parameter configured to control whether a mapping modification or a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is allowed.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The apparatus caused to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be caused to obtain a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor may dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to an eighth aspect there is provided an apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter defines a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The temporal filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter configured to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter configured to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and configured to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter configured to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter configured to control whether a mapping modification or a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is allowed.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a ninth aspect there is provided an apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: determine, for the audio scene, at least one audio scene anchor parameter; obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
The information configured to assist in the adaptation may comprise guidance metadata configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
The at least one audio scene anchor parameter may be configured to define at least one of: a position within the audio scene; and a number of instances within the audio scene.
The information configured to assist in the adaptation may comprise information configured to define a geometry of a virtual or augmented audio scene and the at least one anchor parameter may define a position with respect to the virtual or augmented audio scene geometry.
The information configured to assist in the adaptation may be at least one of: a spatial filtering parameter, wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a distance between the one audio element anchor and the at least one anchor within the audio scene; a temporal filtering parameter, wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene; and a priority list parameter, wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a priority list of candidate mappings.
The spatial filtering parameter configured to control a mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on the distance between the one audio element anchor and the at least one anchor within the audio scene may be configured to control the mapping based on one of: a nearest anchor selection for selecting at least one anchor within the audio scene nearest the at least one audio element anchor; a farthest anchor selection for selecting at least one anchor within the audio scene farthest from the at least one audio element anchor; a maximal spread anchor selection for selecting at least one anchor within the audio scene to distribute the at least one audio element anchor such that they are located with a largest spread with respect to each other; and a user input based anchor selection.
The apparatus caused to control the mapping of at least one audio element anchor to at least one anchor within the audio scene within which the at least one audio signal is to be rendered based on a time difference between the one audio element anchor and the at least one anchor within the audio scene may be caused to control a mapping based on one of: an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene; an earliest anchor selection for selecting an earliest of the at least one anchor within the audio scene with later modifications based on a user movement; a maximal spread anchor selection for selecting the at least one anchor within the audio scene to distribute the one audio element anchors farthest from each other; and a user input based anchor selection.
The information configured to assist in the adaptation may be a processor filtering parameter wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping of at least one audio element anchor to at least one anchor within the audio scene based on a renderer processor value.
The information configured to assist in the adaptation may be at least one of: an alternative anchor filtering parameter wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping of the at least one audio element anchor to an alternative one of at least one anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; a default position parameter wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter is caused to control a positioning of the at least one audio element anchor within the audio scene where there is no matching label between the at least one audio element anchor and the at least one anchor within the audio scene; and a multiple anchors parameter comprising identifiers identifying at least two candidate anchors within the audio scene and wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a mapping to at least one of the candidate anchors within the audio scene based on at least one of the candidate anchors being located within the audio scene.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be an instance processing parameter, wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene.
The instance processing parameter may be configured to control processing for one of: all instances of the mapping undergo full auralization processing; only the nearest mapping instance undergoes full auralization processing and the other instances are candidates for cluster processing; and only one mapping instance undergoes extent processing.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a mapping modification processing parameter, wherein the apparatus caused to associate the at least one anchor parameter with the at least one audio scene anchor parameter may be caused to control whether a mapping modification or a processing of instances of a mapping of the at least one audio element anchor to at least one of the at least one anchor within the audio scene is allowed.
The mapping modification processing parameter may be configured to control processing for one of: a change in a number of instances of the mapping is not allowed; a change in a number of instances of the mapping is allowed; an auralization change for elements associated with the instances of the mapping is not allowed; and an auralization change for elements associated with the instances of the mapping is allowed.
The information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered may be a dynamic updating parameter configured to control whether the at least one audio element anchor can dynamically move within the audio scene.
The dynamic updating parameter may be configured to control whether the at least one audio element anchor can dynamically move within the audio scene for one of: infrequent updates with immediate response expected; frequent updates expected and where no filtering is to be applied; a renderer side filtering without look ahead prediction is to be implemented; and an S-curve filtering is to be implemented.
According to a tenth aspect there is provided an apparatus comprising: means for obtaining at least one audio signal; means for obtaining at least one anchor parameter associated with the at least one audio signal; means for obtaining information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to an eleventh aspect there is provided an apparatus comprising: means for generating at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a twelfth aspect there is provided an apparatus comprising: means for determining, for the audio scene, at least one audio scene anchor parameter; means for obtaining, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; means for associating the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and means for rendering the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
According to a thirteenth aspect there is provided a computer program comprising instructions [or a computer readable medium comprising program instructions] for causing an apparatus to perform at least the following: obtain at least one audio signal; obtain at least one anchor parameter associated with the at least one audio signal; obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a fourteenth aspect there is provided a computer program comprising instructions [or a computer readable medium comprising program instructions] for causing an apparatus to perform at least the following: generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a fifteenth aspect there is provided a computer program comprising instructions [or a computer readable medium comprising program instructions] for causing an apparatus to perform at least the following: determine, for the audio scene, at least one audio scene anchor parameter; obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
According to a sixteenth aspect there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtain at least one audio signal; obtain at least one anchor parameter associated with the at least one audio signal; obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a seventeenth aspect there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to an eighteenth aspect there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to perform at least the following: determine, for the audio scene, at least one audio scene anchor parameter; obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
According to a nineteenth aspect there is provided an apparatus comprising: obtaining circuitry configured to obtain at least one audio signal; obtaining circuitry configured to obtain at least one anchor parameter associated with the at least one audio signal; obtaining circuitry configured to obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a twentieth aspect there is provided an apparatus comprising: generating circuitry configured to generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a twenty-first aspect there is provided an apparatus comprising: determining circuitry configured to determine, for the audio scene, at least one audio scene anchor parameter; obtaining circuitry configured to obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associating circuitry configured to associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and rendering circuitry configured to render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
According to a twenty-second aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: obtain at least one audio signal; obtain at least one anchor parameter associated with the at least one audio signal; and obtain information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a twenty-third aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: generate at least one bitstream, wherein the bitstream comprises: at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with at least one further anchor parameter within an audio scene within which the at least one audio signal is to be rendered.
According to a twenty-fourth aspect there is provided a computer readable medium comprising program instructions for causing an apparatus to perform at least the following: determine, for the audio scene, at least one audio scene anchor parameter; obtain, from at least one further apparatus, a bitstream, the bitstream comprising: the at least one audio signal; at least one anchor parameter associated with the at least one audio signal; and information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; associate the at least one anchor parameter with the at least one audio scene anchor parameter based on the information configured to assist in the adaptation of the at least one anchor parameter with the at least one audio scene anchor parameter; and render the at least one audio signal based on the association between the at least one anchor parameter and the at least one audio scene anchor parameter.
An apparatus comprising means for performing the actions of the method as described above.
An apparatus configured to perform the actions of the method as described above.
A computer program comprising program instructions for causing a computer to perform the method as described above.
A computer program product stored on a medium may cause an apparatus to perform the method as described herein.
An electronic device may comprise apparatus as described herein.
A chipset may comprise apparatus as described herein.
Embodiments of the present application aim to address problems associated with the state of the art.
For a better understanding of the present application, reference will now be made by way of example to the accompanying drawings in which:
The following describes in further detail suitable apparatus and possible mechanisms for rendering a consistent adaptation of an augmented reality (AR) scene experience, and for handling scenarios where a bitstream contains anchor references which may not have a one-to-one correspondence with the anchors in the listening space description.
The concept as discussed further in the embodiments herein is one wherein guidance metadata is included within the bitstream generated by the content provider, which can then be employed for consistent adaptation of anchor references in the bitstream with the anchors in the listening space description.
Such embodiments achieve a predictable and consistent rendering experience across potentially different listening environments as well as varying implementations for creating listening space descriptions.
In some embodiments, the guidance metadata is employed to achieve consistent correspondence between the bitstream anchor references and the listening space anchors by implementing filtering criteria. In some embodiments the filtering can be spatial filtering (e.g., nearest), temporal filtering (e.g., earliest available anchor), or a prioritized list of candidate mappings.
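As an illustration of how such filtering criteria might be applied, the following sketch selects a listening space anchor for a bitstream anchor reference; the data layout (label, position, availability time), the function names and the use of the listener position as the spatial reference are assumptions for illustration, not the normative MPEG-I syntax.

import math

def select_anchor(label, candidates, criterion="nearest",
                  ref_position=(0.0, 0.0, 0.0), priority_list=None):
    """Pick one listening space anchor for a bitstream anchor reference.

    candidates: list of dicts with 'label', 'position' (x, y, z) and
    'available_since' (seconds) keys -- an illustrative layout only.
    criterion: 'nearest'/'farthest' (spatial filtering), 'earliest'
    (temporal filtering) or 'priority' (prioritized candidate list).
    ref_position: reference point for the spatial filtering, assumed
    here to be the listener position.
    """
    if criterion == "priority":
        for wanted in priority_list or []:
            hits = [c for c in candidates if c["label"] == wanted]
            if hits:
                return hits[0]
        return None
    matching = [c for c in candidates if c["label"] == label]
    if not matching:
        return None  # handled by alternative/default placement guidance
    if criterion == "nearest":
        return min(matching, key=lambda c: math.dist(ref_position, c["position"]))
    if criterion == "farthest":
        return max(matching, key=lambda c: math.dist(ref_position, c["position"]))
    if criterion == "earliest":
        return min(matching, key=lambda c: c["available_since"])
    raise ValueError(f"unknown criterion: {criterion}")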
In some further embodiments, the filtering criteria are configured to take into account the renderer processing level, to account for the permitted audio element additions in the audio rendering pipeline. For example, if the renderer processing level only permits one-to-one audio-to-anchor mapping when the number of elements rendered crosses a predefined threshold, multiple mappings are not performed even if they are permitted in the bitstream.
Thus the content creator bitstream permits multiple instances depending on the anchor mappings in the listening space. However, this number is an upper bound which can be constrained by the profile or by the permitted number of instances depending on the renderer hardware. In other words, in some embodiments, the codec profile or the renderer hardware capability can constrain the number of instances spawned due to the rendering adaptation bitstream. As a concrete example, if a content creator bitstream permits “all” anchor mappings in the listening space to be rendered, and 50 instances of the anchor “table” are obtained from the listening space, the current profile may have a budget of 50 audio objects while the audio scene already consists of 40 other audio objects. In such a case, the renderer may constrain the number of instances for the anchor mapping to, say, 10. In another scenario, the constraint may be due to renderer hardware specified constraints. A minimal sketch of such a constraint follows.
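By way of a non-normative sketch, the instance constraint described above might be computed as follows; the function name and argument layout are assumptions for illustration.

def constrain_instances(requested, profile_budget, other_objects, hw_limit=None):
    """Upper-bound the number of anchor mapping instances.

    requested: instances implied by the listening space (e.g. 50 "table"
    anchors when the bitstream permits "all" mappings).
    profile_budget: total audio objects permitted by the codec profile.
    other_objects: audio objects already present in the audio scene.
    hw_limit: optional renderer hardware cap on spawned instances.
    """
    headroom = max(profile_budget - other_objects, 0)
    allowed = min(requested, headroom)
    if hw_limit is not None:
        allowed = min(allowed, hw_limit)
    return allowed

# Worked example from the text: a profile budget of 50 audio objects,
# 40 other objects already in the scene, and a renderer cap of 10.
print(constrain_instances(requested=50, profile_budget=50,
                          other_objects=40, hw_limit=10))  # prints 10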
As discussed above a listening space description, that is, information which acoustically describes the physical space within which the user is located, may be derived by the AR rendering device.
In some embodiments the listening space is defined by a mesh of listening space faces. The listening space description in some embodiments is obtained during content consumption. This is in contrast to the content creator scene description which is available during content creator bitstream creation and is delivered as part of the 6DoF bitstream to the renderer. The content creator bitstream has hooks (i.e. anchors) to map the scene elements to the listening space during content consumption.
The embodiments described herein provide a method for enabling the consistent adaptation of the content creator specified preferences. This is because the listening space descriptions will vary for every listening space. Depending on the implementation, it is expected that there will be differences in the listening space descriptions provided to the immersive audio renderer, an example of which is shown in the accompanying drawings.
Additionally a content creator may be able to define anchor points which are associated with one or more audio elements and which are configured in the renderer to be mapped to an anchor as defined within the listening space.
For example, a content creator scene description may indicate to the renderer that one or more audio elements are to be associated with the listening space information indicating the position of a “picture_anchor” reference; a hypothetical fragment illustrating this is sketched below.
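The following hypothetical EIF-style fragment illustrates such an anchor reference; the element and attribute names are invented for illustration and do not reproduce the normative EIF schema.

import xml.etree.ElementTree as ET

# Hypothetical EIF-style fragment: an object source whose position is
# to be resolved from the listening space anchor labelled
# "picture_anchor". Element and attribute names are illustrative only.
EIF_FRAGMENT = """
<AudioScene id="scene1">
  <ObjectSource id="src:painting" signal="sig:painting"
                anchorReference="picture_anchor"/>
</AudioScene>
"""

root = ET.fromstring(EIF_FRAGMENT)
refs = [e.get("anchorReference") for e in root.iter("ObjectSource")]
print(refs)  # ['picture_anchor'] -- to be matched against LSDF anchors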
As can be envisioned, if the listening space description obtained by the renderer contains multiple (or no) anchors with the same reference (“picture_anchor” in the example), the renderer may render an incomplete or inaccurate audio scene. This can result in a poor, unpredictable and inconsistent rendering experience. This may happen when the AR rendering device determines multiple suitable anchor positions (multiple pictures in the listening space) or none at all. It is not clear which of these positions should be used when placing the object source.
As such the embodiments discussed herein are configured to handle the simple one-to-one mapping 161, in which each anchor reference in the content creator bitstream has exactly one corresponding anchor reference in the listening space, and the renderer is able to identify the listening space description information for each reference.
Additionally the embodiments as discussed herein are configured to produce a consistent rendering for the multiple or incomplete mapping 163, in which an anchor reference in the content creator bitstream corresponds to several anchors in the listening space description, or to none at all.
In this example a simple one-to-one mapping may fail, as anchor 1 is defined at three locations in the listening space and there is no mapping for the anchor defined as anchor 3 within the listening space description defined information. A sketch classifying such cases follows.
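The cardinality of each mapping can be detected before rendering; a minimal sketch, assuming anchors are matched purely by label:

from collections import Counter

def classify_mappings(bitstream_refs, lsdf_labels):
    """Classify each bitstream anchor reference against the LSDF anchors.

    Returns a dict mapping each reference to 'one-to-one', 'multiple'
    or 'missing'. Only a fully 'one-to-one' result can be rendered
    without the guidance metadata described herein.
    """
    counts = Counter(lsdf_labels)
    out = {}
    for ref in bitstream_refs:
        n = counts.get(ref, 0)
        out[ref] = "one-to-one" if n == 1 else ("missing" if n == 0 else "multiple")
    return out

# Example mirroring the text: anchor 1 appears at three locations in
# the listening space and anchor 3 has no counterpart at all.
print(classify_mappings(
    ["anchor1", "anchor2", "anchor3"],
    ["anchor1", "anchor1", "anchor1", "anchor2"]))
# {'anchor1': 'multiple', 'anchor2': 'one-to-one', 'anchor3': 'missing'}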
In some embodiments, as shown in the accompanying system figure, the capture/generator apparatus 201 comprises an encoder input format (EIF) generator 211 configured to generate a suitable content creator scene description.
Furthermore in some embodiments the encoder input format (EIF) generator 211 is configured to generate anchor reference information. The anchor reference information may be defined in the EIF to indicate that the positions of the specified audio elements are to be obtained from the listener space via the LSDF.
In some embodiments the anchor definitions and bitstream structures are indicated without the guidance metadata for AR adaptation.
The structures Anchor( ), AnchorsStruct( ) and ContentCreatorSceneDescriptionStruct( ) are the structures that describe the audio scene information with references to the listening space description.
The ContentCreatorSceneDescriptionStruct( ) structure has the MHAS packet type PACTYP_CCSD; its MHASPacketLabel will have the same value as that of the MPEG-H content. An illustrative in-memory mirror of these structures is sketched below.
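A non-normative, in-memory mirror of these structures might look as follows; the field set is an assumption for illustration, as the normative syntax tables are not reproduced here.

from dataclasses import dataclass, field

@dataclass
class Anchor:
    """Illustrative stand-in for the Anchor( ) structure."""
    anchor_id: int
    label: str                                   # matched against LSDF anchors
    audio_element_ids: list = field(default_factory=list)

@dataclass
class AnchorsStruct:
    """Illustrative stand-in for AnchorsStruct( )."""
    anchors: list = field(default_factory=list)  # list of Anchor

@dataclass
class ContentCreatorSceneDescriptionStruct:
    """Illustrative stand-in for ContentCreatorSceneDescriptionStruct( );
    carried in an MHAS packet of type PACTYP_CCSD whose MHASPacketLabel
    matches that of the MPEG-H content."""
    mhas_packet_label: int = 0
    anchors_struct: AnchorsStruct = field(default_factory=AnchorsStruct)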
In some embodiments the capture/generator apparatus 201 comprises an audio content generator 213. The audio content generator 213 is configured to generate the audio content corresponding to the audio scene. The audio content generator 213 in some embodiments is configured to generate or otherwise obtain audio signals associated with the virtual scene. For example these audio signals may be obtained or captured using suitable microphones or arrays of microphones, be based on processed captured audio signals, or be synthesised. The audio content generator 213 is furthermore configured in some embodiments to generate or obtain audio parameters associated with the audio signals, such as position within the virtual scene and directivity of the signals. The audio signals and/or parameters 214 can in some embodiments be provided to a suitable (MPEG-I) encoder 217.
Furthermore in some embodiments the capture/generator apparatus 201 comprises a guidance information generator 215. The guidance information generator 215 is configured to generate suitable guidance information metadata 216. The guidance information metadata is configured to assist in the mapping operation in the renderer as described in further detail herein.
In the following, the guidance metadata for consistent adaptation of the 6DoF audio scene is described. The guidance metadata in some embodiments can for example be guidance to handle alternative anchor references in a prioritized order (e.g., to handle the case of placing an audio object on top of a table if the ground is not clearly visible to the AR device).
In some embodiments the guidance metadata is configured to control the default placement (or mapping) of anchors in the content creator bitstream if there are missing anchors in the listening space description.
Furthermore in some embodiments the guidance metadata comprises information which enables mapping of multiple anchors in the listening space description for the references in the content creator bitstream.
In some embodiments the guidance information may be implemented or inserted within the anchor definition. In other words the anchor definition or structures are modified or enhanced to include additional information in the bitstream to handle the different listening space descriptions received by the 6DoF player and subsequently the renderer.
In some embodiments, alternative anchor selection metadata can be provided in the content creator bitstream for the anchor references in the listening space description.
In some embodiments filtering metadata for anchor selection can be provided in the content creator bitstream for the anchor references in the listening space description with multiple anchors.
In some embodiments the following options are available if the spatial_filter_present flag is equal to 1: a nearest anchor selection; a farthest anchor selection; a maximal spread anchor selection; and a user input based anchor selection.
In some embodiments the following options are available if the temporal_filter_present flag is equal to 1: an earliest anchor selection; an earliest anchor selection with later modifications based on a user movement; a maximal spread anchor selection; and a user input based anchor selection. Illustrative encodings for these options are sketched below.
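The options above might be encoded as small enumerations; the numeric values below are assumptions for illustration, not the normative bitstream values.

from enum import IntEnum

class SpatialFilter(IntEnum):
    """Options when spatial_filter_present == 1 (values illustrative)."""
    NEAREST = 0       # anchor nearest the reference position
    FARTHEST = 1      # anchor farthest from the reference position
    MAX_SPREAD = 2    # anchors selected for the largest mutual spread
    USER_INPUT = 3    # selection deferred to user input

class TemporalFilter(IntEnum):
    """Options when temporal_filter_present == 1 (values illustrative)."""
    EARLIEST = 0            # earliest available anchor
    EARLIEST_THEN_USER = 1  # earliest, later modified on user movement
    MAX_SPREAD = 2          # anchors distributed farthest from each other
    USER_INPUT = 3          # selection deferred to user input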
In some embodiments the guidance comprises default placement metadata for anchors which are missing in the listening space description.
In some embodiments the guidance information is determined based on the anchors being dynamic. In other words the listening space description may consist of anchors which may be dynamic, i.e. changing position, and potentially properties, on a continuous basis. For example, the audio scene may consist of a virtual audio object following a moving anchor.
To enable this the anchor object may also have dynamic update capability. The dynamic update can be a capability already attached to an audio anchor in the content creator bitstream. This enables the 6DoF player to prepare and be ready to receive the dynamic updates regarding the particular anchor during content consumption.
The difference between a listening space description update and a moving anchor lies in the frequency of updates. The listening space update relates to the entire listening space and is updated whenever the AR sensing interface of the AR device obtains new information which necessitates creation of a new listening space description. The dynamic updates for a moving anchor can occur at a much higher frequency, depending on the position sampling frequency, to obtain a smooth translation effect without jerkiness or delay. One possible smoothing approach is sketched below.
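As one possible interpretation of the renderer side filtering and S-curve filtering modes named above, a position update for a moving anchor might be eased as follows; the logistic curve and its steepness are assumptions for illustration.

import math

def s_curve(t, steepness=8.0):
    """Logistic S-curve easing of the normalized time t in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - 0.5)))

def smoothed_position(old_pos, new_pos, t):
    """Interpolate an anchor position update so that a moving anchor
    translates smoothly, without jerkiness, between position samples."""
    w = s_curve(max(0.0, min(1.0, t)))
    return tuple(o + w * (n - o) for o, n in zip(old_pos, new_pos))

# Halfway through the update interval the anchor sits at the midpoint.
print(smoothed_position((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.5))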
In some embodiments the guidance information is passed to the encoder 217.
In some embodiments of the implementation, the guidance information can be part of the scene description and delivered as part of the EIF information; in such a case, separate guidance information may not be required. As such in some embodiments the guidance information generator 215 is incorporated within the EIF generator 211 and the guidance information is generated as part of the EIF information.
In some embodiments the storage/distribution apparatus 203 comprises an encoder 217. The encoder is configured to receive the EIF parameters 212, the audio signals/audio parameters 214 and the guidance parameters or information 216 and encode these to generate a suitable bitstream.
The encoder 217 for example can use the EIF parameters 212, the audio signals/audio parameters 214 and the guidance parameters 216 to generate the MPEG-I 6DoF audio scene content, which is stored in a format suitable for streaming over the network. The delivery can be in any suitable format such as MPEG-DASH (Dynamic Adaptive Streaming over HTTP), HLS (HTTP Live Streaming), etc. The 6DoF bitstream carries the MPEG-H encoded audio content and the MPEG-I 6DoF bitstream. The content creator bitstream generated by the encoder on the basis of the EIF and audio data can be formatted and encapsulated in a manner analogous to MHAS packets (MPEG-H 3D audio stream). The encoded bitstream in some embodiments is passed to a suitable content storage module.
In some embodiments the storage/distribution apparatus 203 comprises a content storage module, for example the content storage 219.
The content storage 219 is configured to store the content (including the EIF derived content creator bitstream with guidance metadata) and provide it to the AR device 207.
In some embodiments the capture/generator apparatus 201 and the storage/distribution apparatus 203 are located in the same apparatus.
In some embodiments the AR device 207 which may comprise a head mounted device (HMD) is the playback device for AR consumption of the 6DoF audio scene.
The AR device 207 in some embodiments comprises at least one AR sensor 221. The at least one AR sensor 221 may comprise multimodal sensors such as visual camera array, depth sensor, LiDAR, etc. The multimodal sensors are used by the AR consumption device to generate information of the listening space. This information can comprise material information, objects of interest, etc. This sensor information can in some embodiments be passed to an AR processor 223.
In some embodiments the AR device 207 comprises a player/renderer apparatus 205. The player/renderer apparatus 205 is configured to receive the bitstream comprising the EIF derived content creator bitstream (with guidance metadata) 220, the AR sensor information and the user position and/or orientation, and from this information determine a suitable audio signal output which is able to be passed to a suitable output device, which in the example shown is a set of headphones.
In some embodiments the player/renderer apparatus 205 comprises an AR processor 223. The AR processor 223 is configured to receive the sensor information from the at least one AR sensor 221 and generate suitable AR information which may be passed to the LSDF generator 225. For example, in some embodiments, the AR processor is configured to perform a fusion of sensor information from each of the sensor types.
In some embodiments the player/renderer apparatus 205 comprises a listening space description file (LSDF) generator 225. The listening space description file (LSDF) generator 225 is configured to receive the output of the AR processor 223 and, from the information obtained from the AR sensing interface, generate the listening space description for AR consumption. The listening space description can be in any suitable format, such as the LSDF format. This description carries the listening space or room information including acoustic properties (e.g., a mesh enveloping the listening space, including materials for the mesh faces); audio elements or geometry elements of the scene with spatial locations that depend on the listening space are referred to as anchors in the listening space description. The anchors may be static or dynamic in the listening space. The LSDF generator is configured to output this listening scene description information to the renderer 235.
In some embodiments the player/renderer apparatus 205 comprises a receive buffer 231 configured to receive the content creator bitstream 220 comprising the EIF information and the guidance metadata. As indicated above the guidance metadata may or may not be separate from the EIF information. The buffer 231 is configured to pass the received data to a decoder 233.
In some embodiments the player/renderer apparatus 205 comprises a decoder 233 configured to obtain the encoded bitstream from the buffer 231 and output decoded EIF information and decoded guidance information (with decoded audio data when it is within the same data stream) to the renderer 235. The guidance information may be delivered with or without compression; in the latter case only a parser is required.
In some embodiments the player/renderer apparatus 205 comprises a renderer 235. The renderer 235 is configured to receive the decoded EIF information and decoded guidance information (with decoded audio data when it is within the same data stream), the listening scene description information and the listener position and/or orientation information. The listener position and/or orientation information can be obtained from the AR device configured with suitable listener tracking apparatus and sensors which enable providing an accurate listening position as well as orientation. The renderer 235 is further configured to generate the output audio signals to be passed to the output device.
The renderer 235 is configured to obtain the content creator bitstream (i.e. the MPEG-I 6DoF bitstream which carries references to anchors in the LSDF) and the LSDF (i.e. the anchor positions in the actual listening space), and is then configured to implement a correspondence mapping such that the anchor references in the content creator bitstream are mapped to the anchors within the listening space description information.
With respect to the accompanying flow diagram, the operation of the system described above may be summarized as follows.
The method may comprise generating or otherwise obtaining the guidance information.
Furthermore the EIF information is generated (or obtained).
The audio data is furthermore obtained (or generated).
The guidance information, EIF information, and audio data are then encoded.
The encoded data is then stored/obtained or transmitted/received.
Additionally the AR scene data is obtained.
From the sensed AR scene data the listening space description (file) information is generated.
Furthermore the listener/user position and/or orientation data can be obtained.
Then spatial audio signals can be rendered based on the audio data, the guidance information, the EIF information, the LSDF data and the position and/or orientation data. Specifically the rendering comprises mapping anchor points from the EIF information to anchor points in the LSDF data based on the guidance information.
Having rendered the spatial audio signals, these can be output to a suitable output device, such as headphones.
The renderer 235 in some embodiments comprises a scene manager/processor 403. The scene manager/processor 403 is configured to receive the parsed EIF and guidance data from the bitstream parser 401, the LSDF parameters 402, and further the interactivity controller output from the interactivity controller 407.
The scene manager/processor 403 in some embodiments comprises a guided adaptation processor 411 which is configured to implement anchor mapping (or generate fused scene representations) based on the parsed EIF and guidance data, the LSDF parameters and controlled by the interactivity controller output. The guided adaptation processor 411 thus attempts to ensure that only the desired (based on content creator guidance metadata) audio elements are instantiated in the scene state.
The output of the guided adaptation processor 411 (and the scene manager/processor 403) is configured such that any subsequent spatial audio signal processing (or auralization pipeline) can be agnostic to the type of AR adaptation required to handle the different variations in the LSDFs which the renderer may receive. Furthermore, the guided adaptation processor 411 is configured to receive inputs for anchor selection.
In some embodiments the renderer 235 and specifically the scene manager/processor 403 is configured to obtain an interactivity controller output from an interactivity controller 407. The interactivity controller 407 is configured to generate a control output based on an input 406. For example the input can be a user selection input from a suitable user interface and can be from the listener. In some embodiments the input can be from a suitable AI module which performs run-time selection based on its own optimization logic or some other method. The interactivity controller output is thus configured to enable a versatile anchor mapping/selection mechanism. The mechanism can be configured in some embodiments to be employed in conjunction with the guidance metadata (e.g., when such a selection method is permitted in the metadata). Further in some embodiments the mechanism can be employed in addition to the explicit guidance metadata in the bitstream. Additionally in some embodiments the mechanism can be employed complementary to the guidance metadata. A sketch of this combination is shown below.
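A minimal sketch of combining run-time selection with the guidance metadata, assuming an illustrative allow_user_selection flag in the metadata that states whether such a selection method is permitted:

def resolve_selection(candidates, guidance, run_time_choice=None):
    """Combine the guidance metadata with a run-time (user or AI) input.

    guidance: illustrative dict, e.g. {"allow_user_selection": True}.
    When run-time selection is permitted by the metadata and a choice is
    supplied, it overrides the metadata driven selection; otherwise the
    first metadata driven candidate is retained.
    """
    if guidance.get("allow_user_selection") and run_time_choice is not None:
        if run_time_choice in candidates:
            return run_time_choice
    return candidates[0] if candidates else None

# The run-time choice wins only because the metadata permits it here.
print(resolve_selection(["table_1", "table_2"],
                        {"allow_user_selection": True},
                        run_time_choice="table_2"))  # table_2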
The scene management information can then be passed to the audio processor 405, which is configured to obtain the audio signals, the processed scene information and the listener's position and/or orientation, and from these generate the spatial audio signal output. As indicated above the effect of the scene manager/processor 403 is such that any known or suitable spatial audio processing implementation can be employed (the auralisation pipeline being agnostic to the earlier scene processing).
With respect to the accompanying flow diagrams, the operation of the renderer 235, and in particular its anchor mapping adaptation, may be summarized as follows.
The anchor information is received or obtained from the listening space description.
Additionally the content creator bitstream specified anchor references are received or obtained.
A one-to-one match or mapping between the content creator bitstream anchor references and the listening space anchors is then attempted.
A check is then made to determine whether the one-to-one match or mapping between the content creator bitstream anchor references and the listening space anchors is achieved (step 607).
Where the step 607 check fails and there is no complete one-to-one match or mapping, then a check is performed to determine whether there is any guidance information or rendering adaptation metadata in the content creator bitstream (step 609).
Where the step 609 check fails (there is no guidance information found), then an error is generated (and the rendering controlled based on the error).
Where the step 609 check passes (there is guidance information), then the rendering adaptation metadata is retrieved from the bitstream; for example the following structures can be obtained: AlternativeAnchorsStruct( ); FilterEnabledAnchorStruct( ); DefaultPlacementAnchorStruct( ).
Then based on the guidance information (the rendering adaptation metadata retrieved from the bitstream) the required mappings are determined.
Furthermore the appropriate mappings are then created to obtain the scene state for rendering pipeline instantiation.
Where the step 607 check passes (there is a complete one-to-one match or mapping), or when the guidance information based match(es) or mapping(s) have been generated, the scene state for rendering pipeline instantiation is created.
Then the rendering pipeline is instantiated.
Finally the rendering is started. A minimal sketch of this adaptation flow is shown below.
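A minimal end-to-end sketch of this flow, assuming simplified stand-ins for the adaptation structures; resolve( ) condenses the alternative anchor, filtering and default placement behaviours into one illustrative method.

class AdaptationMetadata:
    """Illustrative stand-in for AlternativeAnchorsStruct( ),
    FilterEnabledAnchorStruct( ) and DefaultPlacementAnchorStruct( )."""
    def __init__(self, alternatives=None, defaults=None):
        self.alternatives = alternatives or {}  # ref -> alternative label
        self.defaults = defaults or {}          # ref -> default placement

    def resolve(self, ref, lsdf_labels):
        hits = [lbl for lbl in lsdf_labels if lbl == ref]
        if hits:
            return hits                  # multiple anchors: the filtering
                                         # options would narrow this further
        alt = self.alternatives.get(ref)
        if alt in lsdf_labels:
            return [alt]                 # alternative anchor mapping
        if ref in self.defaults:
            return [self.defaults[ref]]  # default placement fallback
        raise RuntimeError(f"anchor reference {ref!r} cannot be resolved")

def adapt_scene(bitstream_refs, lsdf_labels, metadata=None):
    """Attempt the one-to-one mapping first (step 607); otherwise fall
    back to the rendering adaptation metadata (step 609), or raise an
    error when no guidance information is present."""
    mapping, unresolved = {}, []
    for ref in bitstream_refs:
        hits = [lbl for lbl in lsdf_labels if lbl == ref]
        if len(hits) == 1:
            mapping[ref] = hits
        else:
            unresolved.append(ref)
    for ref in unresolved:
        if metadata is None:
            raise RuntimeError(f"no guidance metadata for {ref!r}")
        mapping[ref] = metadata.resolve(ref, lsdf_labels)
    return mapping  # scene state input for rendering pipeline instantiation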
With respect to the accompanying device figure, an example electronic device 1400 is shown which may be used as at least part of any of the apparatus described herein.
In some embodiments the device 1400 comprises at least one processor or central processing unit 1407. The processor 1407 can be configured to execute various program codes such as the methods such as described herein.
In some embodiments the device 1400 comprises a memory 1411. In some embodiments the at least one processor 1407 is coupled to the memory 1411. The memory 1411 can be any suitable storage means. In some embodiments the memory 1411 comprises a program code section for storing program codes implementable upon the processor 1407. Furthermore in some embodiments the memory 1411 can further comprise a stored data section for storing data, for example data that has been processed or to be processed in accordance with the embodiments as described herein. The implemented program code stored within the program code section and the data stored within the stored data section can be retrieved by the processor 1407 whenever needed via the memory-processor coupling.
In some embodiments the device 1400 comprises a user interface 1405. The user interface 1405 can be coupled in some embodiments to the processor 1407. In some embodiments the processor 1407 can control the operation of the user interface 1405 and receive inputs from the user interface 1405. In some embodiments the user interface 1405 can enable a user to input commands to the device 1400, for example via a keypad. In some embodiments the user interface 1405 can enable the user to obtain information from the device 1400. For example the user interface 1405 may comprise a display configured to display information from the device 1400 to the user. The user interface 1405 can in some embodiments comprise a touch screen or touch interface capable of both enabling information to be entered to the device 1400 and further displaying information to the user of the device 1400. In some embodiments the user interface 1405 may be the user interface for communicating with the position determiner as described herein.
In some embodiments the device 1400 comprises an input/output port 1409. The input/output port 1409 in some embodiments comprises a transceiver. The transceiver in such embodiments can be coupled to the processor 1407 and configured to enable a communication with other apparatus or electronic devices, for example via a wireless communications network. The transceiver or any suitable transceiver or transmitter and/or receiver means can in some embodiments be configured to communicate with other electronic devices or apparatus via a wire or wired coupling.
The transceiver can communicate with further apparatus by any suitable known communications protocol. For example in some embodiments the transceiver can use a suitable universal mobile telecommunications system (UMTS) protocol, a wireless local area network (WLAN) protocol such as for example IEEE 802.X, a suitable short-range radio frequency communication protocol such as Bluetooth, or an infrared data communication pathway (IrDA).
The transceiver input/output port 1409 may be configured to receive the signals and in some embodiments determine the parameters as described herein by using the processor 1407 executing suitable code.
It is also noted herein that while the above describes example embodiments, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention.
In general, the various embodiments may be implemented in hardware or special purpose circuitry, software, logic or any combination thereof. Some aspects of the disclosure may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the disclosure is not limited thereto. While various aspects of the disclosure may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry);
(b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and
(c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device, or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
The embodiments of this disclosure may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Computer software or a program, also called a program product, including software routines, applets and/or macros, may be stored in any apparatus-readable data storage medium and comprises program instructions to perform particular tasks. A computer program product may comprise one or more computer-executable components which, when the program is run, are configured to carry out embodiments. The one or more computer-executable components may be at least one software code or portions thereof.
Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips or memory blocks implemented within the processor, magnetic media such as hard disks or floppy disks, and optical media such as, for example, DVD and the data variants thereof, or CD. Such physical media are non-transitory media.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may comprise one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), gate level circuits and processors based on a multi-core processor architecture, as non-limiting examples.
Embodiments of the disclosure may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
The scope of protection sought for various embodiments of the disclosure is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the disclosure.
The foregoing description has provided by way of non-limiting examples a full and informative description of the exemplary embodiment of this disclosure. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. Nevertheless, all such and similar modifications of the teachings of this disclosure will still fall within the scope of this invention as defined in the appended claims. Indeed, there is a further embodiment comprising a combination of one or more embodiments with any of the other embodiments previously discussed.
Foreign Application Priority Data: Application No. 2110129.0, filed Jul. 2021, GB (national).
International Filing: PCT/FI2022/050456, filed Jun. 22, 2022 (WO).