Embodiments of the present disclosure relate to rendering spatial audio content.
Spatial audio content 50 comprises one or more audio sources, each having a flexible position in an audio space. The audio space can be two or three dimensional.
Sometimes it is desirable to render simultaneously to a listener audio content that comes from two different apparatus.
This can be confusing to the listener if the audio content comprises spatial audio content.
According to various, but not necessarily all, embodiments there is provided an apparatus comprising means for:
In some, but not necessarily all, examples the apparatus is configured to render the first audio content as spatial audio content within an audio space, the audio space remaining in a fixed relationship relative to the first apparatus, wherein the audio space is moved in response to tracked movement, relative to the first apparatus, of the head-mounted audio output system.
In some, but not necessarily all, examples the apparatus is configured to provide the head-mounted audio output system, wherein the apparatus is a head-mounted apparatus to be worn by the user and is configured for dynamically tracking movement of the user's head.
In some, but not necessarily all, examples the apparatus is configured to render the first audio content as spatial audio content within an audio space, wherein the audio space is moved in response to data from the first apparatus tracking movement of the head-mounted audio output system.
In some, but not necessarily all, examples the first audio content received is first spatial audio content associated with first spatial audio information defining variable positions of multiple first audio sources, wherein in a first state the apparatus is configured to render the first spatial audio content using the first spatial audio information to produce the multiple first audio sources at the variable positions defined by the first spatial audio information.
In some, but not necessarily all, examples the second audio content comprises multiple second audio sources, and wherein in a first state, the apparatus is configured to downmix the second audio content to downmixed content and to render the downmixed content.
In some, but not necessarily all, examples in a first state the second audio content is downmixed to a single audio source and rendered as the single audio source.
In some, but not necessarily all, examples the apparatus is configured to render the second audio content as spatial audio content within a second audio space, the second audio space remaining in a fixed relationship relative to the second apparatus, wherein the second audio space is moved in response to tracked movement of the head-mounted audio output system.
In some, but not necessarily all, examples the apparatus is configured to:
In some, but not necessarily all, examples, in the second state, the second audio content is rendered in its native form.
In some, but not necessarily all, examples the apparatus is configured to cause switching between the first state and the second state in dependence upon detected user actions.
In some, but not necessarily all, examples the apparatus is configured to, in the second state, downmix the first audio content to downmixed audio content and to render the downmixed audio content.
In some, but not necessarily all, examples the first audio content comprises multiple audio sources, wherein in the second state, the first audio content is downmixed to a single audio source and rendered as the single audio source.
In some, but not necessarily all, examples the second audio content received is second spatial audio content associated with second spatial audio information defining variable positions of multiple second audio sources, wherein in the second state the apparatus is configured to render the second spatial audio content using the second spatial audio information to produce the multiple second audio sources at the variable positions defined by the second spatial audio information.
In some, but not necessarily all, examples the second audio content is stereo audio content, wherein the apparatus is configured to, in the second state, render the second audio content as stereo audio content.
According to various, but not necessarily all, embodiments there is provided a method comprising:
According to various, but not necessarily all, embodiments there is provided a computer program comprising program instructions for causing an apparatus to perform at least the following:
According to various, but not necessarily all, embodiments there is provided a system comprising the apparatus, the first apparatus and the second apparatus,
According to various, but not necessarily all, embodiments there is provided examples as claimed in the appended claims.
Some examples will now be described with reference to the accompanying drawings in which:
The following description and figures describe various examples of an apparatus 30 comprising means for:
The apparatus 30 is configured to receive first audio content 12 associated with a first apparatus 10 and to receive second audio content 22 associated with a second apparatus 20.
The apparatus 30 is configured, in first and second states 41, 42, to simultaneously render the first audio content 12 and the second audio content 22 to a user via a head-mounted audio output system 32 which is configured for spatial audio rendering.
In a first state 41, the second audio content 22 is downmixed 60 to downmixed content 62 and that downmixed content 62 is rendered.
In a second state 42, the second audio content 22 is no longer downmixed 60 and the second audio content 22 is rendered without downmixing.
In some examples, the second audio content 22 is rendered as spatial audio content.
There is only a single audio output system 32 shared by the first apparatus 10 and the second apparatus 20. A user is rendered the first audio content 12 only via the audio output system 32 and is rendered the second audio content 22 only via the same audio output system 32. The rendering of the first audio content 12 and the second audio content 22 via the same shared audio output system 32 is simultaneous (contemporaneous).
One or more of the apparatus 10, 20, 30, 32 can be configured to dynamically track movement of a head of a user, dynamically track a gaze direction of a user, or detect a gaze or orientation of a user towards the first apparatus 10 and/or the second apparatus 20. Movement of a head can be measured using sensors at the head, for example accelerometers, gyroscopes, etc. Movement of a head can also be measured at a distance by using a camera to capture images and then processing the captured images using computer vision. Movement of an eye of a user can likewise be measured at a distance by using a camera to capture images and then processing the captured images using computer vision.
In some examples, gaze or orientation of a user towards the first apparatus 10 and/or the second apparatus 20 can be used as a condition for switching between first and second states.
In some examples, head-tracking can be performed or assisted by the apparatus 10, 20 towards which the user is oriented and can change with user orientation. Thus, in some examples, the first apparatus 10 is configured to track movement of a head of a user and the second apparatus 20 is configured to track movement of a head of a user. When the user 2 is focusing on the first apparatus 10, the apparatus 30 is in the first state 41 and the first apparatus 10 is used for tracking the head of the user 2. When the user 2 is focusing on the second apparatus 20, the apparatus 30 is in the second state 42 and the second apparatus 20 is used for tracking the head of the user 2.
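As a purely illustrative sketch of this focus-dependent selection (the names State and tracking_apparatus and the string identifiers are assumptions, not taken from this disclosure):

```python
from enum import Enum

class State(Enum):
    FIRST = 1   # user 2 is focusing on the first apparatus 10
    SECOND = 2  # user 2 is focusing on the second apparatus 20

def tracking_apparatus(state, first_apparatus, second_apparatus):
    """Return the apparatus used for tracking the head of the user 2.

    In the first state the first apparatus performs the head tracking;
    in the second state the second apparatus does.
    """
    return first_apparatus if state is State.FIRST else second_apparatus

# Hypothetical usage with plain string identifiers standing in for real devices.
print(tracking_apparatus(State.FIRST, "first apparatus 10", "second apparatus 20"))
print(tracking_apparatus(State.SECOND, "first apparatus 10", "second apparatus 20"))
```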
In some but not necessarily all examples, the first audio content 12 is associated with visual content being contemporaneously displayed at the first apparatus 10.
In some but not necessarily all examples, the second audio content 22 is associated with visual content being contemporaneously displayed at the second apparatus 20.
For simplicity of explanation the examples described relate to two states 41, 42. However, there can be additional states. One or more of these states can also share the characteristic that while there is simultaneous rendering of content from different apparatuses, at most only content from one apparatus is rendered as full multi-source spatial audio content 50 without downmixing.
For simplicity of explanation the examples described relate to two apparatus 10, 20 that provide respective audio content. However, there can be additional apparatus providing additional audio content. While there is simultaneous rendering of content from the different apparatuses, at most only content from one of the multiple apparatuses is rendered as full multi-source spatial audio content 50 without downmixing.
In this example, the audio output system 32 and the apparatus 30 are combined into a single system.
In this example, the audio output system 32 is a head-mounted system. The head-mounted system is configured to be worn by the user 2. It could, for example, comprise a set of ear-mounted speaker systems, one 32L for the left ear of a user 2 and one 32R for the right ear of a user 2. The ear-mounted speaker systems 32L, 32R can be provided as in-ear or on-ear or over-ear arrangements. The ear-mounted speaker systems can be a headset, ear pods, etc.
The head-mounted apparatus 30 can be configured for dynamically tracking movement of a head of a user 2. In some examples, the head-mounted apparatus 30 can be configured for dynamically tracking a gaze direction of the user 2. The head-mounted apparatus 30 can, for example, be configured to detect a gaze or orientation of a user 2 towards an apparatus 10, 20 that is providing audio content 12, 22.
In the first state 41, the apparatus 30 is configured to simultaneously render the first audio content 12 and the second audio content 22 to a user 2 via a (head-mounted) audio output system 32 configured for spatial audio rendering 50, where:
In the second state 42, the apparatus 30 is configured to simultaneously render the first audio content 12 and the second audio content 22 to the user 2 via the (head-mounted) audio output system 32 configured for spatial audio rendering 50, where:
In the second state 42, the second audio content 22 is no longer downmixed 60 and the second audio content 22 is rendered without downmixing. In some examples, the second audio content 22 is rendered as spatial audio content.
The switching 43 between the first state 41 and the second state 42 can be dependent upon detected user 2 actions. For example, it can be dependent upon how a user 2 is focusing attention. For example, it can be dependent upon where a user 2 is directing their gaze or their orientation.
For example, if the user 2 starts to focus on the first apparatus 10 or starts to direct their prolonged gaze or orientation towards the first apparatus 10, then the state machine can transition 43 to the first state 41. In some examples the state machine can transition 43 from the second state 42 to the first state 41.
For example, if the user 2 starts to focus on the second apparatus 20 or starts to direct their prolonged gaze or orientation towards the second apparatus 20, then the state machine can transition 43 to the second state 42. In some examples the state machine can transition 43 from the first state 41 to the second state 42.
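A minimal sketch of such gaze-dependent switching 43, assuming a simple dwell-time threshold (the threshold value and all names are illustrative assumptions, not part of this disclosure):

```python
GAZE_DWELL_S = 1.5  # assumed dwell time for a gaze to count as "prolonged"

def update_state(state, gaze_target, gaze_duration_s):
    """Switch between the first and second states on a prolonged gaze.

    state           : current state, "first" (41) or "second" (42)
    gaze_target     : apparatus the user 2 is currently looking at, or None
    gaze_duration_s : how long the current gaze has been held, in seconds
    """
    if gaze_duration_s < GAZE_DWELL_S:
        return state                 # no prolonged gaze: keep the current state
    if gaze_target == "first apparatus 10":
        return "first"               # transition 43 to the first state 41
    if gaze_target == "second apparatus 20":
        return "second"              # transition 43 to the second state 42
    return state

state = "first"
state = update_state(state, "second apparatus 20", 2.0)
print(state)  # second
```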
It is possible for a set of N audio sources Si to be located at N different positions pi in the audio space, where N is one or more. Spatial audio supports positioning such that the number M of possible positions pi for audio sources can be very much greater than the number N of audio sources.
An audio source Si can change with time t. An audio source Si(t) is an audio source that can but does not necessarily vary with time. An audio source Si(t) is a source of audio content, and the audio content can but does not necessarily vary with time. An audio source Si(t) is a source of audio content that has intensity and spectral characteristics that can but do not necessarily vary with time. An audio source Si(t) is a source of audio content that can, optionally, have certain sound effects, such as reverberation, perceived width of the audio source, etc., that can but do not necessarily vary with time.
A position pi of an audio source Si can vary with time t. A position pi(t) is a position that can but does not necessarily vary with time. The position pi can be a vector position from an origin O; that is, it defines distance and direction. The position pi can be defined using any suitable co-ordinate system.
The origin O can, for example, be a fixed position in a real space occupied by the user 2, or, a (moveable) position of the user 2 who can move within the real space occupied by the user 2, or, a (movable) position of one of the apparatus 10, 20, 30, 32 which can move within the real space occupied by the user 2.
In some object-based examples, each audio source Si can have an independently defined position pi(t). The spatial audio content 50 is defined by a set of N positioned audio sources {Si(t), pi(t)}. Scene-based audio representations (e.g. ambisonics or parametric spatial audio, such as metadata-assisted spatial audio (MASA)) typically have audio sources whose positions a listener is able to detect, but the listener can also perceive diffuse sound when listening. Channel-based audio can have independently defined static positions for the channels, and an audio source can then be created by rendering audio content via one or more channels at any given time. There can be more than one audio source present in each channel.
A characteristic of spatial audio content 50 is that different audio sources Si can move through space relative to each other. If the number M of possible positions pi for audio sources is sufficiently high, the different audio sources Si can move continuously through space relative to each other.
Certain characteristics of spatial audio content 50, for example the variable positions pi(t), can be defined using spatial audio information. The spatial audio information can be an integrated part of the spatial audio content 50 or can be separate data. Thus, the spatial audio content 50 is associated with spatial audio information defining variable positions of multiple audio sources.
Audio content 12, 22 that is spatial audio content 50 is associated with spatial audio information defining variable positions of multiple audio sources. The apparatus 30 is capable of rendering the spatial audio content 50 using the spatial audio information to produce the audio source(s) at the variable position(s) defined by the spatial audio information.
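To make the notation {Si(t), pi(t)} concrete, the following is a minimal sketch of one possible representation of spatial audio content 50 and its spatial audio information (the dataclass and field names are illustrative assumptions, not part of this disclosure):

```python
import math
from dataclasses import dataclass
from typing import Callable, List, Tuple

Position = Tuple[float, float, float]  # vector position pi from the origin O

@dataclass
class AudioSource:
    """One audio source Si(t) with a time-variable position pi(t)."""
    samples: Callable[[float], float]      # audio content of the source at time t
    position: Callable[[float], Position]  # spatial audio information: pi(t)

# Spatial audio content 50: a set of N positioned audio sources {Si(t), pi(t)}.
content: List[AudioSource] = [
    AudioSource(samples=lambda t: math.sin(2 * math.pi * 440 * t),
                position=lambda t: (1.0, 0.0, 0.0)),                 # static source
    AudioSource(samples=lambda t: math.sin(2 * math.pi * 220 * t),
                position=lambda t: (math.cos(t), math.sin(t), 0.0)), # moving source
]

print(content[1].position(0.0))  # (1.0, 0.0, 0.0)
```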
Stereo audio content comprises only two audio sources SL, SR which are rendered, respectively, at a left speaker and a right speaker.
Mono audio content comprises only one audio source S which is rendered from one or more speakers. Mono audio content can be spatial audio content.
It is possible, for example as illustrated in the accompanying figures, to downmix spatial audio content 50 to content that has fewer audio sources, for example stereo or mono content.
Different algorithms can be used for downmixing. For example, each audio source Si can be assigned to a left channel or a right channel based on its position pi. The audio sources Si assigned to the left channel are combined (e.g. a weighted summation) to form the left audio source SL. The audio sources Si assigned to the right channel are combined (e.g. a weighted summation) to form the right audio source SR. In some examples, it may be desirable to weight the contribution of different audio sources Si differently, for example, based on position, distance, frequency or some other characteristic such as speech analysis or metadata. An audio source can be excluded by using a zero weighting.
The two audio sources SL, SR can have a fixed position relative to each other and an origin O. The origin O (and the two audio sources SL, SR) can also have a fixed position relative to a real space occupied by the user 2, or, a fixed position relative to the user 2 who can move within the real space occupied by the user 2, or, a fixed position relative to one of the apparatus 10, 20, 30, 32 which can move within the real space occupied by the user 2.
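The following is a sketch of one such stereo downmix 60, in which each source is assigned to a channel by position and combined by weighted summation (the left/right split on the x coordinate and the equal default weights are assumptions, not requirements of this disclosure):

```python
def downmix_to_stereo(sources, positions, weights=None):
    """Downmix N audio sources Si to two audio sources SL, SR.

    sources   : per-source sample values at one time instant
    positions : per-source positions pi as (x, y, z); x < 0 is treated as left
    weights   : optional per-source weights; a zero weight excludes a source
    """
    if weights is None:
        weights = [1.0] * len(sources)
    left = right = 0.0
    for s, (x, _y, _z), w in zip(sources, positions, weights):
        if x < 0.0:
            left += w * s    # weighted summation into the left audio source SL
        else:
            right += w * s   # weighted summation into the right audio source SR
    return left, right

# Three sources; the second source is excluded by a zero weighting.
print(downmix_to_stereo([0.2, 0.5, -0.1],
                        [(-1.0, 0.0, 0.0), (0.5, 1.0, 0.0), (2.0, 0.0, 0.0)],
                        weights=[1.0, 0.0, 1.0]))  # (0.2, -0.1)
```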
Different algorithms can be used for downmixing. For example, the audio sources Si can be combined (e.g. a weighted summation) to form the mono audio source S. In some examples, it may be desirable to weight the contribution of different audio sources Si differently, for example, based on position, distance, frequency or some other characteristic such as speech analysis or metadata. An audio source can be excluded by using a zero weighting.
The audio source S can have a fixed position relative to a real space occupied by the user 2, or, a fixed position relative to the user 2 who can move within the real space occupied by the user 2, or, a fixed position relative to one of the apparatus 10, 20, 30, 32 which can move within the real space occupied by the user 2.
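A correspondingly small sketch of the mono downmix (the equal default weighting is again an assumption):

```python
def downmix_to_mono(sources, weights=None):
    """Combine the audio sources Si into a single mono audio source S."""
    if weights is None:
        weights = [1.0] * len(sources)
    # Weighted summation; a zero weight excludes a source.
    return sum(w * s for w, s in zip(weights, sources))

print(downmix_to_mono([0.2, 0.5, -0.1]))                   # approximately 0.6
print(downmix_to_mono([0.2, 0.5, -0.1], [1.0, 0.0, 1.0]))  # middle source excluded
```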
Reference is now made to
In these examples, the first audio content 12 is full multi-source spatial audio content 50 without downmixing. The second audio content 22 has multiple audio sources. In the example of
In the first state 41, the first audio content 12 is rendered as native audio content, that is in its native form without downmixing, as spatial audio content 50. The second audio content 22 is downmixed to downmixed content 62 and the downmixed content 62 is rendered, in this example as mono audio content.
In the second state 42, the second audio content 22 is rendered as native audio content, that is in its native form without downmixing. In the example of
The first audio content 12 is first spatial audio content 50 associated with first spatial audio information defining variable positions pi of multiple first audio sources Si. In the first state 41 the apparatus is configured to render the first spatial audio content 50 using the first spatial audio information to produce the multiple first audio sources Si at the variable positions pi defined by the first spatial audio information.
At least one of the first audio content 12 and the second audio content 22 is rendered in its native form in the first state 41. At least one of the first audio content 12 and the second audio content 22 is rendered in its native form in the second state 42.
Referring to
In the first state 41 the apparatus 30 is configured to render the first spatial audio content 50 using the first spatial audio information to produce the multiple first audio sources Si at the variable positions pi defined by the first spatial audio information. Thus, the first audio content 12 is rendered in native form as first spatial audio content 50. The second audio content 22 is downmixed to downmixed content 62 and the downmixed content 62 is rendered, in this example as mono audio content (not as native, spatial audio content 50).
In the second state 42 the apparatus is configured to render the second spatial audio content 50 using the second spatial audio information to produce the multiple second audio sources Sj at the variable positions pj defined by the second spatial audio information. Thus, the second audio content 22 is rendered in native form as second spatial audio content 50. The first audio content 12 is downmixed to downmixed content 62 and the downmixed content 62 is rendered, in this example as mono audio content (not as native, spatial audio content 50).
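A compact sketch of this symmetric two-state co-rendering is given below. The helper functions are placeholders for whatever rendering pipeline is actually used, and the symmetric swap applies to this example only; in other examples, such as the one that follows where the second content is stereo, the first content stays spatial in both states.

```python
def co_render(state, first_content, second_content,
              render_native, render_downmixed, downmix):
    """Simultaneously render content from two apparatuses in the given state.

    In the first state the first content is rendered in its native (spatial)
    form and the second content is downmixed 60 before rendering; in the
    second state the roles are swapped.
    """
    if state == "first":
        render_native(first_content)
        render_downmixed(downmix(second_content))
    else:
        render_native(second_content)
        render_downmixed(downmix(first_content))

# Illustrative usage with print-based stand-ins for the real renderers.
co_render("second",
          first_content="first audio content 12",
          second_content="second audio content 22",
          render_native=lambda c: print("render natively:", c),
          render_downmixed=lambda c: print("render downmixed:", c),
          downmix=lambda c: "downmix of " + c)
```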
Referring to
In both the first state 41 and the second state 42, the apparatus 30 is configured to render the first spatial audio content 50 using the first spatial audio information to produce the multiple first audio sources Si at the variable positions pi defined by the first spatial audio information. Thus, the first audio content 12 is rendered as first spatial audio content 50 without downmixing.
In the first state 41, the second audio content 22 is downmixed to downmixed content 62 and the downmixed content 62 is rendered, in this example as mono audio content (not as native, stereo audio content).
In the second state 42, the apparatus 30 is configured to render the second audio content 22 in its native form as stereo content.
The first audio content 12, associated with the first apparatus 10, is first spatial audio content associated with first spatial audio information defining variable positions of multiple first audio sources S10_i and the second audio content, associated with the second apparatus 20, is second spatial audio content associated with second spatial audio information defining variable positions of multiple second audio sources S20_j.
In the first state (
In the second state (
In this example, the user 2 is consuming two different spatial audio contents 12, 22, one from his television 10 and one from his tablet 20. The user 2 hears both audio contents 12, 22, but how they are rendered and which device is used for headtracking is determined based on which content the user is focusing on.
Headtracking for spatial audio can refer to the rendering of audio content as spatial audio content within an audio space that is fixed in real space and through which the user turns and/or moves. The audio space remains in a fixed relationship relative to the first apparatus, and the audio space is moved in response to tracked movement, relative to the first apparatus, of the head-mounted audio output system. Thus, if a user wearing a headset turns to the right, the audio space defined by the headset is turned to the left by the same amount so that it remains fixed in real space. Headtracking can be performed by the head-mounted audio output system 30 detecting its own movement or by an apparatus 10, 20 detecting movement of the user's head or head-mounted audio output system 30. In the latter case, the apparatus 10, 20 can provide headtracking data to the apparatus 30.
In some examples, when the first audio content 12 is rendered, it is rendered as spatial audio content within a first audio space that remains in a fixed relationship relative to the first apparatus 10 associated with the first audio content 12. The first audio space can be moved in response to tracked movement of the head-mounted audio output system so that it remains in a fixed relationship relative to the first apparatus 10.
In some examples, when the second audio content 22 is rendered, it is rendered as spatial audio content within a second audio space that remains in a fixed relationship relative to the second apparatus 20 associated with the second audio content 22. The second audio space can be moved in response to tracked movement of the head-mounted audio output system so that it remains in a fixed relationship relative to the second apparatus 20.
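A minimal two-dimensional sketch of this compensation is given below, under the simplifying assumption that only head yaw is tracked; the coordinate convention (x forward, y to the left) and the function name are assumptions, not part of this disclosure.

```python
import math

def compensate_head_yaw(source_position, head_yaw_rad):
    """Rotate a source position pi by the inverse of the tracked head yaw.

    Positions are (x, y) in a frame with x forward and y to the left of the
    listener; positive yaw is a turn to the left. Rotating every source by
    -head_yaw_rad keeps the audio space fixed in real space (and so fixed
    relative to the apparatus) while the head turns.
    """
    x, y = source_position
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * x - s * y, s * x + c * y)

# A source straight ahead; the user turns 90 degrees to the left.
print(compensate_head_yaw((1.0, 0.0), math.radians(90)))
# -> approximately (0.0, -1.0): the source is now rendered to the user's right.
```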
In
The audio content 22 from the tablet 20 is rendered as a mono object S20 from the direction of the tablet 20. The tablet direction can be determined as the direction the tablet 20 was in prior to the user switching his focus to the television 10. This allows the user 2 to consume both audio contents 12, 22 simultaneously, without them interfering too much with each other.
When the user 2 switches focus towards the tablet 20, the system renders the audio to the user as shown in
The first audio content 12, associated with the first apparatus 10, is first spatial audio content 50 associated with first spatial audio information defining variable positions of multiple first audio sources S10_i and the second audio content 22, associated with the second apparatus 20, is stereo content.
In both the first state (
In the first state (
In the second state 42 (
In this example, the user is consuming spatial audio content from his television 10 and stereo content from his tablet 20. The spatial audio content 50 is always rendered to the user 2 as spatial audio with the front-direction set to the apparatus providing the spatial audio, in this case the television 10. In this case, the apparatus providing the spatial audio performs the head-tracking of the user 2 regardless of which apparatus the user 2 is focusing on. This is because the other apparatus 20 may not have head-tracking available (as it is rendering only stereo content) and also because the spatial audio should stay aligned with the apparatus providing the spatial audio (front-direction always towards it). The tracking apparatus can send data tracking movement of the head-mounted audio output system 30. When the user 2 is focusing on the apparatus providing the spatial audio content, the audio rendering is done in the same way as in the previous embodiment (
There can be two types of tracking. There is dynamic headtracking, which can be used to control spatial audio rendering. This dynamic headtracking can, in some examples, switch between the different apparatuses 10, 20. However, in some examples, a location to which a mono downmix is rendered is based on position tracking of or between the apparatuses 10, 20 that provide the rendered audio content 12, 22. This tracking may not need to switch but can be carried out actively in the background by at least one of the apparatus 10, 20. Each device either performs this secondary tracking or receives information on secondary tracking from the other apparatus 10, 20. While this tracking is not directly audio tracking (headtracking), the result from it can be used in the audio co-rendering to modify the rendering accordingly. For example, the mono source S20 is placed in the position of the second apparatus 20 or the mono source S10 is placed in the position of the first apparatus 10.
Thus, in some examples, the location to which a mono downmix S10, S20 is rendered is based on a relative position between the apparatus 10, 20 or the location and/or orientation of one of the apparatus 10, 20. For example, a position of the mono downmix S10 can track with the position of the apparatus 10 (but not the user 2). For example, the position of the mono downmix S20 can track with the position of the apparatus 20 (but not the user 2). This secondary tracking may not need to switch but can be carried out in the background.
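A small sketch of how the mono downmix could follow the position of the apparatus providing it, assuming both positions are known in a common two-dimensional coordinate frame (all names are hypothetical):

```python
import math

def direction_to_apparatus(listener_position, apparatus_position):
    """Unit direction from the listener to an apparatus.

    The mono downmix (e.g. S20) can then be rendered from this direction, so
    that it tracks the position of the apparatus 20 rather than the user 2.
    """
    dx = apparatus_position[0] - listener_position[0]
    dy = apparatus_position[1] - listener_position[1]
    distance = math.hypot(dx, dy) or 1.0  # guard against co-located positions
    return (dx / distance, dy / distance)

# Listener at the origin; second apparatus 20 two metres to the front-right.
print(direction_to_apparatus((0.0, 0.0), (2.0, -2.0)))
```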
The method 100 comprises at block 102, receiving different audio content associated with different apparatuses (e.g. first audio content 12 associated with a first apparatus 10; second audio content 22 associated with a second apparatus 20).
The method 100 comprises at block 104 simultaneously rendering the received different audio content (e.g. the first audio content 12 and the second audio content 22) to a user 2 via a head-mounted audio output system 32 configured for spatial audio rendering 50. At block 104 (first state) the first audio content is rendered as spatial audio content, and the second audio content is downmixed to downmixed content and the downmixed content is rendered.
At block 106 (second state) the second audio content is rendered without downmixing.
As illustrated in
The processor 34 is configured to read from and write to the memory 36. The processor 34 may also comprise an output interface via which data and/or commands are output by the processor 34 and an input interface via which data and/or commands are input to the processor 34.
The memory 36 stores a computer program 35 comprising computer program instructions (computer program code) that controls the operation of the apparatus 30 when loaded into the processor 34. The computer program instructions, of the computer program 35, provide the logic and routines that enable the apparatus to perform the methods illustrated in
The apparatus 30 therefore comprises:
As illustrated in
Computer program instructions for causing an apparatus to perform at least the following or for performing at least the following:
The computer program instructions may be comprised in a computer program, a non-transitory computer readable medium, a computer program product, a machine-readable medium. In some but not necessarily all examples, the computer program instructions may be distributed over more than one computer program.
Although the memory 36 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
Although the processor 34 is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable. The processor 34 may be a single core or multi-core processor.
References to ‘computer-readable storage medium’, ‘computer program product’, ‘tangibly embodied computer program’ etc. or a ‘controller’, ‘computer’, ‘processor’ etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
As used in this application, the term ‘circuitry’ may refer to one or more or all of the following:
hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
The blocks illustrated in the
Where a structural feature has been described, it may be replaced by means for performing one or more of the functions of the structural feature whether that function or those functions are explicitly or implicitly described.
As used here ‘module’ refers to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.
The above-described examples find application as enabling components of: automotive systems; telecommunication systems; electronic systems including consumer electronic products; distributed computing systems; media systems for generating or rendering media content including audio, visual and audio visual content and mixed, mediated, virtual and/or augmented reality; personal systems including personal health systems or personal fitness systems; navigation systems; user interfaces also known as human machine interfaces; networks including cellular, non-cellular, and optical networks; ad-hoc networks; the internet; the internet of things; virtualized networks; and related software and services.
The term ‘comprise’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning then it will be made clear in the context by referring to “comprising only one . . . ” or by using “consisting”.
In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term ‘example’ or ‘for example’ or ‘can’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus ‘example’, ‘for example’, ‘can’ or ‘may’ refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
Although examples have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the claims.
Features described in the preceding description may be used in combinations other than the combinations explicitly described above.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not.
The term ‘a’ or ‘the’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising a/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use ‘a’ or ‘the’ with an exclusive meaning then it will be made clear in the context. In some circumstances the use of ‘at least one’ or ‘one or more’ may be used to emphasize an inclusive meaning but the absence of these terms should not be taken to infer any exclusive meaning.
The presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features). The equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way. The equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.
In this description, reference has been made to various examples using adjectives or adjectival phrases to describe characteristics of the examples. Such a description of a characteristic in relation to an example indicates that the characteristic is present in some examples exactly as described and is present in other examples substantially as described.
Whilst endeavoring in the foregoing specification to draw attention to those features believed to be of importance it should be understood that the Applicant may seek protection via the claims in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not emphasis has been placed thereon.
Priority application: No. 21201165.4, October 2021, EP (regional).