Examples of the disclosure relate to audio zooming. Some relate to audio zooming with variable latency.
Audio zooming comprises the amplification of audio sources in a direction with respect to audio sources in other directions. Methods that can be used for audio zooming comprise beamforming, spatial filtering, machine learning based methods or any other suitable process.
According to various, but not necessarily all, examples of the disclosure there may be provided an apparatus comprising means for:
The second audio zooming process may provide a higher level of audio zooming than the first audio zooming process.
The first audio zooming process may comprise no audio zooming.
The first audio zooming process may comprise a pass-through mode.
The means may be for using a mix of the first audio zooming process and the second audio zooming process for an audio source that is not within the given distance but is within a further distance of the user device.
The means may be for controlling the combination so that a higher level of audio zooming is applied to audio sources that are further away from the user device.
The means may be for detecting one or more characteristics of the audio source and attenuating the one or more characteristics from non-zoomed audio.
The characteristics may comprise characteristics of human voice.
The means may be for determining an audio source that a user is listening to, wherein determination is based on the user's head position.
The audio zooming may comprise at least one of: beamforming, spatial filtering, machine learning based methods.
According to various, but not necessarily all, examples of the disclosure there may be provided an electronic device comprising an apparatus as claimed in any preceding claim.
According to various, but not necessarily all, examples of the disclosure there may be provided a method comprising:
According to various, but not necessarily all, examples of the disclosure there may be provided a computer program comprising instructions which, when executed by an apparatus, cause the apparatus to perform at least:
While the above examples of the disclosure and optional features are described separately, it is to be understood that their provision in all possible combinations and permutations is contained within the disclosure. It is to be understood that various examples of the disclosure can comprise any or all of the features described in respect of other examples of the disclosure, and vice versa. Also, it is to be appreciated that any one or more or all of the features, in any combination, may be implemented by/comprised in/performable by an apparatus, a method, and/or computer program instructions as desired, and as appropriate.
Some examples will now be described with reference to the accompanying drawings in which:
The figures are not necessarily to scale. Certain features and views of the figures can be shown schematically or exaggerated in scale in the interest of clarity and conciseness. For example, the dimensions of some elements in the figures can be exaggerated relative to other elements to aid explication. Corresponding reference numerals are used in the figures to designate corresponding features. For clarity, not all reference numerals are necessarily displayed in all figures.
Examples of the disclosure relate to audio zooming. Audio zooming can comprise amplifying audio sources that arrive from a particular direction compared to audio sources that do not arrive from that particular direction. This can increase the audibility of the audio sources that have been zoomed for a user compared to the non-zoomed audio. Audio zooming can be useful in scenarios such as a crowded audio scene where there are lots of audio sources but a user wants to focus on a particular audio source, or if the audio source that a user wants to focus on is far away or in any other suitable scenario.
Any suitable process can be used to implement audio zooming. For instance, audio zooming could be implemented using beamforming, spatial filtering, machine learning based methods or any other suitable process or combinations of processes.
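As a concrete illustration of one such building block, a minimal delay-and-sum beamformer can be sketched as follows. The two-microphone capture, the 440 Hz test tone and the integer-sample steering delays are hypothetical values chosen for the sketch; practical implementations would use fractional delays and adaptive weighting.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Advance each channel by its (integer) steering delay and average.

    Channels aligned on the target direction add coherently; sound from
    other directions adds incoherently and is relatively attenuated.
    """
    num_mics = mic_signals.shape[0]
    out = np.zeros(mic_signals.shape[1])
    for channel, delay in zip(mic_signals, delays_samples):
        out += np.roll(channel, -delay)  # undo the propagation delay
    return out / num_mics

# Hypothetical two-microphone capture: the same 440 Hz source arrives
# three samples later at the second microphone.
fs = 16000
t = np.arange(256) / fs
source = np.sin(2 * np.pi * 440 * t)
mics = np.stack([source, np.roll(source, 3)])
zoomed = delay_and_sum(mics, [0, 3])  # realigns the channels on the source
```

Because the delays exactly match the simulated propagation, the averaged output recovers the source signal; with mismatched delays the channels would partially cancel instead.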
Some audio zooming methods can introduce high levels of latency into the audio signals. This can be undesirable.
The apparatus 101 can be configured to implement examples of the disclosure. The apparatus 101 can be configured to enable audio zooming according to examples of the disclosure and/or to perform any other suitable functions.
In the example of
As illustrated in
The processor 103 is configured to read from and write to the memory 105. The processor 103 can also comprise an output interface via which data and/or commands are output by the processor 103 and an input interface via which data and/or commands are input to the processor 103.
The memory 105 is configured to store a computer program 107 comprising computer program instructions (computer program code 109) that control the operation of the apparatus 101 when loaded into the processor 103. The computer program instructions, of the computer program 107, provide the logic and routines that enable the apparatus 101 to perform the methods illustrated in
The apparatus 101 therefore comprises: at least one processor 103; and
As illustrated in
The computer program 107 comprises computer program instructions, which when executed by an apparatus, cause the apparatus 101 to perform at least the following:
The computer program instructions can be comprised in a computer program 107, a non-transitory computer readable medium, a computer program product, or a machine readable medium. In some but not necessarily all examples, the computer program instructions can be distributed over more than one computer program 107.
Although the memory 105 is illustrated as a single component/circuitry it can be implemented as one or more separate components/circuitry some or all of which can be integrated/removable and/or can provide permanent/semi-permanent/dynamic/cached storage.
Although the processor 103 is illustrated as a single component/circuitry it can be implemented as one or more separate components/circuitry some or all of which can be integrated/removable. The processor 103 can be a single core or multi-core processor.
References to “computer-readable storage medium”, “computer program product”, “tangibly embodied computer program” etc. or a “controller”, “computer”, “processor” etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
As used in this application, the term “circuitry” can refer to one or more or all of the following:
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
The blocks illustrated in
The apparatus can be provided in an electronic device, for example, a mobile terminal, according to an example of the present disclosure. It should be understood, however, that a mobile terminal is merely illustrative of an electronic device that would benefit from examples of implementations of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure to the same. While in certain implementation examples, the apparatus can be provided in a mobile terminal, other types of electronic devices, such as, but not limited to: mobile communication devices, hand portable electronic devices, wearable computing devices, portable digital assistants (PDAs), pagers, mobile computers, desktop computers, televisions, gaming devices, laptop computers, cameras, video recorders, GPS devices and other types of electronic systems, can readily employ examples of the present disclosure. Furthermore, devices can readily employ examples of the present disclosure regardless of their intent to provide mobility.
In the example of
The microphones 207 can comprise any means that can be configured to detect sound signals and convert these into audio signals. The audio signals can comprise electrical signals that are output by the microphones 207. The sound signals that are detected by the microphones 207 can originate from one or more audio sources.
The microphones 207 can be positioned relative to each other so as to provide audio signals that can be used for spatial audio. The audio signals provided by the microphones 207 comprise information about the spatial properties of the sound scene captured by the microphones 207.
In the example of
The electronic device 203 is configured so that the audio signals from the microphones 207, or any other suitable source, can be provided to the controller 101 to enable processing of the audio signals.
The controller 101 can be an apparatus as shown in
The system 201 is configured so that the processed audio signals can be provided from the controller 101 to the peripheral device 205. The processed audio signals could be transmitted via a wireless connection or via any other suitable type of communication link.
The peripheral device 205 can comprise headphones, a headset or any other suitable device that comprises loudspeakers 209. The loudspeakers 209 can comprise any means that can be configured to convert electrical signals to output sound signals. In the example of
In the example of
In the example of
In the example of
The method comprises, at block 301, determining if an audio source is within a given distance of a user device. The user device could be an electronic device 203 or a peripheral device 205 as shown in
At block 303 the method comprises selecting an audio zooming process for the audio source. The audio zooming process is selected based on whether or not the audio source is within the given distance of the user device. If the audio source is within the given distance of the user device then a first audio zooming process is used and if the audio source is not within the given distance of the user device a second audio zooming process is used. For instance, the first audio zooming process can be used if the audio source is close to the user device and the second audio zooming process can be used if the audio source is not close to the user device.
The first and second audio zooming processes are different from each other. The second audio zooming process has a higher latency than the first audio zooming process. The second audio zooming process can provide a higher level of audio zooming than the first audio zooming process. The higher level of audio zooming can increase the audibility of the zoomed audio compared to the non-zoomed audio by a larger amount. In some examples the first audio zooming process could comprise no audio zooming. The first audio zooming process could comprise a pass-through mode.
The respective audio zooming processes can comprise at least one of: beamforming, spatial filtering, machine learning based methods or any other suitable processes or combinations of processes.
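The distance-based selection described above can be sketched, for illustration, as a simple threshold test. The 5 m threshold and the process names are assumptions made for this sketch, not values prescribed by the disclosure.

```python
def select_zoom_process(distance_m, near_threshold_m=5.0):
    """Choose between the two audio zooming processes by source distance.

    Nearby sources leak through the headphones acoustically (or via a
    pass-through mode), so a low-latency first process avoids double
    talk; distant sources can tolerate the higher latency of a stronger
    second process.  The 5 m threshold is an assumed example value.
    """
    if distance_m <= near_threshold_m:
        return "first_low_latency"
    return "second_high_latency"
```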
In some examples the audio zooming process that is used to process the audio signals can comprise a combination of the first audio zooming process and the second audio zooming process. For instance, an audio source could be located at an intermediate distance from the user device: not close to the user device, but not far away either. Such an audio source might not be within the given distance of the user device but could still be within a further distance of the user device. In such cases the way in which the respective audio zooming processes are combined can be controlled so that a higher level of audio zooming is applied to audio sources that are further away from the user device.
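Such a distance-controlled combination could, for illustration, be implemented as a linear crossfade between the outputs of the two processes. The near and far thresholds (5 m and 10 m) are assumed example values, not values prescribed by the disclosure.

```python
import numpy as np

def mix_zoom_outputs(low_latency_out, high_latency_out, distance_m,
                     near_m=5.0, far_m=10.0):
    """Blend the outputs of the two audio zooming processes by distance.

    The weight of the stronger, higher-latency zoom rises linearly from
    0 at near_m to 1 at far_m, so audio sources further from the user
    device receive a higher level of audio zooming.
    """
    w = float(np.clip((distance_m - near_m) / (far_m - near_m), 0.0, 1.0))
    return (1.0 - w) * low_latency_out + w * high_latency_out

# Hypothetical process outputs for a source at an intermediate 7.5 m:
low = np.zeros(4)    # first (low-latency) process output
high = np.ones(4)    # second (high-latency) process output
mixed = mix_zoom_outputs(low, high, 7.5)  # equal-weight blend
```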
In some examples the audio zooming process can comprise attenuating unwanted characteristics from the audio signals. In such examples, the apparatus 101 or controller can be configured to detect one or more characteristics of the audio source and attenuate the one or more characteristics from non-zoomed audio. The characteristics could comprise characteristics of human voices or any other suitable types of audio. The characteristics could comprise frequency bands or any other suitable characteristics.
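One illustrative way to attenuate such characteristics from the non-zoomed audio is to scale down the corresponding frequency bins. The FFT-mask approach, the band edges and the test tone below are assumptions made for this sketch; a practical system would track the zoomed source's spectrum over time and use smoother, time-varying filtering.

```python
import numpy as np

def attenuate_band(audio, fs, band_hz, gain=0.1):
    """Scale down the FFT bins of `audio` that fall inside `band_hz`.

    For example, the band occupied by a zoomed talker's voice can be
    attenuated from the non-zoomed mix so that the same voice is not
    heard loudly twice.
    """
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), 1.0 / fs)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    spec[in_band] *= gain
    return np.fft.irfft(spec, n=len(audio))

# Hypothetical example: attenuate a 400-600 Hz "voice" band from a
# 500 Hz tone standing in for the non-zoomed audio.
fs = 8000
t = np.arange(1024) / fs
non_zoomed = np.sin(2 * np.pi * 500 * t)
quieter = attenuate_band(non_zoomed, fs, (400.0, 600.0))
```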
In some examples the audio zooming can be applied so that it enhances an audio source that is of interest to a user of the user device. For instance, if there is more than one audio source in the area around the user then it can be determined which audio source the user is listening to so that the audio zooming can enhance that audio source. In some examples the audio source that a user is listening to can be determined based on the user's head position.
In the example of
The different audio sources 405 are located at different distances from the user 401 and their user device 403. In this example the first audio source 405-1 is located closest to the user 401, the third audio source 405-3 is located furthest away from the user 401 and the second audio source 405-2 is located at an intermediate distance between the first audio source 405-1 and the third audio source 405-3.
The user device 403 can be configured to determine which of the audio sources 405 the user 401 is interested in. This can be based on the direction that the user 401 is facing or by any other suitable means. The direction that a user 401 is facing can be determined based on the front direction of the headphones or by using any other suitable criteria. In some cases the direction of an audio source 405 can be determined from microphone signals, for instance by using TDOA (time difference of arrival) from multiple microphones. In such cases the audio source 405 of interest can be assumed to be the loudest audio source. In some cases a video preview could be obtained. In such cases the direction of the video zoom can be assumed to be the direction for audio sources of interest 405 and audio zoom can be applied in that direction.
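A TDOA estimate of the kind mentioned above can, for illustration, be obtained from the peak of the cross-correlation between two microphone signals. The plain (non-generalized) correlation and the synthetic noise signals below are assumptions for this sketch; real systems often prefer generalized cross-correlation (GCC-PHAT) for robustness in reverberant scenes.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival between two microphones.

    Returns a positive value (in seconds) when sig_a arrives later than
    sig_b, taken from the peak of the cross-correlation.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

# Hypothetical test signal: white noise reaching microphone A five
# samples after microphone B.
rng = np.random.default_rng(0)
mic_b = rng.standard_normal(512)
mic_a = np.concatenate([np.zeros(5), mic_b[:-5]])
tdoa = estimate_tdoa(mic_a, mic_b, 16000)
```

The sign and magnitude of the estimated delay across microphone pairs can then be mapped to a direction given the array geometry.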
If it is determined that the user is interested in the first audio source 405-1 the user device 403 can be configured to apply audio zooming processes to captured audio signals so that the user 401 can hear the first audio source 405-1 more clearly.
To apply the appropriate audio zooming process the distance between the user device 403 and the first audio source 405-1 can be determined. Any suitable processes or means can be used to determine the distance between the user device 403 and the respective audio sources 405. For instance, Lidar can be used to determine the distance of objects in the direction in which a user 401 is facing. Camera focus distance can be used in some examples. The use of camera focus can be useful for implementations that make use of video preview. In some examples a distance can be estimated from microphone signals alone using methods based on multiple microphone signal coherence. These methods using microphone signals do not always provide high accuracy but would be usable for implementing examples of the disclosure.
In some examples it can be determined if the first audio source 405-1 is close to the user device 403 or not close to the user device 403. This can determine whether or not the first audio source 405-1 is within a given distance of the user device 403.
In this case it would be determined that the first audio source 405-1 is within a given distance of the user device 403 because the first audio source 405-1 is close to the user 401. Close to the user 401 could mean within 5 m of the user 401 or within any other suitable distance range. The distance range at which an audio source 405 is considered to be close to the user 401 can depend upon the type of audio source 405, the volume of the audio source 405, the ambient noise levels and any other relevant factors.
When the audio source 405-1 is close to the user 401 the sound from the audio source 405-1 typically leaks through the user device 403 acoustically and/or through a pass-through mode. This can result in the user 401 hearing the first audio source 405-1 directly and also hearing the amplified audio signals through the user device 403. This can lead to double talk, which is annoying for the user 401.
In this case double talk refers to a user 401 hearing the same audio twice with a delay between the two presentations of the audio. The first presentation is from real world sounds that the user 401 hears directly from real world audio sources 405, leaking through headphones, or from a headphones pass-through mode (for example a transparency mode). The second presentation is from a user device 403 playing audio through the headphones. The headphones could be part of the user device 403 or could be coupled to a user device 403 such as a mobile phone.
To reduce double talk a first audio zooming process is selected for use in processing the audio signals originating from the first audio source 405-1. The first audio zooming process has a low latency so as to reduce the effects of double talk. The low latency could be less than 20 ms, or even less than 10 ms. The first audio zooming process is indicated by the arrow 407 in
If it is determined that the user is interested in the third audio source 405-3 the user device 403 can be configured to apply audio zooming processes to captured audio signals so that the user 401 can hear the third audio source 405-3 more clearly.
A similar process can be applied to enable the appropriate audio zooming process to be used for the third audio source 405-3. In examples of the disclosure a distance between the user device 403 and the third audio source 405-3 can be determined. As with the case of the first audio source 405-1, it can be determined if the third audio source 405-3 is close to the user device 403 or not close to the user device 403. This can determine whether or not the third audio source 405-3 is within a given distance of the user device 403.
In this case it would be determined that the third audio source 405-3 is not within a given distance of the user device 403 because the third audio source 405-3 is far away from the user 401. Far away from the user 401 could mean being more than 10 m away from the user 401 or beyond any other suitable distance range. The distance range at which an audio source 405 is considered to be far away from the user 401 can depend upon the type of audio source 405, the volume of the audio source 405, the ambient noise levels and any other relevant factors.
When the audio source 405-3 is far away from the user 401 the sound from the audio source 405-3 will not typically leak through the user device 403 acoustically and/or through a pass-through mode. This means that the user 401 will not hear the third audio source 405-3 directly and so will not be affected by double talk.
As the third audio source 405-3 is not affected by double talk, a second audio zooming process is selected for use in processing the audio signals originating from the third audio source 405-3. The second audio zooming process is indicated by the arrow 409 in
The second audio zooming process has a high latency because audio from the third audio source 405-3 is not affected by double talk. The high latency could be higher than the latency of the first audio zooming process. The high latency could be more than 50 ms. The higher latency introduced by the second audio zooming process can introduce a delay that is perceptible to the user 401. This would not be problematic because users 401 expect a delay in sounds that come from far away.
The second audio zooming process can introduce a higher level of audio zooming compared to the first audio zooming process. This can provide a larger amplification of the audio from the third audio source 405-3 compared to the first audio source 405-1. The larger amplification might be needed because audio sources 405 that are further away would appear quieter than audio sources 405 that are closer. This larger amplification may require heavier audio processing, which introduces more latency into the audio processing. That is why the second audio zooming process is used in this case, where the third audio source 405-3 is far away from the user 401.
In the example described above there is a binary audio zooming system so that a first audio zoom process is applied if the user 401 is close to the audio source 405 and a second audio zoom process is applied if the user 401 is not close to the audio source 405. In some examples there could be multiple different levels of audio zooming that could introduce different levels of latency based on the distance between the user 401 and the audio source 405. For instance, in the example of
In the example of
In this example the first audio source 405-1 is located close to the user 401 and the third audio source 405-3 is located far away from the user 401, on the other side of the road from the user 401. The road between the user 401 and the third audio source 405-3 can make the third audio source 405-3 much harder to hear without the use of audio zooming.
If it is determined that the user is interested in the first audio source 405-1, that is close to the user 401, the user device 403 can be configured to apply audio zooming processes to captured audio signals so that the user 401 can hear the first audio source 405-1 more clearly. The audio zooming process that is applied can be a low latency process because the user 401 can hear the first audio source 405-1 leaking through the user device 403 and/or through pass through mode.
In addition to the low latency or pass-through process the audio zooming process can also comprise the attenuation of sounds that are similar to the audio source 405-1 of interest. This can help to remove the effects of double talk.
To enable sounds that are similar to the audio source 405-1 to be attenuated characteristics of the audio source 405-1 can be identified. Audio sources having similar characteristics can then be attenuated from the non-zoomed audio. For instance, in the example of
The electronic device 203 can stream audio to a peripheral device 205. In this example the peripheral device comprises headphones that are being worn by the user. The headphones can be configured to present audio to the user 401.
In the example of
In the example of
However, the audio that is processed using the high latency methods might not be appropriate to playback to the user 401 over the headphones while the video is being captured because this could result in double talk. Therefore, to prevent this the audio that is presented while the video is being captured can be processed with an audio zooming method that is, at least in part, selected based on the distance between the electronic device 203 and the audio source 405 in the video.
In this example there are two audio sources 405-2, 405-3 in front of the user 401. The audio source that is being captured in the video can be determined based on the orientation of the electronic device 203, the zooming of the video capture and/or any other suitable factor.
Once the audio source 405 that is being captured in the video has been determined the distance to that audio source 405 can be determined. The audio zooming process that is to be used in the audio preview can then be selected based on that distance.
Audio sources 405 that are not in the video can be processed using low latency processing because the user 401 can hear these audio sources 405 leaking through the user device 403 and/or through a pass-through mode for the audio preview. This can alert the user 401 to the presence of those audio sources 405. In the example of
The zoomed audio for the audio sources of interest and the ambient audio can then be mixed to be presented to the user 401 for preview with the video images. This can generate a mix that is close to the audio that would be created for the video while keeping the user 401 aware of other audio sources in the vicinity.
Examples of the disclosure therefore provide the benefit that audio zooming can be used for audio scenes having multiple audio sources 405 without introducing a problematic delay.
In the examples shown in
The zoomed audio can be presented alone or the zoomed audio can be mixed with ambient audio. The ambient audio can be processed using low latency processing because the user 401 can hear the ambient audio leaking through the user device 403 and/or through pass through mode. The ambient audio could be direct microphone signals or substantially direct microphone signals. Some stereo or low latency spatial processing, or any other suitable type of processing, can be applied to the microphone signals to obtain the ambient audio.
The controller or apparatus 101 can also be configured to perform additional audio processing that is not described here. Such processing can comprise, automatic gain control, compression, noise cancellation, wind noise cancellation, equalization, or any other suitable type of processing.
The above-described examples find application as enabling components of: automotive systems; telecommunication systems; electronic systems including consumer electronic products; distributed computing systems; media systems for generating or rendering media content including audio, visual and audio visual content and mixed, mediated, virtual and/or augmented reality; personal systems including personal health systems or personal fitness systems; navigation systems; user interfaces also known as human machine interfaces; networks including cellular, non-cellular, and optical networks; ad-hoc networks; the internet; the internet of things; virtualized networks; and related software and services.
The term ‘comprise’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning then it will be made clear in the context by referring to “comprising only one . . . ” or by using “consisting”.
In this description, the wording ‘connect’, ‘couple’ and ‘communication’ and their derivatives mean operationally connected/coupled/in communication. It should be appreciated that any number or combination of intervening components can exist (including no intervening components), i.e., so as to provide direct or indirect connection/coupling/communication. Any such intervening components can include hardware and/or software components.
As used herein, the term “determine/determining” (and grammatical variants thereof) can include, not least: calculating, computing, processing, deriving, measuring, investigating, identifying, looking up (for example, looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (for example, receiving information), accessing (for example, accessing data in a memory), obtaining and the like. Also, “determine/determining” can include resolving, selecting, choosing, establishing, and the like.
In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term ‘example’ or ‘for example’ or ‘can’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus ‘example’, ‘for example’, ‘can’ or ‘may’ refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example.
Although examples have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the claims.
Features described in the preceding description may be used in combinations other than the combinations explicitly described above.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not.
The term ‘a’, ‘an’ or ‘the’ is used in this document with an inclusive not an exclusive meaning. That is, any reference to X comprising a/an/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use ‘a’, ‘an’ or ‘the’ with an exclusive meaning then it will be made clear in the context. In some circumstances the use of ‘at least one’ or ‘one or more’ may be used to emphasize an inclusive meaning but the absence of these terms should not be taken to imply any exclusive meaning.
The presence of a feature (or combination of features) in a claim is a reference to that feature or (combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features). The equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way. The equivalent features include, for example, features that perform substantially the same function, in substantially the same way to achieve substantially the same result.
In this description, reference has been made to various examples using adjectives or adjectival phrases to describe characteristics of the examples. Such a description of a characteristic in relation to an example indicates that the characteristic is present in some examples exactly as described and is present in other examples substantially as described.
The above description describes some examples of the present disclosure however those of ordinary skill in the art will be aware of possible alternative structures and method features which offer equivalent functionality to the specific examples of such structures and features described herein above and which for the sake of brevity and clarity have been omitted from the above description. Nonetheless, the above description should be read as implicitly including reference to such alternative structures and method features which provide equivalent functionality unless such alternative structures or method features are explicitly excluded in the above description of the examples of the present disclosure.
Whilst endeavoring in the foregoing specification to draw attention to those features believed to be of importance it should be understood that the Applicant may seek protection via the claims in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not emphasis has been placed thereon.
Priority application: 22210560.3, filed Nov 2022, EP (regional).