The present disclosure is generally related to methods and systems for hybrid meeting spaces, and more specifically to the use of audio in situations where, in a certain meeting space, multiple devices may be connecting to a communications and collaboration platform.
Meeting spaces may be equipped with specific audio peripherals to facilitate hybrid meetings. Room microphones and speakers are used, for example, to capture and emit sound in the meeting space. By using such equipment, one can ensure a high-quality audio experience that is consistent and reliable for the meeting participants who are in the meeting space.
Currently, the audio signals that are input/output from these peripherals are typically connected to a single instance (login) of a Unified Communications & Collaboration (UC&C) tool. When sound is captured from a room microphone for instance, this signal will thus be sent to a single UC&C instance, regardless of the source of that sound.
Features in the embodiments disclosed herein may provide systems and methods for mediating an audio source during a meeting on a communications and collaboration platform, for example, for a group of participants including multiple participants in a meeting room.
In one example embodiment, the present disclosure describes a system for mediating an audio source for a meeting on a communications and collaboration platform. The system includes an audio source detection module configured to identify at least one active audio source, an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance, and an audio control module. The audio control module is configured to determine at least one audio signal device from a plurality of devices to capture audio from the active audio source, and configure an audio system to manipulate an active audio signal from the audio signal device via the mapped instance. In some cases, the audio control module is further configured to, upon identifying a second active audio source, determine a second mapped instance corresponding to the second active audio source.
In another example embodiment, the present disclosure describes a method for mediating an audio source for a meeting on a communications and collaboration platform. The method includes identifying at least one active audio source, mapping the active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance, determining at least one audio signal device from a plurality of devices for the active audio source, configuring an audio system to manipulate an active audio signal from the audio signal device via the mapped instance, and in some cases, upon identifying a second active audio source, determining a second mapped instance corresponding to the second active audio source.
In another example embodiment, the present disclosure describes a system for mediating an audio source for a meeting on a communications and collaboration platform. The system includes an audio source detection module configured to identify at least one active audio source, an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance, and an audio control module configured to determine at least one audio signal source from a plurality of devices for the active audio source, and configure an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
In another example embodiment, the present disclosure describes a method for mediating an audio source for a meeting on a communications and collaboration platform. The method includes identifying at least one active audio source, mapping the active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance, determining at least one audio signal source from a plurality of devices for the active audio source, and configuring an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
In another example embodiment, the present disclosure describes a system for mediating an audio source for a meeting on a communications and collaboration platform. The system includes an audio source detection module configured to identify at least one active audio source, an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance, and an audio control module. The audio control module is configured to determine at least one audio signal device from a plurality of devices to capture audio from the active audio source, and configure an audio system to manipulate an active audio signal from the audio signal device via the mapped instance. The system further includes one or more user devices and one or more peripheral devices coupled to the user devices.
In another example embodiment, the present disclosure describes an electronic meeting tool for mediating an audio source for a meeting on a communications and collaboration platform. The tool includes one or more peripheral devices adapted to couple one or more user devices to the communications and collaboration platform. The peripheral devices are further configured to: receive sensing data from an audio device, manipulate the sensing data, and send the sensing data to a base unit to determine the audio source.
The accompanying drawings illustrate various embodiments of systems, methods, and embodiments of various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g. boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive embodiments are described with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles.
The present disclosure provides a detailed and specific description that refers to the accompanying drawings. The drawings and specific descriptions of the drawings, as well as any specific or alternative embodiments discussed, are intended to be read in conjunction with the entirety of this disclosure. The system and methods may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided by way of illustration only and so that this disclosure will be thorough and complete and will fully convey understanding to those skilled in the art.
References are made to the accompanying drawings that form a part of this disclosure and which illustrate embodiments in which the systems and methods described in this specification may be practiced.
Embodiments of the present disclosure will be described more fully hereafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that as used herein and in the appended claims, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein may be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.
Additionally, the present disclosure may be described herein in terms of functional block components and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. These various operations, functions, or actions may, for example, correspond to software, program code, or program instructions executable by a processor that causes the functions to be performed. Although illustrated as discrete blocks, obvious modifications may be made, e.g., two or more of the blocks may be re-ordered; further blocks may be added; and various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation.
As referenced herein, “communications and collaboration platform” may refer to an application realized by any number of hardware and/or software components configured to enable a group of participants or users to communicate and/or collaborate in real time, near real time, and/or for non-real-time applications (e.g., recordings, transcriptions, etc.). A communications and collaboration platform may provide a range of communication and collaboration tools that allow the group of participants or users to communicate and collaborate in real time or non-real time. One example of a communications and collaboration platform is a Unified Communications & Collaboration (UC&C) tool.
As referenced herein, “instance”, “instance/login” or “instance or login” of or connected to a communications and collaboration platform may refer to any connections/services related to or provided by a communications and collaboration platform. Such a connection/service may relate, for example, to a virtual space, a representation, a presence, an identification, a history, and any associated services provided by the communications and collaboration platform for devices/applications connected to or supported by the communications and collaboration platform. It is to be understood that an instance, an instance/login, or an instance or login described herein may relate to a general source/sink of audio data from a producer/consumer of audio data.
As referenced herein, “manipulate” or “manipulating” signals or audio signals may refer to processing and/or routing the signals via an audio system, as well as additional operations such as editing, modifying, transforming, or changing the signals via the audio system.
When there are multiple UC&C instances within a meeting space, it may not be sufficient to route an audio signal to a single UC&C instance, regardless of the source of that sound. In a previous solution, any audio signal that is captured or emitted from the audio peripherals is routed to the single UC&C instance, regardless of the audio source. For example, if a participant logs into the UC&C tool on a personal device within the meeting space, this UC&C instance will not be able to use the audio peripherals. Instead, the user will typically mute this device's microphone and speaker. By doing so, however, this user's UC&C instance may not get proper highlighting in a user interface of the UC&C tool because no audio is detected. In addition, since UC&C tools typically prioritize users based on their audio (to ensure people who are talking are in view), when no audio is detected, such prioritization techniques may not work as expected for devices that do not have access to the room peripherals.
For example, as seen in
Another problem with such a solution is the conflict of having multiple audio systems in the same space, which can result in echoes, noise, and distortions. One typical approach in prior solutions is to disable all but one audio system. When room audio is available, for example, one would typically disable all other systems, as the room audio system typically offers better quality. This coordination of multiple audio systems is, however, a frequent source of issues: users are often required to act themselves, and such manual coordination is prone to errors.
Embodiments described herein can solve the above identified problems related to previous solutions. Methods and systems are disclosed herein for mediating an audio source for a hybrid meeting on a communications and collaboration platform. At least one active audio source can be identified. The active audio source can be mapped to one or more instances connected to the communications and collaboration platform to determine a mapped instance. At least one audio signal source from multiple devices can be determined to capture audio from the active audio source. An audio system can be configured to route an active audio signal from the audio signal source via the mapped instance.
In one embodiment, a group of users at the same meeting space can be represented by more than one remote presence (e.g. by means of multiple instances or logins of a communications and collaboration platform). The users can have an audio signal routed through an audio device that is most relevant. This is in contrast to the previous solution of routing the audio signal statically to a fixed device. By doing so, in an embodiment, the issue in which the incorrect instance or login is highlighted and prioritized may be mitigated, and the requirement to manually coordinate multiple audio systems may be removed.
In
The meeting platforms 200 and 250 may further include two other participants 202, 203 and 252, 253, respectively, e.g., non-host users/participants who are also in the meeting space, each with their own device that has a UC&C instance/login. In an example embodiment, one of the participants 203 and 253 is talking, which is symbolized by the text balloon.
In such a situation, as illustrated in
As seen in
In an embodiment, all the laptops may be configured to use a ‘virtual’ audio system that adapts according to who is speaking. Accordingly, a single process may be offered that may not require users to change settings depending on their role in the meeting. Instead, a fixed configuration may be used and typically no user interactions are needed to configure this. This is an improvement over the prior solutions by being able to identify the proper speaker/participant during a meeting.
The system 300 includes an audio source detection module 305 configured to identify at least one active audio source, for example, in a physical meeting space. The audio source detection module 305 can be configured to receive at least one of audio data, video data, or metadata 301 from multiple devices in the meeting space. The devices may include, for example, devices used by participants in the meeting space such as a portable computing device (e.g., a laptop), audio/video peripherals or devices associated with a portable computing device, room audio/video peripherals (e.g., a room microphone, a room speaker, etc.), a desktop computing device, a base unit of a communications network, etc. The audio source detection module 305 can be configured to determine, based on the received audio data, video data, or metadata 301, an active audio device from the multiple devices for the identified active audio source.
The audio source detection module 305 may include a detection aggregator to aggregate the received audio data, video data, or metadata into a consensus to determine the active audio source. In some cases, the detection aggregator may be a centralized detection aggregator hosted by a base unit or device in the meeting space. In some cases, the detection aggregator may be distributed among at least some of the devices. It is to be understood that a base unit, in various embodiments described herein, such as depicted in
In one example embodiment, the audio source detection module 305 can identify an active audio source when a participant is talking in front of a device. For example, when the participant is using a laptop for attending the meeting, the audio source detection module 305 can detect whether the participant is talking in front of the laptop by analysing a video feed of a webcam of the laptop to estimate whether the person in front of the laptop is talking. The audio source detection module 305 may combine the video feed with an audio feed, for example from the laptop microphone, to increase confidence of the detection. By aggregating results of the different devices (e.g., webcam, laptop microphone, etc.), the audio source detection module 305 can create an audio source detector.
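By way of a purely illustrative sketch, the per-device cues described above might be fused as follows; the function name, inputs, and linear weighting are hypothetical choices rather than a definitive implementation:

```python
# Hypothetical fusion of per-device talking cues (illustrative sketch only).

def fuse_talking_confidence(video_talking_prob: float,
                            audio_vad_prob: float,
                            video_weight: float = 0.6) -> float:
    """Combine a webcam-based talking estimate with a microphone-based
    voice-activity estimate into one confidence value in [0, 1]."""
    audio_weight = 1.0 - video_weight
    return video_weight * video_talking_prob + audio_weight * audio_vad_prob

# Example: strong visual cue from the webcam, moderate cue from the microphone.
confidence = fuse_talking_confidence(video_talking_prob=0.9, audio_vad_prob=0.6)
print(f"device talking confidence: {confidence:.2f}")  # 0.78
```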
In one example embodiment, the audio source detection module 305 can identify an active audio source by analysing signals from various devices. For example, when participants are using different microphones (e.g., laptop microphones) in a meeting space, the audio source detection module 305 can detect and analyse the signals from the microphones in terms of volume, reverb amount and other audio properties. By analysing the signals, the audio source detection module 305 can create an additional estimation of which microphone is nearest to the active audio source (e.g., a speaker).
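A minimal sketch of such an estimation is shown below, assuming per-microphone level and reverberation measurements are available; the scoring heuristic and all names are hypothetical:

```python
# Illustrative nearest-microphone estimation from simple audio properties.

def estimate_nearest_microphone(levels_db: dict[str, float],
                                reverb_ratio: dict[str, float]) -> str:
    """Pick the microphone most likely closest to the active source.
    A louder direct level and a lower reverberant-to-direct ratio both
    suggest proximity; the weighting below is an assumed heuristic."""
    def score(mic: str) -> float:
        return levels_db[mic] - 10.0 * reverb_ratio[mic]
    return max(levels_db, key=score)

# Example: laptop microphone "D1" is loud and dry; "D2" is quiet and reverberant.
nearest = estimate_nearest_microphone(
    levels_db={"D1": -20.0, "D2": -32.0},
    reverb_ratio={"D1": 0.2, "D2": 0.7},
)
print(nearest)  # "D1"
```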
The system 300 further includes an instance detection module 310 configured to map the active audio source 303 identified by the audio source detection module 305 to one or more instances 325 connected to the communications and collaboration platform to determine a mapped instance 307. In some cases, the mapped instance 307 may be an existing instance of the communications and collaboration platform. In some cases, the mapped instance 307 may be a newly created or activated instance which can be triggered by the detection of the active audio source 303. In an embodiment, an instance can be created or activated when an active audio source is not associated with one of the existing instances, or not associated with an active audio device. For example, in an embodiment, when a participant is detected to be talking without an active device, the system 300 may create a new instance to represent the participant in the communications and collaboration platform.
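The create-or-activate behaviour described above can be sketched as follows; the platform client and its create_instance call are assumed for illustration only:

```python
# Illustrative instance lookup with create-on-demand (all names hypothetical).

class InstanceDetectionModule:
    def __init__(self, platform):
        self.platform = platform          # assumed UC&C platform client
        self.source_to_instance = {}      # audio source id -> instance

    def map_source(self, source_id: str):
        """Return the instance mapped to this audio source, creating a new
        instance when the source is not associated with an existing one,
        e.g., a participant talking without an active device."""
        if source_id not in self.source_to_instance:
            self.source_to_instance[source_id] = self.platform.create_instance(
                label=f"guest:{source_id}")
        return self.source_to_instance[source_id]
```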
The instance detection module 310 can determine whether the identified active audio source is associated with the active audio device. When the identified active audio source is associated with the active audio device, the instance detection module 310 can designate one of the instances 325 associated with the active audio device as the mapped instance. In some cases, the mapped instance of the communications and collaboration platform is associated with a base unit or room system (e.g., a desktop computing device or other suitable devices connected to the communications and collaboration platform) in the meeting space and connected to a communications network. It is to be understood that a base unit, in various embodiments described herein, such as depicted in
The system 300 further includes an audio control module 315 configured to determine at least one audio signal source from multiple devices in the meeting space for the identified active audio source. The audio control module 315 can further configure a configurable audio system 320 in the meeting space to process and/or route an active audio signal from the determined audio signal source via the mapped instance, and, upon identifying a new active audio source, determine a new mapped instance corresponding to the new active audio source. It is to be understood that when an audio system is configured to process and/or route an active audio signal, the audio system can perform various functions on the active audio signal, including, for example, signal routing, signal processing, signal controlling, etc., for various data signals including, for example, audio stream(s) from device(s) to instance(s), audio stream(s) from instance(s) to device(s), audio stream(s) that are exchanged among devices connected to the audio system, etc.
The audio control module 315 can compare audio signals from at least some of the devices to determine the audio signal source. In some cases, one or more room audio peripherals such as, for example, a room microphone, may be designated as the audio signal source. In some cases, the audio signal source may include an audio device associated with a portable computing device such as, for example, a laptop of a participant. The audio control module 315 may further configure the audio system 320 to mute the audio devices except for the determined audio signal source/device.
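A short sketch of this mute-all-but-one configuration follows; the audio_system object and its set_muted method are assumptions used for illustration:

```python
# Illustrative exclusive-capture configuration: mute every capture device
# except the determined audio signal source.

def apply_exclusive_capture(audio_system, devices: list[str], source: str) -> None:
    for device in devices:
        audio_system.set_muted(device, muted=(device != source))

# Example (hypothetical device names): the room microphone wins over laptop mics.
# apply_exclusive_capture(audio_system, ["room_mic", "D1_mic", "D2_mic"], "room_mic")
```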
As described above,
First functional block 305, referring to an audio source detection module in this embodiment, may be responsible for identifying active audio source(s). In an embodiment, an audio source may be defined at different levels depending on various implementations. For example, an active audio source might be a person or object that makes a sound. An audio source may also refer to a participant in front of an audio device, or a device suitable to capture audio signal(s) from the participant. An audio source may be an audio device, or a device suitable to capture audio signal(s), and may not necessarily be a producer of the audio signal(s). An audio source may be a sub-signal of the signal captured from a device, e.g., the speech from a participant x captured on device y. It is to be understood that the device y may capture other audio as well, such as speech from another participant, background noise, room audio playback, etc.
The audio source detection module 305 may consider various factors to determine whether a peripheral audio device is suitable to capture signal(s) from a participant. For example, the factors may be related to the estimated distance from a sensor of a device (e.g., preferably a nearby sensor rather than a far-away sensor), audio quality (e.g., preferably a sensor that picks up the signal with better quality), sound source versus device relation (e.g., detecting a person talking while sitting in front of the device), etc. Sources of audio may be any utterance of audio signals, audible or inaudible. Even though the term speech is sometimes used, the system 300 is not restricted to speech from one or more participants, and the term may refer to any type of audio source.
Second functional block 310, referring to an instance detection module in this embodiment, may be used to map an active audio source to an instance of the communications and collaboration platform, e.g., which UC&C instance/login maps to what audio source. This mapping function can specify to which UC&C instance/login the active audio source is to be routed. For example, in an embodiment, when participant P1 is speaking in front of a user device (e.g., a laptop) that has UC&C instance/login I1, the instance detection module 310 can map participant P1 to I1 when participant P1 is represented in the UC&C platform with that specific instance/login I1. When participant P2 is speaking, not in front of a user device (e.g., a laptop), the instance detection module 310 may map that participant P2 to a default instance Idef, for example, the instance of the nearest participant, or another type of mapping. When the audio source is, in a certain implementation, a device that has a UC&C instance/login associated with it, the instance detection module 310 may designate the UC&C instance that is running on the device as the mapped instance for the specific “device” audio source. In some cases, the instance detection module 310 may map a certain audio source with no instance. In other words, the audio source may not be ingested to the UC&C platform. It is to be understood that a mapping may not have a one-to-one correspondence. Multiple audio sources may be mapped to a single UC&C instance/login, or multiple UC&C instances/logins may be mapped to a single audio source, in addition to one-to-one mapping of a UC&C instance on an audio source. In an embodiment, the instance detection module 310 may dynamically determine the mapping which may change in real time.
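The P1/P2 example above might be expressed as the following sketch, where the dictionaries and the default-instance fallback are assumed inputs rather than required structures:

```python
# Illustrative [source -> instance] mapping mirroring the P1/P2 example.

def map_audio_source(source: str,
                     device_instances: dict[str, str],
                     source_devices: dict[str, str],
                     default_instance: str | None) -> str | None:
    """Return the instance of the device the source is using, the default
    instance when the source has no active device, or None when the source
    should not be ingested into the platform at all."""
    device = source_devices.get(source)
    if device is not None:
        return device_instances.get(device)
    return default_instance

# P1 talks in front of laptop D1 (instance I1); P2 talks with no device.
print(map_audio_source("P1", {"D1": "I1"}, {"P1": "D1"}, "Idef"))  # I1
print(map_audio_source("P2", {"D1": "I1"}, {}, "Idef"))            # Idef
```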
Functional block 315, referring to an audio control module in this embodiment, may be used for configuring the configurable audio system 320 that it is connected to. The audio control module 315 can send instructions 317 to the audio system 320 which can execute actions to enable the identified active audio source to be sent to the determined UC&C instances/logins. The audio control block 315 may also be used for identifying an active audio signal 322 that can be used to represent the active audio source. In some cases, a device that detects the active audio source may be determined as the audio signal source or device, i.e., a device that captures the active audio source. In some cases, the audio control block 315 may determine a different component or device that is more relevant to capture a certain audio signal than the device that detects the active audio source. This identification and mapping between the audio source and the signal source component or device can be static. For example, a high-quality room audio microphone may always be used to capture any audio signal in the room that is sent to a UC&C instance/login. In some cases, the audio control module 315 can dynamically determine the audio signal source based on conditions, priorities and preferences at any given time. In addition, the audio control block 315 may be configured to be aware of the processing and/or routing capabilities of the configurable audio system 320 and may configure these processing and/or routing capabilities depending on the situation at hand. For instance, processing and/or routing components of the audio system 320 can be available to handle echo, reverb or quality enhancements or to (de)multiplex audio signals.
Functional block 320, referring to a configurable audio system in this embodiment, may represent one or more components that can be used for delivering the relevant (processed) audio signals 322 and control signals 323 to the relevant destinations. A configurable audio system can, without limitation, include signal switchers, signal (de)multiplexers, signal processing components, etc. Each of these components can be configurable, and default values can be provided for these configurable components.
A suitable audio system described herein can be configured to have various functionalities such as, for example, dynamic signal routing and processing for routing and processing data signals in general. The audio control module can configure the audio system to process one or more audio signals based on the active audio signal between the instances and the devices connected to the audio system. It is to be understood that the audio signals can be processed and/or routed to adapt to the detected active audio source, and may not be the same as the active audio signal from the active audio source to the mapped instance. That is, the active audio signal may not be delivered to the mapped instance “as-is”. Instead, certain transformations can be done to the signal by suitable signal processing and/or routing. For example, in an embodiment, the audio control module can configure the audio system to filter the active audio signal to obtain a specific voice signal of a participant. In an embodiment, the audio control module can configure the audio system to dynamically route an audio signal from/to one of the instances. For example, an audio system can be configured for echo cancellation at various levels for in/out signals of devices linked to the configurable audio system. The audio control module can configure the audio system to re-route the audio signal from the mapped instance associated with a room speaker based on any suitable echo cancellation logic. It is to be understood that a configurable audio system described herein may perform signal routing and/or processing for various data signals including, for example, audio stream(s) from device(s) to instance(s), audio stream(s) from instance(s) to device(s), audio stream(s) that are exchanged among devices connected to the audio system, etc.
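One way to picture such configurable processing and routing is a per-route chain of stages, as in the sketch below; the route table, stage names, and placeholder processors are all hypothetical:

```python
# Illustrative configurable routing table with per-route processing chains.

processors = {
    "echo_cancel":    lambda frame: frame,               # placeholder AEC stage
    "noise_suppress": lambda frame: frame,               # placeholder NS stage
    "silence":        lambda frame: [0.0] * len(frame),  # zero-signal insertion
}

routes = {
    # (source signal, destination instance) -> ordered processing chain
    ("room_mic", "I1"): ["echo_cancel", "noise_suppress"],
    ("room_mic", "I2"): ["silence"],  # emulate a muted stream towards I2
}

def deliver(frame, source: str, instance: str):
    """Apply the configured chain for this route before handing off the frame."""
    for stage in routes.get((source, instance), []):
        frame = processors[stage](frame)
    return frame

print(deliver([0.3, -0.2], "room_mic", "I2"))  # [0.0, 0.0]
```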
Functional block 325, referring to UC&C instances in this embodiment, may be configured to receive the audio signal(s) from the configurable audio system. UC&C instances refer to instances of a UC&C platform or tool. It is to be understood that a UC&C instance can include any component in hardware and/or software that can provide/generate and/or receive/use audio signals. UC&C tools are one such example via which the audio can be signalled to local and remote participants. The UC&C instances can include any suitable instance of a communications and collaboration platform or tool. The disclosure is not intended to be limited to these types of platforms or tools.
As shown at the bottom right of the figure, a physical meeting space 450 is provided in which there is a room microphone 460 available, as indicated by a microphone symbol. Each participant P1 451, P2 452 in the physical meeting space 450 can be associated with an active device or user device 402 (e.g., user device D1, D2). It is to be understood that more than two participants Px may be in the physical meeting space 450 to attend the meeting, and each may have or be associated with an active device Dx 402. Each active device 402 may have a UC&C instance/login 422 associated with it. An exemplary active device 402 may include a user device including a portable computing device such as, for example, a laptop, a mobile phone, a tablet, a desktop computer, a next unit of computing (NUC) device, a head-mounted device, etc. Moreover, even though the following embodiments are presented as distinct embodiments, the learnings, variations, blocks, functions, and observations may be interchangeable between the different embodiments and are not exhaustively repeated throughout the embodiments. The notes and variants of the other embodiments remain valid and applicable for this embodiment.
Still further, even though two participants are represented with an active device, this should not be considered as limiting, and there can be an arbitrary number of active device users or participants. In some cases, there may be at least two participants in the meeting space 450.
The audio source detection block 405 may include two or more speech detection units 404 each being configured to receive sensor data from device sensors 403 of the active devices Dx 402. The device sensors may include, for example, a microphone, a speaker, a camera, etc. The audio source detection block 405 may use audio, video or other modalities or metadata from the devices 402, or from other sources in the meeting space 450 to detect whether a participant associated with the device is talking, i.e., whether the participant in front of one device 402 is the active audio source. The audio source detection block 405 can derive speech detection from audio, video, or metadata or a combination thereof from the active devices. The results can be accompanied with a measure of confidence on the detection. As referenced herein, “confidence” may refer to a probability or a confidence score representing the level of certainty on the detection, which may be based on the strength and consistency of the signal(s) being detected. The audio source detection block 405 may use sensor data of other devices (e.g., devices in the meeting space 450 other than the devices Dx). In addition, the audio detection block 405 may use additional data exchanges with either the other speech or audio detection blocks or other components in the system 400. These additional connections are deemed implementation dependent, and do not alter the embodiment in any way.
The audio detection block 405 may include a detection aggregator 406 that aggregates the individual detection results from the speech detection units 404 into a consensus. In one example embodiment, the number of simultaneous audio sources may be restricted to one. When more than one device indicates that audio is detected, one of the devices may be selected. The detection aggregator 406 may implement the selection by taking into account confidence information of the individual detections 404, temporal consistency, prior information on feasible audio switching guidelines, etc.
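A minimal sketch of such an aggregator, assuming per-device confidence values and a simple hysteresis margin for temporal consistency, might look like this:

```python
# Illustrative consensus with confidence and temporal consistency (hysteresis).

class DetectionAggregator:
    def __init__(self, switch_margin: float = 0.15):
        self.current = None                 # currently selected device
        self.switch_margin = switch_margin  # resists rapid source flapping

    def aggregate(self, detections: dict[str, float]):
        """detections maps device id -> talking confidence in [0, 1];
        returns the single device selected as the active audio source."""
        if not detections:
            return self.current
        best = max(detections, key=detections.get)
        if self.current is None:
            self.current = best
        elif best != self.current:
            # Only switch when the challenger is clearly more confident.
            if detections[best] - detections.get(self.current, 0.0) > self.switch_margin:
                self.current = best
        return self.current
```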
The output signal of the audio source detection block 405 may be an indication of the identified device for which the detection aggregator 406 determines that the participant who is using the device is talking. In an example embodiment, the number of simultaneous audio sources may be restricted to one, e.g., the one with the highest confidence. In another example embodiment, the number of simultaneous audio sources may be two or more. For example, when two or more participants are detected to be speaking, instead of using one audio stream for the multiple simultaneous audio sources, the system 400 can demultiplex the detected audio signals from the multiple simultaneous audio sources into multiple audio streams to be sent to multiple different UC&C instances associated with the audio sources.
When the speech detection unit 404 and/or the detection aggregator 406 determines that no participant who is using the device is talking, a corresponding message such as, e.g., an explicit ‘no-one is talking’ message, can be signalled, or in some cases, no signal is sent to implicitly indicate that no participant is talking. Various signalling mechanisms can be used for suitable implementations.
The audio control block 410 may receive the output signal, e.g., a “device is talking” signal, from the audio source detection block 405, and map, via a mapping function 411 at mapping block 412, the device Dx to a UC&C instance Iy from one or more UC&C instances/logins 422 of the UC&C tool 420. It is to be understood that the UC&C tool and its associated instances can be other communications and collaboration platforms or tools and associated instances. The audio control block 410 may further identify the signal source(s) for the audio source(s) Dx at block 413, and configure the configurable audio system 415 at block 414 to send a proper audio signal to the mapped UC&C instance/login that represents the participant who is talking via the relevant signalled device. In this embodiment, the mapped UC&C instance/login (e.g., UC&C instances I1 or I2 of the UC&C tool 420) is determined to be directly related to the signalled device (e.g., device D1 or D2 connected to the UC&C tool 420), and the audio signal to be routed to the UC&C instance/login 422 is determined to be the signal from the room microphone 460. The audio control block 410, in this embodiment, can take the audio signal captured from the room microphone 460 (for example) and route the captured signal to the UC&C instance/login of the device that was identified by the audio source detection block 405. In addition, the audio control block 410 may also configure the configurable audio system 415 when no participant is identified to be talking. In some cases, the audio control block 410 may choose to keep the audio signal routing unchanged when no participant is identified to be talking. In some cases, the audio control block 410 may choose to route the audio signal to a default device instead, or choose to mute the audio signal, etc. These choices are deemed implementation dependent.
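The detect-map-configure path described above can be condensed into a sketch like the following; the mapping dictionary and the audio_system.route call are assumed interfaces, not prescribed ones:

```python
# Illustrative handling of a "device is talking" signal (names hypothetical).

def on_device_talking(device: str, mapping: dict[str, str], audio_system) -> None:
    """Send the room microphone signal to the UC&C instance of the device
    identified as the active audio source, e.g., mapping = {"D1": "I1"}."""
    instance = mapping.get(device)
    if instance is not None:
        audio_system.route(source="room_mic", destination=instance)
    # When no participant is talking, the fallback (keep routing, route to a
    # default device, or mute) is implementation dependent, as noted above.
```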
The configurable audio system block 415 may allow for audio components thereof to be adapted and configured to various conditions in real time or non-real time. This can include altering audio streams including, e.g., splitting out multiple people talking into separate audio streams that represent individuals talking, altering the quality of the audio signal, combining multiple audio signals, routing audio signals from different sources to different destinations, etc. In this embodiment, the configurable audio system block 415 includes a room microphone controller 416 to control audio signals from the room microphone 460 to be routed to a certain device via a configuration parameter. When the audio control block 410 indicates, for example, that the room microphone audio signal should be sent to the UC&C client on device D1, the configurable audio system 415 may be configured to make the necessary changes to enable this. When the configurable audio system 415 cannot make the change, a corresponding signal can be sent back to the audio control block 410.
The configurable audio system block 415 may communicate with the UC&C instance(s)/login(s) 420 and include an audio rendering and routing unit 417 to allow the UC&C instance(s)/login(s) 420 to use the relevant audio signal(s). In one embodiment, this may involve re-routing audio streams that are exchanged with the UC&C instance(s)/login(s). Another embodiment may involve adapting the streams to facilitate a desired effect, e.g., adapting the audio stream towards one UC&C instance/login by ingesting silence (e.g., inserting a silence signal or a zero-signal that represents silence) while the audio stream towards a second UC&C instance/login is adapted to represent the non-silent audio signal. The active audio signal can be processed and/or routed by generating audio signals to emulate the active audio signal. To emulate audio streams, the configurable audio system block 415 can generate signals that simulate audio sources and destinations connected to the UC&C platform. For example, the active audio signal can be processed and/or routed to emulate audio streams by inserting a silence signal when no active audio signal is to be routed.
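The silence-ingestion behaviour can be sketched per frame as follows, assuming a fixed stream format; FRAME_SAMPLES and the function interface are illustrative assumptions:

```python
import numpy as np

FRAME_SAMPLES = 480  # 10 ms at 48 kHz (assumed stream format)

def render_frame(instance: str, active_instance: str,
                 mic_frame: np.ndarray) -> np.ndarray:
    """Adapt the stream per instance: the active instance receives the
    captured audio, while every other instance receives an inserted
    silence (zero) frame so its stream stays alive but silent."""
    if instance == active_instance:
        return mic_frame
    return np.zeros(FRAME_SAMPLES, dtype=mic_frame.dtype)
```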
In yet another embodiment, other data communication can be used to facilitate the desired effect of ensuring the proper audio stream(s) are ingested in the proper UC&C instance(s)/login(s) such as applying a (un)mute operation to the UC&C instance(s)/login(s) 420. This embodiment for example allows multiple users/participants in a meeting space to have an active connection to the UC&C instance/login without needing to worry about issues with undesired audio effects (e.g., audio echo or distortion). In addition, the configurable audio system block 415 can route the room audio to the most relevant device automatically which allows the user and UC&C tool to work properly with correct, high-quality room audio.
In the embodiment depicted in
With the presence of additional participants who do not have an active device for themselves, the audio source detection block 505 can identify such a participant talking by either direct or indirect means. An example of a direct means of detection can be the use of a speech identification algorithm that can recognize the identity of the participant talking from the sensors 503 of the devices 502, e.g., using an audio signature of the voice of the participant. An example of an indirect means of detection is to use the inverse of the knowledge of the direct detection of participants who do have an active device 502. For example, the audio source detection block 505 may detect, via the speech detection unit 504, a participant talking with an active device by analysing, using a multimodal analysis, the video that is coming from the device 502 (e.g., from a laptop camera) together with the audio that is coming from the device 502 (e.g., from a laptop microphone) and determine, via the aggregator 506, that the participant in front of the device 502 is actually the participant who is talking. The audio source detection block 505 may detect all active devices 502 in the meeting space 550 to determine that the participant who is speaking has no associated device 502. For example, when a participant is detected to be talking but none of the active devices 502 indicates that the participant in front of the device 502 is talking, the audio source detection block 505 can determine that another participant is talking without an active device. While this example of an indirect detection may not provide identity of the participant who is talking (one can merely identify that a participant is talking who is not using an active device), this can be enough to enable the system 500 to route the proper audio signal to the proper UC&C instance. For example, the system 500 can send the audio of participants without an active device to a default UC&C instance or, alternatively, the system 500 can change the current signal routing in this case and make changes to the configuration of the audio system 515 when a participant is talking who has an active device.
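The indirect (exclusion-based) detection reduces to a simple predicate, sketched below under the assumption that each active device reports whether its own user is talking:

```python
# Illustrative exclusion logic for detecting a participant without a device.

def deviceless_talker_detected(speech_in_room: bool,
                               device_user_talking: dict[str, bool]) -> bool:
    """True when speech is present in the room while every active device
    reports that the participant in front of it is not the one talking."""
    return speech_in_room and not any(device_user_talking.values())

# D1 and D2 both report their users silent while speech is detected in the room.
print(deviceless_talker_detected(True, {"D1": False, "D2": False}))  # True
```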
The mapping block 512 can receive the output signal from the audio source detection block 505 regarding the status (active or not) of the detected source(s), and map, via mapping function [Src, Instance] 511, the detected source(s) to specific UC&C instances/logins connected to the UC&C tool 520. In this embodiment, one participant, e.g., P3 553, who does not have an active device Dx may not have a UC&C instance. The block 512 can determine what UC&C instance 522, 524 can be used for the audio source (participant 553 in this example), and where to send the audio signals to. As described above, in one variant of the embodiment, audio (e.g., speech) from a participant who is not in front of a device might be sent to UC&C instances/logins 522 associated with one of the existing devices Dx. In another variant, the audio might be sent to an extra UC&C instance 524 that, for example, represents the group rather than the individual (indicated with the dashed box at the end). In yet another variant, the audio might be ignored and not sent to a UC&C instance/login 522 or 524. In yet another variant, the audio system may not be altered when a participant is talking but not in front of a device. The block 512 can have additional inputs that enable configuration of this mapping, predefined or at runtime, so that the configuration of the audio system 515 can change during execution. The mapping function provided by the block 512 can facilitate diverse muting scenarios, for instance, where a single participant in the room may be muted, the full meeting space 550 can be muted, or any case in between. The muting can be manual or automatic and can trigger enabling signal processing and/or routing in the configurable audio system 515 via the audio control functional block 510.
The audio control block 510 may take the input from a [Source, Instance] mapping block 512 and translate the input into configuration parameters towards the configurable audio system 515. For example, when the participant in front of a device Dx is talking, the audio control block 510 can identify the signal source(s) for the audio source(s) Dx at block 513, and configure the configurable audio system 515 at block 514 by sending signal(s) to the configurable audio system 515, e.g., sending the audio stream coming from a room microphone 560 to the UC&C instance running on device Dx 502. While one room microphone is illustrated in the figures such as
In an embodiment, the configurable audio system 515, including room microphone controller 516 and audio rendering and routing unit 517, may have similar configurations and functions as the previous embodiments such as shown in
One of the differences of the present embodiment depicted in
In an example embodiment, the audio source detection block 605 can receive sensor data from the device sensors 603 of the devices 602 and detect audio sources based on the received sensor data. The associated microphone 616 of that device Dx 602 can be used as an audio signal device. For instance, when a participant is detected to be talking in front of his/her laptop, the audio control 610 can identify the signal source(s) for the audio source(s) Dx at block 613, and configure the configurable audio system 615 at block 614. In the embodiment of
When a participant is talking who is not using an active device 602, the audio control 610 can decide what audio signal to use to represent this participant in its associated UC&C instance 622. In one case, a default signal source or device is selected for the participant to enable a consistent experience. In another case, the nearest signal source or device can be selected to facilitate better quality. In yet another case, the existing audio system configuration can be maintained, and the audio control 610 may not make changes when the participant who is talking does not have an active audio device.
In an embodiment where the configurable audio system 615 has no room microphone available, the audio devices 616 that are made available by the configurable audio system 615 can be used to optimally deliver the most appropriate (e.g., processed and/or routed by audio rendering and routing unit 617) audio signals to each of the UC&C instances connected to the UC&C tool 620. As with other embodiments, this may involve automatically muting/unmuting device microphones 616, muting/unmuting device speakers, applying processing to audio signals to prevent/reduce echo or reverb or to increase quality, applying processing to (de)multiplex audio signals, etc. As mentioned in previous embodiments, UC&C instances 622, 624 connected to the UC&C tool 620 may be available in the active devices 602 or through other means (e.g., virtual instances, instances hosted by other devices, etc.).
The embodiment depicted in
For example, when the audio source detection block 705, including speech detection units 704 and aggregator 706, receives data from device sensors 703, and identifies one active audio source (e.g., one of participants 751, 752, 753), [instance, source] mapping block 712 can map, at block 711, the identified active audio source to an instance connected to the UC&C tool or platform 720. In this example, participant 751 can be mapped to the instance 722 associated with device D1, participant 753 can be mapped to the instance 722 associated with device D1, and participant 752 can be mapped to the room instance 724 associated with the room system 761. The audio control block 710 can determine an audio signal source or device (e.g., room audio 760 in this example), and configure, at block 713, the audio system 715 to route an active audio signal from the room audio 760 towards the mapped instance.
Note that the (functional) diagrams shown for the embodiments are not device mappings. As such, these diagrams do not restrict to which physical component a certain function is mapped. In addition, a single function might be mapped to multiple components, or multiple functions might be mapped to a single component. Furthermore, certain functions might be combined to facilitate the anticipated functionality.
In the embodiment of
In the embodiment of
As illustrated in
It is to be understood that the base unit or device 910 can be an arbitrary device that can execute the functionality of the functional blocks 906, 916, 917 and has communication means towards the relevant components. An example base unit is described in U.S. Patent Pub. No. 2021/0191893 (to Renard and Defraef) entitled “Method and system for making functional devices available to participants of meetings” which is incorporated herein by reference.
The base unit 910 can wirelessly connect to the devices Dx to implement at least one of an instance detection module (e.g., 310 in
The detection aggregation node 906 can create a consensus of one or more active sound sources (e.g., when one or more participants 951, 952, 953 are talking) and create a control signal towards the audio rendering and routing block 917. The audio rendering and routing block 917 can generate configuration parameters based on the received control signal to configure the room peripherals, such as the room speaker(s) and room microphone(s), to process and route active audio signal towards certain UC&C instances 922 connected to a UC&C tool or platform.
In one example, the base unit 910 maintains active connections with the UC&C instances Ix 922 and renders relevant audio signals based on the signalled configuration parameters. For example, when participant P2 952 is detected to be talking, the microphone signal towards the UC&C instance I2 922 can be a copy of the signal from the room microphone 960, while the microphone signal towards the UC&C instance I1 922 can be a zero-signal, i.e., a signal that represents silence. As such, the rendering implicitly provides the routing. Alternatively, the signals towards a UC&C instance Ix 922 can effectively be routed based on the configuration parameters. Audio signals can be sent to/from the relevant device 902 when the configuration parameters of the audio rendering and routing block 917 indicate such a case. These are just two non-limiting examples of how a configurable audio system can work in a centralized approach with a centralized detection aggregator.
When one of the participants Px 951, 952 who has an active device Dx 902 is talking, the system 900a can determine the room peripherals 960, 961 to be used by the relevant UC&C client Ix 922. For example, the room microphone signal can be sent to the active device of the participant who is speaking and/or the room speaker can receive the signal from the relevant Ix/Dx. In some cases, a room microphone 960 and a room speaker 961 can be switched on at the same time for echo cancellation. It is to be understood that it may not be required to switch on the room microphone 960 and the room speaker 961 at the same time, and multiple room peripherals can be separately selected.
When participant P3 953, who does not have an active device Dx 902, is talking, the system 900a can use the speech/audio detection capabilities of other sources, such as the active devices 902 of the other participants 951, 952 (in this example, P1 and P2). For example, when device D1 detects that a participant other than participant P1 951 is talking, and device D2 detects that a participant other than participant P2 952 is talking, and there are only two active devices 902, the system 900a can determine that a participant without an active device (in this case, participant 953) is talking. The detection aggregation 906 might send a signal to the audio rendering and routing 917 that the room peripheral signal(s) need to be sent to a dedicated instance Ix 922 that is responsible for representing participants 951, 952. In other words, one of the active devices 902 in the meeting space 950 can provide a ‘host’ role to participant 953 without an active device. Alternatively, the detection aggregation 906 might signal to the audio rendering and routing 917 that the nearest active device 902 needs to handle the room peripheral signal(s). Yet alternatively, the detection aggregation 906 might not change the current audio rendering and routing 917 when such a user speaks. In another example, when the system determines that it is participant 953 who is talking without an active device, the system can send the related audio signal(s) to certain UC&C instance(s), which may be an existing one or a newly created/activated one. The system can determine who is effectively talking based on various data such as, for example, voice signatures, exclusion (e.g., it is known that there are 3 people and 2 of them have active devices), external analysis (e.g., by analysing a room camera and correlating with the available audio signals), etc. Also here, these are mere implementation examples and should not be considered limiting. A variant to this embodiment is to detect audio activity on each of the devices Dx 902 without relating the detection to the participant using the device, and to assign the room peripheral(s) 960, 961 to the device 902 with the most relevant activity features. For example, the system 900a can select the device 902 with the strongest audio signal or the audio signal with the least amount of reverb.
As illustrated in the embodiment of
In one example embodiment, the peripheral device(s) 920 may be a processor-enabled device that includes a memory that stores software to be run on the device 902 (i.e., processing device). The virtual audio device block 924 can be implemented by software, hardware or a combination thereof to route audio signals to/from the instance(s) 922. For example, the virtual audio device block 924 may provide the audio signals to/from a configurable audio system from/to the instance(s) 922. The virtual audio device block 924 may be exposed (or presented) to the instance(s) 922 as a generic audio device. That is, the peripheral device(s) 920 may present, via the virtual audio device block 924, mediated audio signals to a communications and collaboration platform.
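A toy sketch of such a virtual audio device is given below; the class, its buffering, and the read interface are illustrative assumptions about how mediated frames could be presented to an instance as generic hardware:

```python
# Illustrative virtual audio device presented to a UC&C instance.

class VirtualAudioDevice:
    """Forwards mediated frames between the configurable audio system and a
    UC&C instance, appearing to the instance as an ordinary audio device."""

    def __init__(self, name: str = "Virtual Room Audio"):
        self.name = name
        self._capture_queue = []  # frames from the audio system to the instance

    def push_capture_frame(self, frame) -> None:
        # Called by the audio system with the mediated (routed/processed) frame.
        self._capture_queue.append(frame)

    def read(self):
        # Called by the UC&C client as if reading from a real microphone.
        return self._capture_queue.pop(0) if self._capture_queue else None
```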
The user device 902 and the peripheral device 920 can process and/or forward signals to facilitate audio source detection. Blocks 904, 926 and 906 of the user device 902 and the peripheral device 920 can cooperate with each other to conduct the audio source detection. For example, the route/process signals block 926 can be implemented by software, hardware or a combination thereof to pass through sensor data of the sensors 903 from the device 902 to the peripheral device(s) 920, via blocks 904 and 926. The device 902 may implement software, hardware or a combination thereof to route/process the sensor data before the sensor data is sent to the peripheral device 920. The route/process signals block 926 of the peripheral device 920 can be implemented by software to route/process the sensor data before sending the sensor data to the base unit 910 or a device similar to the base unit 910.
In another example embodiment, the peripheral device 920 may use generic communication protocols and/or drivers for communication with the device 902, and to implement the virtual audio device block 924 and the route/process signals block 926. In an embodiment, instead of storing software to be run on the device 902, the peripheral device 920 may require no software installation or implementation on the device 902.
It is to be understood that the allocation of functionality between the device 902, the peripheral device 920 (e.g., D11 or D22) and the base unit 910 should not be seen as limiting. The implementation of routing/processing signals in one or more of the peripheral device 920, the base unit 910 and the device 902 may be dependent on desired applications. The processing of various signals can be implemented in the processing device 902, the peripheral device 920 and/or the base unit 910. As such, the signals that are sent between these devices are also implementation dependent.
As illustrated in the embodiment of
In the embodiments depicted in
In the embodiments depicted in
In the embodiment depicted in
According to the system 1000 depicted in
The system 1000 may use the speaker and microphone 1003 of different devices 1002 in different manners. For example, the system 1000 may decide to always use all speakers 1003 on the devices 1002 and only switch between microphones 1003 of different devices 1002. Alternatively, one might switch speakers 1003 along with microphones 1003 depending on detected audio sources. Other alternatives are possible, and this solution is not restricted to the mentioned examples. This remark holds for all other embodiments.
A variant to the embodiment depicted in
In one example embodiment, a peripheral device (e.g., the peripheral device 920 of
In one example embodiment, the peripheral device used for the system 1000 may have its own sensing devices, such as a microphone or camera and may not require access to the sensing devices 1003 on the processing device 1002. This configuration, similar to the configuration illustrated in
In the embodiment depicted in
According to the system 1100 depicted in
It is to be understood that the embodiment depicted in
One feature that may be relevant for all embodiments described herein is related to the device federation of all involved devices. When devices are brought into a meeting space and removed from the meeting space, or are enabled/disabled within the meeting space, the system configuration can require changes in order to optimize the system or to keep the system working.
Device federation can be implemented by the system in various ways, with varying levels of automation. In one case, a participant might need to do a manual action to connect to the meeting. In another case, the participant might receive a trigger from the system that enables a one-click connect to the meeting space system. Yet in another case, the system can automatically connect the participant to the meeting based on detected presence, for example, using ultrasound, a machine-readable code (e.g., QR code), or similar identification systems. In one example embodiment, an application or other downloadable program may be executed by a processor-enabled device such as, for example, a base unit or a room device, to emit an ultrasound signal, detect a participant's presence in the meeting space based on the received ultrasound signal, and automatically log the participant into the meeting. In one example embodiment, an application or other downloadable program may be executed by a device (e.g., a participant's portable computing device) to scan a machine-readable code (e.g., QR code) to automatically log into the meeting.
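A minimal sketch of the presence-triggered variant follows; the meeting object, its connect call, and the participant identifier are assumptions for illustration:

```python
# Illustrative presence-based auto-connect (ultrasound/QR detection abstracted).

def on_presence_detected(participant_id: str, meeting, connected: set) -> None:
    """Automatically log a newly detected participant into the meeting once."""
    if participant_id not in connected:
        meeting.connect(participant_id)  # assumed platform call
        connected.add(participant_id)
```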
The device view can be built centrally or in a distributed manner, depending on the implementation. Likewise, the system configuration can be updated centrally or in a distributed manner.
In some cases, the device federation can change during operation. The device view and system configuration can be adapted to accommodate these changes and keep the system optimal. In some cases, certain changes to the device federation might not impact the system, or the system may decide not to update immediately in order to, for example, group several changes into a single device view and/or system configuration update, as sketched below.
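Purely as an illustrative sketch (all names are hypothetical), grouping deferred federation changes into one device view and configuration update might look as follows in Python:

```python
class SystemConfigurator:
    """Illustrative sketch: queue federation changes and apply them as one
    grouped device-view / system-configuration update."""

    def __init__(self):
        self.device_view = set()
        self.pending = []  # changes deferred so they can be grouped

    def on_federation_change(self, action: str, device_id: str):
        self.pending.append((action, device_id))

    def flush(self):
        # Apply all queued changes in a single update, e.g., on a timer
        # or when the meeting state allows a reconfiguration.
        for action, device_id in self.pending:
            if action == "add":
                self.device_view.add(device_id)
            elif action == "remove":
                self.device_view.discard(device_id)
        self.pending.clear()
        self._reconfigure_audio_system()

    def _reconfigure_audio_system(self):
        print("reconfigured for devices:", sorted(self.device_view))

cfg = SystemConfigurator()
cfg.on_federation_change("add", "laptop-A")
cfg.on_federation_change("add", "laptop-B")
cfg.flush()  # one grouped update instead of two separate ones
```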
It is also to be understood that the processing flow 1300 can include one or more operations, actions, or functions as illustrated by one or more of blocks 1310, 1320, 1330, 1340, and 1350. These various operations, functions, or actions may, for example, correspond to software, program code, or program instructions executable by one or more processors that cause the functions to be performed. Although illustrated as discrete blocks, the blocks may be modified in obvious ways: e.g., two or more of the blocks may be re-ordered; further blocks may be added; and various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation. Processing flow 1300 may begin at block 1310.
At block 1310, the system identifies at least one active audio source in the physical meeting space. The system may include an audio source detection module to identify the active audio source in the physical meeting space. The audio source detection module can also determine, based on the received audio data, video data, or metadata, an active audio device for the active audio source. The audio source detection module may include a detection aggregator to aggregate the received audio data, video data, or metadata into a consensus to determine the active audio device. The processing may proceed from block 1310 to block 1320.
For example, in the embodiment depicted in
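By way of illustration only, a detection aggregator as in block 1310 might combine per-device cues into a consensus as in the following Python sketch; the scoring scheme and all names are assumptions for illustration, not part of this disclosure.

```python
from collections import defaultdict

def aggregate_detections(observations):
    """Illustrative consensus: each reporter submits (candidate, score)
    pairs derived from its audio level, face/lip detection, or metadata;
    the aggregator sums scores and picks the strongest candidate as the
    active audio device for the active audio source."""
    totals = defaultdict(float)
    for reporter_id, candidate_id, score in observations:
        totals[candidate_id] += score
    return max(totals, key=totals.get)

observations = [
    ("D1", "D2", 0.9),   # D1's microphone hears the source loudest near D2
    ("cam", "D2", 0.8),  # a room camera detects lip movement at D2's seat
    ("D3", "D1", 0.2),   # a weaker, conflicting cue
]
print(aggregate_detections(observations))  # -> "D2"
```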
At block 1320, the system maps the active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance. The system may include an instance detection module to determine the mapped instance. When the active audio source is determined to be associated with the active audio device, the instance detection module can designate one of the instances associated with the active audio device as the mapped instance. When the active audio source is determined to be not associated with the active audio device, the instance detection module can designate one of the instances associated with one of the devices in the physical meeting space as the mapped instance. The processing may proceed from block 1320 to block 1330.
For example, in the embodiment depicted in
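Purely as an illustrative sketch of the mapping rule of block 1320 (all names and data structures below are hypothetical), the designation of a mapped instance might be expressed as:

```python
def map_to_instance(active_source_device, instance_by_device, room_devices):
    """Illustrative mapping rule for block 1320: if the active audio source
    is associated with a device running its own platform instance, use that
    instance; otherwise fall back to an instance of another device present
    in the physical meeting space."""
    if active_source_device in instance_by_device:
        return instance_by_device[active_source_device]
    for device in room_devices:
        if device in instance_by_device:
            return instance_by_device[device]
    return None  # no instance available; one could be created or activated

instance_by_device = {"D1": "alice@platform", "D2": "bob@platform"}
print(map_to_instance("D2", instance_by_device, ["D1", "D2"]))       # bob@platform
print(map_to_instance("room-mic", instance_by_device, ["D1", "D2"]))  # alice@platform
```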
At block 1330, the system determines at least one audio signal device from a plurality of devices in the physical meeting space for the active audio source. In some cases, room peripheral(s) such as, for example, the room microphones 460, 560, 760 in
At block 1340, the system (e.g., an audio control block of the system) configures an audio system to process an active audio signal from the determined audio signal device via the mapped instance. For example, in the embodiment depicted in
At block 1350, the system can determine a new mapped instance corresponding to a new active audio source upon identifying the new active audio source. In an embodiment, the system can process the corresponding new active audio signal and switch the routing via the new mapped instance. For example, in the embodiment depicted in
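By way of illustration only, the routing behavior of blocks 1330-1350 might be orchestrated as in the following Python sketch; the AudioControl class and its methods are hypothetical names, not part of this disclosure.

```python
class AudioControl:
    """Illustrative sketch of blocks 1330-1350: route the active audio
    signal from the chosen audio signal device through the mapped
    instance, and switch the routing when a new active source appears."""

    def __init__(self):
        self.route = None  # (audio_signal_device, mapped_instance)

    def configure(self, audio_signal_device, mapped_instance):
        # Block 1340: direct the selected capture device toward the mapped
        # instance; all other paths to the platform remain muted.
        self.route = (audio_signal_device, mapped_instance)
        print(f"routing {audio_signal_device} -> {mapped_instance}")

    def on_new_active_source(self, audio_signal_device, new_mapped_instance):
        # Block 1350: a new active audio source was identified and mapped;
        # switch the routing via the new mapped instance.
        self.configure(audio_signal_device, new_mapped_instance)

control = AudioControl()
control.configure("room-mic", "alice@platform")
control.on_new_active_source("D2-mic", "bob@platform")
```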
As depicted, the computer system 1400 may include a central processing unit (CPU) 1405. The CPU 1405 may perform various operations and processing based on programs stored in a read-only memory (ROM) 1410 or programs loaded from a storage device 1440 to a random-access memory (RAM) 1415. The RAM 1415 may also store various data and programs required for operations of the system 1400. The CPU 1405, the ROM 1410, and the RAM 1415 may be connected to each other via a bus 1420. An input/output (I/O) interface 1425 may also be connected to the bus 1420.
The components connected to the I/O interface 1425 may further include an input device 1430 including a keyboard, a mouse, a digital pen, a drawing pad, or the like; an output device 1435 including a display such as a liquid crystal display (LCD), a speaker, or the like; a storage device 1440 including a hard disk or the like; and a communication device 1445 including a network interface card such as a LAN card, a modem, or the like. The communication device 1445 may perform communication processing via a network such as the Internet, a WAN, a LAN, a LIN, a cloud, etc. In an embodiment, a driver 1450 may also be connected to the I/O interface 1425. A removable medium 1455 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like may be mounted on the driver 1450 as desired, such that a computer program read from the removable medium 1455 may be installed in the storage device 1440.
It is to be understood that the processes described with reference to the flowchart of
It is to be understood that the disclosed and other solutions, examples, embodiments, modules, events, functions, and the functional operations described in this document may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments. Additionally, while the above has been discussed with respect to methods and systems, it is appreciated that the methods may be stored on a non-transitory computer-readable medium having computer-readable instructions which, when executed by a processor, perform the above steps of operation.
Different features, variations and multiple different embodiments have been shown and described with various details. What has been described in this application at times in terms of specific embodiments is done for illustrative purposes only and without the intent to limit or suggest that what has been conceived is only one particular embodiment or specific embodiments. It is to be understood that this disclosure is not limited to any single specific embodiment or enumerated variation. Many modifications, variations and other embodiments will come to the mind of those skilled in the art, and are intended to be and are in fact covered by this disclosure. It is indeed intended that the scope of this disclosure should be determined by a proper legal interpretation and construction of the disclosure, including equivalents, as understood by those of skill in the art relying upon the complete disclosure present at the time of filing.
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures may be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated may also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated may also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
From the foregoing, it will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting.
It is appreciated that any one of the following aspects can be combined with any other aspect.
Aspect 1. A system for mediating an audio source for a meeting on a communications and collaboration platform, the system comprising:
an audio source detection module configured to identify at least one active audio source;
an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance; and
an audio control module configured to:
determine at least one audio signal source from a plurality of devices for the active audio source; and
configure an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
Aspect 2. The system of Aspect 1, wherein the audio source detection module is configured to receive at least one of audio data, video data, or metadata from the plurality of devices.
Aspect 3. The system of Aspect 2, wherein the audio source detection module is configured to determine, based on the received audio data, video data, or metadata, an active audio device from the plurality of devices for the active audio source.
Aspect 4. The system of Aspect 2 or 3, wherein the audio source detection module comprises a detection aggregator to aggregate the received audio data, video data, or metadata into a consensus to determine the active audio device.
Aspect 5. The system of Aspect 4, wherein the detection aggregator is a centralized detection aggregator hosted by a base unit of the devices.
Aspect 6. The system of Aspect 4 or 5, wherein at least some of the plurality of devices form a distributed network, and the detection aggregator is hosted by the distributed network.
Aspect 7. The system of any one of Aspects 2-6, wherein the instance detection module is further configured to determine whether the active audio source is associated with the active audio device.
Aspect 8. The system of Aspect 7, wherein when the active audio source is associated with the active audio device, the instance detection module is further configured to designate one of the instances associated with the active audio device as the mapped instance, and when the active audio source is not associated with the active audio device, the instance detection module is further configured to designate one of the instances as the mapped instance.
Aspect 9. The system of any one of Aspects 1-8, further comprising a device federation module configured to detect a presence of one of the plurality of devices, and automatically connect the device to the communications and collaboration platform.
Aspect 10. The system of any one of Aspects 1-9, wherein the audio control module is configured to compare a plurality of audio signals from at least some of the devices to determine the audio signal source.
Aspect 11. The system of any one of Aspects 1-10, wherein the audio signal source comprises one or more room audio peripherals.
Aspect 12. The system of any one of Aspects 1-11, wherein the audio signal source comprises an audio device connected to a portable computing device.
Aspect 13. The system of any one of Aspects 1-12, wherein the audio control module is to configure the audio system to manipulate one or more audio signals between the instances and the plurality of devices.
Aspect 14. The system of Aspect 13, wherein the audio signals are manipulated based on the active audio signal and are different from the active audio signal.
Aspect 15. The system of any one of Aspects 1-14, wherein the audio control module is to configure the audio system to mute the plurality of devices except for the audio signal source.
Aspect 16. The system of any one of Aspects 1-15, wherein the audio control module is to configure the audio system to dynamically route an audio signal from/to one of the instances.
Aspect 17. The system of Aspect 16, wherein the audio control module is to configure the audio system to re-route the audio signal from the mapped instance associated with a room speaker based on an echo cancellation logic.
Aspect 18. The system of any one of Aspects 1-17, further comprising a base unit wirelessly connecting to a plurality of processing devices to implement at least one of the instance detection module and the audio control module.
Aspect 19. The system of Aspect 18, further comprising one or more room devices connected to the base unit, which allows the plurality of processing devices to access the room devices.
Aspect 20. The system of any one of Aspects 1-19, wherein the audio control module is to configure the audio system to manipulate the active audio signal to adapt to the active audio source.
Aspect 21. The system of Aspect 20, wherein the audio control module is to configure the audio system to filter the active audio signal to obtain a specific voice signal of a participant.
Aspect 22. The system of any one of Aspects 1-21, wherein the mapped instance of the communications and collaboration platform is associated with a desktop device.
Aspect 23. The system of any one of Aspects 1-22, wherein the mapped instance of the communications and collaboration platform is associated with a portable computing device.
Aspect 24. The system of any one of Aspects 1-23, wherein the instance detection module is configured to map the active audio source to one or more instances related to a source or sink of audio data.
Aspect 25. A method for mediating an audio source for a meeting on a communications and collaboration platform for a plurality of participants, the method comprising:
identifying at least one active audio source;
mapping the active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance;
determining at least one audio signal source from a plurality of devices for the active audio source; and
configuring an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
Aspect 26. The method of Aspect 25, wherein identifying the active audio source further comprises receiving at least one of audio data, video data, or metadata from the plurality of devices.
Aspect 27. The method of Aspect 26, further comprising determining, based on the received audio data, video data, or metadata, an active audio device from the plurality of devices for the active audio source.
Aspect 28. The method of Aspect 27, further comprising aggregating, via a detection aggregator, the received audio data, video data, or metadata into a consensus to determine the active audio device.
Aspect 29. The method of Aspect 27 or 28, wherein mapping the active audio source further comprises determining whether the active audio source is associated with the active audio device.
Aspect 30. The method of Aspect 29, wherein when the active audio source is associated with the active audio device, designating one of the instances associated with the active audio device as the mapped instance, and when the active audio source is not associated with the active audio device, designating one of the instances as the mapped instance.
Aspect 31. The method of any one of Aspects 25-30, further comprising, upon identifying a second active audio source, determining a second mapped instance, and maintaining a routing of the active audio signal from the audio signal source.
Aspect 32. The method of any one of Aspects 25-31, further comprising detecting a presence of one of the plurality of devices, and automatically connecting the device to the communications and collaboration platform.
Aspect 33. The method of any one of Aspects 25-32, further comprising comparing a plurality of audio signals from at least some of the devices to determine the audio signal source.
Aspect 34. The method of any one of Aspects 25-33, wherein the audio signal source comprises one or more room audio peripherals.
Aspect 35. The method of any one of Aspects 25-34, further comprising configuring the audio system to manipulate the active audio signal to adapt to the active audio source.
Aspect 36. The method of any one of Aspects 25-35, further comprising configuring the audio system to mute the plurality of devices except for the audio signal source.
Aspect 37. The method of any one of Aspects 25-36, further comprising identifying a second active audio source, and mapping the identified second active audio source to at least one of the instances to determine a second mapped instance.
Aspect 38. The method of Aspect 37, further comprising switching a highlighting of the mapped instance on a user interface of the communications and collaboration platform to the second mapped instance.
Aspect 39. The method of any one of Aspects 25-38, further comprising monitoring a change of the plurality of devices connected to the platform, updating a device view of the meeting according to the change, and updating a configuration of the audio system according to the change.
Aspect 40. The method of any one of Aspects 25-39, further comprising creating or activating an instance for the active audio source when the active audio source is not associated with one of the instances.
Aspect 41. The method of Aspect 40, wherein the instance is created for a participant who is speaking.
Aspect 42. The method of Aspect 41, further comprising manipulating the active audio signal to ensure that only the participant's voice is sent to the mapped instance.
Aspect 43. The method of any one of Aspects 25-42, further comprising dynamically routing an audio signal from the mapped instance.
Aspect 44. The method of any one of Aspects 25-43, wherein the manipulating of the active audio signal further comprises re-routing or re-configuring the active audio signal from the mapped instance.
Aspect 45. The method of any one of Aspects 25-44, wherein the manipulating of the active audio signal further comprises generating audio signals to emulate the active audio signal.
Aspect 46. The method of Aspect 45, wherein the manipulating of the active audio signal further comprises inserting a silence signal when no active audio signal is to be routed.
Aspect 47. The method of any one of Aspects 25-46, further comprising configuring the audio system to manipulate an audio signal routed between the instances and the plurality of devices, wherein the audio signal is manipulated based on the active audio signal.
Aspect 48. The method of Aspect 47, further comprising configuring the audio system to route the audio signal from/to one of the instances.
Aspect 49. A system for mediating an audio source for a meeting on a communications and collaboration platform, the system comprising:
an audio source detection module configured to identify at least one active audio source;
an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance; and
an audio control module configured to:
determine at least one audio signal source from a plurality of devices for the active audio source; and
configure an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
Aspect 50. A method for mediating an audio source for a meeting on a communications and collaboration platform, the method comprising:
identifying at least one active audio source;
mapping the active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance;
determining at least one audio signal source from a plurality of devices for the active audio source; and
configuring an audio system to manipulate an active audio signal from the audio signal source via the mapped instance.
Aspect 51. A system for mediating an audio source for a meeting on a communications and collaboration platform, the system comprising:
an audio source detection module configured to identify at least one active audio source;
an instance detection module configured to map the at least one active audio source to one or more instances connected to the communications and collaboration platform to determine a mapped instance; and
an audio control module configured to:
determine at least one audio signal device from a plurality of devices to capture audio from the active audio source; and
configure an audio system to manipulate an active audio signal from the audio signal device via the mapped instance,
wherein the system further comprises one or more user devices and one or more peripheral devices coupled to the user devices.
Aspect 52. The system of Aspect 51, wherein the peripheral devices are configured to receive sensing data from the user devices.
Aspect 53. The system of Aspect 51 or 52, wherein the peripheral devices each include a sensing device.
Aspect 54. An electronic meeting tool for mediating an audio source for a meeting on a communications and collaboration platform, the tool comprising:
one or more peripheral devices adapted to couple one or more user devices to the communications and collaboration platform,
wherein the peripheral devices are further configured to:
Aspect 55. The tool of Aspect 54, wherein the peripheral devices are configured to receive the sensing data from the user devices.
Aspect 56. The tool of Aspect 54 or 55, wherein the peripheral devices are configured to present, via a virtual audio device, mediated audio signals to the communications and collaboration platform.
Related U.S. Application Data: U.S. Provisional Application No. 63/492,167, filed March 2023.