SYSTEM AND METHOD FOR ESTABLISHING CALL AUDIO SHARING USING BLUETOOTH LOW ENERGY AUDIO TECHNOLOGY

Abstract
Embodiments herein provide a method for call sharing between Low Energy (LE) audio headsets. The method includes receiving, by a first LE audio headset connected with a user device over a first Connected Isochronous Group (CIG), a request message from the user device for sharing a call received at the user device with at least one second LE audio headset. The method includes establishing, by the first LE audio headset, a combined call between the first LE audio headset and the at least one second LE audio headset using a second CIG to connect the first LE audio headset with the at least one second LE audio headset.
Description
FIELD

The present disclosure relates to an electronic device, and more specifically to a method and a Low Energy (LE) audio headset for sharing a call with other LE audio headsets.


BACKGROUND

The Bluetooth Special Interest Group (SIG) introduced isochronous channels to transfer time-bounded data between Low Energy (LE) audio devices. A Connected Isochronous Stream (CIS) connection is a connection established by a central device (e.g. a smartphone) with a peripheral device (e.g. an LE audio headset) using the isochronous channels. In the case of True Wireless Stereo (TWS) earbuds, the central device creates a CIS connection with each of the left and right earbuds. Both CIS connections are part of a group called a Connected Isochronous Group (CIG), in which the two CIS connections are synchronized to timing assigned by the central device.
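As an illustration of this grouping, a CIG holding two synchronized CIS connections can be modeled with a small sketch. The class and field names are hypothetical, and the 10 ms ISO interval is only an example value, not a requirement of the Bluetooth specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CIS:
    """One Connected Isochronous Stream between the central and a peripheral."""
    cis_id: int
    peripheral: str  # e.g. "left earbud"

@dataclass
class CIG:
    """A group of CIS connections sharing timing assigned by the central."""
    cig_id: int
    iso_interval_us: int  # time between consecutive CIG events (illustrative)
    streams: List[CIS] = field(default_factory=list)

    def add_cis(self, peripheral: str) -> CIS:
        # Each new CIS inherits the group's timing; IDs are assigned in order.
        cis = CIS(cis_id=len(self.streams), peripheral=peripheral)
        self.streams.append(cis)
        return cis

# A TWS pair: the central creates one CIS per earbud inside a single CIG.
cig = CIG(cig_id=0, iso_interval_us=10_000)
cig.add_cis("left earbud")
cig.add_cis("right earbud")
```

Because both streams live in the same group, a renderer can use the group's common timing to play the left and right audio at the same instant.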



FIG. 1 illustrates a problem in conventional LE audio headsets. Consider that a pair of primary LE audio headsets (30) and a user device (10) (e.g. a smartphone) are used by a user-1 (31), whereas a pair of secondary LE audio headsets (300A) is used by a user-2 (41), and a pair of secondary LE audio headsets (300B) is used by a user-3 (51). The user device (10) is connected with the pair of primary LE audio headsets (30) using two CIS connections (CIS1, CIS2) established from the user device (10), where the CIS1 and the CIS2 belong to a primary CIG (20). Consider that the user device (10) receives a call from a contact named Alice. The pairs of secondary LE audio headsets (300A-300B) are nearby the pair of primary LE audio headsets (30). The user-1 (31) wants to add the user-2 (41) and the user-3 (51) to the ongoing call. Even though the user-2 (41) and the user-3 (51) have their own LE audio headsets (300A-300B) and are nearby the primary LE audio headsets (30), the primary LE audio headsets (30) are not capable of sharing the call with the LE audio headsets (300A-300B). Hence, the user-1 (31) puts the call on speaker mode in the user device (10) and allows the user-2 (41) and the user-3 (51) to listen/speak to the caller.


Since each user is sitting at a different distance/orientation from the user device (10), putting the call on speaker mode induces several problems: the users may not hear the audio in the call properly. Also, the call will not be clearly audible at the caller's end, as considerable noise is added when the users are talking on the speaker call. Persons using hearing aids will not be comfortable talking with the call put on speaker mode. Also, another person located near the user device (10) can listen to the discussion in the call, which degrades the privacy of the call. Thus, it is desired to provide a solution for the aforementioned problems.


SUMMARY

An embodiment of the present disclosure relates to a method for call sharing between Low Energy (LE) audio headsets, the method may be executed by a processor and include receiving, by a first LE audio headset connected to a user device, a request message for sharing a call received at the user device with at least one second LE audio headset, wherein the first LE audio headset is connected to the user device over a first Connected Isochronous Group (CIG); and establishing, by the first LE audio headset, a combined call between the first LE audio headset and the at least one second LE audio headset using a second CIG to connect the first LE audio headset with the at least one second LE audio headset.


An embodiment of the present disclosure relates to a Low Energy (LE) audio headset for call sharing. The LE audio headset may include a memory, a processor, and a call sharing controller coupled to the memory and the processor. The call sharing controller may be configured for receiving a request message for sharing a call received at a user device with at least one second LE audio headset, wherein the LE audio headset is connected with the user device over a first Connected Isochronous Group (CIG), and establishing a combined call between the LE audio headset and the at least one second LE audio headset using a second CIG to connect the LE audio headset with the at least one second LE audio headset.


An embodiment of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for call sharing between Low Energy (LE) audio headsets. The non-transitory computer-readable storage medium stores instructions that cause a processor to receive, by a first LE audio headset connected to a user device, a request message for sharing a call received at the user device with at least one second LE audio headset, wherein the first LE audio headset is connected to the user device over a first Connected Isochronous Group (CIG); and establish, by the first LE audio headset, a combined call between the first LE audio headset and the at least one second LE audio headset using a second CIG to connect the first LE audio headset with the at least one second LE audio headset.


Accordingly, the embodiments herein provide the first LE audio headset for call sharing. The first LE audio headset includes a call sharing controller, a memory, and a processor, where the call sharing controller is coupled to the memory and the processor. The call sharing controller is configured for receiving the request message from the user device for sharing the call received at the user device with the second LE audio headsets, where the first LE audio headset is connected with the user device over the first CIG. The call sharing controller is configured for connecting to the second LE audio headsets over the second CIG for establishing the combined call between the first LE audio headset and the second LE audio headsets.


These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments, and the embodiments herein include all such modifications.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present disclosure are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:



FIG. 1 is a diagram illustrating a problem in conventional LE audio headsets, according to the prior art;



FIG. 2 is a block diagram of a first LE audio headset for sharing a call to other LE audio headsets, according to an embodiment as disclosed herein;



FIG. 3 is a flow diagram illustrating a method for call sharing between the LE audio headsets, according to an embodiment as disclosed herein;



FIG. 4 is a diagram illustrating signaling between a user device, the first LE audio headset, and second LE audio headsets for call sharing based on a trigger at the user device, according to an embodiment as disclosed herein;



FIG. 5 illustrates an example scenario of call sharing based on the trigger at the user device, according to an embodiment as disclosed herein;



FIG. 6 is a diagram illustrating signaling between a user device, the first LE audio headset, and second LE audio headsets for call sharing based on a trigger at the first LE audio headset, according to an embodiment as disclosed herein;



FIG. 7 illustrates an example scenario of call sharing based on the trigger at the first LE audio headset, according to an embodiment as disclosed herein;



FIG. 8 is a flow diagram illustrating a method for call sharing by the user device, according to an embodiment as disclosed herein;



FIG. 9 is a flow diagram illustrating a method for call sharing by the first LE audio headset, according to an embodiment as disclosed herein;



FIG. 10 illustrates a timing diagram of synchronized rendering of incoming audio from a user device in a first CIG event to both the first LE audio headset and all second LE audio headsets at a second CIG synchronization point, according to an embodiment as disclosed herein;



FIG. 11 is a flow diagram illustrating a method for determining the synchronization delay of the second CIG for broadcasting the audio of the call received from the user device, according to an embodiment as disclosed herein;



FIG. 12 illustrates a timing diagram of synchronized rendering of incoming audio from the first LE audio headset in a second CIG event to both the second LE audio headsets and the user device at a second CIG synchronization point, according to an embodiment as disclosed herein;



FIG. 13 is a flow diagram illustrating a method for determining the synchronization delay of the second CIG for broadcasting the audio generated at the first LE audio headset, according to an embodiment as disclosed herein;



FIG. 14 illustrates a timing diagram of mixing incoming audio from the second LE audio headsets in a second CIG event with incoming audio from the user device in first CIG event M+1 at the first CIG event M+1 synchronization point, and rendering the audio to the other devices at the CIG event N+1 synchronization point, according to an embodiment as disclosed herein; and



FIG. 15 is a flow diagram illustrating a method for determining the synchronization delay of the second CIG for receiving and broadcasting the audio generated at the user device, and the second LE audio headsets, according to an embodiment as disclosed herein.





DETAILED DISCLOSURE

The principal object of the embodiments herein is to provide a method and a Low Energy (LE) audio headset for sharing a call with other LE audio headsets. The proposed method allows a user to share a call from a first (also referred to herein as primary) LE audio headset of the user to one or more second (also referred to herein as secondary) LE audio headsets of nearby selected members. Thus, the user need not put the call on speaker mode to allow nearby members to speak/listen to a caller. Because the call is not put on speaker mode, the audio of the call is not delivered to a person whom the user does not wish to listen/speak in the call, which enhances the privacy of the call.


Another object of the embodiments herein is to flawlessly synchronize voice in the call by allowing the first LE audio headset to create a second CIG comprising a user device (e.g. a smartphone), the first LE audio headset, and the second LE audio headsets, and share the call with the members in the second CIG. The first LE audio headset determines a synchronization delay based on a source of audio in the call and the number of members in the second CIG. Also, the first LE audio headset optimizes codec configuration settings and QoS parameters in the second CIG based on the number of second LE audio headsets to reduce latency. Thus, the first LE audio headset is able to stream the audio of the call to all members in the second CIG and the caller in a time-synchronized manner based on the synchronization delay.


Another object of the embodiments herein is to preconfigure the second LE audio headsets with the user device. Further, whenever the call is detected, the first LE audio headset immediately shares the call with the preconfigured second LE audio headsets that are nearby the first LE audio headset based on a user gesture performed on the first LE audio headset, which enhances the user experience.


Another object of the embodiments herein is to assist people with a hearing disability by directly transferring the call audio to hearing aids enabled with the proposed method, since people with a hearing disability face difficulty in hearing the voice when the call is put on speaker mode.


The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term “or” as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.


As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as managers, units, modules, hardware components or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.


The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents, and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms primary, secondary, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.


Throughout this disclosure, the terms “LE audio headset” and “earbud” are used interchangeably and mean the same.


Accordingly, the embodiments herein provide a method for call sharing between Low Energy (LE) audio headsets. The method includes receiving, by a first LE audio headset connected with a user device over a first Connected Isochronous Group (CIG), a request message from the user device for sharing a call received at the user device with second LE audio headsets. The method includes connecting, by the first LE audio headset, to the second LE audio headsets over a second CIG for establishing a combined call between the first LE audio headset and the second LE audio headsets.


Accordingly, the embodiments herein provide the first LE audio headset for call sharing. The first LE audio headset includes a call sharing controller, a memory, and a processor, where the call sharing controller is coupled to the memory and the processor. The call sharing controller is configured for receiving the request message from the user device for sharing the call received at the user device with the second LE audio headsets, where the first LE audio headset is connected with the user device over the first CIG. The call sharing controller is configured for connecting to the second LE audio headsets over the second CIG for establishing the combined call between the first LE audio headset and the second LE audio headsets.


Unlike existing methods and systems, the proposed method allows a user to share the call from the first LE audio headset of the user to second LE audio headsets of nearby selected members. Thus, the user need not put the call on speaker mode to allow nearby members to speak/listen to a caller. Because the call is not put on speaker mode, the audio of the call is not delivered to a person whom the user does not wish to listen/speak in the call, which enhances the privacy of the call.


Unlike existing methods and systems, the first LE audio headset flawlessly synchronizes voice in the call by creating the second CIG comprising the user device, the first LE audio headset, and the second LE audio headsets, and sharing the call with the members in the second CIG. Further, the first LE audio headset determines a synchronization delay based on a source of audio in the call and the number of members in the second CIG. Also, the first LE audio headset optimizes codec configuration settings and QoS parameters in the second CIG based on the number of second LE audio headsets to reduce latency. Thus, the first LE audio headset is able to stream the audio of the call to all members in the second CIG and the caller in a time-synchronized manner based on the synchronization delay.


Unlike existing methods and systems, the proposed method allows the user device (e.g. a smartphone) to preconfigure the second LE audio headsets. Whenever the call is detected, the first LE audio headset immediately shares the call with the preconfigured second LE audio headsets that are nearby the first LE audio headset based on a user gesture performed on the first LE audio headset, which enhances the user experience.


Also, the proposed method is useful for people with a hearing disability by directly transferring the call audio to hearing aids enabled with the proposed method, since people with a hearing disability face difficulty in hearing the voice when the call is put on speaker mode.


Referring now to the drawings, and more particularly to FIGS. 2 through 15, there are shown preferred embodiments.



FIG. 2 is a block diagram of a first LE audio headset (100) for sharing a call with other LE audio headsets (300A-300B), according to an embodiment as disclosed herein. The other LE audio headsets are referred to as second LE audio headsets (300A-300B) in this disclosure (refer to FIG. 4). Examples of the first LE audio headset (100) and the other LE audio headsets include, but are not limited to, True Wireless Stereo (TWS) earbud pairs, wireless earbuds, Bluetooth earphones, etc. In an embodiment, the first LE audio headset (100) includes a call sharing controller (110), a memory (120), a processor (130), a communicator (140), and a sensor (150). The sensor (150) is configured to receive user input (e.g. a head gesture, a finger tap) and provide a trigger to the first LE audio headset (100) for sharing the call with the second LE audio headsets (300A-300B) pre-configured with the first LE audio headset (100).


The call sharing controller (110) is implemented by processing circuitry such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.


The first LE audio headset (100) is connected with a user device (200) over a first Connected Isochronous Group (CIG). Examples of the user device (200) include, but are not limited to, a smartphone, a tablet computer, a Personal Digital Assistant (PDA), a desktop computer, an Internet of Things (IoT) device, a wearable device, etc. The call sharing controller (110) receives, from the user device (200), a request message for sharing a call received at the user device (200) with second LE audio headsets (300A-300B), where the call received at the user device (200) is an ongoing call or an incoming call. In an embodiment, the call sharing controller (110) receives a gesture from a user. Further, the call sharing controller (110) sends a call sharing enable request message to the user device (200), where the user device (200), upon receiving the call sharing enable request message, automatically selects the second LE audio headsets (300A-300B) pre-configured at the user device (200) and located in proximity of the first LE audio headset (100), and sends the request message. Further, the call sharing controller (110) receives the request message from the user device (200). Upon receiving the request message, the call sharing controller (110) connects to the second LE audio headsets (300A-300B) over a second CIG for establishing a combined call between the first LE audio headset (100) and the second LE audio headsets (300A-300B).
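The control flow described above can be outlined as a minimal sketch. The class and method names here are hypothetical placeholders for the processing-circuitry implementation of the call sharing controller (110); they are illustrative only.

```python
class CallSharingController:
    """Sketch of the request-handling flow on the first LE audio headset.

    All names are illustrative; the real controller is implemented in
    hardware/firmware processing circuitry, not application Python.
    """

    def __init__(self):
        self.second_cig_members = []

    def on_user_gesture(self, user_device):
        # A gesture on the headset asks the user device to start sharing;
        # the device then selects nearby pre-configured second headsets.
        return user_device.request_call_sharing()

    def on_request_message(self, headset_list):
        # The request message carries the second LE audio headsets with
        # which the combined call should be established.
        self.second_cig_members = list(headset_list)
        return self.establish_combined_call()

    def establish_combined_call(self):
        # Placeholder for second-CIG creation (see FIG. 4, steps 403-411).
        return {"cig": "second", "members": self.second_cig_members}
```

The key point the sketch captures is that the user device only selects the members; the first headset itself establishes the second CIG.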


In an embodiment, the call sharing controller (110) is configured for optimizing codec configuration settings and Quality of Service (QoS) parameters in the second CIG based on a number of the second LE audio headsets (300A-300B) to reduce latency in LE audio communication.
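One plausible shape for this optimization is a lookup keyed on the member count: the more second headsets share the CIG, the smaller the SDUs and the fewer the retransmissions, so that total air time stays bounded. The numeric values below are purely illustrative assumptions, not settings from this disclosure or the Bluetooth specification.

```python
def pick_qos(num_second_headsets: int) -> dict:
    """Illustrative QoS selection for the second CIG.

    More members -> smaller SDUs and fewer retransmissions, keeping the
    total CIG air time (and hence latency) bounded. Example values only.
    """
    if num_second_headsets <= 1:
        return {"sdu_bytes": 80, "retransmissions": 4, "max_latency_ms": 20}
    if num_second_headsets <= 3:
        return {"sdu_bytes": 60, "retransmissions": 2, "max_latency_ms": 15}
    return {"sdu_bytes": 40, "retransmissions": 1, "max_latency_ms": 10}
```

The trade-off sketched here is robustness versus latency: dropping retransmissions frees air time for additional CIS connections at the cost of less error recovery.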


In an embodiment, for receiving the request message for sharing the call received at the user device (200) with the second LE audio headsets (300A-300B), the call sharing controller (110) receives a synchronization delay of the first CIG from the user device (200). Further, the call sharing controller (110) receives, from the user device (200), the request message including a list of the second LE audio headsets (300A-300B) in proximity of the first LE audio headset (100) for creation of the second CIG.


In an embodiment, for connecting to the second LE audio headsets (300A-300B) over the second CIG for establishing the combined call between the first LE audio headset (100) and the second LE audio headsets (300A-300B), the call sharing controller (110) creates the second CIG between the first LE audio headset (100), the second LE audio headsets (300A-300B), and the user device (200). Further, the call sharing controller (110) determines a synchronization delay of the second CIG based on a number of the second LE audio headsets (300A-300B). Further, the call sharing controller (110) sends the synchronization delay of the second CIG to the second LE audio headsets (300A-300B) and the user device (200). Further, the call sharing controller (110) renders audio received from the user device (200) in the first LE audio headset (100) and the second LE audio headsets (300A-300B) based on the synchronization delay of the second CIG.
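The delay determination described above can be sketched as follows. The air-time and margin constants are illustrative assumptions (not values from this disclosure or the Bluetooth specification), and the function names are hypothetical: the idea is simply that the delay must cover every CIS in the second CIG plus decode/mix headroom, so that all members can render at one common instant.

```python
def second_cig_sync_delay_us(num_second_headsets: int,
                             cis_air_time_us: int = 2_500,
                             processing_margin_us: int = 1_000) -> int:
    """Illustrative synchronization delay for the second CIG.

    One CIS slot per second headset, plus a margin for decoding and
    mixing on the headsets. All constants are example values.
    """
    return num_second_headsets * cis_air_time_us + processing_margin_us

def render_timestamp_us(arrival_us: int, sync_delay_us: int) -> int:
    """Common render point: every member plays at arrival + sync delay."""
    return arrival_us + sync_delay_us
```

Distributing one delay value to all members means each device can compute the same render timestamp locally from its own packet arrival time.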


In an embodiment, for rendering the audio of the call together in the first LE audio headset (100) and the second LE audio headsets (300A-300B) based on the synchronization delay of the second CIG, the call sharing controller (110) receives audio from the second LE audio headsets (300A-300B). Further, the call sharing controller (110) renders the received audio and the audio of the call together in the first LE audio headset (100) and the user device (200) based on the synchronization delay of the second CIG.


In another embodiment, for rendering the audio of the call together in the first LE audio headset (100) and the second LE audio headsets (300A-300B) based on the synchronization delay of the second CIG, the call sharing controller (110) receives audio of a user (i.e. the user-1). Further, the call sharing controller (110) renders the received audio and the audio of the call together in the second LE audio headsets (300A-300B) and the user device (200) based on the synchronization delay of the second CIG.


In another embodiment, for rendering the received audio and the audio of the call together in the first LE audio headset (100) and the user device (200) based on the synchronization delay of the second CIG, the call sharing controller (110) renders the received audio and the audio of the call together in the first LE audio headset (100), the user device (200), and the other second LE audio headsets (300A-300B) based on the synchronization delay of the second CIG when multiple second LE audio headsets (300A-300B) are present in the second CIG.
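The mixing implied by the embodiments above, where audio from the second LE audio headsets is combined with the call audio before synchronized rendering, can be sketched as a simple PCM mixer. The sample values and names are illustrative; a real implementation would mix decoded LC3 frames in the audio pipeline.

```python
def mix_frames(frames):
    """Sum aligned PCM samples and clip to the signed 16-bit range."""
    length = max(len(f) for f in frames)
    mixed = []
    for i in range(length):
        total = sum(f[i] for f in frames if i < len(f))
        mixed.append(max(-32768, min(32767, total)))
    return mixed

# Caller audio arriving over the first CIG, plus two second-headset streams
# arriving over the second CIG (illustrative sample values).
call_audio  = [1000, -2000, 3000]
user2_audio = [500, 500, 500]
user3_audio = [-100, 0, 100]
mixed = mix_frames([call_audio, user2_audio, user3_audio])
```

The mixed frame is then scheduled at the common render point derived from the second CIG synchronization delay, so all listeners hear the same combined audio at the same instant.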


The memory (120) stores details of the second LE audio headsets (300A-300B) pre-configured with the first LE audio headset (100). The memory (120) stores instructions to be executed by the processor (130). The memory (120) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (120) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (120) is non-movable. In some examples, the memory (120) can be configured to store larger amounts of information than its storage space. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). The memory (120) can be an internal storage unit or it can be an external storage unit of the first LE audio headset (100), a cloud storage, or any other type of external storage.


The processor (130) is configured to execute instructions stored in the memory (120). The processor (130) may be a general-purpose processor, such as a Central Processing Unit (CPU), an Application Processor (AP), or the like, or a graphics-only processing unit such as a Graphics Processing Unit (GPU) or a Visual Processing Unit (VPU). The processor (130) may include multiple cores to execute the instructions. The communicator (140) is configured for communicating internally between hardware components in the first LE audio headset (100). Further, the communicator (140) is configured to facilitate communication between the first LE audio headset (100) and other devices via one or more networks (e.g. a radio technology). The communicator (140) includes an electronic circuit specific to a standard that enables wired or wireless communication.


Although FIG. 2 shows the hardware components of the first LE audio headset (100), it is to be understood that other embodiments are not limited thereto. In other embodiments, the first LE audio headset (100) may include fewer or more components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of any embodiments of the present disclosure. One or more components can be combined to perform the same or a substantially similar function for sharing a call with the other LE audio headsets (300A-300B).



FIG. 3 is a flow diagram (300) illustrating a method for call sharing between the LE audio headsets (100, 300A-300B), according to an embodiment as disclosed herein. In an embodiment, the method allows the call sharing controller (110) to perform operations 301-302 of the flow diagram (300). At operation 301, the method includes receiving, from the user device (200), the request message for sharing the call received at the user device (200) with the second LE audio headsets (300A-300B), where the first LE audio headset (100) is connected with the user device (200) over the first CIG. At operation 302, the method includes connecting to the second LE audio headsets (300A-300B) over the second CIG for establishing the combined call between the first LE audio headset (100) and the second LE audio headsets (300A-300B).



FIG. 4 is a diagram illustrating signaling between the user device (200), the first LE audio headset (100), and the second LE audio headsets (300A-300B) for call sharing based on the trigger at the user device (200), according to an embodiment as disclosed herein. Consider that the first LE audio headset (100) and the user device (200) are used by a user-1, whereas the second LE audio headset (300A) is used by a user-2, and the second LE audio headset (300B) is used by a user-3. The first LE audio headset (100) sends a targeted announcement to the user device (200) indicating the availability of the first LE audio headset (100) for receiving audio for the context types: conversational (for receiving call audio) and immediate alert (e.g. for receiving a ringtone).


Later, an incoming call is detected on the user device (200), and the user device (200) immediately establishes an LE Asynchronous Connection-Less (LE-ACL) connection with the first LE audio headset (100) to send a call state (i.e. incoming) notification to the first LE audio headset (100). Further, the user device (200) configures codec parameters and enables an Audio Stream Endpoint (ASE) for the context type "immediate alert". Further, the user device (200) establishes an LE CIG by creating a Connected Isochronous Stream (CIS) with the first LE audio headset (100), where the ringtone is audible on the first LE audio headset (100). When the user-1 accepts the call, the ASE state moves to a streaming state. Meanwhile, consider that the second LE audio headsets (300A-300B) send a general announcement indicating that the second LE audio headsets (300A-300B) are available to receive audio for the above-mentioned context types.
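The ASE state progression referred to above (codec configuration, QoS configuration, enabling, and then streaming once the call is accepted) can be modeled, in simplified form, as a small transition table. The operation names below paraphrase the ASCS control operations rather than reproducing exact specification opcodes.

```python
# Simplified ASE state transitions, paraphrasing the ASCS state machine.
ASE_TRANSITIONS = {
    ("idle", "config_codec"): "codec_configured",
    ("codec_configured", "config_qos"): "qos_configured",
    ("qos_configured", "enable"): "enabling",
    ("enabling", "receiver_start_ready"): "streaming",
}

def step(state: str, operation: str) -> str:
    """Advance the ASE by one control operation (raises on invalid moves)."""
    return ASE_TRANSITIONS[(state, operation)]

# Walk one ASE from idle to streaming, as in the call-accept sequence.
state = "idle"
for op in ("config_codec", "config_qos", "enable", "receiver_start_ready"):
    state = step(state, op)
```

Modeling the states as a table makes invalid orderings (e.g. enabling before QoS configuration) fail immediately, which mirrors how an ASCS server rejects out-of-order control operations.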


Consider that, at 401, the user-1 enables the call sharing feature at the user device (200) and selects the second LE audio headsets (300A-300B) upon receiving the general announcement. At 402, the user device (200) sends a vendor-specific command to the first LE audio headset (100) to establish the second CIG from the first LE audio headset (100). The second CIG consists of the second LE audio headsets (300A-300B), the first LE audio headset (100), and the user device (200).


At 403-404, the first LE audio headset (100) configures the codec and Quality of Service (QoS), and enables an ASE with the context type "Conversational" at the second LE audio headset (300A). At 405, the second LE audio headset (300A) enables an Audio Stream Control Service (ASCS) Audio Stream Endpoint (ASE) Identifier (ID) state. At 406-407, the first LE audio headset (100) configures the codec and the QoS, and enables the ASE with the context type "Conversational" at the second LE audio headset (300B). At 408, the second LE audio headset (300B) enables the ASCS ASE ID state. At 409, the first LE audio headset (100) creates the second CIG and sends the synchronization delay to the second LE audio headsets (300A-300B) and the user device (200), where the synchronization delay is sent for rendering the audio to all the second LE audio headsets (300A-300B), the user device (200), and a caller at the same time.
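Steps 403-409 can be summarized as a sketch that replays the per-headset configuration, then the CIG creation, and finally the distribution of the synchronization delay. The message strings and the 6000 µs delay are illustrative assumptions used only to show the ordering.

```python
def establish_second_cig(second_headsets, sync_delay_us):
    """Replay steps 403-409 as an ordered message log (illustrative).

    Each second headset is configured (codec, QoS, ASE enable) before the
    second CIG is created; the synchronization delay is then sent to every
    member, including the user device.
    """
    log = []
    for hs in second_headsets:
        log += [f"config_codec:{hs}", f"config_qos:{hs}", f"enable_ase:{hs}"]
    log.append("create_second_cig")
    for target in second_headsets + ["user_device"]:
        log.append(f"send_sync_delay:{target}:{sync_delay_us}")
    return log

log = establish_second_cig(["300A", "300B"], 6_000)
```

The ordering matters: the delay can only be computed and distributed after the CIG exists, because it depends on how many CIS connections the group carries.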


At 410-411, the first LE audio headset (100) creates multiple CIS connections with each second LE audio headset (300A-300B) and sets up an audio data path. The first LE audio headset (100) forwards the incoming audio received from the user device (200) to the second LE audio headsets (300A-300B). Also, the first LE audio headset (100) forwards the incoming audio coming from the second LE audio headsets (300A-300B) to the user device (200).



FIG. 5 illustrates an example scenario of call sharing based on the trigger at the user device (200), according to an embodiment as disclosed herein. Consider that a pair of first LE audio headsets (100) and the user device (200) are used by the user-1 (31), whereas a pair of second LE audio headsets (300A) is used by the user-2 (41), and a pair of second LE audio headsets (300B) is used by the user-3 (51). The user device (200) is connected with the pair of first LE audio headsets (100) using two CIS connections (CIS1, CIS2) established from the user device (200), where the CIS1 and the CIS2 belong to the first CIG (20). At 501, the user device (200) receives the call from a contact named Alice. The pairs of the second LE audio headsets (300A-300B) are nearby the pair of first LE audio headsets (100). The user-1 (31) wants to add the user-2 (41) and the user-3 (51) to the call. At 502, the user-1 (31) opens the call sharing user interface in the user device (200) and selects the pairs of the second LE audio headsets (300A-300B) for sharing the call. At 503, the user device (200) requests the pair of first LE audio headsets (100) to share the call with the pairs of the second LE audio headsets (300A-300B). Further, the pair of first LE audio headsets (100) creates the second CIG (60), adds the pairs of the second LE audio headsets (300A-300B) and the user device (200) to the second CIG (60), and establishes two CIS connections (CIS3, CIS4) with the pair of second LE audio headsets (300A) and two CIS connections (CIS5, CIS6) with the pair of second LE audio headsets (300B).



FIG. 6 is a sequential diagram illustrating signalling between the user device (200), the first LE audio headset (100), and the second LE audio headsets (300A-300B) for call sharing based on the trigger at the first LE audio headset (100), according to an embodiment as disclosed herein. Consider that the first LE audio headset (100) and the user device (200) are used by the user-1, whereas the second LE audio headset (300A) is used by the user-2, and the second LE audio headset (300B) is used by the user-3. The first LE audio headset (100) sends the targeted announcement indicating the availability of the first LE audio headset (100) to the user device (200) for receiving the audio for the context types: conversational (for receiving call audio) and immediate alert (for receiving a ringtone). Also, the second LE audio headsets (300A-300B) are pre-configured with the first LE audio headset (100) via the user device (200).


Later, the incoming call is detected on the user device (200), which immediately establishes the LE-ACL connection with the first LE audio headset (100) to send the call state, i.e. the incoming-call notification, to the first LE audio headset (100). Further, the user device (200) configures the codec parameters and enables the ASE for the context type "immediate alert". Further, the user device (200) establishes the LE CIG by creating the CIS with the first LE audio headset (100), whereupon the ringtone is audible on the first LE audio headset (100). When the user-1 accepts the call, the ASE state moves to the streaming state. Meanwhile, consider that the second LE audio headsets (300A-300B) send the general announcement indicating that they are available to receive the audio for the above-mentioned context types.


The user-1 performs the gesture on the first LE audio headset (100). Upon detecting the gesture at 601, the first LE audio headset (100) sends the vendor specific command to the user device (200) to enable the call sharing at 602. At 603, the user device (200) automatically enables the call sharing feature by selecting the pre-configured second LE audio headsets (300A-300B) to be part of the call sharing group. At 604, the user device (200) sends the vendor specific command to the first LE audio headset (100) to establish the second CIG from the first LE audio headset (100). The second CIG consists of the second LE audio headsets (300A-300B), the first LE audio headset (100), and the user device (200).


At 605-606, the first LE audio headset (100) configures the codec and the QoS parameters, and enables the ASE with the context type "Conversational" at the second LE audio headset (300A). At 607, the ASE of the second LE audio headset (300A) enters the ASCS enabling state. At 608-609, the first LE audio headset (100) configures the codec and the QoS parameters, and enables the ASE with the context type "Conversational" at the second LE audio headset (300B). At 610, the ASE of the second LE audio headset (300B) enters the ASCS enabling state. At 611, the first LE audio headset (100) creates the second CIG and sends the synchronization delay to the second LE audio headsets (300A-300B) and the user device (200), where the synchronization delay is sent so that the audio is rendered to all the second LE audio headsets (300A-300B), the user device (200), and a caller at the same time.


At 612-613, the first LE audio headset (100) creates the multiple CIS connections with each second LE audio headset (300A-300B) and sets up the audio data path. The first LE audio headset (100) forwards the incoming audio received from the user device (200) to the second LE audio headsets (300A-300B). Also, the first LE audio headset (100) forwards the audio coming from the second LE audio headsets (300A-300B) to the user device (200).



FIG. 7 illustrates an example scenario of call sharing based on the trigger at the first LE audio headset (100), according to an embodiment as disclosed herein. Consider that the pair of first LE audio headsets (100) and the user device (200) are used by the user-1 (31), whereas the pair of second LE audio headsets (300A) is used by the user-2 (41), and the pair of second LE audio headsets (300B) is used by the user-3 (51). The user device (200) is connected with the pair of first LE audio headsets (100) using the two CIS connections (CIS1, CIS2) established from the user device (200), where the CIS1 and the CIS2 belong to the first CIG (20). At 701, the pairs of the second LE audio headsets (300A-300B) are pre-configured with the first LE audio headset (100) via the user device (200) for sharing the call. At 702, the user device (200) receives the call from the contact named Alice. The pairs of the second LE audio headsets (300A-300B) are nearby the pair of first LE audio headsets (100). The user-1 (31) wants to add the user-2 (41) and the user-3 (51) to the call. Further, the user-1 (31) performs the gesture on at least one of the first LE audio headsets (100) for sharing the call. The gesture can be a backward-forward head motion (703A), a left-right head motion (703B), or a finger tap (703C). At 704, the pair of first LE audio headsets (100) creates the second CIG (60), adds the pre-configured second LE audio headsets (300A-300B) and the user device (200) to the second CIG (60), and establishes the two CIS connections (CIS3, CIS4) with the pair of second LE audio headsets (300A) and the two CIS connections (CIS5, CIS6) with the pair of second LE audio headsets (300B).



FIG. 8 is a flow diagram (800) illustrating a method for call sharing by the user device (200), according to an embodiment as disclosed herein. The user device (200) performs the operations 801-803 of the flow diagram (800). At 801, the method includes creating the first CIG and sharing the synchronization delay of the first CIG with the first LE audio headset (100). At 802, the method includes sending the vendor specific command to the first LE audio headset (100) to trigger the creation of the second CIG. At 803, the method includes transmitting Packet Data Units (PDUs) of the audio received at the user device (200) in a first CIG event. The audio received at the user device (200) is sent to the first LE audio headset (100) in the first CIG event. A CIG event consists of the corresponding CIS events of the CISs currently making up that CIG. A CIS event is an opportunity for the Central and Peripheral devices to exchange CIS PDUs.
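The CIG/CIS event structure described above can be modelled with a short sketch; the type and function names (CIGEvent, CISEvent, transmit) are illustrative assumptions, not identifiers from the Bluetooth specification:

```python
# Illustrative model of the event structure: a CIG event is composed of one
# CIS event per CIS currently in the group, and each CIS event is an
# opportunity to exchange PDUs with one Peripheral.
from dataclasses import dataclass, field

@dataclass
class CISEvent:
    cis_id: int
    pdus: list = field(default_factory=list)  # PDUs queued for this event

@dataclass
class CIGEvent:
    cig_id: int
    cis_events: list  # one CISEvent per member CIS

def transmit(cig_event, pdu):
    # Operation 803 in miniature: queue the audio PDU on every CIS of the
    # CIG event (e.g. CIS1 and CIS2 toward the left and right earbuds).
    for cis_event in cig_event.cis_events:
        cis_event.pdus.append(pdu)
    return cig_event
```

A first CIG with CIS1 and CIS2 would thus carry the same call-audio PDU on both streams within one CIG event.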



FIG. 9 is a flow diagram (900) illustrating a method for call sharing by the first LE audio headset (100), according to an embodiment as disclosed herein. The first LE audio headset (100) performs the operations 901-909 of the flow diagram (900). At 901, the method includes receiving the synchronization delay of the first CIG from the user device (200). At 902, the method includes receiving the vendor specific command from the user device (200) to trigger the creation of the second CIG. At 903, the method includes creating the second CIG between the first LE audio headset (100), the second LE audio headsets (300A-300B), and the user device (200). At 904, the method includes sharing the synchronization delay of the second CIG with the second LE audio headsets (300A-300B) and the user device (200).


At 905, the method includes receiving the PDUs in the first CIG event. At 906, the method includes scheduling a second CIG event at a first CIG synchronization point. The second CIG event corresponds to all CIS events of all CISs which are part of the second CIG. The synchronization point is a time reference of an SDU that allows synchronization of isochronous data in multiple devices. At 907, if the PDU is received from the user device (200), then the method includes transmitting the PDU to the second LE audio headsets (300A-300B) in their respective CISs in the second CIG event. At 908, if the PDU is generated at the first LE audio headset (100), then the method includes transmitting the PDU to the second LE audio headsets (300A-300B) and the user device (200) in their respective CISs in the second CIG event. At 909, if the PDU is received from one or more second LE audio headsets (300A-300B) in its CIS event, then the method includes mixing the audio and transmitting the PDU to the second LE audio headsets (300A-300B) as well as the user device (200) in their respective CIS events in the upcoming second CIG event.
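The three routing rules above (PDU from the user device, PDU generated locally, PDU from a second headset) can be condensed into one decision function. The names (Source, route_pdu) and the Python modelling are illustrative assumptions, not part of the disclosure or the LE Audio specification:

```python
# Illustrative routing at the first LE audio headset for the upcoming
# second CIG event (operations 907-909).
from enum import Enum, auto

class Source(Enum):
    USER_DEVICE = auto()     # PDU arrived in the first CIG event (CIS1/CIS2)
    LOCAL_MIC = auto()       # PDU generated at the first LE audio headset
    SECOND_HEADSET = auto()  # PDU arrived from a second headset (CIS3-CIS6)

def route_pdu(source, second_headsets, user_device):
    """Return the destinations for a PDU in the second CIG event."""
    if source is Source.USER_DEVICE:
        # 907: forward call audio only to the second LE audio headsets
        return list(second_headsets)
    if source is Source.LOCAL_MIC:
        # 908: send locally generated audio to the headsets and user device
        return list(second_headsets) + [user_device]
    # 909: audio from a second headset is mixed, then sent to the second
    # headsets as well as the user device
    return list(second_headsets) + [user_device]
```

For instance, incoming call audio is never echoed back to the user device, whereas microphone audio from any headset reaches every other participant.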



FIG. 10 illustrates a timing diagram of synchronized rendering of incoming audio from the user device (200) in the first CIG event to both the first LE audio headset (100) as well as all second LE audio headsets (300A-300B) at a second CIG synchronization point, according to an embodiment as disclosed herein. C represents a central role device. P represents a peripheral role device. The first CIG is a CIG between the user device (200) and the pair of first LE audio headsets (100), in which the user device (200) is the central role device, and the pair of first LE audio headsets (100) is the peripheral role device. The second CIG is a CIG between the pair of first LE audio headsets (100), the user device (200), and the pairs of second LE audio headsets (300A-300B), in which the pair of first LE audio headsets (100) is the central role device, and the user device (200) and the pairs of second LE audio headsets (300A-300B) are the peripheral role devices. CIS1 and CIS2 are connected isochronous streams between the user device (200) and the pair of first LE audio headsets (100). CIS3 and CIS4 are connected isochronous streams between the pair of first LE audio headsets (100) and one pair of the second LE audio headsets (300A). CIS5 and CIS6 are connected isochronous streams between the pair of first LE audio headsets (100) and another pair of the second LE audio headsets (300B). CISP is a connected isochronous stream, belonging to the second CIG, between the pair of first LE audio headsets (100) and the user device (200). CIG1_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the first CIG. CIG2_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the second CIG.
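One simple way to read the CIG_Sync_Delay definitions above is as the time from the CIG anchor point to the end of the last CIS transmission in the group. The following sketch, including the (offset, duration) modelling of CIS events, is an assumption for illustration and not a formula from the disclosure:

```python
# Illustrative model: each CIS event is (offset_us, duration_us) measured
# from the CIG anchor point; the synchronization point follows the last
# CIS transmission, so the sync delay is the latest end time.
def cig_sync_delay(cis_events):
    return max(offset + duration for offset, duration in cis_events)
```

For two sequentially scheduled 3750 µs CIS events (e.g. CIS1 then CIS2), the delay would span both events.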



FIG. 11 is a flow diagram (1100) illustrating a method for determining the synchronization delay of the second CIG for broadcasting the audio of the call received from the user device (200), according to an embodiment as disclosed herein. The first LE audio headset (100) performs the operations 1101-1105 of the flow diagram (1100). At 1101, the method includes receiving the incoming PDUs from the user device (200) in the first CIG event. At 1102, the method includes scheduling the second CIG event at the first CIG synchronization point. At 1103, the method includes transmitting the PDUs to the second LE audio headsets (300A-300B) in their respective CISs in the second CIG. At 1104, the method includes emptying the CIS event slot for the user device (200) in the second CIG. At 1105, the method includes rendering the audio in the first LE audio headset (100) as well as the second LE audio headsets (300A-300B) at a second CIG synchronization point. The CIG synchronization point is a time reference of an SDU that allows synchronization of isochronous data in multiple devices. The total synchronization delay is determined as the synchronization delay in the first CIG plus the synchronization delay in the second CIG.
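The total-delay rule stated above can be written out directly; the function name and the microsecond example values are illustrative assumptions:

```python
# Sketch of the FIG. 11 delay rule: incoming call audio traverses the first
# CIG (user device -> first headset) and then the second CIG (first headset
# -> second headsets), so the two synchronization delays add.
def total_sync_delay_incoming_us(cig1_sync_delay_us, cig2_sync_delay_us):
    return cig1_sync_delay_us + cig2_sync_delay_us
```

With an illustrative CIG1_Sync_Delay of 10000 µs and CIG2_Sync_Delay of 20000 µs, all devices would render the incoming audio 30000 µs after the first CIG anchor.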



FIG. 12 illustrates a timing diagram of synchronized rendering of the incoming audio from the first LE audio headset (100) in the second CIG event to both the second LE audio headsets (300A-300B) as well as the user device (200) at the second CIG synchronization point, according to an embodiment as disclosed herein. C represents a central role device. P represents a peripheral role device. The first CIG is a CIG between the user device (200) and the pair of first LE audio headsets (100), in which the user device (200) is the central role device, and the pair of first LE audio headsets (100) is the peripheral role device. The second CIG is a CIG between the pair of first LE audio headsets (100), the user device (200), and the pairs of second LE audio headsets (300A-300B), in which the pair of first LE audio headsets (100) is the central role device, and the user device (200) and the pairs of second LE audio headsets (300A-300B) are the peripheral role devices. CIS1 and CIS2 are connected isochronous streams between the user device (200) and the pair of first LE audio headsets (100). CIS3 and CIS4 are connected isochronous streams between the pair of first LE audio headsets (100) and one pair of the second LE audio headsets (300A). CIS5 and CIS6 are connected isochronous streams between the pair of first LE audio headsets (100) and another pair of the second LE audio headsets (300B). CISP is a connected isochronous stream, belonging to the second CIG, between the pair of first LE audio headsets (100) and the user device (200). CIG1_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the first CIG. CIG2_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the second CIG.



FIG. 13 is a flow diagram (1300) illustrating a method for determining the synchronization delay of the second CIG for broadcasting the audio generated at the first LE audio headset (100), according to an embodiment as disclosed herein. The first LE audio headset (100) performs the operations 1301-1304 of the flow diagram (1300). At 1301, the method includes emptying the slots in the first CIG event as there is no incoming data from the user device (200). At 1302, the method includes scheduling the second CIG event at the first CIG synchronization point. At 1303, the method includes transmitting the PDUs to the second LE audio headsets (300A-300B) and the user device (200) in their respective CISs in the second CIG. At 1304, the method includes rendering the audio to the second LE audio headsets (300A-300B) as well as the user device (200) at the second CIG synchronization point. The total synchronization delay is determined as the synchronization delay in the second CIG.



FIG. 14 illustrates a timing diagram of mixing the incoming audio from the second LE audio headsets (300A-300B) in the second CIG event N and the incoming audio from the user device (200) in the first CIG event M+1 at the first CIG event M+1 synchronization point, and rendering the audio to the other devices at the second CIG event N+1 synchronization point, according to an embodiment as disclosed herein. C represents a central role device. P represents a peripheral role device. The first CIG is a CIG between the user device (200) and the pair of first LE audio headsets (100), in which the user device (200) is the central role device, and the pair of first LE audio headsets (100) is the peripheral role device. The second CIG is a CIG between the pair of first LE audio headsets (100), the user device (200), and the pairs of second LE audio headsets (300A-300B), in which the pair of first LE audio headsets (100) is the central role device, and the user device (200) and the pairs of second LE audio headsets (300A-300B) are the peripheral role devices. CIS1 and CIS2 are connected isochronous streams between the user device (200) and the pair of first LE audio headsets (100). CIS3 and CIS4 are connected isochronous streams between the pair of first LE audio headsets (100) and one pair of the second LE audio headsets (300A). CIS5 and CIS6 are connected isochronous streams between the pair of first LE audio headsets (100) and another pair of the second LE audio headsets (300B). CISP is a connected isochronous stream, belonging to the second CIG, between the pair of first LE audio headsets (100) and the user device (200). CIG1_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the first CIG. CIG2_Sync_Delay is the maximum time in microseconds required for transmission of the PDUs of all CISs in the second CIG.



FIG. 15 is a flow diagram (1500) illustrating a method for determining the synchronization delay of the second CIG for receiving and broadcasting the audio generated at the user device (200) and the second LE audio headsets (300A-300B), according to an embodiment as disclosed herein. The first LE audio headset (100) performs the operations 1501-1504 of the flow diagram (1500). At 1501, the method includes receiving the PDU from the second LE audio headsets (300A-300B) in a second CIG event N. At 1502, the method includes receiving the PDU from the user device (200) in a first CIG event M+1. At 1503, the method includes mixing the audio at the first CIG event M+1 synchronization point. At 1504, the method includes rendering the data and the device's own data to all devices (100, 200, 300A-300B) at a second CIG event N+1 synchronization point. The total synchronization delay for the second LE audio headsets (300A) is determined as (2×synchronization delay in the second CIG)+synchronization delay in the first CIG−(NSE×1×minimum subevent length). NSE represents the number of subevents. The CIS event is an opportunity for the Central and Peripheral devices to exchange CIS PDUs. Each CIS event in turn contains the NSE subevents. Each subevent can be used to transmit a CIS PDU from the Central device to the Peripheral device, followed by a response from the Peripheral device to the Central device. The minimum subevent length is the minimum time interval of a subevent.


The total synchronization delay for the second LE audio headsets (300B) is determined as (2×synchronization delay in the second CIG)+synchronization delay in the first CIG−(NSE×3×minimum subevent length).
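The two formulas above differ only in the subevent multiplier (1 for the headsets (300A), 3 for the headsets (300B)), so they can be expressed as one parameterized function; the function name and the microsecond example values are illustrative assumptions:

```python
# Sketch of the FIG. 15 delay formulas: the mixed audio crosses the second
# CIG twice (uplink then downlink) plus the first CIG once, minus a credit
# of NSE x subevent_multiplier x minimum subevent length that depends on
# how early the headset's CIS subevents fall in the second CIG event.
def total_sync_delay_mixed_us(cig1_delay_us, cig2_delay_us, nse,
                              subevent_multiplier, min_subevent_len_us):
    """subevent_multiplier is 1 for headsets (300A) and 3 for (300B)."""
    return (2 * cig2_delay_us + cig1_delay_us
            - nse * subevent_multiplier * min_subevent_len_us)
```

With illustrative values CIG1_Sync_Delay = 10000 µs, CIG2_Sync_Delay = 20000 µs, NSE = 2, and a minimum subevent length of 500 µs, the headsets (300A) would see 49000 µs and the headsets (300B) 47000 µs.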


The various actions, acts, blocks, steps, operations, or the like in the flow diagrams (300, 800, 900, 1100, 1300, and 1500) may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the present disclosure.


The embodiments disclosed herein can be implemented using at least one hardware device and performing network management functions to control the elements.


The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the embodiments as described herein.

Claims
  • 1. A method for call sharing between Low Energy (LE) audio headsets, the method being executed by a processor, the method comprising: receiving, by a first LE audio headset connected to a user device, a request message for sharing a call received at the user device with at least one second LE audio headset, wherein the first LE audio headset is connected to the user device over a first Connected Isochronous Group (CIG); and establishing, by the first LE audio headset, a combined call between the first LE audio headset and the at least one second LE audio headset using a second CIG to connect the first LE audio headset with the at least one second LE audio headset.
  • 2. The method according to claim 1, wherein the receiving comprises: receiving, by the first LE audio headset, a first synchronization delay of the first CIG from the user device; and receiving, by the first LE audio headset, the request message, wherein the request message comprises a list of the at least one second LE audio headset in proximity of the first LE audio headset from the user device for creation of the second CIG.
  • 3. The method according to claim 1, wherein the establishing comprises: creating, by the first LE audio headset, the second CIG between the first LE audio headset, the at least one second LE audio headset, and the user device; determining, by the first LE audio headset, a second synchronization delay of the second CIG based on a number of the at least one second LE audio headset; sending, by the first LE audio headset, the second synchronization delay of the second CIG to the at least one second LE audio headset and the user device; and rendering, by the first LE audio headset, audio received from the user device in the first LE audio headset and the at least one second LE audio headset based on the second synchronization delay of the second CIG.
  • 4. The method according to claim 3, wherein the rendering comprises: receiving, by the first LE audio headset, audio from the at least one second LE audio headset; and rendering, by the first LE audio headset, the received audio and the audio from the at least one second LE audio headset together in the first LE audio headset, and the user device based on the second synchronization delay of the second CIG.
  • 5. The method according to claim 3, wherein the rendering comprises: receiving, by the first LE audio headset, audio of a user; and rendering, by the first LE audio headset, the received audio and the audio of the user together in the at least one second LE audio headset, and the user device based on the second synchronization delay of the second CIG.
  • 6. The method according to claim 4, wherein rendering the received audio and the audio of the user together comprises: rendering, by the first LE audio headset, the received audio and the audio from the at least one second LE audio headset together in the first LE audio headset, the user device, and another second LE audio headset based on the second synchronization delay of the second CIG when a plurality of second LE audio headsets are present in the second CIG.
  • 7. The method according to claim 1, wherein the method further comprises: optimizing, by the first LE audio headset, codec configuration settings and Quality of Service (QoS) parameters in the second CIG based on a number of the at least one second LE audio headset to reduce latency in LE audio communication.
  • 8. The method according to claim 1, wherein the receiving comprises: receiving, by the first LE audio headset, a gesture from a user; sending, by the first LE audio headset, a call sharing enable request message to the user device, wherein the user device automatically selects the at least one second LE audio headset pre-configured at the user device and located in proximity of the first LE audio headset in response to receiving the call sharing enable request message and sends the request message; and receiving, by the first LE audio headset, the request message from the user device.
  • 9. The method according to claim 1, wherein the call received at the user device is an ongoing call or an incoming call.
  • 10. A Low Energy (LE) audio headset for call sharing, the LE audio headset comprising: a memory; a processor; and a call sharing controller coupled to the memory and the processor, configured for: receiving a request message for sharing a call received at a user device with at least one second LE audio headset, wherein the LE audio headset is connected with the user device over a first Connected Isochronous Group (CIG), and establishing a combined call between the LE audio headset and the at least one second LE audio headset using a second CIG to connect the LE audio headset with the at least one second LE audio headset.
  • 11. The LE audio headset according to claim 10, wherein the receiving comprises: receiving a first synchronization delay of the first CIG from the user device; and receiving the request message comprising a list of the at least one second LE audio headset in proximity of the LE audio headset from the user device for creation of the second CIG.
  • 12. The LE audio headset according to claim 10, wherein the establishing comprises: creating the second CIG between the LE audio headset, the at least one second LE audio headset, and the user device; determining a second synchronization delay of the second CIG based on a number of the at least one second LE audio headset; sending the second synchronization delay of the second CIG to the at least one second LE audio headset and the user device; and rendering audio received from the user device in the LE audio headset and the at least one second LE audio headset based on the second synchronization delay of the second CIG.
  • 13. The LE audio headset according to claim 12, wherein the rendering comprises: receiving audio from the at least one second LE audio headset; and rendering the received audio and the audio from the at least one second LE audio headset together in the LE audio headset, and the user device based on the second synchronization delay of the second CIG.
  • 14. The LE audio headset according to claim 12, wherein the rendering comprises: receiving audio of a user; and rendering the received audio and the audio of the user together in the at least one second LE audio headset and the user device based on the second synchronization delay of the second CIG.
  • 15. The LE audio headset according to claim 13, wherein the rendering the received audio and the audio of the user together comprises: rendering the received audio and the audio of the user together in the LE audio headset, the user device, and another second LE audio headset based on the second synchronization delay of the second CIG when a plurality of second LE audio headsets are present in the second CIG.
  • 16. A non-transitory computer-readable storage medium storing instructions for call sharing between Low Energy (LE) audio headsets, the instructions causing at least one processor to: receive, by a first LE audio headset connected to a user device, a request message for sharing a call received at the user device with at least one second LE audio headset, wherein the first LE audio headset is connected to the user device over a first Connected Isochronous Group (CIG); and establish, by the first LE audio headset, a combined call between the first LE audio headset and the at least one second LE audio headset using a second CIG to connect the first LE audio headset with the at least one second LE audio headset.
  • 17. The non-transitory computer-readable storage medium according to claim 16, wherein the receiving comprises: receiving, by the first LE audio headset, a first synchronization delay of the first CIG from the user device; and receiving, by the first LE audio headset, the request message, wherein the request message comprises a list of the at least one second LE audio headset in proximity of the first LE audio headset from the user device for creation of the second CIG.
  • 18. The non-transitory computer-readable storage medium according to claim 16, wherein the establishing comprises: creating, by the first LE audio headset, the second CIG between the first LE audio headset, the at least one second LE audio headset, and the user device; determining, by the first LE audio headset, a second synchronization delay of the second CIG based on a number of the at least one second LE audio headset; sending, by the first LE audio headset, the second synchronization delay of the second CIG to the at least one second LE audio headset and the user device; and rendering, by the first LE audio headset, audio received from the user device in the first LE audio headset and the at least one second LE audio headset based on the second synchronization delay of the second CIG.
  • 19. The non-transitory computer-readable storage medium according to claim 18, wherein the rendering comprises: receiving, by the first LE audio headset, audio from the at least one second LE audio headset; and rendering, by the first LE audio headset, the received audio and the audio from the at least one second LE audio headset together in the first LE audio headset, and the user device based on the second synchronization delay of the second CIG.
  • 20. The non-transitory computer-readable storage medium according to claim 18, wherein the rendering comprises: receiving, by the first LE audio headset, audio of a user; and rendering, by the first LE audio headset, the received audio and the audio of the user together in the at least one second LE audio headset, and the user device based on the second synchronization delay of the second CIG.
Priority Claims (2)
Number Date Country Kind
202141034776 Aug 2021 IN national
202141034776 Jul 2022 IN national
CROSS-REFERENCES TO RELATED APPLICATIONS

The present application is a bypass continuation application of International Application No. PCT/KR2022/011358, filed on Aug. 2, 2022, at the Korean Intellectual Property Office, which claims priority to Provisional Indian Patent Application No. 202141034776 filed on Aug. 2, 2021, and Indian Patent Application No. 202141034776 (Complete Specification) filed on Jul. 21, 2022, both filed at the Indian Intellectual Property Office, the disclosures of which are incorporated herein by reference in their entireties.

Continuations (1)
Number Date Country
Parent PCT/KR2022/011358 Aug 2022 WO
Child 18431678 US