The disclosure relates to wireless audio devices, and for example, to a method and a device for managing an audio based on a spectrogram of the audio.
Wireless audio devices are common accessories used with electronic devices such as a smartphone, a laptop, a tablet, a smart television, etc. Wireless audio devices operate as a host of the electronic devices to wirelessly receive an audio playing at the electronic devices and deliver the audio to a user of the wireless audio devices. According to existing methods, the wireless audio devices can flawlessly generate the audio from wireless signals of the electronic devices only if the wireless signals are strong enough to deliver the audio data to the wireless audio devices.
Embodiments of the disclosure provide a method and a device, e.g., a transmitter device and a receiver device, for managing an audio based on a spectrogram of the audio.
Generally, an audio drop occurs in the received signal at the receiver device upon receiving a weak signal from the transmitter device. The disclosed method allows the transmitter device to convert the audio to the spectrogram and send it, along with a signal including the audio, to the receiver device. Upon not experiencing the audio drop while generating the audio from the received signal, the receiver device generates the audio according to a conventional method. Upon experiencing the audio drop while generating the audio from the received signal, the receiver device generates the audio from the spectrogram using the disclosed method. The spectrogram consumes much less of the signal's bandwidth than the audio. Therefore, the receiver device captures the spectrogram from the received signal more reliably, even when the received signal is weak. Thus, a user may not experience a loss of information from the audio even when the received signal is weak. Moreover, latency is also reduced due to flawlessly generating the audio from the spectrogram.
Accordingly, example embodiments herein provide a method for managing an audio based on a spectrogram. The method includes: receiving, by a transmitter device, the audio to send to a receiver device; generating, by the transmitter device, the spectrogram of the audio; identifying, by the transmitter device, a first spectrogram corresponding to vocals in the audio and a second spectrogram corresponding to music in the audio from the spectrogram of the audio using a neural network model; extracting, by the transmitter device, a music feature from the second spectrogram; and transmitting, by the transmitter device, a signal comprising the first spectrogram, the second spectrogram, the music feature and the audio to the receiver device.
In an example embodiment, the music feature comprises texture, dynamics, octaves, pitch, beat rate, and key of the music.
Accordingly, example embodiments herein provide a method for managing the audio based on the spectrogram. The method includes: receiving, by the receiver device, the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio from the transmitter device, where the first spectrogram signifies vocals in the audio and the second spectrogram signifies music in the audio; determining, by the receiver device, whether an audio drop is occurring in the received signal based on a parameter associated with the received signal; and generating, by the receiver device, the audio using the first spectrogram, the second spectrogram, and the music feature, in response to determining that the audio drop is occurring in the received signal.
In an example embodiment, determining, by the receiver device, whether the audio drop is occurring in the received signal based on the parameter associated with the received signal comprises: determining, by the receiver device, an audio data traffic intensity of the audio in the received signal; detecting, by the receiver device, that the audio data traffic intensity matches a threshold audio data traffic intensity; predicting, by the receiver device, an audio drop rate by applying the parameter associated with the received signal to a neural network model; determining, by the receiver device, whether the audio drop rate matches a threshold audio drop rate; and performing, by the receiver device, one of: detecting that the audio drop is occurring in the received signal, in response to determining that the audio drop rate matches the threshold audio drop rate, and detecting that the audio drop is not occurring in the received signal, in response to determining that the audio drop rate does not match the threshold audio drop rate.
In an example embodiment, generating, by the receiver device, the audio using the first spectrogram, the second spectrogram, and the music feature comprises: generating, by the receiver device, encoded image vectors of the first spectrogram and the second spectrogram; generating, by the receiver device, a latent space vector by sampling the encoded image vectors; generating, by the receiver device, two spectrograms based on the latent space vector and the music feature; concatenating, by the receiver device, the two spectrograms; determining, by the receiver device, whether the concatenated spectrogram is equivalent to the spectrogram of the audio based on a real data set; performing, by the receiver device, denoising, stabilization, synchronization and strengthening of the concatenated spectrogram using the neural network model, in response to determining that the concatenated spectrogram is equivalent to the spectrogram of the audio; and generating, by the receiver device, the audio from the concatenated spectrogram.
In an example embodiment, the parameter associated with the received signal comprises a Signal Received Quality (SRQ), a Frame Error Rate (FER), a Bit Error Rate (BER), a Timing Advance (TA), and a Received Signal Level (RSL).
Accordingly, example embodiments herein provide a transmitter device configured to manage the audio based on the spectrogram. The transmitter device includes an audio and spectrogram controller, a memory, and a processor, where the audio and spectrogram controller is coupled to the memory and the processor. The audio and spectrogram controller is configured to: receive the audio to send to the receiver device; generate the spectrogram of the audio; identify the first spectrogram corresponding to the vocals in the audio and the second spectrogram corresponding to the music in the audio from the spectrogram of the audio using a neural network model; extract the music feature from the second spectrogram; and transmit the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio to the receiver device.
Accordingly, example embodiments herein provide a receiver device configured to manage the audio based on the spectrogram. The receiver device includes an audio and spectrogram controller, a memory, and a processor, where the audio and spectrogram controller is coupled to the memory and the processor. The audio and spectrogram controller is configured to: receive the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio from the transmitter device, where the first spectrogram signifies vocals in the audio and the second spectrogram signifies music in the audio; determine whether the audio drop is occurring in the received signal based on the parameter associated with the received signal; and generate the audio using the first spectrogram, the second spectrogram, and the music feature, in response to determining that the audio drop is occurring in the received signal.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating various example embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the disclosure, and the embodiments herein include all such modifications.
This method and device are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The above and other aspects, features and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
The embodiments herein and the various features and advantageous details thereof are explained in greater detail with reference to various example non-limiting embodiments illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques may be omitted so as to not unnecessarily obscure the embodiments herein. The various example embodiments described herein are not necessarily mutually exclusive, as various embodiments can be combined with one or more other embodiments to form new embodiments. The term “or” as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the disclosure and embodiments herein.
Various example embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as managers, units, modules, hardware components or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits of a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
The accompanying drawings are used to aid in understanding various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
Accordingly, example embodiments herein provide a method for managing an audio based on a spectrogram. The method includes receiving, by a transmitter device, the audio to send to a receiver device. The method includes generating, by the transmitter device, the spectrogram of the audio. The method includes identifying, by the transmitter device, a first spectrogram corresponding to vocals in the audio and a second spectrogram corresponding to music in the audio from the spectrogram of the audio using a neural network model. The method includes extracting, by the transmitter device, a music feature from the second spectrogram. The method includes transmitting, by the transmitter device, a signal comprising the first spectrogram, the second spectrogram, the music feature and the audio to the receiver device.
Accordingly, example embodiments herein provide a method for managing the audio based on the spectrogram. The method includes receiving, by the receiver device, the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio from the transmitter device, where the first spectrogram signifies vocals in the audio and the second spectrogram signifies music in the audio. The method includes determining, by the receiver device, whether an audio drop is occurring in the received signal based on a parameter associated with the received signal. The method includes generating, by the receiver device, the audio using the first spectrogram, the second spectrogram, and the music feature, in response to determining that the audio drop is occurring in the received signal.
Accordingly, example embodiments herein provide a transmitter device configured to manage the audio based on the spectrogram. The transmitter device includes an audio and spectrogram controller, a memory, a processor, where the audio and spectrogram controller is coupled to the memory and the processor. The audio and spectrogram controller is configured for receiving the audio to send to the receiver device. The audio and spectrogram controller is configured for generating the spectrogram of the audio. The audio and spectrogram controller is configured for identifying the first spectrogram corresponding to the vocals in the audio and the second spectrogram corresponding to the music in the audio from the spectrogram of the audio using the neural network model. The audio and spectrogram controller is configured for extracting the music feature from the second spectrogram. The audio and spectrogram controller is configured for transmitting the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio to the receiver device.
Accordingly, example embodiments herein provide the receiver device configured to manage the audio based on the spectrogram. The receiver device includes an audio and spectrogram controller, a memory, and a processor, where the audio and spectrogram controller is coupled to the memory and the processor. The audio and spectrogram controller is configured for receiving the signal comprising the first spectrogram, the second spectrogram, the music feature and the audio from the transmitter device, where the first spectrogram signifies vocals in the audio and the second spectrogram signifies music in the audio. The audio and spectrogram controller is configured for determining whether the audio drop is occurring in the received signal based on the parameter associated with the received signal. The audio and spectrogram controller is configured for generating the audio using the first spectrogram, the second spectrogram, and the music feature, in response to determining that the audio drop is occurring in the received signal.
Generally, an audio drop occurs at the receiver device upon receiving a weak signal from the transmitter device. Unlike existing methods and systems, the disclosed method allows the transmitter device to convert the audio to the spectrogram and send it, along with a signal including the audio, to the receiver device. Upon not experiencing the audio drop while generating the audio from the received signal, the receiver device generates the audio according to a conventional method. Upon experiencing the audio drop while generating the audio from the received signal, the disclosed method allows the receiver device to generate the audio from the spectrogram. The spectrogram consumes much less of the signal's bandwidth than the audio. Therefore, the receiver device may flawlessly capture the spectrogram from the received signal even when the received signal is weak. Thus, a user may not experience a loss of information from the audio even when the received signal is weak. Moreover, latency is also reduced due to flawlessly generating the audio from the spectrogram.
The disclosed method aims at speech enhancement by separating speech/vocals from background noise. The separated features are then concatenated by a fusion network, which also outputs the corresponding clean speech. Thus, by separating vocals and music, the background noise also gets removed. The speech enhancement may use one-dimensional convolutional layers to reconstruct the magnitude spectrogram of the clean speech and may use the magnitude to further estimate its phase spectrogram.
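By way of illustration only, and not as the claimed implementation, a minimal PyTorch sketch of such a one-dimensional convolutional magnitude estimator is given below; the layer widths, kernel sizes, and names are assumptions.

```python
# Hypothetical sketch of a 1-D convolutional magnitude estimator.
# Layer widths, kernel sizes, and names are illustrative assumptions,
# not the claimed implementation.
import torch
import torch.nn as nn

class MagnitudeEstimator(nn.Module):
    def __init__(self, n_freq_bins: int = 257):
        super().__init__()
        # Treat frequency bins as channels; convolve along the time axis.
        self.net = nn.Sequential(
            nn.Conv1d(n_freq_bins, 512, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(512, n_freq_bins, kernel_size=1),
            nn.ReLU(),  # magnitudes are non-negative
        )

    def forward(self, noisy_mag: torch.Tensor) -> torch.Tensor:
        # noisy_mag: (batch, n_freq_bins, n_frames)
        return self.net(noisy_mag)
```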
In an embodiment, the receiver device (200) includes an audio and spectrogram controller (e.g., including processing and/or control circuitry) (210), a memory (220), a processor (e.g., including processing circuitry) (230), a communicator (e.g., including communication circuitry) (240) and a NN model (e.g., including various processing circuitry and/or executable program instructions) (250). In an embodiment, the receiver device (200) additionally includes a speaker, or the receiver device (200) is connected to a speaker. The audio and spectrogram controller (110, 210) and the NN model (150, 250) are implemented by processing circuitry such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
The audio and spectrogram controller (110) receives the audio to send to the receiver device (200). In an embodiment, the audio and spectrogram controller (110) receives the audio from an audio/video file stored in the memory (120). In an embodiment, the audio and spectrogram controller (110) receives the audio from an external server, e.g., over the internet. In an embodiment, the audio and spectrogram controller (110) receives the audio from an incoming or outgoing phone call. In an embodiment, the audio and spectrogram controller (110) receives the audio from the surroundings of the transmitter device (100). Further, the audio and spectrogram controller (110) generates the spectrogram of the audio. Further, the audio and spectrogram controller (110) identifies and separates a first spectrogram corresponding to vocals in the audio and a second spectrogram corresponding to music (e.g., tone) in the audio from the spectrogram of the audio using the NN model (150). The audio and spectrogram controller (110) extracts a music feature from the second spectrogram. In an embodiment, the music feature includes texture, dynamics, octaves, pitch, beat rate, and key of the music. Examples of the music feature include, but are not limited to, melody, beats, singer style, etc.
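By way of a non-limiting example, this transmitter-side flow may be sketched as follows, using librosa for the spectrogram and abstracting the NN model (150) as a placeholder separation function whose interface is assumed.

```python
# Hypothetical transmitter-side sketch: compute a spectrogram and
# separate it into vocal and music components. `separate_fn` stands
# in for the NN model (150); its interface is assumed.
import numpy as np
import librosa

def make_spectrograms(audio: np.ndarray, sr: int, separate_fn):
    # Magnitude spectrogram of the full mix.
    spec = np.abs(librosa.stft(audio, n_fft=1024, hop_length=256))
    # Assumed to return (vocal_spectrogram, music_spectrogram).
    vocal_spec, music_spec = separate_fn(spec)
    return spec, vocal_spec, music_spec
```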
The pitch may refer, for example, to a quality that makes it possible to judge sounds as "higher" or "lower" in the sense associated with musical melodies. The beat rate is simply characterized as the number of beats in a minute. The beat rate makes it possible to accurately find songs that have a fixed number of beats per minute (bpm) and thereby classify them into a single group. The beat rate depends on the genre of the audio, for example, 60-90 bpm for reggae, 85-115 bpm for hip-hop, 120-125 bpm for jazz, etc. The key of a piece (e.g., a musical composition) is a group of pitches that forms the basis of the composition in classical and western pop music. The texture indicates how tempo, melodic, and harmonic elements are combined in a musical composition, determining the overall quality of the sound in a piece. The texture is often described in terms of density, or thickness, and range, or width, between the lowest and highest pitches, in relative terms as well as more specifically distinguished according to the number of voices, or parts, and the relationship between these voices. Monophonic texture, heterophonic texture, homophonic texture, and polyphonic texture are the various textures.
The monophonic texture includes a single melodic line with no accompaniment. The heterophonic texture includes two distinct lines, the lower sustaining a drone (constant pitch) while the other line creates a more elaborate melody above it. The polyphonic texture includes multiple melodic voices which are, to a considerable extent, independent from or in imitation of one another. The dynamics refers to the volume of a performance. In written compositions, the dynamics are indicated by abbreviations or symbols that signify the intensity at which a note or passage should be played or sung. The dynamics can be used like punctuation in a sentence to indicate precise moments of emphasis. The dynamics of a composition can be used to determine when the artist will bring a variation in their voice; this is important because an artist can have a different diction for a song depending upon the harmony.
The octave is an interval between one musical pitch and another with double its frequency. The octave relationship is a natural phenomenon that has been referred to as the "basic miracle of music". As the frequency f of a pitch doubles in value, the musical relationship remains that of an octave; thus, for any given frequency, rising octaves can be expressed as f*2^y, where y is a whole number. Two frequencies value1 and value2 are x octaves apart, where x = log(value1/value2)/log(2). Ratios of pitches are used to describe a scale, which has an interval of repetition called the octave. Examples of octaves are given in table 1 below.
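For illustration, some of the named features can be estimated with standard signal-processing routines; the following sketch uses librosa, and treating these routines as the disclosed feature extraction is an assumption.

```python
# Illustrative feature extraction from an audio signal. The librosa
# calls are standard, but mapping them to "the music feature" of the
# disclosure is an assumption.
import math
import numpy as np
import librosa

def octaves_apart(f1: float, f2: float) -> float:
    # x = log(f1/f2) / log(2): number of octaves between two frequencies.
    return math.log(f1 / f2) / math.log(2)

def extract_music_features(y: np.ndarray, sr: int) -> dict:
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)      # beat rate (bpm)
    f0 = librosa.yin(y, fmin=65.0, fmax=2093.0, sr=sr)  # pitch track (Hz)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # rough key profile
    return {"beat_rate": float(tempo),
            "pitch_hz": float(np.nanmedian(f0)),
            "key_profile": chroma.mean(axis=1)}

print(octaves_apart(880.0, 440.0))  # A5 is 1.0 octave above A4
```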
The audio and spectrogram controller (110) transmits a signal including the first spectrogram, the second spectrogram, the music feature and the audio to the receiver device (200).
The audio and spectrogram controller (210) receives the signal from the transmitter device (100). The audio and spectrogram controller (210) determines whether an audio drop is occurring in the received signal based on a parameter associated with the received signal. In an embodiment, the parameter associated with the received signal includes a Signal Received Quality (SRQ), a Frame Error Rate (FER), a Bit Error Rate (BER), a Timing Advance (TA), and a Received Signal Level (RSL). In an embodiment, the audio and spectrogram controller (210) determines an audio data traffic intensity of the audio in the received signal. Further, the audio and spectrogram controller (210) detects that the audio data traffic intensity matches a threshold audio data traffic intensity. Further, the audio and spectrogram controller (210) predicts an audio drop rate by applying the parameter associated with the received signal to the NN model (250).
The audio and spectrogram controller (210) determines whether the audio drop rate matches a threshold audio drop rate. The audio and spectrogram controller (210) detects that the audio drop is occurring in the received signal, in response to determining that the audio drop rate matches the threshold audio drop rate. Further, the audio and spectrogram controller (210) detects that the audio drop is not occurring in the received signal, in response to determining that the audio drop rate does not match the threshold audio drop rate.
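A rough, non-limiting sketch of this decision flow is given below; the NN model (250) is abstracted as a callable, and the 0.5 threshold follows the output-layer description given later in this disclosure.

```python
# Hypothetical receiver-side drop detection. `predict_drop_rate` stands
# in for the NN model (250); the 0.5 threshold follows the output-layer
# description given later in this disclosure.
from dataclasses import dataclass

@dataclass
class SignalParams:
    srq: float  # Signal Received Quality
    fer: float  # Frame Error Rate
    ber: float  # Bit Error Rate
    ta: float   # Timing Advance
    rsl: float  # Received Signal Level

def audio_drop_occurring(params: SignalParams, predict_drop_rate,
                         threshold: float = 0.5) -> bool:
    rate = predict_drop_rate([params.srq, params.fer, params.ber,
                              params.ta, params.rsl])
    # Expected value is 1 (drop) when the predicted rate exceeds the
    # threshold, else 0 (no drop).
    return rate > threshold
```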
The audio and spectrogram controller (210) generates the audio using the first spectrogram, the second spectrogram, and the music feature, in response to determining that the audio drop is occurring in the received signal. In an embodiment, the audio and spectrogram controller (210) generates encoded image vectors of the first spectrogram and the second spectrogram using the NN model (250). The audio and spectrogram controller (210) generates a latent space vector by sampling the encoded image vectors. The audio and spectrogram controller (210) generates two spectrograms based on the latent space vector and the music feature using the NN model (250). The audio and spectrogram controller (210) concatenates the two spectrograms. The audio and spectrogram controller (210) determines whether the concatenated spectrogram is equivalent to the spectrogram of the audio based on a real data set. In an embodiment, the audio and spectrogram controller (210) receives audio packets from the transmitter device (100) under low network conditions, where these audio packets have all the information of the audio. The audio and spectrogram controller (210) decrypts the audio packets and generates the actual audio using a Generative Adversarial Network (GAN) model. The audio and spectrogram controller (210) performs denoising, stabilization, synchronization and strengthening of the concatenated spectrogram using the NN model (250), in response to determining that the concatenated spectrogram is equivalent to the spectrogram of the audio. Further, the audio and spectrogram controller (210) generates the audio from the concatenated spectrogram using the speaker.
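A high-level, non-limiting sketch of this generation path follows; the encoder, decoder, and discriminator interfaces are assumptions standing in for the NN model (250), not the claimed VAE-GAN.

```python
# Hypothetical sketch of the receiver-side generation path. Encoder,
# decoder, and discriminator are placeholders for the NN model (250);
# their interfaces are assumed.
import numpy as np

def regenerate_audio(vocal_spec, music_spec, music_feature,
                     encoder, decoder, discriminator):
    # Encode both spectrograms into image vectors.
    mu, log_var = encoder(np.stack([vocal_spec, music_spec]))
    # Sample a latent space vector (VAE reparameterization trick).
    z = mu + np.exp(0.5 * log_var) * np.random.randn(*mu.shape)
    # Decode two spectrograms conditioned on the music feature.
    spec_a, spec_b = decoder(z, music_feature)
    concatenated = np.concatenate([spec_a, spec_b], axis=-1)
    # The discriminator judges the result against the real data set.
    is_real = discriminator(concatenated)
    return concatenated if is_real else None
```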
The memory (120) stores the audio/video file. The memory (220) stores the real data set. The memory (120) and the memory (220) stores instructions to be executed by the processor (130) and the processor (230) respectively. The memory (120, 220) may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (120) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (120, 220) is non-movable. In various examples, the memory (120, 220) can be configured to store larger amounts of information than its storage space. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). The memory (120) can be an internal storage unit or it can be an external storage unit of the transmitter device (100), a cloud storage, or any other type of external storage. The memory (220) can be an internal storage unit or it can be an external storage unit of the receiver device (200), a cloud storage, or any other type of external storage.
The processor (130, 230) may be a general-purpose processor, such as a Central Processing Unit (CPU), an Application Processor (AP), or the like, or a graphics-only processing unit such as a Graphics Processing Unit (GPU), a Visual Processing Unit (VPU), and the like. The processor (130, 230) may include multiple cores to execute the instructions. The communicator (140) may include various communication circuitry and may be configured for communicating internally between hardware components in the transmitter device (100). Further, the communicator (140) is configured to facilitate communication between the transmitter device (100) and other devices via one or more networks (e.g., radio technology). The communicator (240) is configured for communicating internally between hardware components in the receiver device (200). Further, the communicator (240) is configured to facilitate communication between the receiver device (200) and other devices via one or more networks (e.g., radio technology). The communicator (140, 240) includes an electronic circuit specific to a standard that enables wired or wireless communication.
In an embodiment, when the audio does not contain the music, the transmitter device (100) converts the vocals in the audio to the first spectrogram and sends the signal including the first spectrogram and the audio to the receiver device (200). In response to detecting the audio drop in the received signal, the receiver device (200) uses the first spectrogram to generate the vocals in the audio using the speaker.
At operation 308, the method includes identifying that the audio drop is absent in the audio, upon determining that the predicted audio drop rate does not match the threshold audio drop rate. The method further flows from operation 308 to operation 305. At operation 309, the method includes identifying that the audio drop is present in the audio, upon determining that the predicted audio drop rate matches the threshold audio drop rate. At operation 310, the method includes processing the spectrogram and audio generation for generating the concatenated spectrogram. At operation 311, the method includes performing denoising, stabilization, synchronization and strengthening on the concatenated spectrogram using the NN model (250). At operation 312, the method includes generating the audio from the concatenated spectrogram. A Deep Neural Network (DNN) in the NN model (250) may be trained by performing feed-forward and backward propagation for generating the audio.
The various actions, acts, blocks, steps, or the like in the flowcharts (300, 400, and 500) may be performed in the order presented, in a different order or simultaneously. Further, in various embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the disclosure.
The binary cross-entropy loss function is H_y(q) = -y*log(q(y)) - (1-y)*log(1-q(y)). The softmax function is p_k(x) = exp(a_k(x)) / Σ_{k'=1}^{K} exp(a_{k'}(x)), where k is a feature channel, K is the number of feature channels, a_k(x) is the activation in feature channel k at pixel position x, y is a binary label for classes, q is the probability of belonging to class y, and x is an input vector.
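These two formulas transcribe directly into code; the following numpy sketch is illustrative only.

```python
# Direct numpy transcription of the two formulas above.
import numpy as np

def binary_cross_entropy(y: float, q: float) -> float:
    # H_y(q) = -y*log(q) - (1 - y)*log(1 - q)
    return -y * np.log(q) - (1.0 - y) * np.log(1.0 - q)

def softmax(a: np.ndarray) -> np.ndarray:
    # p_k = exp(a_k) / sum_k' exp(a_k'); shift by max for stability.
    e = np.exp(a - a.max())
    return e / e.sum()

print(binary_cross_entropy(1.0, 0.9))       # small loss when confident and correct
print(softmax(np.array([2.0, 1.0, 0.1])))   # probabilities summing to 1
```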
The Variational Autoencoder-Generative Adversarial Network (VAE-GAN) of the NN model (150) ensures that the first spectrogram (610) and the second spectrogram (611) are continuous. If the first spectrogram (610) and the second spectrogram (611) are not continuous, then the receiver device (200) marks the concatenated spectrogram as fake. As the VAE-GAN operates on each spectrogram individually, this property can be applied to audio of arbitrary length.
The BER is defined as the number of bits received in error divided by the total number of transmitted bits, expressed as a percentage. The TA refers to the length of time taken for a mobile station signal to reach the base station. The RSL refers to the radio signal level, or strength, of the mobile station signal received from a base station transceiver's transmitting antenna. The output layer (804) provides an expected value and a prediction of the audio drop rate. If the predicted audio drop rate is less than or equal to 0.5, then the expected value is 0, whereas if the predicted audio drop rate is greater than 0.5, then the expected value is 1. Values of the parameter associated with the received signal, the predicted audio drop rate, and the expected value in an example are given in table 2.
In an embodiment, the NN model (250) includes a summing junction and a nonlinear element f(e).
In an embodiment, the NN model (250) includes the summing junction, the nonlinear element f(e), and an error function (δn), where δn is the error function used during backward propagation.
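For illustration, a generic textbook neuron with a summing junction, a nonlinear element f(e), and a backpropagated error term may be sketched as follows; this is an assumption-level sketch, not the disclosed model.

```python
# Minimal sketch of a neuron with a summing junction and a nonlinear
# element f(e), plus a backpropagated error term. This is a generic
# textbook formulation, assumed here rather than taken from the
# disclosure.
import numpy as np

def f(e: float) -> float:
    return 1.0 / (1.0 + np.exp(-e))  # sigmoid nonlinearity

def neuron(x: np.ndarray, w: np.ndarray) -> float:
    e = float(np.dot(w, x))  # summing junction: e = sum_i w_i * x_i
    return f(e)

def error_term(y_pred: float, y_true: float) -> float:
    # delta_n: output error used in backward propagation.
    return y_pred - y_true

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
y = neuron(x, w)
print(y, error_term(y, 1.0))
```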
Further, the receiver device (200) determines a dot product of the latent space vector (906) and each music feature (907) that is in vector form. Further, the receiver device (200) passes the dot product values through a SoftMax layer and performs a cross product with the latent space vector (906). Further, the receiver device (200) concatenates all the cross product values and passes them to a decoder (907). Further, the receiver device (200) generates the two spectrograms (908, 909) using the decoder (907), which decodes the cross product values.
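A non-limiting sketch of this fusion step follows; reading the "cross product" as a softmax-weighted scaling of the latent space vector is an assumption.

```python
# Hypothetical sketch of the fusion step: attention-like weighting of
# the latent space vector by each music-feature vector. Interpreting
# the "cross product" as a softmax-weighted scaling is an assumption.
import numpy as np

def softmax(a: np.ndarray) -> np.ndarray:
    e = np.exp(a - a.max())
    return e / e.sum()

def fuse(latent: np.ndarray, feature_vecs: list) -> np.ndarray:
    # Dot product of the latent vector with each music-feature vector.
    scores = np.array([np.dot(latent, fv) for fv in feature_vecs])
    weights = softmax(scores)
    # Scale the latent vector by each softmax weight and concatenate;
    # the result would be passed to the decoder.
    weighted = [w * latent for w in weights]
    return np.concatenate(weighted)

latent = np.random.randn(16)
features = [np.random.randn(16) for _ in range(4)]
print(fuse(latent, features).shape)  # (64,)
```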
The receiver device (200) checks whether the concatenated spectrogram is equivalent to the spectrogram of the audio. If the concatenated spectrogram is equivalent to the spectrogram of the audio, then the receiver device (200) identifies the concatenated spectrogram (910) as real. If the concatenated spectrogram is not equivalent to the spectrogram of the audio, then the receiver device (200) identifies the concatenated spectrogram (910) as fake.
The block P(C) eliminates inconsistent components contained in the output (Y) and generates an output (Z). The DNN receives the input (X), the output (Y), and the output (Z), and improves the quality of the concatenated spectrogram. The DNN requires low computational cost and provides a changeable number of iterations as a parameter, with weights shared between layers. The output from the DNN and the output (Z) are concatenated to form a synchronized, strong and stabilized spectrogram (911) without the noise. The spectrogram (911) can be determined using the following equation: X[m+1] = B(X[m]) = Z[m] - DNN(X[m], Y[m], Z[m]), where B is a Deep Griffin-Lim (DeGLI) block and X[m+1] is the spectrogram (911).
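A sketch of a DeGLI-style refinement loop is given below; the projection operators and the DNN are placeholders whose interfaces are assumed, and the iteration count is an assumption.

```python
# Sketch of a DeGLI-style refinement loop. `proj_amplitude`,
# `proj_consistent` (the block P(C)), and `dnn` are placeholders;
# their interfaces and the iteration count are assumptions.
import numpy as np

def degli_refine(x0: np.ndarray, proj_amplitude, proj_consistent,
                 dnn, n_iters: int = 5) -> np.ndarray:
    x = x0
    for _ in range(n_iters):
        y = proj_amplitude(x)     # impose the given magnitudes: output (Y)
        z = proj_consistent(y)    # P(C): remove inconsistent components
        x = z - dnn(x, y, z)      # X[m+1] = B(X[m]) = Z[m] - DNN(X, Y, Z)
    return x
```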
In an example, the receiver device (200) uses the Griffin-Lim method to reconstruct the audio from the spectrogram (911) by reconstructing the phase from the amplitude spectrogram (911). The Griffin-Lim method employs alternating convex projections between the time domain and the Short-Time Fourier Transform (STFT) domain that monotonically decrease the squared error between a given STFT magnitude and the magnitude of an estimated time-domain signal, which produces an estimate of the STFT phase.
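For example, a magnitude spectrogram can be converted back to a waveform with librosa's standard Griffin-Lim implementation; the FFT parameters below are assumptions chosen to match the earlier sketches.

```python
# Reconstructing a waveform from a magnitude spectrogram with the
# Griffin-Lim method via librosa's standard implementation; the FFT
# parameters are assumptions.
import numpy as np
import librosa

def spectrogram_to_audio(magnitude: np.ndarray) -> np.ndarray:
    # Alternating projections estimate the missing STFT phase.
    return librosa.griffinlim(magnitude, n_iter=32, hop_length=256,
                              win_length=1024)
```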
The various example embodiments disclosed herein can be implemented using at least one hardware device performing network management functions to control the elements.
While the disclosure is illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein.
Number | Date | Country | Kind |
---|---|---|---|
202241000585 | Jan 2022 | IN | national |
This application is a continuation of International Application No. PCT/KR2023/000222, filed on Jan. 5, 2023, in the Korean Intellectual Property Receiving Office, which claims priority to Indian Patent Application No. 202241000585, filed on Jan. 5, 2022, in the Indian Patent Office, the disclosures of which are incorporated by reference herein in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/KR2023/000222 | Jan 2023 | US |
Child | 18189545 | | US