This application is based on and claims priority under 35 U.S.C. § 119(e) to Indian Patent Application No. 202111048934, filed on Oct. 26, 2021, in the Indian Patent Office, the disclosure of which is incorporated by reference herein in its entirety.
The disclosure relates to communication systems, in particular, to voice based calling systems.
Related art Internet-based voice-calling systems often experience a negative impact on conversation due to low-quality voice. This may occur due to inadvertent speaking, slow speaking, fast speaking, baby-talk, etc. Accordingly, the captured audio packets are not of the expected quality, which leads to a quality-compromised audio communication or voice call for the receiver.
Another type of low-quality voice emanating from the user may be voice degraded due to biological or physical issues of a speaker, such as vocal diseases, aging, screaming, thyroid problems, smoking, throat dehydration, voice misuse or overuse, etc. In another scenario, if a speaker speaks slowly or whispers during a voice call, the user at the other end cannot hear what the speaker has said. In an example, late-night conversations are further problematic when the user also tries to maintain silence or avoid disturbing others. In yet another scenario, degraded voice may be observed during personal talk in public places such as an office, a bank, or a train, where unintentional use of a speaker-phone leads to loss of sensitive and personal information.
Various related art mechanisms usually recommend an optimum Internet connection for call quality and ease of communication. To augment voice quality, there may be application-specific features such as improving audio quality with machine learning by targeting missing packets and packets arriving in the wrong order for auto-completion.
Other related art options include static tools to monitor, troubleshoot, manage, and improve call quality. For example, a call quality dashboard may be provided to analyze trends or problems, call analytics may be provided to analyze call and meeting quality for individual users, and Quality of Service (QoS) may be provided to prioritize important network traffic.
In yet another scenario, the spoken voice may not be degraded, but may include use of foul language during a phone call by the caller or receiver. There may also be a scenario in which a listener is not listening properly during a call or is not paying attention to the phone call, knowingly or unknowingly, due to distractions.
There is at least a need to adjust a user's audio/voice during the conversation at the listener end using smartphones and thereby prevent improper or unwanted generation of voice/audio at the receiving end for the listener.
There is at least a need to enhance communication quality even when there is no packet loss.
There is at least a need to detect a problem with an audio stream/packet/waveform and remedy the detected problem.
Provided is a method for enabling adjustment of a user's audio/voice during the conversation at the listener end using smartphones. An AI model (for example, in a mobile device) detects vocal issues such as speech deterioration, voiceless sound, voice breaking or tremoring, and whispering, as would have occurred due to a change in pace or style of speaking.
Further, provided is a method for enhancing the quality of voiceless verbal syllables and words by reinforcing the existing packets to generate the missing packets based on their vicinity. Thereafter, communication quality is enhanced using the audio remediation even when there is no packet loss.
Further, provided is a method for detecting problems with an audio stream/packet/waveform and modifying or generating the waveform to remedy the detected problem. In an example, the detected problem may be an audio/voice packet drop during conversation due to incorrect speaking, noise, packet deterioration, etc.
According to an aspect of the disclosure, there is provided a method of generating voice in a call session, the method including: extracting a plurality of features from a voice input through an artificial neural network (ANN); identifying one or more lost audio frames within the voice input; predicting by the ANN, for each of the one or more lost audio frames, one or more features of the respective lost audio frame; and superposing the predicted features upon the voice input to generate an updated voice input.
The predicting by the ANN may include: operating a single-layer recurrent neural network for audio generation configured to predict raw audio samples.
The single-layer recurrent neural network may include an architecture defined by WaveRNN and a dual softmax layer.
The method may further include correcting the updated voice input by: receiving the updated voice input and obtaining a confidence score of the updated voice input; splitting the updated voice input into a plurality of phonemes based on the confidence score; and identifying one or more non-aligned phonemes out of the plurality of phonemes based on comparing the plurality of phonemes with language vocabulary knowledge.
The method may further include: generating a plurality of variant phonemes; and updating the identified one or more non-aligned phonemes through one or more of: replacing the identified one or more non-aligned phonemes with the plurality of variant phonemes; adding additional phonemes to supplement the identified one or more non-aligned phonemes; deleting the identified one or more non-aligned phonemes; or regenerating the updated voice input defined by one or more of: replacement of the identified one or more non-aligned phonemes with the variant phonemes; removal of the identified one or more non-aligned phonemes; or additional phonemes supplementing the identified one or more non-aligned phonemes.
The method may further include converting the updated voice input defined by a whisper into a converted updated voice input defined by a normal voice input based on: executing a time-alignment between whispered speech and corresponding normal speech; and learning, by a generative adversarial network (GAN) model, cross-domain relations between the whisper and the normal speech.
The method may further include improving voice quality of the updated voice input based on a nonlinear activation function.
According to an aspect of the disclosure, there is provided a method of summarizing voice communication in a call session, the method including: extracting one or more frames from an input audio; encoding a temporal-information associated with the one or more frames through a convolution network; executing a deconvolution over the encoded information to obtain a prediction score; classifying the extracted frames into key frames or non-key frames based on the prediction score; and presenting a summary based on the classified key frames.
The presenting the summary may include: learning a relation between raw audio frames (A) and a set of summary audios (S), wherein a distribution of resultant summary audios F(A) is targeted to be similar to a distribution of the set of summary audios (S); and training a summary discriminator to differentiate between the generated summary audio F(A) and a real summary audio.
The extracting the one or more frames from the input audio may include extracting the one or more frames from an updated voice input generated by: extracting a plurality of features from a voice input through an artificial neural network (ANN); identifying one or more lost audio frames within the voice input through the ANN; predicting by the ANN, for each of the one or more lost audio frames, one or more features of the respective lost audio frame; and superposing the predicted features upon the voice input to generate the updated voice input.
According to an aspect of the disclosure, there is provided a system for generating voice in a call session, the system including: a microphone; a memory storing one or more instructions; and a processor configured to execute the one or more instructions to: extract a plurality of features from a voice input through an artificial neural network (ANN); identify one or more lost audio frames within the voice input; predict by the ANN the features of the lost frame for each missing frame; and superpose the predicted features upon the voice input to generate an updated voice input.
The processor may be further configured to execute the one or more instructions to predict by the ANN by operating a single-layer recurrent neural network for audio generation configured to predict raw audio samples.
The single-layer recurrent neural network may include an architecture defined by WaveRNN and a dual softmax layer.
The processor may be further configured to execute the one or more instructions to correct the updated voice input by: receiving the updated voice input and obtaining a confidence score of the updated voice input; splitting the updated voice input into a plurality of phonemes based on the confidence score; and identifying one or more non-aligned phonemes out of the plurality of phonemes based on comparing the plurality of phonemes with language vocabulary knowledge.
The processor may be further configured to: extract one or more frames from an input audio; encode a temporal information associated with the one or more frames through a convolution network; execute a deconvolution over the encoded information to obtain a prediction score; classify the extracted frames into key frames or non-key frames based on the prediction score; and present a summary based on the classified key frames.
The above and other features, aspects, and advantages of certain embodiments of the disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the disclosure relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the present disclosure and are not intended to be restrictive thereof.
Further, it will be appreciated that elements in the drawings are illustrated for simplicity and may not necessarily be drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent operations involved to help to improve understanding of aspects of the disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms “comprises,” “comprising,” “includes,” “including,” or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of operations does not include only those operations but may include other operations not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises/includes . . . a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Thereafter, the method includes, in operation 106, predicting, by the ANN, the features of the lost frame for each missing frame. The predicting by the ANN may include operating a single-layer recurrent neural network for audio generation configured to predict raw audio samples. Such a single-layer recurrent neural network may include an architecture defined by WaveRNN and a dual-softmax layer. Further, the method includes, in operation 108, superposing the predicted features upon the voice input to generate an updated voice input.
Further, the method includes correcting the updated voice input based on receiving the updated voice input and computing a confidence score of the correction. The received updated voice input is split into a plurality of phonemes based on a confidence score. One or more non-aligned phonemes are identified out of the plurality of phonemes based on comparison with language vocabulary knowledge.
Further, the method includes generating a plurality of variant phonemes, and updating the identified phonemes through replacement of the identified phonemes with the plurality of variant phonemes. Other updating examples include addition of further phonemes to supplement the identified phonemes, and deletion of the identified phonemes. The updated voice input is regenerated based on replacement of the identified phonemes with the variant phonemes, removal of the identified phonemes, or supplementing of the identified phonemes with additional phonemes.
The method further includes converting the updated voice input defined by a whisper into an updated voice input defined by a normal voice input based on executing a time-alignment between whispered speech and corresponding normal speech, and learning, by a generative adversarial network (GAN) model, cross-domain relations between the whisper and the normal speech. Finally, the voice quality of the updated voice input is improved based on a non-linear activation function.
According to an embodiment, the presenting of the summary may include learning a relation between raw audio frames (A) and a set of summary audios (S), wherein a distribution of resultant summary audios F(A) is targeted to be similar to the distribution of the set of summary audios (S). Thereafter, a summary discriminator is trained to differentiate between the generated summary audio F(A) and a real summary audio to improve the generated summary.
As a precursor, an input waveform may undergo audio feature extraction to extract voice features from the raw audio input and generate a spectrogram. In an example, the extracted features may correspond to Mel-frequency Cepstral Coefficients (MFCC) (39 features), Linear Predictive Cepstral Coefficients (LPCC) (39 features), and Gammatone Frequency Cepstral Coefficients (GFCC) (36 features). Further, the extracted features may undergo Principal Component Analysis (PCA) as a state-of-the-art practice. According to an embodiment, the LPCC may include 13 LPCC features, 13 Delta LPCC features, and 13 Delta Delta LPCC features. According to an embodiment, the MFCC may include 12 MFCC Cepstral Coefficients, 12 Delta MFCC Cepstral Coefficients, 12 Double Delta MFCC Cepstral Coefficients, 1 Energy Coefficient, 1 Delta Energy Coefficient, and 1 Double Delta Energy Coefficient. According to an embodiment, the GFCC may include 12 GFCC Coefficients, 12 Delta GFCC Coefficients, and 12 Double Delta GFCC Coefficients.
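The 39-dimensional MFCC vector described above (12 cepstral coefficients plus a log-energy coefficient, each with delta and double-delta variants) can be sketched as a plain NumPy/SciPy pipeline. This is an illustrative implementation only; the frame length, hop size, and filter count below are assumed values, not parameters taken from the disclosure.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)
    return fbank

def mfcc_39(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_filters=26):
    # Frame the signal and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # Power spectrum, mel filterbank energies, and log compression.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    mel_energy = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # 12 cepstral coefficients (dropping c0) plus a log-energy coefficient.
    cepstra = dct(mel_energy, type=2, axis=1, norm='ortho')[:, 1:13]
    log_e = np.log(np.sum(power, axis=1) + 1e-10)[:, None]
    static = np.hstack([cepstra, log_e])        # 13 features
    delta = np.gradient(static, axis=0)         # 13 delta features
    ddelta = np.gradient(delta, axis=0)         # 13 double-delta features
    return np.hstack([static, delta, ddelta])   # 39 features per frame
```

A one-second 16 kHz signal with these settings yields 98 frames of 39 features each, matching the MFCC feature count given above.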
In operation 302, the method includes generating the audio packets which are lost due to vocal issues, including packets lost due to network issues. Accordingly, a complete waveform spectrogram is generated from the spectrogram. According to an embodiment, operation 302 may be implemented at the receiver end and not at the transmitter, since audio/voice may be poor or substandard due to network quality. According to an embodiment, operation 302 may include pruning of nodes in the neural network (e.g., using the Lottery Ticket Hypothesis (LTH)) for fast convergence and inference. In an example, nodes having major weights after training are shortlisted for further use, and the rest are discarded.
In operation 304, the method includes generating a modified spectrogram from the complete spectrogram.
In operation 306, the method includes generating a normal speech signal back from the spectrogram. In an example, a single-layer recurrent neural network is used for audio generation and fast inference time.
In operation 308, the method includes generating enhanced speech from the normal speech.
In operation 308, the method includes generating paraphrasing and summarization of the enhanced speech.
The operations 302 to 308 refer to the operations 102 to 108 of
In an implementation, as a speech-conversion criterion, real-time packet loss concealment (PLC) algorithms are applied to the lost packets. For each lost frame, a deep model estimates the features of the lost frame and overlaps them with the audio content. The method may be a “Model-based WaveRNN” method that leverages speech models for interpolating and extrapolating speech gaps.
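As an illustration of the overlap step, the sketch below substitutes a predicted frame for each lost frame and cross-fades frame boundaries when reassembling the waveform. The neighbor-interpolation predictor is a hypothetical stand-in for the model-based WaveRNN estimator described above.

```python
import numpy as np

def conceal_lost_frames(frames, lost, predict):
    """Replace each lost frame with a prediction from the intact frames
    around it (the 'predict' function models the deep estimator)."""
    out = [f.copy() for f in frames]
    for i in lost:
        out[i] = predict(out, i)
    return out

def interpolate_neighbors(frames, i):
    # Hypothetical predictor: average the nearest intact neighbor frames.
    prev = frames[i - 1] if i > 0 else np.zeros_like(frames[i])
    nxt = frames[i + 1] if i + 1 < len(frames) else np.zeros_like(frames[i])
    return 0.5 * (prev + nxt)

def crossfade_concat(frames, overlap=32):
    """Overlap-add consecutive frames with a linear cross-fade so that
    substituted frames blend smoothly with their neighbors."""
    fade_in = np.linspace(0.0, 1.0, overlap)
    out = frames[0].copy()
    for f in frames[1:]:
        head = out[-overlap:] * (1 - fade_in) + f[:overlap] * fade_in
        out = np.concatenate([out[:-overlap], head, f[overlap:]])
    return out
```

With five 160-sample frames and a 32-sample overlap, the reassembled signal has 160 + 4 × 128 = 672 samples.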
As illustrated in
As an example, for estimation of single coarse or fine bits:
1 gated recurrent unit (GRU) (composed of 3 matrices) + 2 fully connected (FC) layers (1 matrix for each FC layer)
Example dimensions of the network components are provided as follows:
Hyper-Parameters
Dimension of Matrices
For each matrix, the dimension is dim_GRU × (dim_GRU + dim_mel + dim_x)
Here, z represents the updating rule.
The target sparsity Z is set to 90–95.8%. According to an embodiment, only matrices in the GRU unit are pruned. According to an embodiment, iterative pruning is adopted. As a part of pruning, nodes are pruned after every 100 iterations based on the calculated z. Optionally, the Lottery Ticket Hypothesis (LTH) may be adopted.
As a part of pruning of nodes in the network, a few nodes having major weights (post training) are retained for fast convergence and inference. The relatively unnecessary nodes are removed.
“Lottery Ticket Hypothesis” (LTH) refers to a state-of-the-art way to find a “winning ticket” among randomly-initialized network weights. The same has been verified for classification tasks and is adoptable for iterative pruning.
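A minimal sketch of the pruning described above, assuming simple magnitude pruning toward the target sparsity Z with a cubic ramp and a prune step every 100 iterations. The schedule form and constants are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def prune_to_sparsity(weights, sparsity):
    """Zero out the smallest-magnitude entries so that at least the
    given fraction of the weights becomes zero (magnitude pruning)."""
    k = int(np.ceil(sparsity * weights.size))
    if k == 0:
        return weights
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > thresh)

def sparsity_schedule(step, total_steps, target=0.95):
    """Cubic ramp toward the target sparsity; returns None except on
    every 100th iteration, when pruning is actually applied."""
    if step % 100 != 0:
        return None
    return target * (1.0 - (1.0 - min(1.0, step / total_steps)) ** 3)
```

In an LTH-style variant, the surviving mask would additionally be used to rewind the retained weights to their initial values before retraining.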
Element 802 refers to a phoneme identifier for breaking the incoming voice (i.e., spectrogram) into phonemes based on a confidence score to produce predicted phonemes.
Element 804 refers to a misalignment marker for identification of non-aligned phonemes by comparison with language vocabulary knowledge. The dataset of predicted phonemes and a phonetics dataset act as inputs, while non-aligned phonemes are output.
Element 806 refers to a replacement finder for non-aligned phonemes. Mutations are generated for the identified phonemes to maximize a goodness of pronunciation (GOP) score or a confidence score and thereby produce a substitution alternative. For such purposes, a confidence scoring system assists the replacement finder by scoring substitution, insertion, and deletion alternatives.
Element 808 refers to a wave mutator that applies a mutation to the waveform to remedy the problem. A substitution alternative from element 806 is provided as input and a modified waveform is output.
As a part of the operation of the phoneme identifier, the spectrogram is broken into small units [X1, X2, X3 . . . ] and analyzed. The previous block's score is used to provide context. Each block is mapped to a confidence score over 44 phonemes and a pause. In an example, the phoneme sequence [kæt] is found in the spectrogram for the spoken word “CAT”.
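The phoneme-identifier flow just described can be sketched as follows. The scoring function is a hypothetical placeholder for the trained model, and the 44-symbol phoneme list is likewise illustrative; only the control flow (block-by-block scoring with the previous block's scores as context) reflects the description above.

```python
import numpy as np

# Hypothetical symbol set: 44 phonemes plus a pause marker.
PHONEMES = [f"ph{i}" for i in range(44)] + ["pause"]

def identify_phonemes(blocks, score_fn):
    """Map each spectrogram block to a (phoneme, confidence) pair.
    The previous block's score vector is fed back as context, as in
    the phoneme identifier of element 802."""
    prev_scores = np.zeros(len(PHONEMES))
    predicted = []
    for block in blocks:
        scores = score_fn(block, prev_scores)   # shape (45,), one per symbol
        best = int(np.argmax(scores))
        predicted.append((PHONEMES[best], float(scores[best])))
        prev_scores = scores
    return predicted
```

A real scorer would consume spectrogram columns; here any callable returning a 45-way confidence vector can be plugged in.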
a) For substitution, apply inverse X waveform and Y waveform.
b) For pre-insertion, insert Y waveform before the slice.
c) For post-insertion, insert Y waveform after the slice.
d) For deletion, apply inverse X waveform and remove silence if needed.
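The four mutation operations a)–d) can be sketched over a raw waveform as below. Modeling “apply inverse X” as cancelling the slice with its negation is a simplifying assumption for illustration; a real wave mutator would operate on the signal representation the system actually uses.

```python
import numpy as np

def mutate(wave, start, end, op, y=None):
    """Apply a phoneme-level mutation to the slice wave[start:end].
    For 'substitute', y must have the same length as the slice."""
    x = wave[start:end]
    if op == "substitute":    # a) cancel X with its inverse, splice in Y
        return np.concatenate([wave[:start], x + (-x) + y, wave[end:]])
    if op == "pre_insert":    # b) insert Y before the slice
        return np.concatenate([wave[:start], y, wave[start:]])
    if op == "post_insert":   # c) insert Y after the slice
        return np.concatenate([wave[:end], y, wave[end:]])
    if op == "delete":        # d) cancel X and close the gap
        return np.concatenate([wave[:start], wave[end:]])
    raise ValueError(op)
```

Substitution keeps the waveform length unchanged, insertions lengthen it by the length of Y, and deletion shortens it by the slice length.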
In an example, the substitution score for a phoneme is found to be the maximum, so the inverse of that phoneme's waveform is applied and the substitute waveform is transposed into the waveform slice containing it.
As illustrated in
A deep neural network (DNN) with three hidden layers (each containing 512 neurons) is implemented for both the discriminators and the generators of
Adversarial loss may be represented as follows.

Discriminator loss:

L_Dw = −E_xw∼pw[log(Dw(Xw))] − E_xs∼ps[1 − log(Dw(Gsw(Xs)))]

Loss for reconstructing whispered speech:

L_w = d(Gsw(Gws(Xw)), Xw)

Loss for reconstructing normal speech:

L_s = d(Gws(Gsw(Xs)), Xs)
Parameters may be defined as follows
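Under the assumption that Dw outputs probabilities and Gsw/Gws are the normal-to-whisper and whisper-to-normal generators, the loss terms above can be sketched numerically as follows. Taking the distance d as an L1 distance is an illustrative choice, not specified by the disclosure.

```python
import numpy as np

def discriminator_loss(Dw, Gsw, Xw, Xs):
    # L_Dw = -E[log Dw(Xw)] - E[1 - log Dw(Gsw(Xs))]
    return (-np.mean(np.log(Dw(Xw)))
            - np.mean(1.0 - np.log(Dw(Gsw(Xs)))))

def cycle_losses(Gsw, Gws, Xw, Xs, d=lambda a, b: np.mean(np.abs(a - b))):
    # L_w = d(Gsw(Gws(Xw)), Xw): whisper -> normal -> whisper
    # L_s = d(Gws(Gsw(Xs)), Xs): normal -> whisper -> normal
    return d(Gsw(Gws(Xw)), Xw), d(Gws(Gsw(Xs)), Xs)
```

With identity generators both reconstruction losses vanish, which is the fixed point the cycle terms push the mapping toward.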
As illustrated in
a) raw audio frames (A); and
b) a set of summary audios (S), without any correspondence.
The model learns a mapping function (key frame selector) F:A→S such that the distribution of resultant summary audios from F(A) is similar to the distribution of S with the help of an adversarial objective. Thereafter, a summary discriminator is trained that differentiates between a generated summary audio F(A) and a real summary audio s∈S.
Overall, the model may include two sub-networks, a key frame selector network (SK) and a summary discriminator network (SD), as described in
At operation 1702, key frames are selected from the input audio.
At operation 1704, the temporal information is encoded by performing convolution and pooling.
At operation 1706, temporal deconvolution operations produce a prediction score vector.
At operation 1708, a score indicating a key/non-key frame is provided.
At operation 1710, ‘k’ key frames are selected to form the predicted summary.
At operation 1712, frame-level features are retrieved to merge with reconstructed features.
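Operations 1702–1712 can be sketched with a 1-D convolution standing in for the encoder/decoder network: per-frame scores are produced (operations 1704–1708), and the ‘k’ highest-scoring frames form the predicted summary (operation 1710). The smoothing kernel and sigmoid scoring are illustrative assumptions, not the trained networks of the disclosure.

```python
import numpy as np

def temporal_scores(features, kernel):
    """1-D temporal convolution over per-frame features as a stand-in
    for the convolution/deconvolution network that produces the
    key/non-key prediction score vector."""
    raw = np.convolve(features, kernel, mode="same")
    return 1.0 / (1.0 + np.exp(-raw))   # sigmoid -> scores in (0, 1)

def select_key_frames(scores, k):
    """Pick the k highest-scoring frames, preserving temporal order."""
    idx = np.argsort(scores)[-k:]
    return np.sort(idx)
```

The selected indices would then index back into the frame-level features (operation 1712) to assemble the summary audio.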
According to an embodiment, the architecture may include an Application 1902, a Framework 1904, a Radio Interface Layer (RIL) 1906, and a communication processor 1908. According to an embodiment, the Application 1902 and/or the Framework 1904 may be implemented by an Application Processor (AP). However, the disclosure is not limited thereto. The Application 1902 may include applications for basic functions of the device, such as making a phone call, Contacts, etc.
According to an embodiment, the framework 1904 provides Telephony Application Programming Interfaces (APIs) to the Application Layer, which may include components such as Call, Network configuration, and Smart Call Manager.
According to an embodiment, the Smart Call Manager may implement the method illustrated in
According to an embodiment, the RIL 1906 is an intermediate layer between the Application Processor and the Communication Processor 1908, which establishes communication between the processors. According to an embodiment, the RIL layer includes a library and a kernel. According to an embodiment, the library may include a RIL daemon and the kernel may include RIL drivers. The drivers refer to wireless module drivers.
According to an embodiment, the communication processor 1908 may include a cellular protocol layer. According to an embodiment, the communication processor 1908 may refer to a modem, a baseband processor (BP), etc.
According to an embodiment, the Smart Call Manager electronically remediates distorted, degraded voice, even in cases such as vocal fold related diseases (like spasmodic dysphonia), which cause voice deterioration that produces whispered speech, voice breaks, a voice that sounds tight, strained, or breathy, or vocal tremor (causing the voice to tremble).
According to an embodiment, the Smart Call Manager remediates degraded voice due to whispering, which is normally done to make the conversation private, such as at night or in speaker mode, and corresponds to conversation in quiet environments like a library, a hospital, or a meeting room, and to various forensic applications.
According to an embodiment, the Smart Call Manager remediates degraded voice due to voice problems in children, which accordingly make conversation unattractive. The degraded voice may also correspond to speech that is difficult to understand, since children tend to use only vowel sounds, producing speech that is unclear, alongside dribbling and messy eating skills.
According to an embodiment, the Smart Call Manager remediates degraded voice due to mispronunciations by any speaker. The mispronunciations may create confusion among listeners and can greatly reduce the communication quality. Sometimes a mispronunciation may lead to an unwanted twist in communications.
According to an embodiment, the Smart Call Manager remediates degraded voice when the speaker or listener moves away from the mobile device during a conversation. Here, the voice may change from normal to whisper and back to normal or loud multiple times. This happens frequently when the phone is kept in one place (e.g., charging or in speaker mode) and the user moves around in the vicinity while talking.
According to an embodiment, the Smart Call Manager remediates degraded voice due to unintentional use of foul language on mobile when call listener is using the speaker in public places.
According to an embodiment, the Smart Call Manager facilitates paraphrasing or summary generation. The same is necessary when the called user may not be listening properly due to distractions, such as not paying attention to the phone call by looking around the room, watching TV, glancing at their phone, or chatting with another person. The present subject matter offers a voice panel, wherein the voice panel appears upon an AI trigger and provides various options such as repeating, paraphrasing, and summarization.
In a networked deployment, the computer system 2500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 2500 can also be implemented as or incorporated across various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system 2500 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
The computer system 2500 may include a processor 2502 e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 2502 may be a component in a variety of systems. For example, the processor 2502 may be part of a standard personal computer or a workstation. The processor 2502 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor 2502 may implement a software program, such as code generated manually (i.e., programmed).
The computer system 2500 may include a memory 2504, such as a memory 2504 that can communicate via a bus 2508. The memory 2504 may include, but is not limited to computer-readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, memory 2504 includes a cache or random access memory for the processor 2502. In alternative examples, the memory 2504 is separate from the processor 2502, such as a cache memory of a processor, the system memory, or other memory. The memory 2504 may be an external storage device or database for storing data. The memory 2504 is operable to store instructions executable by the processor 2502. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 2502 for executing the instructions stored in the memory 2504. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
As shown, the computer system 2500 may or may not further include a display unit 2510, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), a flat panel display, a solid-state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 2510 may act as an interface for the user to see the functioning of the processor 2502, or specifically as an interface with the software stored in the memory 2504 or a driver 2506.
Additionally, the computer system 2500 may include an input device 2512 configured to allow a user to interact with any of the components of system 2500. The computer system 2500 may also include a disk or optical driver 2506. The disk driver 2506 may include a computer-readable medium 2522 in which one or more sets of instructions 2524, e.g. software, can be embedded. Further, the instructions 2524 may embody one or more of the methods or logic as described. In a particular example, the instructions 2524 may reside completely, or at least partially, within the memory 2504 or within the processor 2502 during execution by the computer system 2500.
According to an embodiment, a computer-readable medium includes instructions 2524 or receives and executes instructions 2524 responsive to a propagated signal so that a device connected to a network 2526 can communicate voice, video, audio, images, or any other data over the network 2526. Further, the instructions 2524 may be transmitted or received over the network 2526 via a communication port or interface 2520 or using a bus 2508. The communication port or interface 2520 may be a part of the processor 2502 or may be a separate component. The communication port 2520 may be created in software or may be a physical connection in hardware. The communication port 2520 may be configured to connect with a network 2526, external media, the display 2510, or any other components in system 2500, or combinations thereof. The connection with the network 2526 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system 2500 may be physical or may be established wirelessly. The network 2526 may alternatively be directly connected to the bus 2508.
The network 2526 may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network 2526 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols. The system is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet-switched network transmissions (e.g., TCP/IP, UDP/IP, HTML, and HTTP) may be used.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.
Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to the problem and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Number | Date | Country | Kind |
---|---|---|---|
202111048934 | Oct 2021 | IN | national |