The present disclosure relates to a face mask for capturing speech produced by a wearer of the face mask, a method by a face mask for capturing speech produced by a wearer of the face mask, and a corresponding computer program product.
One of the well-known consequences of the COVID-19 pandemic is the increasingly common use of face masks. Face masks have been shown to be effective in stopping the spread of the virus. This finding has generated interest that has greatly boosted the production of masks with varying properties, ranging from surgical and cloth-based masks to N95 and KN95 face masks.
There are several known solutions for capturing the speech of a person wearing a face mask. One such solution is based on placing a microphone on the outside of the face mask. However, the audio captured by such a microphone is often muffled and resembles that of speaking through a gag, which can degrade the communication experience, e.g., when engaged in a phone call.
An alternative is to provide a microphone inside the face mask, as in MaskFone™ and Xupermask™. In such systems, a microphone converts received sounds into an audio signal for transmission. However, the sounds received by the microphone include not only the wearer's voice but also inhaling/exhaling noise. When the wearer inhales, the sound of gas flow through the mask's breathing regulator is often particularly loud and is transmitted as noise having a large component comparable in both frequency and intensity to the sounds made by a person when speaking. Accordingly, additional effort must be put into the design of the microphone to eliminate unwanted sounds caused by inhaling/exhaling.
Beyond microphone-based solutions, mask-like devices in the area of silent-speech control interfaces can be used as input devices. The overall goal of devices with silent-speech interfaces is to recognize silent speech for controlling consumer wearables. For example, A. Bedri et al., “Toward Silent-Speech Control of Consumer Wearables,” Computer, vol. 48, issue 10, pp. 54-62, IEEE, 2015, discloses a tongue-mounted magnet used to learn specific commands for controlling consumer wearables. However, this approach requires invasive prosthetics. In another example, Suzuki Y. et al., “A Mouth Gesture Interface Featuring a Mutual-Capacitance Sensor Embedded in a Surgical Mask,” in Kurosu M. (ed.), Human-Computer Interaction: Multimodal and Natural Interaction (HCII 2020), Lecture Notes in Computer Science, vol. 12182, pp. 154-165, Springer, 2020, discloses a surgical mask with embedded mutual-capacitance sensors, which allows recognizing basic non-verbal mouth gestures.
It is an object of the invention to provide an improved alternative to the above techniques and prior art. More specifically, it is an object of the invention to provide improved solutions for capturing speech produced by a wearer of a face mask.
Some embodiments of the present disclosure are directed to a face mask for capturing speech produced by a wearer of the face mask. The face mask includes a plurality of sensors adapted to capture changes in shape of a part of a face of the wearer while producing speech. The face mask also includes processing circuitry adapted to receive data from the plurality of sensors, the data representing the changes in shape of the part of the face of the wearer. The data received from the plurality of sensors is classified into units of speech using a machine learning model.
Some other related embodiments are directed to a method by a face mask for capturing speech produced by a wearer of the face mask. The method includes receiving data from a plurality of sensors comprised in the face mask, the data representing changes in shape of a part of a face of the wearer while producing speech, and classifying the data received from the plurality of sensors into units of speech using a machine learning model.
Some other related embodiments are directed to a computer program product for capturing speech produced by a wearer of a face mask. The computer program product includes a non-transitory computer readable medium storing program code that is executable by at least one processor of the face mask to perform operations including receiving data from a plurality of sensors comprised in the face mask, the data representing changes in shape of a part of a face of the wearer while producing speech, and classifying the data received from the plurality of sensors into units of speech using a machine learning model.
Potential advantages of one or more of these embodiments may include that the face mask is able to capture speech produced by the wearer of the face mask without, or to a lesser extent, capturing unwanted sounds caused by inhaling/exhaling.
Other devices, methods, and computer program products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description.
Aspects of the present disclosure are illustrated by way of example in the accompanying drawings and are not limited thereby.
Inventive concepts will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of various present inventive concepts to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present/used in another embodiment.
A face mask, a method, and a computer program product are disclosed that capture speech produced by a wearer of the face mask from changes in shape of a part of a face of the wearer while producing speech. The wearer, in the context of this invention, is a human being. The face mask includes sensors adapted to capture the changes in shape of the part of the face of the wearer while producing speech. The face mask also includes processing circuitry adapted to receive data from the sensors, the data representing the changes in shape of the part of the face of the wearer while producing speech, i.e., while the wearer is speaking. The data received from the sensors is classified into units of speech using a machine learning model.
The units of speech may, e.g., be phonemes, graphemes, phones, syllables, articulations, utterances, vowels, consonants, or any combination thereof. In one embodiment, speech is generated from the units of speech. In another embodiment, text is generated from the units of speech. The generation of the speech or text may be performed on a communication device communicatively connected to the face mask, such as a smartphone. The generation of the speech or text may alternatively be performed on a cloud computing system. In yet another embodiment, data representing units of speech may be transmitted to a communication device associated with a receiver of the speech produced by the wearer. The communication device associated with the receiver may be adapted to generate speech or text from the received units of speech.
The speech produced by the wearer may be vocalized speech or subvocalized speech. In an embodiment, speech is captured from changes in shape of a part of a face of the wearer while speaking. This part of the face includes articulators that abut the face mask. The movements of the articulators are commensurate with the speech produced by the wearer. While producing speech, the wearer's articulators are continuously moving. The articulatory movement is continuously measured by the sensors, which then transmit data representing the measured articulatory movement to the processing circuitry for subsequent processing. In comparison with speech captured in a conventional way, using a microphone to convert sound into an oscillating electrical current, speech captured from changes in shape of the face of the wearer, which represent articulatory movements, is more robust since it is less affected by background noise, in particular inhaling/exhaling noise. Hence, the face mask does not require ambient noise cancellation to mitigate unwanted background noise. The captured data representing the articulatory movements of the wearer may be transmitted in the form of adjacency matrices. Data in the form of adjacency matrices may require lower network bandwidth during transmission than conventional speech data.
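By way of illustration only, and not as part of the disclosed embodiments, the following Python sketch shows one way such an adjacency matrix could be formed: each entry (i, j) holds the distance between sensors i and j for one frame of sensor positions, so only the compact geometry of the articulators is transmitted rather than audio. The sensor coordinates and matrix construction here are illustrative assumptions.

```python
# Hypothetical sketch: pack one frame of sensor positions into an
# adjacency matrix of pairwise inter-sensor distances for transmission.
import numpy as np

def adjacency_from_positions(positions: np.ndarray) -> np.ndarray:
    """positions: (n_sensors, 2) array of sensor coordinates on the mask."""
    diff = positions[:, None, :] - positions[None, :, :]
    return np.linalg.norm(diff, axis=-1)  # symmetric (n, n) distance matrix

# Example: five assumed sensors; as the face moves, each new frame of
# positions yields one small matrix to transmit instead of raw audio.
frame = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [0.9, 1.0], [1.8, 1.1]])
A = adjacency_from_positions(frame)
print(A.shape)  # (5, 5)
```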
The face mask includes a plurality of sensors adapted to capture changes in shape of a part of a face of the wearer while speaking. In an embodiment, the part of the face may, e.g., include the region where the buccolabial group of muscles is located. The buccolabial muscles control the shape and movements of the mouth and lips, such as closing, protruding, and compressing the lips. In performing these actions, the buccolabial muscles facilitate speech and help in producing various facial expressions, such as anger, sadness, and others.
Various embodiments of the present disclosure are described in the context of a face mask that includes a plurality of sensors adapted to capture changes in shape of a part of a face of the wearer while producing speech, and processing circuitry. The plurality of sensors are arranged in the form of an array. The number of sensors required in the face mask depends on the level of accuracy required in capturing the speech of the wearer: an increased number of sensors enables the face mask to capture the changes in shape of the part of the face, and hence the speech produced by the wearer, more accurately. The data from the array of sensors captures the changes in shape of the part of the face and may be represented in the form of adjacency matrices.
In an embodiment, the face mask is adapted to abut certain facial muscles of the wearer including articulators that are exposed to the face mask. The face mask captures changes in shape of a part of a face of the wearer. In the following, reference is made to the accompanying drawings.
Most speech sounds are produced using an outward flow of air from the lungs as the energy source. The vocal folds modulate this airflow through quasi-periodic vibration, a process known as phonation. Speech produced in this manner is commonly referred to as voiced speech. The airstream from the lungs passes through the glottis and enters the pharynx. The pharynx can be adjusted with tongue movement, and depending on the state of the velum, the airstream flows into the oral or the nasal cavity. The shape of the oral cavity can be varied with the tongue position, the extent to which the jaw is opened, and the shape of the lips. Different shapes of the oral cavity may lead to different speech sounds.
Speech sounds are categorized into two major classes: vowels and consonants. Vowels are produced by varying the shape of the pharyngeal and oral cavities such that the airflow from the glottis is relatively unobstructed. Consonants are generally formed by either constricting or blocking the airstream using the tongue, teeth, and lips. Consonants can be either voiced or unvoiced and are commonly described by the place and manner of articulation. The place of articulation refers to the location in the vocal tract where the constriction is made, whereas the manner of articulation refers to the degree to which the airstream is constricted. Each different lineament of the facial muscles results in a different shape of the oral cavity, causing a subset of sensors within the array of sensors to produce a unique measured value.
A sensor in the array of sensors 202, disposed along the layer of the face mask 200, experiences changes in its electrical characteristics when the portion of the face mask where the sensor is located is stretched. The change in electrical characteristics from the sensor may, e.g., be represented as a normalized number between a minimum value and a maximum value, e.g., “0” and “1”, where “0” represents no strain and “1” represents the highest amount of strain that can be measured. The electrical characteristics can include capacitance, resistance, impedance, inductance, voltage, current, etc. The changes in electrical characteristics, such as voltage, may be encoded as an analog signal representing the changes in the shape of the part of the face of the wearer while speaking. This analog signal may be fed into an analog-to-digital converter (ADC). The ADC takes the analog signal from the sensor as input and converts it to digital information, which is then output to the processing circuitry for subsequent processing. An advantage of capturing the changes in shape of the part of the face of the wearer while speaking by measuring changes in electrical characteristics of the sensors is that the speech produced by the wearer can be captured largely without background noise, in particular inhaling/exhaling noise.
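A minimal sketch of the normalization described above is given below: raw ADC counts from a strain sensor are mapped to a value in [0, 1], where 0 is no strain and 1 is the maximum measurable strain. The 12-bit range and the calibration constants are illustrative assumptions, not values from the disclosure.

```python
# Assumed calibration constants for a hypothetical 12-bit ADC.
ADC_MAX = 4095          # full-scale count of the assumed converter
ADC_AT_REST = 512       # assumed reading with no strain (from calibration)
ADC_AT_FULL = 3900      # assumed reading at maximum strain (from calibration)

def normalize_strain(adc_count: int) -> float:
    """Map a raw ADC count to a normalized strain value in [0, 1]."""
    span = ADC_AT_FULL - ADC_AT_REST
    value = (adc_count - ADC_AT_REST) / span
    return min(1.0, max(0.0, value))  # clamp to the [0, 1] range

print(normalize_strain(2206))  # ~0.5, i.e. moderate strain
```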
The face mask 200 may be arranged to be fastened around the wearer's face using, for example, stretchable bands connecting to the face mask 200 on one end and looping around the wearer's ears on the other end. The position of the face mask 200 on the wearer's mouth remains substantially fixed upon fastening. Therefore, the location of the sensors in the array of sensors 202 also remains substantially fixed relative to the wearer's mouth, due to the array of sensors 202 being embedded in the face mask 200. As the wearer moves her/his mouth while wearing the mask, the sensors 202 produce data representing the changes in shape of a part of a face of the wearer while producing speech. The sensors 202 are adapted to continuously capture these changes in shape. The measurements captured by the different sensors affected by the changes in shape of the part of the face of the wearer represent the speech produced by the wearer. The data representing the changes in shape of the part of the face may include distances between sensors 202 and/or positions of the sensors 202.
The sensors 202 communicate the data representing the changes in shape of the part of the face of the wearer to processing circuitry (not shown in FIG. 2).
The processing circuitry 310 may classify the data received from the array of sensors 202 into units of speech using a machine learning model. The units of speech may include phonemes, graphemes, phones, syllables, articulations, utterances, vowels, consonants, or any combination thereof. In one embodiment, data representing the units of speech is communicated to the communications device 316 via the network interface 314. The data representing the units of speech may be communicated to the communications device 316 over any short-range communications protocol. The communications device 316 may generate speech or text from the data representing the units of speech. The communications device 316 may then communicate the speech or text to an external recipient. The communications device 316 may be any one of a smartphone, a mobile phone, a tablet, a laptop, a smartwatch, a media player, a Personal Digital Assistant (PDA), a Head-Mounted Display (HMD) device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a Mixed Reality (MR) device, a home assistant, an autonomous vehicle, or a drone. In another embodiment, the data representing the units of speech may be communicated to a cloud computing system, and the generation of the speech or text is then performed on the cloud computing system. In an alternative embodiment, the data representing the units of speech may be communicated directly to a communications device of the external recipient over any short-range communications protocol. The communications device of the external recipient may then generate speech or text from the units of speech.
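The following is an illustrative sketch of the classification step described above; all names, the phoneme inventory, and the model interface are hypothetical stand-ins, not the disclosed implementation. It shows sensor frames being mapped to units of speech that are then ready to forward over a short-range link.

```python
# Hypothetical per-frame classification of sensor data into units of speech.
from typing import Iterable, List

PHONEMES = ["a", "e", "i", "o", "u", "m", "b", "p", "<sil>"]  # assumed units

def classify_frames(frames: Iterable[list], model) -> List[str]:
    """model is any object exposing predict(frame) -> class index."""
    return [PHONEMES[model.predict(f)] for f in frames]

class DummyModel:  # stand-in for the trained machine learning model
    def predict(self, frame):
        return int(sum(frame)) % len(PHONEMES)

units = classify_frames([[0.1, 0.4], [0.9, 0.3]], DummyModel())
print(units)  # units of speech ready to send to the paired device
```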
In some embodiments, the processing circuitry 310 may also be configured to send control signals to the array of sensors 202. For example, the processing circuitry 310 may calibrate a sensor 304 in the array of sensors 202 by sending a control signal to the sensor 304. The processing circuitry 310 may also turn the sensor 304 off or on. For example, the processing circuitry 310 may send a control signal which disables the sensor 304, thereby minimizing the power required by the sensor 304. The processing circuitry 310 may also send a control signal to turn off the sensor 304 in response to data from other sensors (sensor 306 and sensor 308) in the array of sensors 202. For example, some sensors, such as the sensor 304, may be turned off to conserve power if the face mask 302 is detected to not be in use. When the face mask 302 is detected to not be in use, the processing circuitry 310 may maintain only select sensors in the “on” position. When the face mask 302 is detected to again be in use by the wearer, for example when the wearer starts speaking, the processing circuitry 310 may reactivate, or turn on, the remaining sensors.
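A sketch of this power-management policy, under an assumed sensor-array interface, is shown below: when the mask is detected as idle, all but a few "sentinel" sensors are disabled, and detected activity wakes the full array again. The class and sentinel scheme are illustrative assumptions.

```python
# Hypothetical power management: keep only sentinel sensors on while idle.
class SensorArray:
    def __init__(self, n_sensors: int, sentinels=(0,)):
        self.enabled = [True] * n_sensors
        self.sentinels = set(sentinels)

    def sleep(self):  # mask detected as not in use
        for i in range(len(self.enabled)):
            self.enabled[i] = i in self.sentinels

    def wake(self):   # wearer starts speaking again
        self.enabled = [True] * len(self.enabled)

array = SensorArray(16, sentinels=(0, 8))
array.sleep()   # only sensors 0 and 8 stay on, minimizing power draw
array.wake()    # full array reactivated
```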
In an example illustrating the flow of communication, a wearer of face mask 302 connected to headset 340 engages in a phone call over network 348 with a wearer of face mask 320 connected with the headset 334. Face mask 302 may connect to a communications device 344 and face mask 320 may connect to a communications device 346 over any short-range communications protocol. When the wearer of face mask 302 speaks, processing circuitry 310 receives data representing changes in shape of the part of the face of the wearer of face mask 302 from the sensors, sensor 304, sensor 306, and sensor 308. The data representing the changes in shape of the part of the face may include distances between the sensors and/or the positions of the sensors. The processing circuitry 310 classifies the data into units of speech using a machine learning model. The communications device 344 may then generate speech from the data representing the units of speech. The generated speech is subsequently communicated to communications device 346 over network 348. Alternatively, the data representing the units of speech is transmitted directly to the communications device 346 over any short-range communications protocol. The communications device 346 then generates the speech from the data representing the units of speech. Communications device 346 then outputs the generated speech using the headset 334.
The face mask 302 may, for example, register each wearer of the face mask 302 and create and store a personalized profile for each registered wearer. The personalized profile may store a personal vocabulary specific to the wearer and a personalized machine learning model. The personalized profile associated with the face mask 302 may, for example, be stored in the memory 312, and the personalized profile associated with the face mask 320 may, for example, be stored in the memory 330. The personalized profile may be transferable by the wearer to another face mask 200.
In an embodiment, when the wearer puts on the face mask 200, an optional calibration may take place, during which the wearer is requested to speak a few utterances, or a sample set of words. Thereafter, the face mask 200 simply rests on the wearer's face so that the machine learning model learns the strain values from the array of sensors 202. In an example, the strain values can then be used as a reference while the face mask 200 captures the wearer's speech. In such a manner, the machine learning model builds the personalized profile for the wearer of the face mask 200. In one embodiment, the personalized machine learning model stored in the personalized profile of the wearer may be collated with personalized machine learning models associated with other wearers to produce a collaborative training model. When the face mask 200 is worn in a different position on the face of the wearer, if there is a malfunction with sensors in the array of sensors 202, or if the face mask 200 detects a deviation higher than a predefined threshold between the measured strain values and the reference values, the calibration phase can be re-initiated. Such an observation can be prompted as a notification to the communications device 316 communicatively connected to the face mask 200 of the wearer, to re-position the face mask 200 or to re-calibrate the strain values.
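A minimal sketch of this re-calibration check follows, assuming stored reference strain values and an illustrative threshold value that is not specified in the disclosure:

```python
# Hypothetical re-calibration trigger: compare live strain readings against
# the reference values stored during the calibration phase.
import numpy as np

THRESHOLD = 0.2  # assumed value for the predefined deviation threshold

def needs_recalibration(measured: np.ndarray, reference: np.ndarray) -> bool:
    return float(np.mean(np.abs(measured - reference))) > THRESHOLD

reference = np.array([0.10, 0.12, 0.09, 0.11])   # from the calibration phase
measured = np.array([0.45, 0.50, 0.40, 0.48])    # e.g. mask worn differently
if needs_recalibration(measured, reference):
    print("notify device: re-position mask or re-calibrate strain values")
```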
The location information of each sensor in the array of sensors 202 is used to identify the measurements produced by the sensor. Alternatively, the measurements from each sensor may be represented alongside the unique identifier of the sensor. This type of data representation is known as a coordinate matrix. Data representation in the form of coordinate matrices typically requires less data storage than data representation in the form of standard dense matrix structures. As an alternative to row-and-column representation, the location information of each sensor can be represented relative to a single point of reference on the grid. For example, data measured from each sensor of the array of sensors 202 is represented alongside the location of each sensor, expressed as a difference between the single point of reference on the grid and the sensor.
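As an illustration of the coordinate-matrix representation mentioned above, the sketch below uses the standard COO (coordinate) sparse format: only sensors with non-zero strain are stored, each value alongside its grid location, which is far more compact than a dense matrix when most of the mask is at rest. The grid size and readings are made up.

```python
# Coordinate (COO) representation of sparse strain measurements.
from scipy.sparse import coo_matrix

rows = [0, 2, 3]            # grid rows of affected sensors
cols = [1, 2, 0]            # grid columns of affected sensors
vals = [0.8, 0.3, 0.5]      # normalized strain measurements

grid = coo_matrix((vals, (rows, cols)), shape=(4, 4))
print(grid.nnz)             # 3 stored entries instead of 16 dense ones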
The electrical characteristics from the sensors are represented with respect to different location coordinates on the face mask 200. Thereafter, grid-to-graph sequence face modelling is used to reconstruct the image of the wearer's facial muscles.
These graph-word combinations are then used, in step 5, to train the graph-to-syllable model. The input to the graph-to-syllable model is a sequence of graphs and the output is a set of syllables which are associated with the facial movements converted from a grid-to-graph sequence. The process involves an encoder and a decoder, where the encoder learns to compress a sequence of graphs into a latent space. The decoder then decompresses the sequence of graphs from the latent space to output one or more syllables. In an alternative embodiment, the dataset that is captured in steps 1 to 4 can be transferred to a cloud computing system and a machine learning model is trained in the cloud computing system.
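A simplified, hypothetical sketch of the encoder-decoder idea above is shown below: an encoder compresses a sequence of flattened per-frame graph feature vectors into a latent vector, and a decoder emits a sequence of syllable logits. The layer sizes, syllable inventory, and latent-feeding decoder variant are assumptions for illustration, not the disclosed architecture.

```python
# Hypothetical graph-to-syllable encoder-decoder (sequence-to-sequence).
import torch
import torch.nn as nn

N_SYLLABLES = 64      # assumed syllable inventory size
GRAPH_DIM = 32        # assumed size of a flattened per-frame graph feature

class GraphToSyllable(nn.Module):
    def __init__(self, latent=128, max_out=8):
        super().__init__()
        self.encoder = nn.GRU(GRAPH_DIM, latent, batch_first=True)
        self.decoder = nn.GRU(latent, latent, batch_first=True)
        self.head = nn.Linear(latent, N_SYLLABLES)
        self.max_out = max_out

    def forward(self, graphs):                  # graphs: (B, T, GRAPH_DIM)
        _, h = self.encoder(graphs)             # latent state: (1, B, latent)
        # Feed the latent vector at every decoding step (a common variant).
        steps = h.transpose(0, 1).repeat(1, self.max_out, 1)
        out, _ = self.decoder(steps)
        return self.head(out)                   # (B, max_out, N_SYLLABLES)

model = GraphToSyllable()
logits = model(torch.randn(2, 20, GRAPH_DIM))   # 2 sequences of 20 graphs
print(logits.shape)                             # torch.Size([2, 8, 64])
```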
In addition, federated learning techniques may be implemented to train a decentralized version of the machine learning model, where data is collected from personalized machine learning models from multiple wearers with common vocabularies. Wearers are grouped by demographics, such as gender, age, etc., and afterwards a new demography specific machine learning model is produced for wearers in different demographic groups through federated averaging. A sequence diagram illustrating an example federated training process is shown in the accompanying drawings.
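By way of illustration, the sketch below shows federated averaging in its simplest form: per-wearer model parameters within one demographic group are combined by a weighted average proportional to each wearer's data volume, so raw sensor data never leaves a wearer's device. The parameter shapes and sample counts are made up.

```python
# Hypothetical federated averaging across wearers in one demographic group.
import numpy as np

def federated_average(weights: list, n_samples: list) -> np.ndarray:
    """weights: one flat parameter vector per wearer; n_samples: data sizes."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(weights, n_samples))

group = [np.array([0.2, 0.5]), np.array([0.4, 0.1]), np.array([0.3, 0.3])]
counts = [100, 300, 600]  # utterances contributed by each wearer
demography_model = federated_average(group, counts)
print(demography_model)   # parameters of the new demography specific model
```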
It should be noted that, alternatively, personalized machine learning models that are associated with face masks that fit into a demographic group may be collated centrally by a cloud service. These personalized machine learning models may then be used to prepare demography specific machine learning models. In an example, classifications from a personalized machine learning model are preferred over those from a collaborative training model or a demography specific machine learning model, as the personalized machine learning model is trained for the specific wearer. However, in the event of insufficient data for a personalized machine learning model, a collaborative training model or a demography specific machine learning model may be used for classification.
In an embodiment, the face mask 200 comes pre-installed with graph-to-syllable models that have already been trained to classify the data representing the changes in shape of the part of the face of the wearer into units of speech. For example, a graph-to-syllable model pre-installed in a face mask 200 may be a demography specific machine learning model. The pre-installed model may be re-trained to become more wearer-specific, or a separate model may be trained and combined with the pre-installed model. In addition to pre-installed graph-to-syllable models, the face mask 200 can be trained for specialized uses in limited contexts, for example, a niche technical environment, such as a medical environment, that uses frequent and specific vocabulary. The specific vocabulary used in the niche technical environment may be in addition to the personal vocabulary specific to the wearer. In such cases, the machine learning model associated with the face mask 200 is trained with predefined vocabularies that may be in regular use in the niche technical environment. The machine learning model associated with the face mask 200 can then be further trained by the wearer only for words associated with the niche technical environment. The wearer trains the machine learning model for the niche technical environment by uttering each word while the face mask 200 generates strain maps. The strain maps may then be converted to a graph, and a graph-to-syllable model will learn to associate graphs with syllables, i.e., the syllables used in building words for the specific vocabulary. A face mask 200 which is pre-installed with graph-to-syllable models mitigates the extent of training required to calibrate the face mask 200 to the wearer's speech. In this example the machine learning model is built for the niche technical environment. However, without limitation, it is noted that the machine learning model can similarly be trained by the wearer in generic contexts as well.
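An illustrative fine-tuning loop for this niche-vocabulary training is sketched below; every name here (the vocabulary, the recording callback, the model's update method) is a hypothetical stand-in used only to show the shape of the workflow: the wearer utters each domain word, the mask records strain maps, and the model is updated only on those word/strain-map pairs.

```python
# Hypothetical fine-tuning of a pre-installed model on a niche vocabulary.
MEDICAL_VOCAB = ["stethoscope", "intubation", "hemostat"]  # assumed examples

def fine_tune(model, record_strain_maps, epochs=3):
    for _ in range(epochs):
        for word in MEDICAL_VOCAB:
            maps = record_strain_maps(word)   # wearer utters the word
            model.update(maps, word)          # one training update per word

class LoggingModel:  # stand-in for a pre-installed graph-to-syllable model
    def update(self, maps, word):
        print(f"trained on {len(maps)} strain maps for '{word}'")

fine_tune(LoggingModel(), lambda w: [[0.1, 0.2], [0.3, 0.4]])
```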
In an embodiment, the face mask 200 may be configured to capture expressions of the wearer from changes in shape of a part of a face of the wearer. The expressions of the wearer may include, but are not limited to, gestures or emotions. The face mask 200 includes the array of sensors 202 adapted to capture the changes in shape of the part of the face of the wearer and processing circuitry adapted to receive data from the array of sensors 202. The data from the array of sensors 202 represents the changes in shape of the part of the face of the wearer. The data is classified using a machine learning model to capture an expression produced by the wearer, wherein the captured expression corresponds to the captured changes in shape of the part of the face of the wearer.
The data is then converted into a graph. The graph captures one or more locations of affected sensors and the corresponding measurements. For example, changes in strain values in a sensor, corresponding to changes in the shape of the part of the face, may be measured as a voltage.
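A minimal sketch of this conversion follows, using an assumed graph library and made-up readings: each affected sensor becomes a node holding its grid location and measured voltage change, with edges linking neighbouring affected sensors.

```python
# Hypothetical conversion of affected-sensor readings into a graph.
import networkx as nx

readings = {(0, 1): 0.8, (0, 2): 0.6, (1, 1): 0.4}  # grid location -> volts

g = nx.Graph()
for loc, volts in readings.items():
    g.add_node(loc, voltage=volts)  # node carries location and measurement
for a in readings:
    for b in readings:
        if a < b and abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1:  # neighbours
            g.add_edge(a, b)

print(g.number_of_nodes(), g.number_of_edges())  # 3 nodes, 2 edges
```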
In an embodiment, to associate the sequences of graphs to syllables, for example the vowel “a”, and later to words, the sequence-to-sequence model described above is used.
In an embodiment, the face mask 200 comprises a gesture-to-talk interface for initiating transmission of the speech of the sender. The lineation of the sender's facial muscles representing speech is followed by a gesture, such as a long pause, during which the face mask 200 simply rests on the sender's still face. In response to the gesture, the face mask 200 triggers transmission of the sender's speech to the receiver, in a manner akin to a push-to-talk device.
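The sketch below illustrates one way such a gesture-to-talk trigger could work, with assumed frame rate, thresholds, and a hypothetical transmit callback: when the strain signal stays near its resting level long enough to count as a pause, the buffered speech frames are flushed to the receiver.

```python
# Hypothetical gesture-to-talk trigger based on a long pause in strain.
PAUSE_FRAMES = 30       # assumed: ~1 s of stillness at 30 frames/s
REST_LEVEL = 0.05       # assumed: strain below this counts as a still face

def gesture_to_talk(strain_frames, transmit):
    buffer, still = [], 0
    for s in strain_frames:
        if s < REST_LEVEL:
            still += 1
            if still >= PAUSE_FRAMES and buffer:
                transmit(buffer)   # akin to releasing a push-to-talk button
                buffer = []
        else:
            still = 0
            buffer.append(s)

gesture_to_talk([0.4] * 10 + [0.0] * 30,
                lambda b: print(f"sent {len(b)} frames"))
```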
In an alternative embodiment, the text-to-speech conversion may also be performed in a cloud computing system. A block diagram illustrating a data processing system 1100, where data transmitted between the sender's face mask 902 and the receiver's communication device is processed and text-to-speech conversion is performed in a cloud computing system 1102, is shown in FIG. 11.
Potential advantages of one or more of these embodiments may include that the face mask is able to capture speech produced by the wearer of the face mask using sensors adapted to capture changes in shape of a part of a face of the wearer while speaking. Therefore, the need for ambient noise cancellation to mitigate unwanted background noise from the wearer is eliminated. Further, the need for invasive prosthetics is obviated since the speech produced by the wearer is captured from the data received from the sensors in the face mask as opposed to devices mounted within the wearer's body.
In the above description of various embodiments of present inventive concepts, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of present inventive concepts. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which present inventive concepts belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element, or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Like numbers refer to like elements throughout. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus, a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of present inventive concepts. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification.
As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia”, may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est”, may be used to specify a particular item from a more general recitation.
Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of present inventive concepts may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module,” or variants thereof.
It should also be noted that in some alternate implementations, the functions/acts noted in the blocks may occur out of the order noted in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Finally, other blocks may be added/inserted between the blocks that are illustrated, and/or blocks/operations may be omitted without departing from the scope of inventive concepts. Moreover, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.
Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present inventive concepts. All such variations and modifications are intended to be included herein within the scope of present inventive concepts. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the presented embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of present inventive concepts.
Number | Date | Country | Kind
---|---|---|---
20210100925 | Dec 2021 | GR | national
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SE2022/050220 | 3/8/2022 | WO |