At least one of the present embodiments generally relates to haptics and more particularly to the encoding and decoding of information representative of a haptic effect, wherein a haptic signal is compressed based on a location of where to apply the haptic effect.
Fully immersive user experiences are proposed to users through immersive systems based on feedback and interactions. The interaction may use conventional ways of control that fulfill the needs of the users. Current visual and auditory feedback provide satisfying levels of realistic immersion. Additional feedback can be provided by haptic effects that allow a human user to perceive a virtual environment with his senses and thus get a better experience of the full immersion with improved realism. However, haptics is still one area of potential progress to improve the overall user experience in an immersive system.
Conventionally, an immersive system may comprise a 3D scene representing a virtual environment with virtual objects localized within the 3D scene. To improve the user interaction with the elements of the virtual environment, haptic feedback may be used through stimulation of haptic actuators. Such interaction is based on the notion of “haptic objects” that correspond to physical phenomena to be transmitted to the user. In the context of an immersive scene, a haptic object makes it possible to provide a haptic effect by defining the stimulation of appropriate haptic actuators to mimic the physical phenomenon on the haptic rendering device. Different types of haptic actuators are able to render different types of haptic feedback.
An example of a haptic object is an explosion. An explosion can be rendered through vibrations and heat, thus combining different haptic effects on the user to improve the realism. An immersive scene typically comprises multiple haptic objects, for example using a first haptic object related to a global effect and a second haptic object related to a local effect.
The principles described herein apply to any immersive environment using haptics such as augmented reality, virtual reality, mixed reality or haptics-enhanced video (or omnidirectional/360° video) rendering, for example, and more generally apply to any haptics-based user experience. A scene for such examples of immersive environments is thus considered an immersive scene.
Haptics refers to the sense of touch and includes two dimensions, tactile and kinesthetic. The first one relates to tactile sensations such as friction, roughness, hardness, temperature and is felt through the mechanoreceptors of the skin (Merkel cell, Ruffini ending, Meissner corpuscle, Pacinian corpuscle). The second one is linked to the sensation of force/torque, position, motion/velocity provided by the muscles, tendons, and the mechanoreceptors in the joints. Haptics is also involved in the perception of self-motion since it contributes to the proprioceptive system (i.e. perception of one's own body). Thus, the perception of acceleration, speed or any body model may be assimilated to a haptic effect. The frequency range is about 0-1 kHz depending on the type of modality. Most existing devices able to render haptic signals generate vibrations. Examples of such haptic actuators are linear resonant actuators (LRA), eccentric rotating masses (ERM), and voice-coil linear motors. These actuators may be integrated into haptic rendering devices such as haptic suits but also smartphones or game controllers.
To encode haptic signals, several formats have been defined related to either a high-level description using XML-like formats (for example MPEG-V), a parametric representation using JSON-like formats such as Apple Haptic Audio Pattern (AHAP) or Immersion Corporation's HAPT format, or waveform encoding (IEEE 1918.1.1 ongoing standardization for tactile and kinesthetic signals). The HAPT format has recently been included into the MPEG ISOBMFF file format specification (ISO/IEC 14496 part 12).
Moreover, GL Transmission Format (glTF™) is a royalty-free specification for the efficient transmission and loading of 3D scenes and models by applications. This format defines an extensible, common publishing format for 3D content tools and services that streamlines authoring workflows and enables interoperable use of content across the industry.
While the topic of kinesthetic data compression has received some attention in the context of bilateral teleoperation systems with kinesthetic feedback, the compression of vibrotactile information remains largely unaddressed. More generally, the adaptation of the compression of a haptic signal according to the body part that is stimulated by a haptic actuator rendering the haptic signal has not yet been addressed.
The ongoing standardization process IEEE 1918.1.1 for tactile and kinesthetic signals is a first attempt at defining a standard coded representation.
Embodiments described hereafter have been designed with the foregoing in mind.
Embodiments are related to a device and a method for encoding a haptic signal of a haptic effect comprising a compression step, where the compression is based on the location where the haptic effect is to be performed, thanks to a mapping between such a location and a compression parameter, the location being based on body segmentation, on vertices, or on textures. A corresponding device and method for decoding are also described.
A first aspect of at least one embodiment is directed to a method for decoding comprising obtaining information representative of a haptic effect, determining a location where to apply the haptic effect, determining a type of haptic effect, determining at least one compression parameter based on the obtained location and type, decompressing a haptic signal associated with the haptic effect based on the determined at least one compression parameter and decoding the decompressed haptic signal.
A second aspect of at least one embodiment is directed to a method for encoding comprising obtaining a location where to apply a haptic effect, obtaining a type of haptic effect, obtaining a haptic signal associated with the haptic effect, determining at least one compression parameter based on the obtained location and type, compressing the haptic signal based on the determined at least one compression parameter, generating information representative of the haptic effect and encoding the compressed haptic signal and the generated information.
A third aspect of at least one embodiment is directed to an apparatus for decoding a haptic signal comprising a processor configured to obtain information representative of a haptic effect, determine a location where to apply the haptic effect, determine a type of haptic effect, determine at least one compression parameter based on the obtained location and type, decompress a haptic signal associated with the haptic effect based on the determined at least one compression parameter and decode the decompressed haptic signal.
A fourth aspect of at least one embodiment is directed to an apparatus for encoding a haptic signal comprising a processor configured to obtain a location where to apply a haptic effect, obtain a type of haptic effect, obtain a haptic signal associated with the haptic effect, determine at least one compression parameter based on the obtained location and type, compress the haptic signal based on the determined at least one compression parameter, generate information representative of the haptic effect and encode the compressed haptic signal and the generated information.
A fifth aspect of at least one embodiment is directed to a signal comprising information representative of a haptic effect and a compressed haptic signal generated according to the second aspect.
According to a sixth aspect of at least one embodiment, a computer program comprising program code instructions executable by a processor is presented, the computer program implementing at least the steps of a method according to the first or second aspect.
According to a seventh aspect of at least one embodiment, a computer program product which is stored on a non-transitory computer readable medium and comprises program code instructions executable by a processor is presented, the computer program product implementing at least the steps of a method according to the first or second aspect.
The haptic rendering device comprises a processor 101. The processor 101 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor may perform data processing such as haptic signal decoding, input/output processing, and/or any other functionality that enables the device to operate in an immersive system.
The processor 101 may be coupled to an input unit 102 configured to convey user interactions. Multiple types of inputs and modalities can be used for that purpose. A physical keypad or a touch-sensitive surface are typical examples of inputs adapted to this usage, although voice control could also be used. In addition, the input unit may also comprise a digital camera able to capture still pictures or video. The processor 101 may be coupled to a display unit 103 configured to output visual data to be displayed on a screen. Multiple types of displays can be used for that purpose such as a liquid crystal display (LCD) or organic light-emitting diode (OLED) display unit. The processor 101 may also be coupled to an audio unit 104 configured to render sound data to be converted into audio waves through an adapted transducer such as a loudspeaker for example. The processor 101 may be coupled to a communication interface 105 configured to exchange data with external devices. The communication preferably uses a wireless communication standard to provide mobility of the haptic rendering device, such as cellular (e.g. LTE) communications, Wi-Fi communications, and the like. The processor 101 may access information from, and store data in, the memory 106, which may comprise multiple types of memory including random access memory (RAM), read-only memory (ROM), a hard disk, a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, or any other type of memory storage device. In embodiments, the processor 101 may access information from, and store data in, memory that is not physically located on the device, such as on a server, a home computer, or another device.
The processor 101 may be coupled to a haptic unit 107 configured to provide haptic feedback to the user, the haptic feedback being described in a haptic object 192 that is part of a scene description 191 of an immersive scene 190. The haptic feedback describes the kind of feedback to be provided according to the syntax described further hereinafter. Such a description file is typically conveyed from the server 180 to the haptic rendering device 100. The haptic unit 107 may comprise a single haptic actuator or a plurality of haptic actuators located at a plurality of positions on the haptic rendering device. Different haptic units may have a different number of actuators and/or the actuators may be positioned differently on the haptic rendering device.
The processor 101 may receive power from the power source 108 and may be configured to distribute and/or control the power to the other components in the haptic rendering device 100. The power source may be any suitable device for powering the device. As examples, the power source may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
While the figure depicts the processor 101 and the other elements 102 to 108 as separate components, it will be appreciated that these elements may be integrated together in an electronic package or chip. It will be appreciated that the haptic rendering device 100 may include any sub-combination of the elements described herein while remaining consistent with an embodiment. The processor 101 may further be coupled to other peripherals or units not depicted in the figure.
Typical examples of haptic rendering device 100 are haptic suits, smartphones, game controllers, haptic gloves, haptic chairs, haptic props, motion platforms, etc. However, any device or composition of devices that provides similar functionalities can be used as haptic rendering device 100 while still conforming with the principles of the disclosure.
In at least one embodiment, the device does not include a display unit but includes a haptic unit. In such embodiment, the device does not render the scene visually but only renders haptic effects. However, the device may prepare data for display so that another device, such as a screen, can perform the display. Examples of such devices are haptic suits or motion platforms.
In at least one embodiment, the device does not include a haptic unit but includes a display unit. In such embodiment, the device does not render the haptic effect but only renders the scene visually. However, the device may prepare data for rendering the haptic effect so that another device, such as a haptic prop, can perform the haptic rendering. Examples of such devices are smartphones, head-mounted displays, or laptops.
In at least one embodiment, the device does not include a display unit nor does it include a haptic unit. In such embodiment, the device does not visually render the scene and does not render the haptic effects. However, the device may prepare data for display so that another device, such as a screen, can perform the display and may prepare data for rendering the haptic effect so that another device configured to render the haptic effect, such as a haptic prop, can perform the haptic rendering. In this case, the prepared data is then provided to the haptic rendering device through a communication channel such as the communication interface 105. Examples of such devices are desktop computers, optical media players, or set-top boxes.
In at least one embodiment, the immersive scene 190 and associated elements are directly hosted in memory 106 of the haptic rendering device 100 allowing local rendering and interactions.
Although the different elements of the immersive scene 190 are depicted in the figure as being hosted on the server 180, they may alternatively be distributed over different devices.
As discussed above, some devices do not perform the rendering themselves but delegate this task to other devices. In this case, data is prepared for the rendering of the visual element and/or of the haptic effect and transmitted to the device(s) performing the rendering.
In a first example, the immersive scene description 191 may comprise a virtual environment of an outdoor camp site where the user can move an avatar representing him. A first haptic feedback could be a breeze of wind that would be present anywhere in the virtual environment and generated by a fan. A second haptic feedback could be a temperature of 30° C. when the avatar is in proximity of a campfire. This effect would be rendered by a heating element of a haptic suit worn by the user executing the process 200. However, this second feedback would only be active when the position of the user is detected as being inside the haptic volume of the second haptic object. In this case the haptic volume represents the distance to the fire where the user feels the temperature.
In another example, the immersive scene description 191 may comprise a video of a fight between two boxers and, the user wearing a haptic suit, the haptic effect may be a strong vibration on the chest of the user when one of the boxers receives a punch.
In this example, a first haptic rendering device is a haptic vest 380 where only the two sleeves comprise haptic actuators to render vibrations. A second haptic rendering device is a haptic chair 390, also able to render vibrations.
First, the haptic effect to be rendered is described in a haptic feedback description file 300. According to at least one embodiment, this file uses the OHM file format and syntax. In this example, one haptic object 310 is present in the haptic feedback description file 300. However, as introduced above, a haptic feedback description file may comprise multiple haptic objects.
The haptic object 310 comprises three haptic channels 311, 312, 313. The haptic channel 311 is associated with a geometric model 351 (avatar_ID) selected from the set of standard generic predefined geometric models 350 and more precisely with the left arm of the geometric model 351 (body_part_mask corresponding to left arm). The haptic channel 311 is also associated with the audio file 320 and more particularly with the first channel of the audio file comprising the audio signal 321. Thus, the haptic rendering device 380 is able to select the audio signal 321 to be applied to the haptic actuators of the left arm. Similarly, for the right arm, as defined by the information of the second haptic channel 312, the audio signal 322 (second channel of the audio file) will be applied to the haptic actuators of the right arm, allowing the haptic vest 380 to render the vibration as defined in the haptic feedback description file 300.
The same principle applies to the haptic chair 390 with the difference that it uses a custom avatar_ID. Indeed, its geometry is not part of the set of generic geometric models. Therefore, the corresponding geometry is defined as a custom avatar_ID 330 within the haptic feedback description file 300. The third audio signal 323 is selected to be applied to the actuators of the haptic chair 390.
The association between the haptic channels and the audio channels is implicit and is done according to the order of appearance. The first haptic channel of a haptic object will be associated with the first audio channel of the audio file (explicitly) associated with the haptic object.
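As a purely illustrative sketch of this implicit association by order of appearance (the field names haptic_channels, audio_file and channel_index are hypothetical and do not belong to any normative syntax), a decoder could resolve the pairing as follows:

```python
# Illustrative sketch only: resolving the implicit association between haptic
# channels and audio channels by order of appearance. Field names are hypothetical.

def associate_channels(haptic_object):
    """Pair each haptic channel with the audio channel of the same rank."""
    associations = []
    for index, haptic_channel in enumerate(haptic_object["haptic_channels"]):
        associations.append({
            "haptic_channel": haptic_channel["id"],
            "audio_file": haptic_object["audio_file"],  # explicit association
            "channel_index": index,                      # implicit, by order of appearance
        })
    return associations
```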
In a second example (not illustrated) of data organization for a haptic feedback description file according to at least one embodiment, the file comprises two different haptic objects. Therefore, the haptic channels are in different haptic objects. In this case, it is possible to use two different audio files file1.wav and file2.wav.
The set of models 350 typically represents the geometry of human bodies with different levels of detail and thus provides different levels of precision. The same principle can be applied to any kind of geometric model (animal, object, etc.). In the figure, the precision of geometric model 351 is much lower than the detailed mesh of geometric model 352.
As seen above, a body part is associated with a binary mask (third column). This provides a convenient way to combine multiple body parts. For example, the upper body corresponds to grouping the body parts with IDs 1 to 14. This combination is performed by a bitwise OR operation over the masks of the different body parts to get the corresponding mask value. Therefore, a binary mask of 000000000011111111111111 (0x003FFF in hex value) makes it possible to group the body parts with IDs 1 to 14 and thus represents the complete upper body in a very efficient manner. This grouping is illustrated in the corresponding figure.
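Assuming, as in the example above, that the body part with ID n is assigned the mask 1 << (n-1), this grouping can be sketched as follows:

```python
# Minimal sketch of grouping body parts by a bitwise OR of their binary masks,
# assuming body part with ID n is assigned the mask 1 << (n - 1).

def combine_body_parts(part_ids):
    """Return the combined binary mask for a list of body part IDs."""
    mask = 0
    for part_id in part_ids:
        mask |= 1 << (part_id - 1)  # bitwise OR of the individual masks
    return mask

upper_body = combine_body_parts(range(1, 15))  # IDs 1 to 14
print(hex(upper_body))                         # 0x3fff, i.e. 0x003FFF
```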
In this document, the notion of “location” where the haptic effect is to be applied corresponds to a determined segmentation of the body (such as the body parts described above), but may also be defined at a finer granularity, on a per-vertex or per-texture basis, as described in the embodiments below.
Immersive scenes may comprise multiple haptic effects comprising different haptic signals, such as the signals 321, 322 and 323 of the example described above. Transmitting such signals may require significant bandwidth, and the signals may therefore be compressed. A well-known approach relies on the perceptual deadband principle based on Weber's law: a change of stimulus smaller than the just-noticeable difference (JND), expressed as a Weber fraction of the current stimulus intensity, is not perceived by the user and thus does not need to be transmitted.
However, in the particular case of kinesthetic data, the Weber fraction depends on the type of kinesthetic data, as illustrated in Table 1, which shows the sensory resolution and Weber fractions for a range of tactile and haptic stimuli extracted from Jones, L. A. (2012), “Application of Psychophysical Techniques to Haptic Research”.
In this table, the first column lists different types of haptic data. The second column gives the resolution for a type of stimulus. The resolution corresponds to the absolute threshold: it is the smallest amount of stimulus energy necessary to produce a sensation. The third column lists the Weber fraction expressed as a percentage. The value of the Weber fraction may vary for different subjects and with various parameters (e.g. location on the body, temperature, humidity, etc.), thus it is expressed as an average or interval.
These methods based on perceptual deadband can be used both for offline compression and for live streaming of haptic compressed data.
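As a purely illustrative, non-normative sketch of such a perceptual deadband scheme, a sample may be transmitted only when it deviates from the last transmitted value by more than the Weber fraction:

```python
# Illustrative sketch of perceptual-deadband compression: a sample is kept only
# when it differs from the last transmitted value by more than the Weber
# fraction (JND). Simplified model, not a standardized codec.

def deadband_compress(samples, weber_fraction):
    """Keep only (index, value) pairs whose change exceeds the perceptual deadband."""
    kept = []
    last = None
    for i, value in enumerate(samples):
        if last is None or abs(value - last) > weber_fraction * abs(last):
            kept.append((i, value))
            last = value
    return kept

# Example: with a 10% Weber fraction, small fluctuations are dropped.
print(deadband_compress([1.00, 1.05, 1.02, 1.20, 1.21, 1.50], 0.10))
```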
In addition, for offline data compression of vibrotactile signals, existing compression methods similar to audio compression techniques may be used, for example relying on the Discrete Cosine Transform or the Fourier Transform to compress the data by removing unnecessary frequencies. Each type of haptic stimulus (e.g. vibration, kinesthetic, temperature, etc.) is associated with at least one specific mechanoreceptor (Pacinian corpuscles, Meissner's corpuscles, Merkel cells, Ruffini corpuscles) which presents a limited range of perceptible frequencies, as illustrated in Table 2. The data can be compressed by discarding non-relevant information associated with non-perceivable frequencies (DCT coefficients for instance). Additionally, the remaining data (DCT coefficients of perceived frequencies for instance) may then be quantized based on Weber's law of JNDs.
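The following sketch illustrates, under simplifying assumptions, how such a DCT-based scheme could discard coefficients above a maximal perceivable frequency; the flat quantization step used here is a simplification of the JND-based quantization mentioned above, and this is not a normative encoding:

```python
# Hedged sketch of vibrotactile compression via DCT: coefficients above the maximal
# perceivable frequency are discarded and the remaining coefficients are coarsely
# quantized (flat step for simplicity; the text suggests JND-based quantization).
import numpy as np
from scipy.fft import dct, idct

def compress_vibration(signal, sample_rate, max_frequency, quant_step=0.05):
    coeffs = dct(signal, norm="ortho")
    n = len(signal)
    # DCT-II bin k corresponds approximately to frequency k * sample_rate / (2 * n)
    bin_freqs = np.arange(n) * sample_rate / (2 * n)
    coeffs[bin_freqs > max_frequency] = 0.0              # drop non-perceivable frequencies
    coeffs = np.round(coeffs / quant_step) * quant_step  # coarse quantization
    return coeffs

def decompress_vibration(coeffs):
    return idct(coeffs, norm="ortho")
```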
Table 2 shows the characteristics of mechanoreceptors of the human body. The first column is the name of different mechanoreceptors. For each mechanoreceptor, the second column gives its type: Slowly Adapting (SA) of type 1 or 2 and Rapidly Adapting (RA) of type 1 or 2. The third column lists the frequency range of stimuli achievable, the fourth column specifies the spatial accuracy of the receptor on the skin and the fifth column describes its role.
Therefore, in at least one embodiment, the haptic signal of a haptic effect is compressed based on the location where the haptic effect is to be applied. For example, the compression of a haptic signal for an upper arm may be more severe than the compression of a haptic signal for a finger since the sensitivity in this body area is lower than on the finger. This is possible thanks to a mapping, for a type of effect, between locations where the haptic effect is to be performed and compression parameters. The location where the haptic effect is to be performed is for example based on body segmentation, on vertices, or on textures. The compression may also take into account the type of signal. Examples of compression parameters are the Weber fraction or a maximal frequency of the haptic signal.
In at least one embodiment, it is thus proposed to define a mapping between the different body parts and the associated compression parameters.
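As an illustration only, such a mapping could be represented as follows; the body part names and the parameter values are placeholders and are not normative:

```python
# Illustrative mapping (placeholder values) from effect type and body location to
# compression parameters such as the Weber fraction or a maximal frequency.

COMPRESSION_MAP = {
    ("vibration", "finger"):    {"weber_fraction": 0.05, "max_frequency_hz": 1000},
    ("vibration", "upper_arm"): {"weber_fraction": 0.15, "max_frequency_hz": 400},
    ("temperature", "chest"):   {"weber_fraction": 0.10},
}

def get_compression_parameters(effect_type, location):
    """Return the compression parameters for a given effect type and body location."""
    # The default value below is an arbitrary fallback for this sketch.
    return COMPRESSION_MAP.get((effect_type, location), {"weber_fraction": 0.10})
```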
When the compression parameters mapping is customized, a definition of this mapping may be added to the definition of a haptic object using the OHM file format syntax. This can be done by specifying a compression parameter in the definition of a body part, as illustrated in the syntax of table 3.
This compression parameter mapping may also be added to the definition of a haptic object using the glTF™ file format syntax by the definition of a section dedicated to the mapping, as illustrated in the syntax of table 4.
When building the file, the creator decides which type of compression is the best for a given signal and thus chooses, for example, between the Weber fraction (JND) and the maximal frequency.
Table 5 illustrates an example of usage of the mapping for a vibration effect, according to the glTF™ file format syntax. In this example, the signal is using a “somefile.wav” waveform haptic signal that is compressed using a maximal frequency defined in a “mapping” section. The compression parameter is identified as being the “frequency” and the maximal frequencies for a body part are determined in the “parameters” array. These parameters correspond to the elements of the third column of the body part mapping described above.
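Since Table 5 is not reproduced here, the following is a hypothetical reconstruction of the structure just described, shown as a Python dict; apart from “somefile.wav”, “frequency” and “parameters”, which come from the description above, all field names, mask values and frequencies are illustrative assumptions and not the exact schema of the specification:

```python
# Hypothetical reconstruction of a glTF-style haptic object carrying a
# per-body-part compression mapping. Values are placeholders.
haptic_object = {
    "signal_file": "somefile.wav",
    "mapping": {
        "compression_parameter": "frequency",  # the maximal frequency is the parameter
        "parameters": [                        # one entry per body part mask
            {"body_part_mask": 0x00000001, "value": 400},   # placeholder mask and Hz value
            {"body_part_mask": 0x00002000, "value": 1000},  # placeholder mask and Hz value
        ],
    },
}
```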
In at least one embodiment, different mappings may be defined and used to adapt to changes in the virtual or real environment corresponding to different situations. For example, when the temperature increases, the user may start sweating. In such situation, the compression parameters may be adapted since the sensitivity varies with the humidity level.
At least one embodiment relates to mapping of compression parameters for a vertex-based haptic signal. In such embodiment, a mapping of the compression parameters with regard to a vertex of the avatar (i.e. body model) adapts the compression of the haptic signal according to the principles introduced above. This allows a much more precise tuning of the signal compression.
When the compression parameters mapping is customized and the mesh representation of the avatar is provided as an external file, this data can be encoded directly in the mesh for example by using the color information of a vertex. Color is conventionally encoded over a specific range (for example between 0 and 1 or between 0 and 255). To convey compression parameters, it is necessary to specify the correspondence of the values for a type of parameter in order to rescale the data properly.
In at least one embodiment, this correspondence is pre-determined and known both by the encoder and the decoder. Table 6 illustrates a range of possible values for correspondence of the maximal frequency and Weber fraction compression parameters.
Using this table, it is possible to express compression parameters in terms of color values. For example, if a Weber fraction of 8% needs to be carried over for a vertex using an 8-bit color space, the numerical value 20 (approximately 8% of 255) will be indicated as the color for the vertex.
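A minimal sketch of this conversion, assuming that the Weber fraction range 0-100% maps linearly onto the 8-bit color range 0-255 (as in the example above), is the following:

```python
# Sketch of encoding/decoding a compression parameter in an 8-bit vertex color,
# assuming a linear correspondence between 0-100% and 0-255.

def weber_to_color(weber_fraction_percent, max_color=255, max_weber=100.0):
    return round(weber_fraction_percent / max_weber * max_color)

def color_to_weber(color_value, max_color=255, max_weber=100.0):
    return color_value / max_color * max_weber

print(weber_to_color(8))           # 20, as in the example above
print(round(color_to_weber(20)))   # 8
```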
In at least one embodiment, the correspondence of the values for a type of parameter may be customized for specific purposes and provided along with metadata related to the haptic effect. This correspondence may be conveyed in the definition of a haptic object using the glTF™ file format syntax by specifying a section dedicated to the mapping, as illustrated in the syntax of table 7, where the data referenced by the accessor contains the compression parameters associated with a vertex of the mesh.
Table 8 shows an example of compression mapping correspondence information using vertex information based on the glTF™ file format syntax where the maximal frequency of 1000 Hz is set for the vibration.
At least one embodiment relates to mapping of compression parameters using a texture associated with the mesh of the avatar representation. Using a texture instead of only vertex information provides an even higher level of detail. This can be particularly useful when interacting with virtual environments. Collisions with haptic objects can trigger haptic effects at very precise locations where there might be important variations of sensitivity (on the hands for instance). With a texture mapping, the retrieved compression parameter will be more precise than using only vertex-based information. A compression parameter value can thus be specified for a pixel of the texture associated with the mesh of the avatar representation. Similar to the vertex-based embodiment, the correspondence between color values and compression parameters needs to be specified, as illustrated in Table 9.
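As a simplified, non-normative sketch, the compression parameter may be retrieved from such a texture by a nearest-neighbour lookup at the texture coordinates of the contact point, using the color-to-parameter correspondence discussed above:

```python
# Minimal sketch (nearest-neighbour lookup) of retrieving a compression parameter
# from a texture at the UV coordinates of a contact point. The texture stores
# color-encoded parameters rescaled with the correspondence table above.
import numpy as np

def lookup_compression_parameter(texture, u, v, max_parameter, max_color=255):
    """texture: 2D array of color values; (u, v): texture coordinates in [0, 1]."""
    height, width = texture.shape
    x = min(int(u * width), width - 1)    # nearest texel column
    y = min(int(v * height), height - 1)  # nearest texel row
    return texture[y, x] / max_color * max_parameter
```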
This correspondence may be conveyed in the definition of a haptic object using the glTF™ file format syntax by specifying a section dedicated to the mapping, as illustrated in the syntax of Table 10. Each texture is defined by a glTF textureInfo.schema.json, which is an ID of a texture in the glTF description file. Note that a custom texture is available where a user can put any kind of data in a texture format, which could be used for future extensions as well. The following modifications to the IDCC_Haptics_avatar glTF schema should also be made to reference the proper haptic maps.
The encoding processes described above could be used not only for offline compression but also for streaming purposes to reduce the size of the bitstream, for example in the context of bilateral teleoperation systems with kinesthetic feedback. Indeed, when interacting with a virtual world, the location of a haptic effect might change. In such a situation, the encoding methods described in the embodiments above make it possible to dynamically update compression parameters to optimize the compression level while maintaining a sufficient signal quality.
One example of application is the streaming of immersive experiences. Video game streaming is currently extremely popular. It is based on broadcasting the game experience of one player on a streaming platform so that the game session of the player can be experienced by passive users in real time. Currently, such transmission of game experience is still limited to video experiences. However, with the increasing number of devices capable of rendering augmented reality or virtual reality experiences, it is likely that such experiences will also include some haptic feedback in the future. The encoding methods described in the embodiments above would make it possible to stream haptic data at low bitrates in real time by optimally compressing the data. In such an application of streaming of immersive game experiences, the gamer is playing a video game where his avatar interacts with the environment. Some elements in the game environment are associated with haptic signals. When a collision is detected between the avatar and the haptic object, the gamer feels the haptic effect conventionally. In addition, the associated haptic signal is obtained, compressed based on the location of the collision on the avatar according to one of the encoding methods described in the embodiments above, and the compressed haptic effect is then streamed to the network so that the haptic effect may also be sensed by the passive users. On the client side, a passive user can experience the gameplay using different devices. The gameplay can be streamed as usual on a 2D screen or experienced using any type of device able to render a haptic effect, by obtaining the haptic stream, decompressing it and rendering the haptic effect on the given device.
Another example of application of these encoding methods is cloud gaming. Cloud gaming is based on running a game on a remote server and using the network to send input information (like controller inputs) from the client device to the game server, compute the corresponding images on the server, and stream the resulting video feed back to the client. In such a context, similarly to video game streaming, when a collision is detected between the avatar of the user and a haptic object, the game server compresses the associated haptic data based on the location of the collision using one of the encoding methods described in the embodiments above and streams the compressed information directly to the client device. The client decompresses and renders the haptic signal on the appropriate device and/or haptic actuator.
The solutions described above for retrieving compression parameters based on body location could also be based directly on the type of haptic device being used. Typically, some haptic devices such as handheld devices, haptic belts or haptic-enabled wristbands are associated with specific body locations. Information on the type of rendering device could then be used directly to perform the appropriate compression.
Although different embodiments have been described separately, any combination of the embodiments together can be done while respecting the principles of the disclosure.
Although embodiments are related to haptic effects, the person skilled in the art will appreciate that the same principles could apply to other effects, such as sensorial effects, for example smell and taste. Appropriate syntax would thus define the appropriate parameters related to these effects.
Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation”, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
Additionally, this application or its claims may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
Additionally, this application or its claims may refer to “obtaining” various pieces of information. Obtaining is, as with “accessing”, intended to be a broad term. Obtaining the information may include one or more of, for example, receiving the information, accessing the information, or retrieving the information (for example, from memory or optical media storage). Further, “obtaining” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
In variants of the first, second, third and fourth aspects, the location where the haptic effect is to be applied is based on body segmentation, on vertices or on textures, and the at least one compression parameter comprises a Weber fraction or a maximal frequency of the haptic signal.