The invention relates to a system and a computer-implemented method for volume rendering of volumetric image data. The invention further relates to a workstation and imaging apparatus comprising the system, and to a computer-readable medium comprising instructions for causing a processor system to perform the computer-implemented method.
Medical image acquisition techniques such as Computed Tomography (CT), Magnetic Resonance (MR), etc., may acquire volumetric image data of an anatomical region of a patient. Here, the term ‘volumetric image data’ refers to image data which represents an image volume. Such volumetric image data may provide a three-dimensional (3D) view of an anatomical region or structure of the patient, and may thereby support medical diagnosis and treatment of the patient.
The volumetric image data may be presented in various ways to a user. For example, if the volumetric image data is represented by a stack of image slices, one of the image slices may be selected for display. Another example is that an oblique slice may be generated using a multi-planar reformatting technique.
Yet another example is that a volume rendering technique may be used to generate a two-dimensional (2D) view of the volumetric image data. Several volume rendering techniques are known in the art, which may generally involve assigning opacity values to the image data and projecting the image data onto a viewing plane using the opacity values. The output of volume rendering is typically a 2D image.
It is known to use segmentation to delineate a sub-volume of the image volume and generate a volume rendering of the image data inside the sub-volume. This may allow a specific anatomical region or structure to be visualized.
A publication titled ‘Fast Volume Segmentation With Simultaneous Visualization Using Programmable Graphics Hardware’ by Sherbondy et al. describes a fast segmentation method which is based on seeded region growing with merging criteria based on intensity and gradient values and with gradient sensitivity scaled using nonlinear diffusion. The segmentation algorithm which is used is said to allow interactive visualization and control because of its computation speed and coordination with a hardware accelerated volume renderer.
It is further said that a user may interactively change parameters of the segmentation algorithm and run the segmentation algorithm again.
EP2312534 describes a device which allows a range of volume rendering to be limited using a three-dimensional region of interest (3D-ROI).
The manual of the segmentation plugin of the Medical Imaging Interaction Toolkit (MITK), as consulted via http://docs.mitk.org/2016.11/org_mitk_views_segmentation.html#org_mitk_gui_qt_segmentationUserManualOverview, describes a segmentation plugin which allows creating segmentations of anatomical and pathological structures in medical images of the human body. Using a correction tool, small corrective changes may be applied to the segmentation, specifically by drawing lines.
Disadvantageously, the known measures such as those described by Sherbondy et al. do not allow a user to easily adjust the sub-volume which is volume rendered.
It would be advantageous to obtain a system and method which allow a user to more easily adjust the sub-volume which is volume rendered.
In accordance with a first aspect of the invention, a system is provided for volume rendering of image data of an image volume showing a type of anatomical structure, the system comprising:
an image data interface configured to access the image data of the image volume;
a user input interface configured to receive user input commands from a user input device operable by a user;
a display output interface configured to provide display data to a display;
a processor configured to:
apply a segmentation model to the image data of the image volume, thereby delineating a sub-volume of the image volume;
generate the display data to show a viewport which comprises a volume rendering of image data inside the sub-volume; and
using the user input interface and the display output interface, establish a user interface which enables the user to:
interactively adjust a geometry of the viewing plane of the volume rendering to establish a user-selected viewing plane;
interactively adjust the sub-volume by applying at least one of: a push action and a pull action to the sub-volume, wherein the push action causes a part of the sub-volume to be pushed inwards, wherein the pull action causes the part of the sub-volume to be pulled outwards, and wherein the part of the sub-volume to which the push action or the pull action is applied is selected by the processor based on the geometry of the user-selected viewing plane,
wherein the volume rendering of the sub-volume is updated in the viewport in response to an adjustment of the viewing plane or the sub-volume by the user.
A further aspect of the invention provides a workstation or imaging apparatus comprising the system.
A further aspect of the invention provides a computer-implemented method for volume rendering of image data of an image volume showing a type of anatomical structure, the method comprising:
accessing the image data of the image volume;
applying a segmentation model to the image data of the image volume, thereby delineating a sub-volume of the image volume;
generating display data showing a viewport which comprises a volume rendering of image data inside the sub-volume;
establishing a user interface which enables the user to:
interactively adjust a geometry of the viewing plane of the volume rendering to establish a user-selected viewing plane;
interactively adjust the sub-volume by applying at least one of: a push action and a pull action to the sub-volume, wherein the push action causes a part of the sub-volume to be pushed inwards, and wherein the pull action causes the part of the sub-volume to be pulled outwards, wherein the part of the sub-volume to which the push action or the pull action is applied is selected by the processor based on the geometry of the user-selected viewing plane; and
updating the volume rendering of the sub-volume in the viewport in response to an adjustment of the viewing plane or the sub-volume by the user.
A further aspect of the invention provides a computer-readable medium comprising transitory or non-transitory data representing instructions arranged to cause a processor system to perform the computer-implemented method.
The above measures provide an image data interface configured for accessing volumetric image data which may be acquired by various imaging modalities, including but not limited to CT and MRI, positron emission tomography, SPECT scanning, ultrasonography, etc. The volumetric image data may be directly formatted as an image volume, e.g., as a 3D array of voxels, but may also be formatted differently while still representing an image volume. An example of the latter is a set of 2D images, such as image slices, which together represent the image volume.
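The latter case may be briefly illustrated as follows, under the assumption of co-registered, equidistant slices; the Python sketch below is merely illustrative, with hypothetical sizes:

```python
# Illustrative sketch only: a set of 2D image slices which together represent
# the image volume, stacked into one 3D array of voxels.
import numpy as np

slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(200)]  # e.g., 200 CT slices
volume = np.stack(slices, axis=0)  # shape (200, 512, 512): one 3D image volume
```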
The above measures further provide a user input interface for receiving user input commands from a user input device operable by the user. In particular, the user may use the user input device, e.g., a computer mouse, keyboard or touch screen, to provide input commands to the system and to control various operational aspects of the system, such as the user interface established by the system. Furthermore, a display output interface is provided which may, during use of the system, be connected to a display to display visual output of the system.
A processor is provided which is configurable by instructions stored in a memory to apply a segmentation model to the image data. Such a segmentation model, which may be defined by model data, may be configured to segment a type of anatomical region or structure. Effectively, the processor may perform a model-based segmentation of the image data by applying the segmentation model to the image data. For that purpose, the instructions stored in the memory may define an adaptation technique. It is noted that the functionality described in this paragraph is known per se from the field of model-based segmentation of medical images.
By applying a segmentation model to the image data, a sub-volume of the image volume may be delineated which normally corresponds to the type of anatomical region or structure represented by the segmentation model. For example, the interior of the ribcage may be segmented including the heart. The image data inside the sub-volume is then volume rendered onto a viewing plane, e.g., using any suitable known type of volume rendering technique, thereby obtaining an output image which represents the volume rendering of the image data inside the sub-volume. This volume rendering of the image data inside the sub-volume is henceforth also simply referred to as ‘volume rendering of the sub-volume’ or in short ‘rendered sub-volume’. The viewing plane may be a geometric construct and may be defined in relation to the image volume, e.g., by plane coordinates.
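By way of a non-limiting illustration, the following Python sketch renders only the image data inside a binary mask of the sub-volume, using a simple maximum intensity projection as the volume rendering technique; any other suitable volume rendering technique, e.g., alpha compositing with opacity values, could equally be used, and all names and sizes are illustrative assumptions:

```python
# Illustrative sketch only: render the image data inside a segmented sub-volume
# by suppressing voxels outside a binary mask and projecting the remainder onto
# a viewing plane, here via a simple maximum intensity projection (MIP).
import numpy as np

def render_sub_volume(volume: np.ndarray, mask: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project only the voxels inside `mask` onto a viewing plane (MIP)."""
    masked = np.where(mask, volume, volume.min())  # suppress voxels outside the sub-volume
    return masked.max(axis=axis)                   # the 2D output image

# Toy data: a 64^3 volume with a bright sphere; the mask delineates the sphere.
z, y, x = np.ogrid[:64, :64, :64]
sphere = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2
volume = np.random.rand(64, 64, 64).astype(np.float32)
volume[sphere] += 2.0
print(render_sub_volume(volume, sphere).shape)  # (64, 64)
```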
Display data is then generated which, when displayed on a display, shows the rendered sub-volume in a viewport. For example, the viewport may be shown in a window of a graphical user interface, or in full-screen. The user may interactively adjust a geometry of the viewing plane, for example by adjusting a position or orientation of the viewing plane within the image volume, thereby effectively navigating within the image volume and obtaining volume renderings from different perspectives of the sub-volume. Such type of navigation is known per se.
The inventors have considered that in many medical applications, a (clinical) user may be mainly interested in the volume rendering of the sub-volume, in which case the segmentation model is merely a ‘means to an end’ to select the sub-volume which is to be volume rendered. However, the segmentation may be imperfect, which may, even for small errors, greatly affect the volume rendering.
It is known to allow a user to manually adjust a segmentation obtained from a segmentation model. For example, in case the segmentation model is a mesh model, a mesh part may be re-positioned, or the resolution of a mesh part may be increased. Such techniques are described in, e.g., WO2017001476, which is hereby incorporated by reference insofar as it describes the segmentation of a sub-volume by a segmentation model. For adjusting an applied segmentation model, and thereby the segmented sub-volume, WO2017001476 generates a cross-sectional view of the image data which is optimized to allow adjustment of the mesh part. Such a cross-sectional view may be generated by multi-planar reformatting.
However, when the user is mainly interested in the volume rendering of the sub-volume and thus considers the segmentation model a ‘means to an end’ or even is not aware of the system internally using a segmentation model to select the sub-volume, it may not be desirable to adjust the segmentation model in a purposefully generated cross-sectional view as described in WO2017001476.
Not only may such a cross-sectional view distract the user from the volume rendering of the sub-volume, but it may also be difficult to judge how an adjustment of the segmentation model in the cross-sectional view affects the volume rendering. For example, Sherbondy et al. do show the impact of changes in the segmentation on the volume rendering, but only allow changes in the form of adjustments of segmentation parameters, after which the segmentation algorithm has to be run again. Depending on the type of segmentation parameters which may be adjusted, it may be difficult or even impossible to apply specific adjustments to the segmentation.
To address these problems, the inventors have devised a user interface which enables the user to interactively adjust the sub-volume by applying a push action and/or a pull action to the sub-volume, while updating the volume rendering of the sub-volume in response to an adjustment of the sub-volume by the user. Here, a push action causes a part of the sub-volume to be pushed inwards, and a pull action causes the part of the sub-volume to be pulled outwards. Effectively, the user is enabled to directly adjust the sub-volume by pushing a part of the sub-volume inwards and/or pulling a part of the sub-volume outwards. The part to which the push and/or pull action is applied may be automatically selected based on the geometry of the viewing plane, as the geometry (e.g., the position and/or orientation with respect to the image volume) is user-adjustable and it is likely that the user, when invoking the push and/or pull action, wishes said action to be applied to a part of the sub-volume which is currently visible, or at least that the adjustment of said sub-volume is visible at the current geometry. For example, the action may be automatically applied to a centrally shown part of the sub-volume. The part to which the push and/or pull action is to be applied is thus automatically selected and may not require explicit user interaction, while being intuitively understandable by the user. As the volume rendering is directly updated after such an adjustment, the user can immediately see how the adjustment affects the volume rendering. There is thus no need to perform the adjustment in a separate cross-sectional view, nor is the adjustment restricted to segmentation parameters of a segmentation algorithm, e.g., thresholds or sensitivity parameters, which may often be global rather than local.
Optionally, the push action or the pull action is selected and applied by the user based on at least one of: a keyboard command and a predefined mouse interaction. Such type of user input is immediate and allows the user to quickly and intuitively activate the push action and/or pull action (in short push/pull action).
Optionally, the part of the sub-volume to which the push action or the pull action is applied is selected by the processor based on at least one of:
a point of the sub-volume which is shown in a center of the viewport; and
a point of the sub-volume which is closest to the user-selected viewing plane.
The push/pull action may be applied locally to the segmentation model, e.g., centered around or otherwise positioned relative to a point of the sub-volume. The center of the viewport, or the point closest to the viewing plane, may be automatically selected and does not require explicit user interaction, while being intuitively understandable by the user. Such and other points of the sub-volume may be determined based on the geometry of the user-selected viewing plane.
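Such a selection may be briefly sketched as follows, under the assumption that the sub-volume is delineated by a mesh and that the viewing plane is defined by a point on the plane and a unit normal; the sketch is illustrative rather than a prescribed implementation:

```python
# Illustrative sketch only: automatically select the mesh vertex to which a
# push/pull action is applied, based on the geometry of the user-selected
# viewing plane (given by a point on the plane and a unit normal).
import numpy as np

def select_vertex(vertices, plane_point, plane_normal, mode="closest_to_plane"):
    offset = vertices - plane_point
    if mode == "closest_to_plane":
        # Vertex with the smallest absolute signed distance to the viewing plane.
        return int(np.argmin(np.abs(offset @ plane_normal)))
    # "viewport_center": vertex closest to the ray through the viewport center,
    # i.e., the line through plane_point along plane_normal.
    along = np.outer(offset @ plane_normal, plane_normal)
    return int(np.argmin(np.linalg.norm(offset - along, axis=1)))

vertices = np.random.rand(500, 3) * 100.0
idx = select_vertex(vertices, np.array([50.0, 50.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```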
Optionally, the user interface comprises a manual selection mode in which the part of the sub-volume to which the push action or the pull action is applied is selected by the processor based on a user selection of a point of the sub-volume. Additionally or alternatively to the selection by the processor, the user may manually select a point of the sub-volume, e.g., by clicking in the viewport.
Optionally, the user interface enables the user to select at least one of: a spatial extent and a strength, of the push action or the pull action. The push/pull action may be applied locally within a spatial area, e.g., around the selected point, and have a strength which determines the degree of push/pull deformation. This extent and strength may be selectable by the user, either explicitly, e.g., by specifying said parameters in the user interface, or implicitly, e.g., based on a characteristic of the user input which activates the push/pull action. For example, if the push/pull action is activated by dragging, the drag distance may determine the strength.
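By way of illustration, such a locally applied action with a selectable extent and strength may be sketched as follows, here assuming per-vertex outward normals and a Gaussian falloff around the selected point, the falloff shape being merely one possible choice:

```python
# Illustrative sketch only: a local push/pull deformation with user-selectable
# spatial extent and strength. Vertices are displaced along their outward
# normals, weighted by a Gaussian falloff around the selected point; sign=-1
# pushes inwards, sign=+1 pulls outwards.
import numpy as np

def push_pull(vertices, normals, center, strength=2.0, extent=10.0, sign=+1):
    sq_dist = np.sum((vertices - center) ** 2, axis=1)
    weight = np.exp(-sq_dist / (2.0 * extent ** 2))       # spatial extent of the action
    return vertices + sign * strength * weight[:, None] * normals
```

With such a formulation, an activation by dragging could simply map the drag distance to the strength parameter.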
Optionally, the user interface comprises a side adjustment mode in which the pull action and the push action operate in a plane parallel to the user-selected viewing plane of the volume rendering, and wherein, when the side adjustment mode is selected, an outline of a lateral side of the sub-volume is visualized in the viewport. A side adjustment mode may be provided which, when active, may provide the user with a visual aid in the form of a cross-sectional outline of the sub-volume to aid the user when adjusting the sub-volume laterally, e.g., along a plane parallel to the user-selected viewing plane. Such a visual aid has been found to be unobtrusive for this type of adjustment, as it does not obstruct the lateral sides of the rendered sub-volume.
Optionally, when the side adjustment mode is selected, image data surrounding the sub-volume in the image volume is volume rendered and displayed in the viewport in accordance with a different visualization parameter than the image data inside of the sub-volume. In addition to showing the outline, another visual aid may be provided when adjusting the sub-volume laterally, namely in the form of the volume rendering of image data immediately outside the sub-volume being displayed. This allows the user to quickly see the effect when the sub-volume is enlarged by a pull action. To nevertheless clearly show the boundaries of the sub-volume, the surrounding image data may be visualized differently, e.g., with a higher transparency or lower intensity. For other types of adjustments, such a visual aid may rather obstruct, or be obstructed by, the rendered sub-volume.
Optionally, the user interface comprises a frontal adjustment mode in which the pull action and the push action operate in a plane orthogonal to the user-selected viewing plane of the volume rendering. In this mode, the user may be enabled to adjust a frontal side of the sub-volume, e.g., a side facing towards the user.
Optionally, when the frontal adjustment mode is selected, image data surrounding the sub-volume in the image volume is excluded from the volume rendering or assigned a 100% transparency in the volume rendering.
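Both types of treatment of the surrounding image data may be sketched as follows, assuming a binary mask of the sub-volume; dimming the surroundings is merely one possible 'different visualization parameter' for the side adjustment mode:

```python
# Illustrative sketch only: treat image data outside the sub-volume differently
# per adjustment mode, before the actual volume rendering step.
import numpy as np

def prepare_for_rendering(volume, mask, mode="side", outside_scale=0.3):
    if mode == "frontal":
        # Exclude the surroundings (equivalently, render them fully transparent).
        return np.where(mask, volume, volume.min())
    # Side adjustment mode: keep the surroundings, but dimmed, so that the
    # boundary of the sub-volume remains clearly visible.
    return np.where(mask, volume, outside_scale * volume)
```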
Optionally, the segmentation model is a mesh model. Such mesh models are well-suited for segmenting an anatomical region or structure.
Optionally, the segmentation model defines a closed hull. Such a closed hull may be directly used to select the sub-volume.
It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or optional aspects of the invention may be combined in any way deemed useful.
Modifications and variations of the workstation, the imaging apparatus, the method and/or the computer program product, which correspond to the described modifications and variations of the system, can be carried out by a person skilled in the art on the basis of the present description.
A person skilled in the art will appreciate that the system and method may be applied to image data acquired by various acquisition modalities such as, but not limited to, standard X-ray Imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
These and other aspects of the invention will be apparent from and elucidated further with reference to the embodiments described by way of example in the following description and with reference to the accompanying drawings, in which
It should be noted that the figures are purely diagrammatic and not drawn to scale. In the figures, elements which correspond to elements already described may have the same reference numerals.
The following list of reference numbers is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
The following embodiments involve providing a user interface which enables a user to interactively adjust a sub-volume by applying at least one of: a push action and a pull action to the sub-volume. The push action may cause part of the sub-volume to be pushed inwards, whereas the pull action may cause part of the sub-volume to be pulled outwards. The volume rendering of the sub-volume may be updated in response to a user's adjustment of the sub-volume.
The system 100 is further shown to comprise a user interface subsystem 180 which may be configured to, during operation of the system 100, enable a user to interact with the system 100, for example using a graphical user interface. The user interface subsystem 180 is shown to comprise a user input interface 184 configured to receive user input data 082 from a user input device 080 operable by the user. The user input device 080 may take various forms, including but not limited to a computer mouse, touch screen, keyboard, microphone, etc.
The user interface subsystem 180 is further shown to comprise a display output interface 182 configured to provide display data 062 to a display 060 to visualize output of the system 100. In the example of
The system 100 is further shown to comprise a processor 140 configured to internally communicate with the image data interface 120 via data communication 122, and a memory 160 accessible by the processor 140 via data communication 142. The processor 140 is further shown to internally communicate with the user interface subsystem 180 via data communication 144.
The processor 140 may be configured to, during operation of the system 100, apply a segmentation model to the image data 030 of the image volume, thereby delineating a sub-volume of the image volume, generate the display data 062 to show a viewport which comprises a volume rendering of image data inside the sub-volume, and using the user input interface 184 and the display output interface 182, establish a user interface which enables the user to interactively adjust the sub-volume by applying at least one of: a push action and a pull action to the sub-volume, wherein the push action causes a part of the sub-volume to be pushed inwards, wherein the pull action causes the part of the sub-volume to be pulled outwards, and wherein the volume rendering of the sub-volume is updated in the viewport in response to an adjustment of the sub-volume by the user.
This operation of the system 100, and various optional aspects thereof, will be explained in more detail with reference to
In some embodiments, the user interface established by the system 100 may be or comprise a graphical user interface which is shown on the display 060. For that purpose, the processor 140 may be configured to generate the display data 062 to display the graphical user interface to a user. The graphical user interface may be represented by a set of interface instructions stored as data in a memory accessible to the processor 140, being for example the memory 160 or another memory of the system 100. It is noted that, in other embodiments, the user interface established by the system 100 is not a graphical user interface. For example, the user may use a keyboard to provide input to the system 100, while the viewport may be shown on the display full-screen. Such a configuration of the system 100 may conventionally not be understood as a graphical user interface.
In general, the system 100 may be embodied as, or in, a single device or apparatus, such as a workstation or imaging apparatus or mobile device. The device or apparatus may comprise one or more microprocessors which execute appropriate software. The software may have been downloaded and/or stored in a corresponding memory, e.g., a volatile memory such as RAM or a non-volatile memory such as Flash. Alternatively, the functional units of the system, e.g., the image data interface, the user input interface, the display output interface and the processor, may be implemented in the device or apparatus in the form of programmable logic, e.g., as a Field-Programmable Gate Array (FPGA). In general, each functional unit of the system may be implemented in the form of a circuit. It is noted that the system 100 may also be implemented in a distributed manner, e.g., involving different devices or apparatuses. For example, the distribution may be in accordance with a client-server model, e.g., using a server and a thin-client.
Any known technique for model-based segmentation may be used to delineate a particular sub-volume, e.g., a ‘sub-volume of interest’. The sub-volume may correspond to an anatomical region, such as an interior of the ribcage, an anatomical organ, such as the heart, or any other type of anatomical structure. The model-based segmentation may result in a set of points explicitly or implicitly defining a closed surface, e.g., a hull, which encompasses—and thereby delineates—a sub-volume of the image volume. An example of such model-based segmentation is described in “Automatic Model-based Segmentation of the Heart in CT Images” by Ecabert et al., IEEE Transactions on Medical Imaging 2008, 27(9), pp. 1189-1201.
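As one possible, simplified illustration of deriving the sub-volume from such a closed surface, the following sketch tests which voxels lie inside the hull; for brevity, the hull is assumed to be convex, so that scipy's Delaunay tessellation can serve as a point-in-hull test, whereas a general closed mesh would require a proper inside/outside test:

```python
# Illustrative, simplified sketch: derive a voxel mask of the sub-volume from a
# closed hull of points, assuming a convex hull for brevity.
import numpy as np
from scipy.spatial import Delaunay

def hull_to_mask(hull_points: np.ndarray, shape: tuple) -> np.ndarray:
    tri = Delaunay(hull_points)                   # tessellates the hull interior
    voxels = np.indices(shape).reshape(3, -1).T   # coordinates of all voxels
    return (tri.find_simplex(voxels) >= 0).reshape(shape)

# Example: a rough octahedral hull inside a 32^3 image volume.
hull = np.array([[16, 16, 2], [16, 16, 30], [2, 16, 16],
                 [30, 16, 16], [16, 2, 16], [16, 30, 16]], dtype=float)
mask = hull_to_mask(hull, (32, 32, 32))
```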
It will be appreciated that, in general, a segmentation model may define more than an (outer) surface of an anatomical region or structure. For example, a segmentation model may delineate an exterior of an organ as well as internal structures inside of the organ. In such an example, the sub-volume may be delineated by the outer surface, e.g., a hull, of the segmentation model. However, the segmentation model may also directly and solely define such an outer surface.
It will be appreciated that such adjustment may also be desirable in case the segmentation is correct. For example, it may be that the segmentation model does not fully represent the anatomical region or structure of interest, and thus that a ‘correction’ of an—in principle correct—segmentation is desired.
The viewport 330 is shown to comprise a rendered sub-volume similar to the one shown in
In addition, the adjustment tools provided by system 100 of
The direction of the push action and the pull action may be determined in various ways. For example, specific adjustment modes may be provided, in which the push action and the pull action operate in a certain plane. For example, as indicated in
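By way of illustration, such mode-dependent directions may be realized as sketched below, assuming a unit normal of the user-selected viewing plane; this is merely one possible formulation:

```python
# Illustrative sketch only: constrain the push/pull displacement direction per
# adjustment mode, given a unit normal n of the user-selected viewing plane.
# "side" removes the component along n, so the action operates in a plane
# parallel to the viewing plane; "frontal" keeps only the component along n.
import numpy as np

def constrain_direction(direction: np.ndarray, n: np.ndarray, mode: str) -> np.ndarray:
    along = (direction @ n) * n
    constrained = direction - along if mode == "side" else along
    norm = np.linalg.norm(constrained)
    return constrained / norm if norm > 0.0 else constrained
```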
In general, the part of the sub-volume to which the push action 410 or the pull action 400 is applied may be selected to be centered around or otherwise positioned relative to a point 420 of the sub-volume. For example, the point 420 may be selected by the system, e.g., as the point of the sub-volume which is shown in a center of viewport 330, or the point of the sub-volume which is closest to the viewing plane of the volume rendering. It will be appreciated that in such examples, the user may indirectly influence the selection of the point, e.g., by rotating or moving the rendered sub-volume if the system provides such functionality. Alternatively or additionally, the user may manually select the point 420, e.g., via a cursor.
The user interface provided by the system may further enable the user to select a spatial extent 430 and/or a strength of the push action 410 or the pull action 400. This spatial extent, which is shown in
It will be appreciated that the push action and the pull action may be selected and activated in various ways. In some embodiments, the selection and activation may be the same. For example, the push/pull action may be selected and activated by a keyboard command or by mouse interaction, such as scrolling the wheel or dragging the mouse, whereby the inward (‘push’) or outward direction (‘pull’) may be determined based on the direction of the mouse interaction. The selection of the push/pull action may also take place in a graphical user interface, e.g., by selection from a menu, with the activation being, e.g., a mouse click.
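As a small illustration of such an immediate mapping, a scroll interaction might both select the direction and provide an implicit strength; the stub below is hypothetical and does not represent a particular GUI toolkit:

```python
# Hypothetical stub (not a real GUI API): map a scroll step to a push/pull
# direction and an implicit strength.
def on_scroll(delta_steps: float, step_strength: float = 0.5):
    sign = +1 if delta_steps > 0 else -1            # up: pull outwards; down: push inwards
    return sign, abs(delta_steps) * step_strength   # implicit strength from scroll amount
```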
Additionally or alternatively, image data surrounding the sub-volume in the image volume may also be volume rendered (reference numeral 320 in
In general, after each push/pull action, the sub-volume may be re-rendered in the viewport. In some embodiments, the sub-volume may be re-rendered while a push/pull action is performed, e.g., already before its completion.
In some embodiments, the adjusted segmentation model may be saved after the interactive adjustment. In some embodiments, the adjustment steps may be stored, and the user interface may provide undo and redo operations.
It will be appreciated that adjustment of a segmentation model or surface representing a sub-volume is known per se. It is therefore within reach of the skilled person to cause a segmentation model to be deformed by a local push inwards or a local pull outwards in the manner described in this specification. For example, if the segmentation model is a mesh model, mesh deformation techniques may be used as known per se from the field of computer graphics or mesh modelling.
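For instance, after a local push/pull displacement, a single Laplacian smoothing pass, being one such known technique, could keep the mesh well-behaved; the sketch below assumes a vertex adjacency map and is merely illustrative:

```python
# Illustrative sketch only: one standard mesh technique that could keep the
# surface well-behaved after a local push/pull, namely a single Laplacian
# smoothing pass moving each vertex towards the mean of its neighbours.
# `neighbours` maps a vertex index to the indices of its adjacent vertices.
import numpy as np

def laplacian_smooth(vertices: np.ndarray, neighbours: dict, lam: float = 0.5) -> np.ndarray:
    smoothed = vertices.copy()
    for i, nbrs in neighbours.items():
        smoothed[i] = (1.0 - lam) * vertices[i] + lam * vertices[list(nbrs)].mean(axis=0)
    return smoothed
```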
The method 500 comprises, in an operation titled "ACCESSING IMAGE DATA OF IMAGE VOLUME", accessing 510 the image data of the image volume. The method 500 further comprises, in an operation titled "APPLYING SEGMENTATION MODEL TO IDENTIFY SUB-VOLUME", applying 520 a segmentation model to the image data of the image volume, thereby delineating a sub-volume of the image volume. The method 500 further comprises, in an operation titled "VOLUME RENDERING OF IMAGE DATA OF SUB-VOLUME", generating 530 display data showing a viewport which comprises a volume rendering of image data inside the sub-volume. The method 500 further comprises, in an operation titled "ESTABLISHING USER INTERFACE", establishing 540 a user interface which enables the user to interactively adjust the sub-volume by applying at least one of: a push action and a pull action to the sub-volume, wherein the push action causes a part of the sub-volume to be pushed inwards, and wherein the pull action causes the part of the sub-volume to be pulled outwards. The method 500 further comprises, in an operation titled "UPDATING VOLUME RENDERING", updating 550 the volume rendering of the sub-volume in the viewport in response to an adjustment of the sub-volume by the user. It will be appreciated that the above operations may be performed in any suitable order, e.g., consecutively, simultaneously, or a combination thereof, subject to, where applicable, a particular order being necessitated, e.g., by input/output relations.
The method 500 may be implemented on a computer as a computer-implemented method, as dedicated hardware, or as a combination of both. As also illustrated in
In accordance with an abstract of the present application, a system and method may be provided for volume rendering of image data of an image volume. A segmentation model may be applied to the image data of the image volume, thereby delineating a sub-volume of the image volume. The image data inside the sub-volume may be volume rendered. A user interface may be provided which enables a user to interactively adjust the sub-volume by applying at least one of: a push action and a pull action to the sub-volume. The push action may cause part of the sub-volume to be pushed inwards, whereas the pull action may cause part of the sub-volume to be pulled outwards. The volume rendering of the sub-volume may be updated in response to a user's adjustment of the sub-volume.
Examples, embodiments or optional features, whether indicated as non-limiting or not, are not to be understood as limiting the invention as claimed.
It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or stages other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Number | Date | Country | Kind
---|---|---|---
17185118.1 | Aug 2017 | EP | regional

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2018/070369 | 7/27/2018 | WO | 00