The present invention generally relates to modelling a shaft, for example a manhole of a sewage system. The invention provides in particular a solution allowing for an accurate and quantitative modelling of a shaft, by means of a system which is easily portable and flexible with respect to the choice of suspension.
Sewage systems consist of an extended underground network of sewage pipes. Manholes and shafts extending downward from the ground surface provide access to the sewage pipes. The condition of such manholes should be inspected regularly, in order to prevent, for example, ground subsidence at the level of manholes. Moreover, more and more operators make an inventory of the complete sewage system in order to plan renovations and preventive maintenance. Not only the position of the manholes should be included in such inventories, but also important parameters such as inner dimensions. The inspection and measuring of the manholes is often done manually, where a worker descends into the manhole in order to perform inspection and measurement. Such a manual method is time-consuming and implies substantial safety risks. The need for a person to descend into the manhole may be avoided by using a device which is lowered into the manhole. There exist, for example, devices on the market which allow pictures of the interior of the manhole to be taken by means of lenses, often fisheye lenses. An example is given in WO 2019/162643, wherein individual images taken by a camera are combined into a composite image of the internal surface of a conduit using image stitching. However, with such photogrammetry, larger and irregularly shaped manholes are depicted in a distorted manner, or the images are insufficiently exposed. Such devices may therefore be deployed for a rough inspection of the manhole, but due to the lack of quality of the images they are insufficiently reliable to derive dimensions from. An accurate parameterization is therefore not possible by means of such devices.
Thus, there is a general need for devices which are not limited to photogrammetry but allow for modelling the interior of the manhole accurately.
A number of solutions are known in the state of the art focused on modelling manholes in 3D. In US2016/0249021, for example, a device is described for 3D imaging and inspection of a manhole. The solution comprises a sensor head having 3D sensors, which illuminate the object to be measured by means of a laser and then determine distances by analyzing the reflected light. Use is made, for example, of a time-of-flight camera or of the projection of a structured light pattern by means of a structured light camera. Cables are attached to three places at the top of the sensor head, which allow the sensor head to be lowered into the shaft. The three cables are attached to a box at their other ends, while the box is typically mounted on a system in a vehicle. Information regarding the height position of the sensor head in the shaft is also collected, for example by means of an encoder by which the number of revolutions at the level of the cable suspensions is counted. However, this solution implies a number of disadvantages. First, the sensor technique used is sensitive to overexposure, so that at a position close to the ground surface or the wall of the shaft a proper modelling may not be obtained, or a shielding from the outside light must be provided. Additionally, the suspension system is adapted to mounting in a vehicle, making the system cumbersome to move and making it impossible for certain shafts to be reached. Finally, the sensor head requires the combination with a specific suspension system; a telescopic suspension system, for example, does not allow for an encoder to count the number of revolutions, and replacing the three cables by a single suspension point would imply too much movement of the sensor head. In other words, the assembly is not very flexible.
A similar type of solution is found in US2008/0068601, wherein a projected laser ring is used to extract dimensional data of the surface of a manhole, but the delivery system needs to provide sufficient stiffness and stability.
Furthermore, https://cuesinc.com/equipment/spider-scanner proposes a device for 3D manhole scanning. This solution allows for modelling a manhole by means of a point cloud, which is generated by means of stereoscopic cameras and a pattern generator. Additional illumination may be provided as well by switching on LED lamps. Such technology renders the image quality less sensitive to over- or underexposure. The sensor head having cameras is attached to an extendable bar, and the bar is attached to a stand. The stand renders the system more movable than a system which must be mounted in a vehicle. However, this system also offers little flexibility with respect to the suspension, in that the sensor head must always be combined with the telescopic arm. Although the rigid arm limits the movements of the sensor head while descending, it also limits the maximum extended length and therefore the maximum depth to be measured. Moreover, such a system may be cumbersome in use, since time is always required to extend the telescope with additional pieces, and continuous scanning is therefore not possible. This increases the time and operations necessary per recording. Finally, the telescopic arm renders the assembly heavy and not very compact, especially when large extended lengths are desired. This limits the portability of the system.
It is an object of the present invention to describe an apparatus overcoming one or multiple of the described disadvantages of solutions of the state of the art. More specifically, it is an object of the present invention to describe an apparatus allowing for an accurate and quantitative modelling of a shaft, by means of a system which is easily portable and flexible with respect to the choice of suspension.
According to the present invention, the objects identified above are realized by an apparatus for modelling a shaft in 3D, comprising:
In other words, the invention relates to an apparatus for modelling a shaft in 3D. The shaft is an enclosed space limited by walls and defined by an axial direction. Typically, the shaft forms an elongated opening, and the walls define a substantially axially symmetrical space. The height within the shaft is measured according to the axial direction. A cross section of the shaft may adopt various shapes, such as circular, square, rectangular, etc. A shaft is for example a manhole or sewer shaft of a sewage system, extending from the ground level to the underground pipeline. In another example the shaft is a chimney, pipeline, tank, pipeline shaft, elevator shaft, etc. Modelling in 3D or modelling three-dimensionally refers to obtaining a 3D model of the shaft, comprising a large number of points which are placed in a three-dimensional virtual space. The 3D model for example allows images of the inner surface of the shaft to be visualized by means of a computer. A 3D model may also allow parameters such as dimensions to be derived.
The apparatus comprises a sensor head adapted to be moved axially within the shaft by means of a suspension system. A suspension system is a system to which the sensor head may be attached and which allows the sensor head to be moved according to the axial direction of the shaft. A suspension system comprises for example a stand wherein a cable is attached at the top, allowing the sensor head to be moved up and down according to the axial direction. In another embodiment, the suspension system comprises a stand having a bar or telescopic arm to move the sensor head. In yet another embodiment, the suspension system comprises multiple cables, for example to attach the sensor head at multiple points to a box mounted in a vehicle. In yet another embodiment, the suspension system comprises only a single cable, allowing the sensor head to be moved axially in the shaft by hand.
The sensor head comprises 3D sensors placed along the circumference of the sensor head, adapted to take images along an inner circumference of the shaft. Thus, the image sensors are placed on the sensor head, at various circumferential positions of the sensor head. The 3D sensors may be placed on the sensor head all at the same level, or their height position may differ between sensors. For example, a calibration is used to determine the height position of the sensors on the sensor head. The 3D sensors are adapted to take images along an inner circumference of the shaft. This means that every sensor has a certain field of view and is adapted to observe the environment with a certain viewing angle according to the circumferential and height direction. Thus, a 3D sensor has at a certain position of the sensor head in the shaft a view of a certain part of the inner surface of the shaft, namely a part of a certain dimension along the height and a certain dimension along the circumference of the shaft. The number of 3D sensors, their positioning on the sensor head and their viewing angle are such that the image sensors together have a view of the total circumference of the shaft.
A 3D sensor is adapted to take images. Taking images is defined broadly, in the sense that it refers to collecting data about the part of the inner surface of the shaft which is within the field of view of the 3D sensor. In particular, the 3D sensor is adapted to collect depth information about the part of the inner surface of the shaft within the field of view of that 3D sensor. In the context of an image sensor placed along the circumference of the sensor head, collecting depth information refers to measuring distances from the sensor to a specific point at the wall of the shaft, not to determining a depth in the shaft according to the axial direction. Additionally, color information may be collected as well by the 3D sensor. In an embodiment, the 3D sensor is a 3D camera, using 3D imaging technology. A 3D sensor uses for example stereovision technology, where two cameras take images from different angles to allow depth information to be derived. In another example, a 3D sensor is adapted to take images of a projected structured light pattern, where depth information may be derived from the deformations of the pattern in these images. In yet another example, the 3D sensor uses a combination of stereovision technology and the projection of a structured light pattern.
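The stereovision principle mentioned above can be illustrated with the classic pinhole triangulation relation between disparity and depth. This is a general-purpose sketch, not the patented implementation; the focal length, baseline, and disparity values are invented for illustration.

```python
# Illustrative only: classic stereo triangulation. A point observed by two
# horizontally offset cameras appears shifted (the disparity); depth follows
# from focal length and camera baseline. All numbers are made-up examples.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) of a wall point: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 900 px focal length, 55 mm baseline, 20 px disparity
z = stereo_depth(900.0, 0.055, 20.0)  # ≈ 2.475 m from sensor to shaft wall
```

A structured light pattern plays a similar geometric role: the projector replaces one camera, and the shift of the projected pattern takes the place of the disparity.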
The sensor head comprises a measuring apparatus adapted to determine a measured height position of the sensor head within the shaft. A height position refers to a position of the sensor head according to the axial direction of the shaft. The measuring apparatus may be adapted to perform a height measurement directly in the shaft, such as for example a downwards pointing one-dimensional laser on the sensor head. In another embodiment, the measuring apparatus does not measure a height directly, but the measurement of the measuring apparatus allows a height position of the sensor head to be derived. For example, a downwards pointing or upwards pointing 3D sensor may be used to derive the height position, or an Inertial Measurement Unit may be used to track the path followed by the sensor head, wherein such path includes a measured height position. The measuring apparatus is comprised in the sensor head, meaning that it is positioned on or in the sensor head, the latter being moved axially in the shaft. This implies that the height measurement is performed by the sensor head itself and is independent of the suspension system used.
The apparatus comprises a processing unit. A processing unit is for example a computer, CPU, calculating unit, processor, microprocessor, etc. The processing unit may be located on or in the sensor head or may form a separate unit. In the latter case, components will be present to forward the raw data measured by the sensor head to the processing unit, through a wireless or wired connection. In the case the processing unit is located on or in the sensor head, processing of the collected raw data occurs already there, after which results may be forwarded to an external device, through a wireless or wired connection. In another embodiment, processing occurs partially on the sensor head and partially in an external unit, so that the processing unit comprises physically separated units.
The processing unit comprises a placement module configured to place the images in a virtual space, based on the measured height position and the positioning of the image sensors on the sensor head, resulting in a rough placement of the images. A virtual space may for example be a three-dimensional space, defined by x, y and z coordinates. Placing an image in the virtual space then refers to assigning an (x, y, z)-coordinate to the points forming the image. The rough placement of an image in the virtual space occurs on the one hand based on the positioning of the image sensors on the sensor head. Each image sensor has for example a certain circumferential position on the sensor head, and a certain viewing angle according to the circumferential direction. From this, it may be deduced at which circumferential position in the virtual space the image taken by the sensor should be placed. On the other hand, the rough placement of an image in the virtual space occurs based on the measured height position. An image taken at a moment in which the sensor head was at a certain height position, is thus placed at a corresponding height position in the virtual space. Through the rough placement of the images, a first reconstruction of the inner surface of the shaft is obtained, based on a rough estimation. Typically, the images placed according to the rough estimation overlap, according to the circumferential direction, and according to the height direction.
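The rough placement step described above can be sketched as a simple coordinate mapping. This is a minimal, assumed illustration, not the patented algorithm; sensor angles, depths, and heights below are hypothetical.

```python
import math

# Sketch of a rough placement: a depth sample measured by a sensor at a
# known circumferential angle on the sensor head is assigned an (x, y, z)
# coordinate in the virtual space using the measured height position.
# All numeric values are invented for illustration.

def place_point(sensor_angle_rad, depth_m, head_height_m, pixel_height_offset_m=0.0):
    """Map one depth sample to virtual-space coordinates.

    sensor_angle_rad: circumferential position of the sensor on the head
    depth_m: measured distance from the sensor to the shaft wall
    head_height_m: measured height position of the sensor head
    """
    x = depth_m * math.cos(sensor_angle_rad)
    y = depth_m * math.sin(sensor_angle_rad)
    z = head_height_m + pixel_height_offset_m
    return (x, y, z)

# Four hypothetical sensors spaced 90 degrees apart, wall at ~0.5 m,
# sensor head 3.2 m below the ground surface
ring = [place_point(i * math.pi / 2, 0.5, -3.2) for i in range(4)]
```

Because the circumferential placement follows directly from the sensor positions on the head, no computationally intensive stitching is needed for this first reconstruction.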
The processing unit comprises a correction module configured to correct the rough placement, based on comparing overlapping images and/or based on a measured deviation of the sensor head with respect to a central axis of the shaft. Correcting the rough placement implies that the placement of images in the virtual space may be adjusted slightly, in order to obtain a more accurate representation of reality.
In an embodiment, the correction occurs based on comparing overlapping images. Overlapping images are images which overlap in the rough placement, according to the height direction and/or the circumferential direction. Use is made, for example, of an Iterative Closest Point algorithm to search for similar features in overlapping images, and to adjust the placement of the images such that corresponding features coincide as closely as possible. Similar features may for example be searched for in the color information and/or the depth information present in the overlapping images. For example, the circumferential position of the images is corrected in that way, in order to take into account rotations of the sensor head about its own axis during measurement. In such an embodiment, the first reconstruction is based on tracking the sensor head, which may be solely based on height tracking or may include a more sophisticated tracking based on measuring the complete path followed by the sensor head. Afterwards, a correction is done based on an imaging algorithm or computational imaging technique, i.e. based on comparing overlapping images.
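The rotational part of such a correction can be illustrated with a heavily simplified, assumed sketch: estimating the single rotation angle that best aligns matched 2-D features from an overlapping pair. A real ICP algorithm iterates matching and alignment; one least-squares alignment step is shown here for clarity, with invented point data.

```python
import math

# Simplified illustration (not the patented ICP implementation): given
# paired 2-D feature points before and after a rotation of the sensor head
# about its own axis, recover the rotation angle by least squares (the 2-D
# Kabsch solution). Real ICP would re-match closest points and iterate.

def best_rotation_2d(src, dst):
    """Angle (rad) rotating paired 2-D points src onto dst, least squares."""
    n = len(src)
    mx_s = sum(p[0] for p in src) / n; my_s = sum(p[1] for p in src) / n
    mx_d = sum(p[0] for p in dst) / n; my_d = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= mx_s; ys -= my_s; xd -= mx_d; yd -= my_d   # center both clouds
        num += xs * yd - ys * xd                          # cross term
        den += xs * xd + ys * yd                          # dot term
    return math.atan2(num, den)

# Hypothetical example: features observed again after a 5 degree rotation
theta = math.radians(5.0)
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.2)]
rotated = [(x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta)) for x, y in pts]
recovered = best_rotation_2d(pts, rotated)  # recovers ~5 degrees
```

The recovered angle can then be applied as a correction to the circumferential placement of the images.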
In another embodiment, the correction occurs based on a measured deviation of the sensor head with respect to a central axis of the shaft. The sensor head e.g. comprises an Inertial Measurement Unit, IMU, allowing to reconstruct the path followed by the sensor head, thereby determining deviations of the sensor head with respect to the central axis of the shaft. Such deviations occur for example when the sensor head sways. The detected deviations allow to reconstruct from which orientation of the image sensor the image was taken, and to correct in that way the placement of the image. Optionally, an IMU may also allow to detect rotations of the sensor head about its own axis, and to correct in that way the placement of the images for this. In such an embodiment, the first reconstruction as well as the applied correction are based on tracking the sensor head, wherein such tracking includes measuring a height position as well as measuring deviations with respect to a vertical axis.
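The IMU-based path reconstruction mentioned above can be sketched as simple dead reckoning: double-integrating lateral accelerations to obtain the horizontal deviation of the sensor head from the central axis. This is a toy illustration with invented sample values; practical IMU processing also handles gravity compensation, bias, and drift.

```python
# Toy dead-reckoning sketch (assumed, not the patented algorithm):
# integrating lateral IMU acceleration samples twice yields the horizontal
# displacement of the sensor head relative to the shaft's central axis.

def integrate_deviation(accels, dt):
    """accels: lateral acceleration samples (m/s^2) at interval dt (s).

    Returns the lateral displacement (m) after each sample.
    """
    velocity, position = 0.0, 0.0
    path = []
    for a in accels:
        velocity += a * dt          # first integration: velocity
        position += velocity * dt   # second integration: displacement
        path.append(position)
    return path

# Hypothetical sway: brief sideways acceleration, then equal deceleration
samples = [0.2] * 5 + [-0.2] * 5
deviation = integrate_deviation(samples, dt=0.1)  # head ends ~5 cm off-axis
```

Knowing this deviation at the moment an image was taken allows the placement of that image to be corrected for swaying.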
In yet another embodiment, the correction occurs based on comparing overlapping images and a measured deviation of the sensor head with respect to a central axis of the shaft. The order of these two corrections may vary. Measurements of an IMU are, for example, used to correct the placement of the images for swaying, and an ICP algorithm may be used to correct the placement of the images for rotations. In another example, measurements of an IMU are used to correct the placement of the images for swaying of the sensor head and rotations of the sensor head about its own axis. An ICP algorithm may then be employed to adjust the obtained placement for outliers, or certain abnormalities or inaccuracies in the IMU measurements.
The apparatus according to the invention provides multiple advantages. First, the 3D sensors on the sensor head may be chosen according to the needs. The use of advanced 3D sensors allows for an accurate modelling of high quality. Moreover, by using multiple sensors along the circumference of the sensor head, the sensor head does not have to rotate during the axial movement in the shaft. Therefore, moving parts are avoided, which contributes to the robustness of the apparatus. A proper first placement of the images in the virtual space may also be obtained based on the circumferential position of the image sensors on the sensor head. This avoids the need for a computationally intensive algorithm to join images along the circumference.
Furthermore, all measuring components, such as the 3D sensors and the measuring apparatus for the height position, are comprised within the sensor head. This means that the sensor head may function completely independently of the type of suspension, which is not the case when for example an encoder is used to count the number of axis rotations in unwinding a cable. In other words, the sensor head may be combined with any type of suspension system. This renders the apparatus flexibly employable.
Additionally, the correction module allows the rough placement of the images to be corrected, for example for swaying of the sensor head during the descent in the shaft, or for rotations of the sensor head about its own axis. A good quality of the modelling is hereby ensured, even if a freer suspension is used, such as a cable attached to the sensor head at only one point, or a manual lowering of the sensor head into the shaft. In other words, it is not a requirement to use a rigid suspension system such as a bar or a telescopic arm, which is heavy and more cumbersome in use. This allows for a compact and lightweight design, contributing to the portability and user-friendliness of the assembly. In summary, despite the increased flexibility due to the suspension-independent sensor head, a reliable and accurate 3D modelling is guaranteed under all conditions.
Finally, corrections may be performed on the rough placement of the images in various ways within the correction module. Using an IMU may for example allow for an accurate tracking of the sensor head, while using computational imaging for the corrections saves the cost of adding an IMU to the sensor head. In another example an IMU is used for the first placement of the images, and then an ICP algorithm for a correction of the placement, so that an efficient and reliable functioning of this ICP algorithm is obtained. Indeed, the algorithm may start from a first rough placement, so that only limited adjustments are necessary. The ICP algorithm may also be employed as a backup, for example in circumstances where the IMU does not function well or gives abnormal measurements. This further contributes to the reliability and quality of the modelling.
Optionally, the apparatus comprises a second measuring apparatus adapted to determine the measured deviation by reconstruction of the path followed by the sensor head. The second measuring apparatus is for example an IMU or inertial measurement unit, allowing the path followed by the sensor head during the descent into the shaft, for example in (x, y, z)-coordinates, to be determined from measured accelerations and rotations. The measured deviation of the sensor head with respect to the central axis may be derived from this path followed. Such a second measuring apparatus allows swaying of the sensor head to be taken into account in a reliable and accurate manner, contributing to a free choice of suspension without compromising on the quality of the modelling.
Optionally, the placement module is configured to place the images in the virtual space based on the path followed by the sensor head, and the correction module is configured to correct the rough placement based on comparing overlapping images. The sensor head comprises for example an Inertial Measurement Unit, IMU, allowing it to be determined which path the sensor head followed during the descent into the shaft. This path is then used, together with the circumferential position of the sensors on the sensor head and the measured height position, to perform a first placement of the images in the virtual space, being a rough placement. For example, the rough placement already takes into account the swaying of the sensor head and rotations about its own axis. Thus, the rough placement and first reconstruction are based on an advanced tracking of the sensor head. This rough placement is then corrected, by comparing overlapping images. An Iterative Closest Point algorithm is for example used to search for similar features in the overlapping images, and to adjust the placement of the images such that corresponding features coincide as closely as possible. The rough placement is for example adjusted for outliers, or certain abnormalities or inaccuracies in the IMU measurements. This contributes to a reliable modelling, and a lightweight system with free choice of suspension.
Optionally, the second measuring apparatus is an inertial measurement unit, and/or the second measuring apparatus comprises an accelerometer, and/or the second measuring apparatus comprises a gyroscope, and/or the second measuring apparatus comprises a magnetometer. This allows swaying of the sensor head and rotations about its own axis to be taken into account when placing the images in the virtual space.
Optionally, the correction module is configured to correct the placement of the images on a circumferential position in the virtual space, based on comparing images overlapping in the rough placement in the height direction. After the rough placement of the images in the virtual space, these overlap for example partially in the height direction. In other words, there is an uninterrupted circumference of overlapping images in the virtual space. These overlapping parts may then be searched for similar features, by means of an Iterative Closest Point algorithm. Based on this, the circumferential position of the images in the virtual space is adjusted such that corresponding features coincide as closely as possible. In this way, a placement is obtained taking into account rotations of the sensor head about its own axis. This contributes to an accurate modelling, also when a suspension is used where rotations of the sensor head are possible.
Optionally, the correction module is configured to correct the rough placement by minimizing the difference between point clouds. An Iterative Closest Point algorithm is for example used, where images are corrected to obtain the best possible coincidence of similar features in overlapping images. Similar features may for example be searched for in the color information and/or the depth information present in the overlapping images.
Optionally, the 3D sensors use 3D imaging technology. An image sensor is for example a 3D camera. 3D imaging technology refers to technology by means of which, apart from color information, also depth information is collected. A 3D sensor uses for example stereovision technology, by means of which two cameras take images from different angles to allow depth information to be derived. In another example, a 3D sensor is adapted to take images of a projected structured light pattern, where depth information may be derived from the deformations of the pattern in these images. In yet another example, a 3D sensor uses a combination of stereovision technology and the projection of a structured light pattern. The use of 3D imaging technology has the advantage that an accurate modelling is possible, without deformations in the images, and with the possibility to derive dimensional parameters.
Optionally, the 3D sensors use a combination of stereovision technology and the projection of a structured light pattern. Stereovision technology, or stereoscopy, refers to the use of two cameras, which take images under different angles. Image processing afterwards allows common features in both images to be detected and depth information or distance information to be derived from this. Stereovision technology has the advantage that it keeps working well in circumstances where a lot of light is present, for example light coming from the top of the manhole. This contributes to a good modelling of the shaft near the ground level and avoids the need to provide a shielding from the outside light. The projection of a structured light pattern refers to a technology such as structured light. Such a system comprises, in addition to a camera, also a light source, such as a laser, which allows a pattern to be imaged on an object. The regularity of the projected pattern is disturbed by irregularities in the surface of the object, and depth information about the surface may then be derived from these deformations, visible in images taken by the camera. The projection of a structured light pattern has the advantage that this technology keeps working well in conditions where little light is present, such as at the bottom of the shaft, or when modelling surfaces where few features are distinguishable. The use of 3D sensors allowing a combination of stereovision technology and the projection of a structured light pattern has the advantage that in all light conditions, both under- and overexposure, a good modelling is obtained. This contributes to a high-quality and robust modelling.
Optionally, the 3D sensors are placed on the sensor head at the same height position on the sensor head. This contributes to a simple placement of images in the virtual space, where per height position of the sensor head, a full inner circumference of the shaft is mapped.
Optionally, the measuring apparatus is adapted to determine the measured height position based on 3D imaging technology. The measuring apparatus is for example a downwards pointing 3D image sensor, directed towards the end of the shaft, away from the ground level. Thus, during the measurement operation, the 3D image sensor is directed towards the bottom end of the shaft. Such a 3D image sensor allows a 3D image of the bottom of the shaft to be taken, and comprises for example a processor adapted to derive the height position from the taken image by image processing. The use of 3D imaging technology allows an accurate and reliable measurement, because less noise is present in the measurement, and fewer disturbances appear due to a reflecting water surface at the bottom, than when for example a downwards pointing one-dimensional laser is used. In another embodiment, the measuring apparatus is an upwards pointing 3D image sensor, directed towards the shaft end located at the ground level. Thus, during the measurement operation, the 3D image sensor is directed towards the top end of the shaft. By using such an upwards pointing sensor, a more reliable reference point for the height measurement may be obtained, as a static environment is observed by the sensor, and no disturbances due to a moving reference point occur. The latter disturbances may possibly be present when using a downwards pointing 3D sensor, e.g. due to the observation of a moving or flowing water surface at the bottom of the shaft.
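One way the robustness of a full depth image over a single laser ray can be illustrated is with a robust statistic over many depth pixels. This is a hypothetical sketch, not the patented method, and the depth readings are invented.

```python
import statistics

# Hypothetical illustration of deriving the head's height above the shaft
# bottom from a downward-pointing 3-D sensor: a median over many depth
# pixels suppresses noise and outliers such as reflections from a rippling
# water surface, unlike a single one-dimensional laser reading.

def height_above_bottom(depth_image):
    """depth_image: flat list of per-pixel distances (m) to the bottom."""
    return statistics.median(depth_image)

# Mostly ~2.4 m readings, plus two outliers from a reflective water surface
pixels = [2.41, 2.39, 2.40, 2.42, 0.3, 2.40, 9.9, 2.38]
h = height_above_bottom(pixels)  # robust estimate near 2.40 m
```

A single-ray sensor hitting either outlier would report 0.3 m or 9.9 m; the median over the depth image stays close to the true distance.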
Optionally, the processing unit forms a physical unit with the sensor head. This means that a processing of the collected raw data may take place already on the sensor head, such as placing the images in the virtual space, as well as a correction of this rough placement. The results may then be forwarded to an external device, such as a tablet, mobile phone or computer, through a wired or wireless connection. Processing the data on the sensor head itself has the advantage that the amount of data which has to be sent to the external device is more limited than when all raw data should be forwarded. This reduces the requirements regarding bandwidth of the wireless or wired connection, and contributes to making results available to the user in real time.
Optionally, the processing unit is adapted to determine dimensional parameters based on the placement of the images in the virtual space, the dimensional parameters being derived from the 3D model of the shaft. This means that modelling by means of the apparatus allows for parameterization, where for example sizes may be derived from the 3D model of the shaft. An interface is for example made available where the user may indicate the parameters to be determined. The software allowing the dimensional parameters to be determined and displayed may be implemented completely on the sensor head, or may be installed on an external device, such as a tablet, mobile phone or computer.
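Such a parameterization can be sketched for one simple case: estimating the inner diameter of a circular shaft from one ring of model points at a given height. This is an assumed example with synthetic point data, not the patented parameterization software.

```python
import math

# Assumed illustration of deriving a dimensional parameter from the 3D
# model: the inner diameter of a circular shaft, estimated from the radial
# distance of wall points at one height to the shaft axis. Data is synthetic.

def inner_diameter(ring_points):
    """ring_points: (x, y) coordinates of wall points at one height,
    expressed relative to the central axis; returns the mean diameter (m)."""
    radii = [math.hypot(x, y) for x, y in ring_points]
    return 2.0 * sum(radii) / len(radii)

# Synthetic ring of 12 wall points on a shaft of 1.0 m inner diameter
ring = [(0.5 * math.cos(a), 0.5 * math.sin(a))
        for a in (i * math.pi / 6 for i in range(12))]
d = inner_diameter(ring)  # ≈ 1.0 m
```

Non-circular cross sections (square, rectangular) would call for different fits, but the principle of reading dimensions directly out of the placed points is the same.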
According to a second aspect of the present invention, the objects identified above are realized by a system for modelling a shaft in 3D, comprising:
an apparatus according to the first aspect of the invention;
The suspension system comprises a mobile component such as a bar, a telescopic arm, or one or multiple cables. Typically, the sensor head is attached to this mobile component at one end. In an embodiment, the other end of the mobile component is held manually during the descent of the sensor head in the shaft. In another embodiment, this other end is connected to a movable positioning system. A positioning system is a system allowing to position the sensor head on the central axis of the shaft. A movable positioning system is for example a stand, a tripod, a system mounted in a vehicle, etc.
According to a third aspect of the present invention, the objects identified above are realized by a method for modelling a shaft in 3D, comprising:
The apparatus comprises a sensor head 101 and a processing unit 500. The processing unit 500, not visible on
In the embodiment of
Images are taken by the image sensors 400 during a descent of the sensor head 101 into the shaft 105. In this way, a ring of images becomes available within the shaft 105 at different height positions of the sensor head 101. No rotation of the sensor head 101 is needed for this, so that the presence of moving parts is avoided. Typically, the time point at which the respective image is taken is also registered by an image sensor 400.
In an embodiment of the invention, the image sensors 400 are 3D sensors using a combination of stereovision technology and the projection of a structured light pattern. A 3D sensor 400 comprises herein two cameras, which take images under different angles, and a light source such as a laser, allowing a pattern to be projected. The 3D sensor also comprises a processor allowing the 3D image to be reconstructed through image processing algorithms. A 3D sensor 400 is for example an Intel RealSense Depth Camera D415, or a similar technology. A 3D sensor 400 may also comprise an RGB sensor, to collect color information. The combination of stereovision and structured light allows an accurate and high-quality modelling to be obtained in all light conditions. However, other embodiments are also possible, where a different type of 3D sensor is used. Optionally, the sensor head 101 may also contain one or more light sources, to illuminate the wall of the shaft 105 while taking images.
The sensor head 101 furthermore comprises a measuring apparatus 501 adapted to determine a measured height position 507 of the sensor head 101 within the shaft 105, as presented schematically in
In the embodiment of
The processing unit 500 in
In another embodiment, the placement module 503 uses the measured images 506 and the measured height position 507 to perform a rough placement 509 of the images in the virtual space, taking into account the position of the image sensors 400 on the sensor head 101. The rough placement 509 is then corrected by the correction module 504 in two ways. On the one hand, the placement 509 is corrected for swaying of the sensor head, by means of the measured deviation 508 derived from the path measured by the IMU. On the other hand, the rough placement 509 is corrected for rotations of the sensor head about its own axis, by means of an ICP algorithm. Here, the ICP algorithm compares parts of images that overlap in height, and searches for matching characteristics in the overlapping parts with respect to color and/or depth information. The circumferential position of the images is then corrected so that corresponding characteristics coincide as closely as possible. Both corrections, which may be applied in either order, result in a corrected placement 510 of the images in the virtual space, which may be visualized in a visualization module 505.
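The circumferential correction can be illustrated with a minimal, rotation-only variant of ICP. This is a simplified sketch operating on a small set of synthetic feature points, not the apparatus's actual algorithm (which additionally matches color and depth characteristics); the point coordinates and the 0.1 rad offset are invented for illustration.

```python
import numpy as np

def estimate_axial_rotation(ref, mov, iters=10):
    """Rotation-only ICP about the shaft axis (z): iteratively match each
    moving point to its nearest reference point, then solve in closed form
    for the in-plane angle that best aligns the matched pairs.
    Returns theta such that rotating `mov` by theta aligns it onto `ref`."""
    theta = 0.0
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        moved = mov @ R.T
        # Nearest-neighbour correspondences between the two point sets.
        d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
        nn = ref[np.argmin(d, axis=1)]
        # Closed-form optimal in-plane angle (2D Procrustes on x, y).
        num = np.sum(mov[:, 0] * nn[:, 1] - mov[:, 1] * nn[:, 0])
        den = np.sum(mov[:, 0] * nn[:, 0] + mov[:, 1] * nn[:, 1])
        theta = np.arctan2(num, den)
    return theta

# Synthetic, irregularly spaced feature points on a unit ring, plus a copy
# rotated by +0.1 rad about z; the estimator should recover -0.1 rad.
ang = np.array([0.0, 0.7, 1.3, 2.1, 2.9, 3.6, 4.5, 5.4])
ref = np.stack([np.cos(ang), np.sin(ang), np.zeros_like(ang)], axis=1)
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
mov = ref @ Rz.T
print(round(estimate_axial_rotation(ref, mov), 3))  # -0.1
```

The irregular spacing of the feature points matters: a perfectly uniform ring would be rotationally ambiguous, which is why the correction relies on distinctive characteristics in the overlapping image parts.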
Other embodiments are possible, e.g. wherein no IMU 502 is present and corrections are based purely on an ICP algorithm, or wherein an IMU 502 is present in the sensor head 101 while no ICP-based corrections are applied.
Finally,
Although the present invention was illustrated by means of specific embodiments, it will be clear to the person skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be executed with various modifications and adaptations without departing from the field of application of the invention. The present embodiments should therefore in all respects be considered as illustrative and not restrictive, the field of application of the invention being described by the attached claims rather than by the foregoing description; all modifications which fall within the meaning and scope of the claims are therefore included therein. In other words, it is understood to include all modifications, variations or equivalents that fall within the area of application of the underlying basic principles and whose essential attributes are claimed in this patent application. Moreover, the reader of this patent application will understand that the words “comprising” or “to comprise” do not exclude other elements or other steps, and that the word “a(n)” does not exclude the plural. Any references in the claims should not be construed as limiting the respective claims. The terms “first”, “second”, “third”, “a”, “b”, “c” and the like, when used in the description or in the claims, are used to distinguish between similar elements or steps and do not necessarily describe a successive or chronological order. Likewise, the terms “top”, “bottom”, “over”, “under” and the like are used for the purposes of the description and do not necessarily refer to relative positions. It should be understood that these terms are mutually interchangeable under the right conditions, and that embodiments of the invention are able to function according to the present invention in orders or orientations other than those described or illustrated above.
Number | Date | Country | Kind |
---|---|---|---|
BE2020/5185 | Mar 2020 | BE | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2021/056658 | Mar 16, 2021 | WO | |