The invention relates to a method for determining the hazard area between a test object and an X-ray inspection system which rotate relative to one another about a rotational axis running through the test object.
Computed tomography (CT) is an imaging modality with which the inside of objects can be represented non-destructively on the basis of X-radiation. In particular for non-destructive testing, CT systems with as many degrees of freedom as possible for a scan are of interest. With movable device parts, such as the X-ray detector, the X-ray tube and the object plate, preventing a collision of all the items inside the compact system is of the highest priority. In addition to a fixed limitation of the accessible space as collision protection, further procedures also exist. To date there has been, amongst other things, the possibility of visually monitoring navigation by direct line of sight (for example through an integrated lead glass window), which is, however, significantly limited by the available viewing angle.
Alternatively, or additionally, pressure sensors can be used. In the case of contact, i.e. if a collision has already taken place, the movement of the object is interrupted. This type of collision prevention is to be regarded as a last resort when other measures fail. It makes it possible to prevent major damage to the object or to other items within the compact system; however, minor damage due to the collision cannot be ruled out.
Methods are disclosed for determining a hazard area between a test object and an X-ray inspection system. The methods include arranging a radiation detector at a predetermined distance from a radiation source. Marginal rays are determined which, at a predetermined angle of rotation between the test object and the arranged radiation source and radiation detector, touch an outer contour of the test object. A hazard radius from the outer contour to the rotational axis of the test object is determined for the predetermined angle of rotation. The determination of the marginal rays is repeated for predetermined angles of rotation distributed over 360°, and the determination of the hazard radius is repeated for each repeated determination of marginal rays. A table is compiled containing, for each of the predetermined angles of rotation, the hazard-radius parameters obtained for the edge of the test object.
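Purely by way of illustration, the compilation of such a hazard-radius table can be sketched as follows (Python; not part of the original disclosure). The sketch uses a parallel-beam simplification of the cone-beam geometry and assumes the outer contour is given as 2D points around the rotational axis; the function name is hypothetical.

```python
import numpy as np

def hazard_table(contour, n_angles=360):
    """For each rotation angle, find the contour point touched by the
    marginal ray (parallel-beam simplification of the cone-beam geometry
    described in the text) and store its distance to the rotational axis
    as the hazard radius for that angle."""
    contour = np.asarray(contour, dtype=float)
    table = {}
    for k in range(n_angles):
        gamma = 2 * np.pi * k / n_angles
        c, s = np.cos(-gamma), np.sin(-gamma)
        # rotate the outer contour by -gamma (the object turns, the
        # source/detector pair stays fixed)
        rot = contour @ np.array([[c, -s], [s, c]]).T
        # marginal rays run along x; they touch the point of maximum |y|
        idx = int(np.argmax(np.abs(rot[:, 1])))
        table[k] = float(np.hypot(rot[idx, 0], rot[idx, 1]))
    return table
```

For a square contour the tangent point is always a corner, so every entry of the table equals the corner distance from the axis.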
The present invention will be described in even greater detail below based on the exemplary figures. The invention is not limited to the exemplary embodiments. All features described and/or illustrated herein can be used alone or combined in different combinations in embodiments of the invention. The features and advantages of various embodiments of the present invention will become apparent by reading the following detailed description with reference to the attached drawings which illustrate the following:
A method based on camera images of the test object is described below. Camera images from different viewing angles are used in order to determine the dimensions of a test object 3. There are different possibilities for this. Firstly, the test object 3 can be rotated about a fixed rotational axis 5. Secondly, the camera 1 can be rotated about the test object 3. The first variant is assumed below. As only the object contours are of interest, shadow images are recorded with the help of backlighting. For this, the background 2, which is as homogeneous as possible, forms the only light source. The test object 3 is illuminated from behind with respect to the camera 1, whereby the object contours are silhouetted against the background 2 as a shadow boundary. An example of a shadow image is shown in
A schematic test structure is represented in
For the following approaches, firstly a specific alignment of the camera 1 with respect to the rotational axis 5 is assumed. The main axis 4 of the field of vision of the camera meets the rotational axis 5 at a right angle. The existing geometrical properties are thereby simplified; however, the vertical position of the camera 1 is limited by this condition. It is also possible to carry out a volume recognition with virtually any desired camera position. This is described in more detail below with reference to
An algorithm for volume recognition with the following inputs and outputs is now sought:
By the object radius rmax is meant the radius which, starting from the rotational axis 5, just encloses the maximum extent of the test object 3 over one rotation. This parameter varies with the height h. Furthermore, the angle of rotation γ is conceivable as a further variable for which, together with the height h, a corresponding object radius r exists. In the ideal case, a three-dimensional object representation should moreover be available on the user's PC following the volume recognition, in order to give a visual impression of the volume recognition.
For all the degrees of complexity a binarization of the test object 3 firstly takes place, as shown by way of example in
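The binarization step can be sketched as follows (illustrative Python, not part of the original disclosure; the threshold value and function names are assumptions). With a bright, homogeneous backlight, a simple global threshold separates the dark silhouette from the background, and counting object pixels per row yields the shadow size for each height h.

```python
import numpy as np

def binarize(shadow_image, threshold=128):
    """Backlit recording: bright, homogeneous background, dark object
    silhouette. Pixels darker than the threshold are classed as object (1)."""
    return (np.asarray(shadow_image) < threshold).astype(np.uint8)

def shadow_widths(binary):
    """Number of object pixels per image row, i.e. the shadow size in
    pixels for each height h."""
    return binary.sum(axis=1)
```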
If there is sufficient information about the geometry, a statement can be made about the maximum radius rmax with respect to the height h, with reference to a direct evaluation of the binary images with little computational outlay.
Due to the known geometry it can be assumed that the distance between test object 3 and camera 1, denoted FOD in
Although the number of pixels within the binary image has already been obtained for determining the shadow size s, the pixel size for the theoretical image plane is still to be determined. It follows from the intercept theorem that

p = c·(2·FOD)/f

with the camera-specific parameters c for the sensor pixel size and f for the focal length. The assumption that, during the camera recording, the image plane (also called the virtual detector) lies behind the rotational axis 5 at exactly the same distance as that from camera 1 to the rotational axis 5 results in the factor 2·FOD. The real shadow size s, or s/2, then results from the pixel size p and the total number of pixels.
It is also possible to determine the distance r′ with the intercept theorem:

r′ = (s/2)·FOD/(2·FOD) = s/4
The sought radius rmax then results as the height of the right-angled triangle which is spanned by FOD and r′:

rmax = FOD·r′/√(FOD² + r′²)
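The chain from pixel count to maximum radius can be sketched as follows (illustrative Python, not part of the original disclosure). The intercept-theorem factors follow the geometry described above, with the virtual detector placed at 2·FOD; the function name and parameter names are assumptions.

```python
import math

def r_max(n_pixels, c, f, fod):
    """First approach (sketch): maximum object radius for one height.

    n_pixels: object pixels in one row of the binary image (shadow size)
    c:   sensor pixel size   (camera-specific)
    f:   focal length        (camera-specific)
    fod: distance from camera 1 to the rotational axis 5
    """
    pixel_size = c * (2 * fod) / f        # intercept theorem, image plane at 2*FOD
    s = n_pixels * pixel_size             # real shadow size
    r_prime = (s / 2) * fod / (2 * fod)   # scaled back to the plane of the axis
    # height (altitude) of the right-angled triangle spanned by FOD and r'
    return fod * r_prime / math.hypot(fod, r_prime)
```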
The individual steps of the first approach are summarized thus:
It is conceivable to observe not only one edge. For example, only half of all the projections can be used if the right and left edge are correspondingly observed. It is also possible to observe both edges and to use the corresponding maximum in order to minimize possible sources of error, such as for example illumination, reflection and noise.
As a second approach, it is possible not to limit the direct evaluation of the binary images introduced above for each height to the maximum radius rmax. If the object edge is determined for each projection image at a varying angle of rotation γ, the previously circular hazard area 7 can be reduced.
For this approach, the angular offset between r′ and the respective r at the angle of rotation γ is of interest. This offset of the angle of rotation γ for the determined radius is denoted ω and can be determined by

ω = arctan(r′/FOD)
An example of this broadened approach which provides a radius r for each angle of rotation-height pair (γ, h) can be seen in
The result of this approach is thus a more accurate indication of the hazard area 7, wherein, however, a collision protection is no longer guaranteed for any desired angle of rotation γ. The hazard area 7, as shown in
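The per-edge computation of the second approach can be sketched as follows (illustrative Python, not part of the original disclosure). The expression for the angular offset ω is our reading of the tangent geometry described above and is not reproduced verbatim from the text; function and parameter names are assumptions.

```python
import math

def edge_radius(edge_pixels_from_center, c, f, fod):
    """Second approach (sketch): radius r and angular offset omega for one
    detected object edge in one row of one projection image.

    edge_pixels_from_center: distance of the detected edge from the image
    centre, in pixels; c, f, fod as in the first approach."""
    pixel_size = c * (2 * fod) / f                       # virtual detector at 2*FOD
    r_prime = edge_pixels_from_center * pixel_size / 2   # intercept theorem
    r = fod * r_prime / math.hypot(fod, r_prime)         # tangent-point radius
    omega = math.atan2(r_prime, fod)                     # offset between r' and r
    return r, omega
```

Storing r under the key (γ + ω, h) for every projection then yields the angle-dependent hazard table described above.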
Moreover, a volumetric display as visual feedback for the user is possible on the basis of
By determining the intersection points of all the straight lines, a more accurate estimation of the convex shell of the test object 3 can then be made (see
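The intersection of the marginal-ray straight lines can be sketched as follows (illustrative Python, not part of the original disclosure). Each tangent line is written in normal form x·cos θ + y·sin θ = d, and consecutive lines are intersected to estimate the vertices of the convex shell; the representation and function name are assumptions.

```python
import numpy as np

def shell_vertices(thetas, dists):
    """Estimate the convex shell of the object in one height slice by
    intersecting consecutive tangent lines x*cos(t) + y*sin(t) = d."""
    pts = []
    n = len(thetas)
    for i in range(n):
        j = (i + 1) % n
        # 2x2 linear system for the intersection of lines i and j
        a = np.array([[np.cos(thetas[i]), np.sin(thetas[i])],
                      [np.cos(thetas[j]), np.sin(thetas[j])]])
        b = np.array([dists[i], dists[j]])
        pts.append(np.linalg.solve(a, b))
    return np.array(pts)
```

With four axis-aligned tangent lines at distance 1, for example, the estimated shell is the unit square.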
In summary, the following steps result for this second approach:
The values for the radii r(γ, h) provide another example of the above-named relevant parameters in the named table. These are finer than the relevant parameters obtained with the first approach, as they do not give the same radius over the full 360°, but give the radius depending on the angle of rotation (the first approach, on the other hand, gives a coarsened shell).
The third approach utilizes already-existing CT Feldkamp reconstructions (abbreviated to FDK below). For this, the shadow images are interpreted as projection images of a cone beam recording with X-radiation. All the values equal to zero then correspond to no attenuation of the X-rays. Thus no attenuating objects at all, in particular no part of the test object 3, were situated in this path. All values not equal to zero correspond to an attenuation of the radiation by the test object 3 in the beam path.
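After such an FDK-type reconstruction of the binary projections, every non-zero voxel lay in at least one attenuating path. A per-height hazard radius can then be read off the occupancy volume, as sketched below (illustrative Python, not part of the original disclosure; it assumes the rotational axis passes through the centre of each slice, and the function name is hypothetical).

```python
import numpy as np

def occupied_radii(volume, voxel_size=1.0):
    """For each slice (height) of a reconstructed occupancy volume,
    return the largest distance of a non-zero voxel from the rotational
    axis, assumed to run through the slice centre."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    radii = np.zeros(nz)
    for z in range(nz):
        ys, xs = np.nonzero(volume[z])
        if ys.size:
            radii[z] = np.hypot(ys - cy, xs - cx).max() * voxel_size
    return radii
```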
The individual steps of the third approach are summarized as follows:
An example of this approach can be seen in
As an alternative to the previously presented methods, the use of any desired camera position is also possible. The position of the camera 1 is determined by means of a corresponding calibration. For a volume recognition, the ray paths of the camera 1 are then traced at the respective viewing angle and checked for where they strike an object edge. An example of this is shown two-dimensionally in
Instead of the higher weighting described, in which background artefacts occur in the volume, a process of elimination can also be carried out for each viewing angle. The volume to be reconstructed is observed for each projection image at the corresponding angle. All the pixels situated outside the represented object contour are disregarded, as they cannot belong to the test object 3. The first two approaches referred to above can also be extended with this generalized camera position.
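The elimination per viewing angle can be sketched as follows (illustrative Python, not part of the original disclosure). For simplicity the sketch is two-dimensional and uses a parallel-ray approximation: each view contributes a silhouette interval transverse to the viewing direction, and candidate points outside that interval in any view are discarded; all names are assumptions.

```python
import numpy as np

def carve(points, angles, silhouettes):
    """Elimination per viewing angle (2D, parallel-ray sketch): a candidate
    point survives only if its lateral coordinate falls inside the recorded
    silhouette interval (lo, hi) for every view."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for angle, (lo, hi) in zip(angles, silhouettes):
        # lateral coordinate of each point as seen from this viewing angle
        lateral = points[:, 0] * -np.sin(angle) + points[:, 1] * np.cos(angle)
        keep &= (lateral >= lo) & (lateral <= hi)
    return points[keep]
```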
In summary, all the approaches offer the possibility of implementing an adequate volume recognition and collision protection based thereon. However, with increasing accuracy and flexibility the methods also require an increasing computational outlay. A staggered approach is therefore conceivable, starting with a simple, cylindrical volume recognition and then, depending on the user's wishes, carrying out further steps for a more accurate statement and volumetric display.
In relation to the special field of computed tomography, there is in addition yet another possibility for increasing accuracy. All the methods presented are limited in their resolution by the quality of the camera used. Alternatively, for a sufficiently small area, the actual X-ray image can also be used with the same methods for a volume recognition. As a rule, X-ray detectors have substantially higher resolutions than the standard cameras 1 used.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. It will be understood that changes and modifications may be made by those of ordinary skill within the scope of the following claims. In particular, the present invention covers further embodiments with any combination of features from different embodiments described above and below. Additionally, statements made herein characterizing the invention refer to an embodiment of the invention and not necessarily all embodiments.
The terms used in the claims should be construed to have the broadest reasonable interpretation consistent with the foregoing description. For example, the use of the article “a” or “the” in introducing an element should not be interpreted as being exclusive of a plurality of elements. Likewise, the recitation of “or” should be interpreted as being inclusive, such that the recitation of “A or B” is not exclusive of “A and B,” unless it is clear from the context or the foregoing description that only one of A and B is intended. Further, the recitation of “at least one of A, B and C” should be interpreted as one or more of a group of elements consisting of A, B and C, and should not be interpreted as requiring at least one of each of the listed elements A, B and C, regardless of whether A, B and C are related as categories or otherwise. Moreover, the recitation of “A, B and/or C” or “at least one of A, B or C” should be interpreted as including any singular entity from the listed elements, e.g., A, any subset from the listed elements, e.g., A and B, or the entire list of elements A, B and C.
Number | Date | Country | Kind |
---|---|---|---|
10 2013 017 459.7 | Oct 2013 | DE | national |
This application is a U.S. National Phase application under 35 U.S.C. §371 of International Application No. PCT/EP2014/002840, filed on Oct. 21, 2014, and claims benefit to German Patent Application No. DE 10 2013 017 459.7, filed on Oct. 21, 2013. The International application was published in German on Apr. 30, 2015 as WO 2015/058854 under PCT Article 21(2).
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2014/002840 | 10/21/2014 | WO | 00 |