The present disclosure generally relates to an improved image processing system and method for non-destructive testing and inspection (NDT/NDI) devices and, more particularly, to an image processing system and method that uses a hardware graphics accelerator and associated software to provide high performance image rendering of both the inspection scan area and the test object's geometric definitions.
NDT/NDI devices have been used in industrial applications for more than sixty years. They are widely used for flaw detection to find hidden cracks, voids, porosity, and other internal discontinuities in solid metals, composites, plastics, and ceramics, as well as for measuring thickness and analyzing material properties. NDT/NDI devices primarily include single element ultrasonic (UT), phased array ultrasonic (PA) and eddy current (EC) devices.
The effort of presenting accurate, rich and high quality display images of inspection signals largely falls into two categories across the array of NDT/NDI devices. The first category focuses largely on increasing the density and richness of the inspection data by plotting increasingly more inspection points in an image format, i.e., evolving from the A-scan generated by a single focal beam to the phased array S-scan generated by multiple focal beams. An S-scan provides an advantage for flaw rendering because it enables the inspector to use a stationary transducer to see a virtual two dimensional region inside the test material, rather than just the single point provided by an A-scan measurement.
The second category of effort, which seeks to improve visualization of existing inspection data of any scan type, such as planar C-scan, PA linear scan, end view scan, etc., involves highlighting or adding colors to the scanned data, or to a selected scanned area, according to some analysis requirement. Some of these efforts make use of computer graphics tools to render a scanned shape with different colors.
One example in the latter category is presented in US patent application US2010-0104132-A1 (hereinafter '132), filed by the present Applicant. In '132, an exemplary S-scan scanned area is mapped into vertex coordinates and primitives to create a surface. The surface is then given a color texture representing S-scan signal amplitude information. A commercially available graphics accelerator renders the color image efficiently based on the input of the vertex coordinates, primitives and color texture.
While existing effort focuses on processing inspection signals, such as rendering the ultrasonically scanned area image for many types of scans, the efficient visualization of the whole or a portion of the test object and/or its features remains unsolved. In addition, existing display and measurement tools used for displaying test target geometries, which rely on features such as image grids, gates that select measurement regions, X and Y cursors for point and area measurements, and part thickness indicators, are not efficiently visualized. It has become difficult to visually differentiate these display features from one another or from the inspection results.
In NDT/NDI operations, the geometric definitions of the test objects, including their cross-sections, are usually retrieved from computer aided design (CAD) tools, including embedded instrument software or PC drawing tools. They are not obtained from inspection signals such as ultrasonic scans. In existing solutions, test objects or their cross-sections are simply outlined or delineated, mostly by solid lines of different colors. The drawbacks of such approaches include i) the area and/or shape of the test object is not as visually identifiable as that of an object whose whole shape is rendered with certain shades and/or colors; ii) the solid outlines of the test objects, their cross-sections and/or operator defined measurement tools visually interfere with images of the defects; and iii) the representation of the test objects is not as versatile.
FIGS. 1a and 1b show an existing method of delineating the outline of a weld being inspected. As can be seen in both figures, the geometry of the weld is indicated only by solid lines.
The embodiments of the present disclosure are intended to address the above drawbacks of the existing solutions and improve the visualization of the geometric features of the test objects while making use of existing techniques for presenting inspection signals.
The invention disclosed herein provides a method and system to render views of a non-destructive inspection test object with desirable color and/or opacity, coordinated with the real-time image rendering of the inspection signals. It thereby avoids the existing methods, which outline test objects only with solid lines and consequently present drawbacks such as poor visualization and the occasional obscuring of small flaws shown in the inspection results.
Accordingly, it is a general object of the present disclosure to provide a method and a system suitable for producing both test object images and inspection scan images, each with a combination of color, opacity and/or fill patterns.
It is a further object of the present disclosure to improve visualization of images of both the test object and the inspection scan data by making use of efficient and powerful graphics accelerators, so that the method can be deployed in real time and on hand-held devices.
It is another object of the present disclosure to allow the test object to be displayed with an adjustable level of transparency to improve the visualization of both the test object and inspection scan images, and so as not to obscure the detected flaws, particularly when the flaws are small in size.
It is yet another object of the present disclosure to allow the NDT/NDI operator to mark and select any area of the test object and to apply any desirable level of color, opacity, fill patterns or any combination thereof to improve the visual effect of the inspection result.
The foregoing and other objectives, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of illustrative embodiments, given for the purpose of illustration only with reference to the appended drawings.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the US Patent and Trademark Office upon request and payment of the necessary fee.
FIGS. 1a and 1b present prior-art ultrasonic scans of a typical weld, with the geometry of the weld delineated only by solid lines.
FIGS. 8a and 8b are colored screen shots of inspection results from an inspection instrument employing the presently disclosed embodiments, treating the test object and scan area with varied color and/or opacity.
The following describes a method and an NDT/NDI system (not shown) employing the method to improve the visualization of any portion or cross section of a test object according to the present disclosure. A double V weld is used as an exemplary NDT/NDI inspection test object in the following description. It should be appreciated that the test object and its associated defects and characteristics, such as thickness, can be in many geometric forms.
Referring to
One of the principal objectives of the present disclosure is to prepare the exemplary weld 4, enclosed by P0, P1, . . . , P7, as a surface. As shown in
Turning vertices into surfaces and subsequently rendering the surfaces with different alpha values for graphic attributes, such as color, patterns and opacity, is well known in the art. Some exemplary basic steps of how to render surfaces of any size and shape into primitives and how to give a surface a texture are illustrated in detail in the book "Real Time Rendering in DirectX" (hereinafter "DirectX") by Kelly Dempski, published by Premier Press, at pages 134-138 and 194-196, the contents of which are annexed hereto as pages 13-20.
It can be further seen in
Continuing with
With the established surface, vertices and predetermined textures as input, a graphics accelerator can be used to render images with any combination of color, opacity and/or fill-in patterns very efficiently.
One of the novel aspects of the present disclosure includes the steps of 1) converting the geometry of the target or test object into a surface with primitives, 2) applying predetermined alpha values to the test object primitives, 3) converting the inspection scan areas into primitives and giving the scan area primitives a texture by mapping the colorized or other alpha values corresponding to inspection signal information onto the corresponding scan area primitives, 4) overlapping or overlaying the test object primitives and the scan area primitives, and finally 5) making use of a graphics accelerator to generate the alpha images of both the test object primitives and the scan area primitives on an electronic display.
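By way of a non-limiting sketch only, the following code illustrates how steps 1), 2) and 5) above might be expressed against the Direct3D 9 interfaces discussed in the annexed book excerpt. The device pointer pDevice and the arrays px and py holding the coordinates of P0 through P7 are hypothetical names introduced solely for this illustration; they do not appear in the disclosed instrument software.

// Hypothetical sketch only: render the weld cross-section P0..P7 as a
// semi-transparent surface over the scan-area image. A triangle fan is
// used here on the assumption that the outline is convex; a non-convex
// outline would instead be triangulated into a triangle list.
struct WELD_VERTEX
{
    float x, y, z;   // vertex position (a corner Pn of the weld outline)
    DWORD color;     // diffuse color; the top byte carries the alpha value
};
#define D3DFVF_WELDVERTEX (D3DFVF_XYZ | D3DFVF_DIFFUSE)

WELD_VERTEX weld[8];
for (int i = 0; i < 8; ++i)
{
    weld[i].x = px[i];  weld[i].y = py[i];  weld[i].z = 0.0f;
    weld[i].color = D3DCOLOR_ARGB(0x60, 0x00, 0x80, 0xFF);  // semi-transparent
}

// Draw the scan-area primitives first (per '132), then overlay the
// translucent test-object surface with alpha blending enabled.
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
pDevice->SetFVF(D3DFVF_WELDVERTEX);
pDevice->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 6, weld, sizeof(WELD_VERTEX));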
Step 3) of the above described method is further elaborated in the aforementioned co-pending US patent application US2010-0104132-A1 ('132), filed by the present Applicant, the entirety of which is herein incorporated by reference. In '132, an exemplary S-scan scanned area is mapped into vertex coordinates and primitives to create a surface. The surface is then given a colored texture representing the S-scan signal amplitude information over the corresponding primitives.
Reference is now made to
x′ = x cos β − y sin β + a
y′ = x sin β + y cos β + b   (Eq. 1)
The scan area coordinate system x′-y′ is preferably used as the 'primary' coordinate system. The coordinate values of the primitives of the test object, such as those of P0(x0, y0), P1(x1, y1), . . . , P7(x7, y7), are therefore converted to coordinate values in the x′-y′ system according to Eq. 1.
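As a non-limiting illustration, the conversion of Eq. 1 may be expressed as a small routine such as the following, where the type and function names are introduced for this sketch only; beta is the rotation angle and (a, b) the translation between the two coordinate systems.

#include <cmath>

struct Point2D { float x, y; };

// Illustrative only: maps a test-object point (x, y) into the primary
// scan-area system (x', y') per Eq. 1.
Point2D ToScanAreaSystem(const Point2D& p, float beta, float a, float b)
{
    Point2D q;
    q.x = p.x * std::cos(beta) - p.y * std::sin(beta) + a;
    q.y = p.x * std::sin(beta) + p.y * std::cos(beta) + b;
    return q;
}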
In the last step, the surface representing the inspection signals and the surface representing the test target 4, both with established primitives and texture, are provided to a graphics accelerator so that both the inspection scan image and the test target can be rendered and presented on a display very efficiently. It should be noted that the alpha value for the test target surface can be assigned by the user to achieve any desired combination of opacity and color to better visualize the image of the inspection result. The alpha values for the inspection scan signals and scan area are determined by the received inspection signal for each corresponding primitive.
Use of a commercially available graphics accelerator and graphics software, instead of a custom proprietary solution, considerably reduces the time needed to design the graphics system, and also reduces the complexity of the resulting hardware and software design.
Reference is now made to
Scan area surface treatment module 20 further includes a scan data acquisition module 22, a scan surface generator 24 and a scan surface texture generator 26. Test object geometry treatment module 30 further includes a test object geometry data loader 32, a user defined display requirement generator 32a, a test object surface generator 34, a test object texture assigner 36 and a coordinate system translator 38.
It should be noted that the coordinate system translator 38 can optionally be included in either the scan area surface treatment module 20 or the test object geometry treatment module 30.
User Interface Module 10 is a keypad and/or remote control console provided to the NDT/NDI device.
One can refer to the previously referenced co-pending US patent application US2010-0104132 for further details on the process pertaining to the operation within scan area surface treatment module 20.
Continuing with
User defined special display requirements for selecting a desired portion of the test object for display are executed by the user defined display requirement generator 32a. As described in the foregoing section, in NDT/NDI inspection operations, markers and/or areas selected by certain gate criteria are often used to select a portion of the test object, such as the portion encircled by P2 P8 P9 P10 (in
Continuing with
Reference is now made to
Moving to
At the conclusion of step 606, coordinate system translator 38 in
Turning now to
It should be appreciated that other computer graphics tools deemed fit for the purpose can be deployed.
A functional block diagram of the Image Rendering Module 40 is shown in
In practice, once a testing session is set up for a test object, routines in scan surface generator 24 do not need to be changed for each scan. Similarly, when the subject of interest in the whole test object is determined, the routines in test object surface generator 34 do not need to be changed for each scan either. However, each time there is a change of interest in the views, cross-section, or the marked area for viewing in the test object, routines in test object surface generator 34 (602˜606 in
Reference is now made to
Very importantly, it can be seen that flaws 6 and other matters of interest in the scan result on scan area 2 are not blocked or obscured by the color of test object 4, owing to the transparency applied to weld area 4.
It can also be seen that the system and method according to the present embodiments can provide visually versatile marking tools to display inspection images. In
Accordingly, with the capability of presenting any view of the test object in many combinations of color, opacity and/or patterns, the NDT/NDI image data are presented against a much more versatile background. Accompanying the imaging of NDT/NDI scans, the improved display of the test object, or a portion of the test object, significantly improves the visualization of NDT/NDI inspection results. The presently disclosed method of preparing both the inspection scan surface and the test object surface in a fashion that allows commercial graphics tools to be employed enables highly efficient, real-time display, even in hand-held instruments.
The following discussion is taken from the aforementioned reference book on rendering ("DirectX") and introduces the following concepts.
Vertices represent positions in space. However, interesting objects occupy many positions in space, and they are most often represented in 3D graphics by their outer surfaces. These outer surfaces are usually represented by triangles. In the case of curved surfaces, you can use sets of triangles to approximate the surface to varying degrees of accuracy. Also, when talking about surfaces, it makes sense to talk about surface normals (vectors that are perpendicular to each surface).
If you are using smooth shading, surface normals are actually represented as vertex normals, where the normal vector for each vertex is the average of the normal vectors for all the triangles that share that vertex.
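As a hedged illustration of this averaging (all type and function names below are introduced for this sketch only), vertex normals can be computed by summing the face normals of the triangles sharing each vertex and normalizing the result:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& u, const Vec3& v)
{
    return { u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x };
}

// For each vertex, sum the normals of every triangle that shares it, then
// normalize; "tris" holds vertex indices, three per triangle.
std::vector<Vec3> AverageNormals(const std::vector<Vec3>& verts,
                                 const std::vector<unsigned>& tris)
{
    std::vector<Vec3> n(verts.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t t = 0; t + 2 < tris.size(); t += 3)
    {
        const Vec3& p0 = verts[tris[t]];
        const Vec3& p1 = verts[tris[t + 1]];
        const Vec3& p2 = verts[tris[t + 2]];
        Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
        Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
        Vec3 fn = Cross(e1, e2);   // face normal of this triangle
        for (int k = 0; k < 3; ++k)
        {
            Vec3& v = n[tris[t + k]];
            v.x += fn.x;  v.y += fn.y;  v.z += fn.z;
        }
    }
    for (Vec3& v : n)              // normalize each accumulated sum
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0.0f) { v.x /= len;  v.y /= len;  v.z /= len; }
    }
    return n;
}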
The standard DirectX lighting model lights surfaces per vertex. This means that the lighting math is computed for each vertex. Because each triangle has three vertices, the device must interpolate the shaded values across each triangle. The combination of averaged normals, as shown in the diagram, and interpolated shading across each surface creates the smooth shading shown in most of the book's renderings.
Because you want to add this new piece of information about normals to your vertices, you have to expand your vertex format. You do this by redefining your FVF with the D3DFVF_NORMAL flag. This, along with the position and color information, makes up the minimum format for rendering lit surfaces. Once you revise your vertex format, you can begin talking about how you actually render the triangles themselves.
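Before turning to triangle rendering, a reconstructed illustration of such an expanded format follows; the structure name is assumed, as the book's exact listing is not reproduced here.

// Reconstructed illustration: position, normal and diffuse color.
struct LIT_VERTEX
{
    float x, y, z;      // position
    float nx, ny, nz;   // vertex normal used by the lighting model
    DWORD diffuse;      // diffuse color
};
#define D3DFVF_LITVERTEX (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE)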
Processing vertices can be expensive if you have too many of them, so the challenge of rendering surfaces becomes how you represent a given surface with a set of triangles and how to do it in the most efficient manner.
It turns out that it is not so easy. For instance, if you are modeling a cylinder, the sides of that cylinder must consist of a collection of flat sides. If you use too few sides, the cylinder appears blocky with visible edges. If you use too many sides, you might end up using more data than is actually visible to the eye, causing unnecessary processing. This first problem is usually one an artist must solve using a modeling program, the constraints of the given project, and a little experimentation. Once you know what your geometry is, how do you render it in the optimal way?
You know that vertices are stored in vertex buffers. You also know that you can draw the contents of the vertex buffer by calling DrawPrimitive. You have been using this to draw sets of vertices, but now it's time to talk about triangles. You can draw three types of triangle primitives: the triangle list, the triangle fan, and the triangle strip. Let's look at each type individually and explore the pros and cons of each.
Rendering with Triangle Lists
The triangle list is the easiest of the triangle primitives to understand. Each triangle is represented in the vertex buffer by a set of three vertices. The first three vertices represent the first triangle, the second three vertices represent the second triangle, and so on.
You do this with the following call to DrawPrimitive:
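(The listing itself is not annexed; the call below is a reconstruction assuming a Direct3D 9 device pointer named pDevice.)

// Draws 2 triangles from the 6 vertices already set up in the vertex buffer.
pDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);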
Note that the number of primitives specified in the third parameter is the number of triangles drawn (2), not the number of vertices used (6). This is the easiest way to represent triangles, but it is not the most efficient, because shared vertices cannot be reused and must be duplicated in the buffer.
Rendering with Triangle Fans
One way of reusing vertices is to use triangle fans. A triangle fan uses the first vertex as a shared vertex for the rest of the vertices.
This is the first example of reusing vertices, and the following code draws two triangles:
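(Again a reconstruction, assuming the same pDevice pointer.)

// A fan of 4 vertices still yields 2 triangles, so the primitive count is 2.
pDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);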
Notice that when drawing two triangles, you still specify two primitives even though the number of vertices used drops from six to four. However, this is not terribly useful because it only applies well to circular or fan-shaped objects. Although you can use triangle fans to produce rectangular shapes, it's usually not the easiest solution. A more general solution is a triangle strip.
Rendering with Triangle Strips
Triangle strips provide a way to reuse vertices by rendering long strips in sequences.
Because vertices are reused, this is a better way of drawing sets of triangles than the triangle list. The code to do this is the same as earlier, with the different primitive type:
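(Reconstructed as before, assuming the same pDevice pointer.)

// With a strip, every vertex after the first two adds one more triangle.
pDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);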
The important thing to remember about strips is that the order matters. Because every new vertex is coupled with the previous two, you need to make sure that the order makes sense.
Another thing to consider with triangle strips is that sharing vertices does have some drawbacks. For instance, at a hard-edged corner, each side has a different surface normal. However, the shared vertex can have only one normal vector. This presents a problem because an averaged normal vector doesn't produce the correct hard edge for the lighting. One way to work around this is to create degenerate triangles. A degenerate triangle is not really visible, but provides a way to transition between vertices by smoothing the normals around the corner. For example, the two sides of the corner have different surface normals, so instead of having the two sides share the same vertices, one can insert a thin third face between them. If this face were larger and actually visible, it would show the effect of the different normals, but because it is extremely thin, you never see it. It is not meant to be visible, only to provide a transition between the faces.
One last thing to consider is that strips are usually not easy to derive for complex models. There are utilities for breaking models into efficient strips, but they can sometimes complicate the authoring process, and the techniques are not perfect. In the sample code for this embodiment, it is easy to create strips for simple geometric shapes, but the task becomes harder for organic or complex objects such as characters or vehicles. So you have to look for ways to get the vertex reuse of strips and fans without the complication of authoring strips. And again, you can do that.
The D3DXCreateTextureFromFileEx function exposes the parameters from CreateTexture along with some new ones. When you set the width and height, a value of D3DX_DEFAULT tells D3DX to use the size of the source image. The filter parameters describe how the image is to be filtered when it is being resized to fit the texture or to build mip maps. If a color key is specified, that color is transparent in the loaded texture. You can use the D3DXIMAGE_INFO structure to retrieve information about the source image. Finally, you can use the palette structure to set a palette. Because you are using 32-bit textures, this parameter should be set to NULL.
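A hedged illustration of such a call follows; the file name and variable names are assumed for this sketch only.

LPDIRECT3DTEXTURE9 pTexture = NULL;
D3DXIMAGE_INFO info;
D3DXCreateTextureFromFileEx(
    pDevice, "scan_texture.bmp",
    D3DX_DEFAULT, D3DX_DEFAULT,   // width/height: use the source image size
    D3DX_DEFAULT,                 // build a full mip map chain
    0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
    D3DX_FILTER_LINEAR, D3DX_FILTER_LINEAR,   // resize and mip filters
    0,                            // color key: 0 disables transparency
    &info,                        // receives information about the source image
    NULL,                         // palette: NULL for 32-bit textures
    &pTexture);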
The D3DX texture creation functions are capable of reading several different file formats, but remember that the amount of texture memory used by the texture depends on the pixel format of the texture, not the size of the file. For instance, if you load a JPEG file as a texture, chances are that the texture will take up much more memory than the size of the JPEG file.
The D3DX functions create the new texture in managed memory. They also try to create a valid texture size for the image. For instance, if the image is 640×480, it might try to create a 1,024×512 texture to satisfy the powers-of-two requirement, or it might try to create a 256×256 texture to satisfy a size limitation of the hardware. In either case, the image is stretched to fill the created texture. This can be advantageous because you are almost guaranteed that you can load images of nearly any size, but stretching can produce artifacts or other undesirable side effects. The best way to prevent this is to size textures appropriately when you are creating the files. That way, you can get the best quality textures and use space as efficiently as possible.
How to create the texture was discussed above, but the texture isn't really worth much if you can't use it with your vertices. So far, the rendering you have done has used simple colored triangles. This is because your vertex format has included only color information. To use textures, you need to augment the vertex format with information about how the texture will be mapped onto the geometry. You do this with texture coordinates.
Texture coordinates map a given vertex to a given location in the texture. Regardless of width and height, locations in the texture range from 0.0 to 1.0 and are typically denoted with u and v. Therefore, if you want to draw a simple rectangle displaying the entire texture, you can set the vertices with texture coordinates (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), where the first set of coordinates is the upper-left corner of the texture and the last set is the lower-right corner. In this case, the shape of the texture on the screen depends on the vertices, not the texture dimensions. For instance, if you have a 128×128 texture, but the vertices are set up to cover an entire 1,024×768 screen, the texture is stretched to cover the entire rectangle. In the general case, textures are stretched and interpolated between the texture coordinates on the three vertices of a triangle.
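For instance, a sketch of such a rectangle (structure and variable names assumed), ordered for a triangle strip, might read:

struct TEX_VERTEX
{
    float x, y, z;   // position
    float u, v;      // texture coordinates
};
// The full texture maps onto the quad: (0,0) upper left to (1,1) lower right.
TEX_VERTEX quad[4] =
{
    { -1.0f,  1.0f, 0.0f, 0.0f, 0.0f },   // upper left
    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f },   // upper right
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f },   // lower left
    {  1.0f, -1.0f, 0.0f, 1.0f, 1.0f },   // lower right
};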
Texture coordinates are not limited to the values of 0.0 or 1.0. Values less than 1 index into the corresponding location in the texture. A diagram can show a texture mapped using several different values. In these examples, every piece of data is the same except for the texture coordinates.
Texture coordinates are not limited to the range of 0.0 to 1.0 either. In the default case, values greater than 1 result in the texture being repeated between the vertices. In the next chapter, you'll look at some ways that you can change the repeating behavior, but repeating the texture is the most common behavior. A diagram can show how you can use this to greatly reduce the size of your texture if the texture is a repeating pattern. Imagine a checkerboard that fills the screen for a simple game of checkers. You can create a large texture that corresponds to the screen size, but it is better to have a small texture and let the device stretch it for you. Better yet, because of the nature of a checkerboard, you can have a very small texture that is a small portion of the board and then repeat it. By doing this, the texture is 1/16 the size of the full checkerboard pattern and a lot smaller than the image that appears on the screen. This reduces the amount of data that needs to move through the pipeline.
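Reusing the TEX_VERTEX structure sketched above, coordinates running to 4.0 would tile the small texture four times in each direction across the quad (assuming the default wrap addressing mode):

TEX_VERTEX board[4] =
{
    { -1.0f,  1.0f, 0.0f, 0.0f, 0.0f },
    {  1.0f,  1.0f, 0.0f, 4.0f, 0.0f },
    { -1.0f, -1.0f, 0.0f, 0.0f, 4.0f },
    {  1.0f, -1.0f, 0.0f, 4.0f, 4.0f },
};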
These are just some simple examples of how texture coordinates work, but the concepts hold true in less straightforward cases. If you create a triangle shaped like a tiny sliver and you map the texture onto that, the texture is stretched and pulled to cover the triangle. The next chapter talks a little more about how the device processes the texture when it is being stretched.
Now that you have looked at how texture coordinates work, let's look at how to add them to your vertex format. A device can use up to eight different textures (although this might be limited by the specific hardware you're using). The following FVF definition defines your vertex as having one set of texture coordinates. D3DFVF_TEX1 is used for one texture, D3DFVF_TEX2 is used for two, and so on:
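(The definition itself is not annexed; the reconstruction below uses an assumed structure name.)

struct TEXTURED_VERTEX
{
    float x, y, z;   // position
    DWORD color;     // diffuse color
    float u, v;      // one set of 2D texture coordinates
};
#define D3DFVF_TEXTUREDVERTEX (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1)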
So far, the discussion has been limited to 2D textures because those are the most widely used, but it is possible to have a 1D texture, which is just like any other texture, but with a height of 1. 1D textures can be useful with vertex or pixel shaders. The format for a vertex with a 1D texture coordinate follows. In this case, the D3DFVF_TEXCOORDSIZEx flag tells the device there is only one texture coordinate:
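(Reconstructed with an assumed structure name.)

struct TEX1D_VERTEX
{
    float x, y, z;   // position
    float u;         // the single 1D texture coordinate
};
#define D3DFVF_TEX1DVERTEX (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_TEXCOORDSIZE1(0))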
Although the present invention has been described in relation to particular embodiments thereof, many other variations, modifications and other uses will become apparent to those skilled in the art. For example, such variations might include, but are not limited to, using the presently disclosed method to produce test target and scan images of inspection signals generated by all types of NDT/NDI instruments. It is preferred, therefore, that the present invention be limited not by the specific disclosure herein, but only by the appended claims.