Embodiments of the invention relate generally to borescopes and more particularly to a borescope having an accurate position tracking function for a detection head of the borescope in a mechanical device to be detected, and a navigating method thereof.
Borescopes are commonly used in the visual inspection of mechanical devices such as aircraft engines, industrial gas turbines, steam turbines, diesel engines, and automotive and truck engines. Gas and steam turbines require particularly close internal inspection because of safety and maintenance requirements.
A flexible borescope is more commonly used in inspecting a complex interior surface of the mechanical device. In some cases, a detection head is assembled at a distal end of a flexible insertion tube of the borescope. More specifically, the detection head may include a miniature video camera and a light for making it possible to capture video or still images deep within dark spaces inside of the mechanical device. As a tool for remote visual inspection, the ability to capture video or still images for subsequent inspection is a huge benefit. A display in a hand-held operation apparatus shows the camera view, and a joystick control or a similar control is operated to control or steer the motion of the detection head for a full inspection of the interior elements of the mechanical device.
However, a full inspection with a borescope usually takes several days to cover every region of interest (ROI) in the detected mechanical device. One of the challenges that causes this long inspection time is the difficulty of navigating the detection head (borescope tip) to a given ROI and pointing the miniature video camera of the detection head in a desired direction. The navigating view is the most important information available to the operator for judging the position of the detection head. Under some circumstances, an accurate pose (position and orientation) of the detection head in the detected mechanical device is desired in order to predict potential failure at the detected location and to serve as a reference for the operator's further operation.
For these and other reasons, there is a need for providing a new borescope and a navigating method thereof which can capture accurate position information of a detection head of the borescope, to better navigate the detection head.
In accordance with an embodiment of the invention, a borescope is provided. The borescope includes an insertion tube, a first image processor, a model store unit, a pose calculator, a second image processor, a navigation image calculator and a display. The insertion tube includes a detection head and at least one sensor for receiving signals in the insertion tube and generating sensed signals. The first image processor is for calculating a first image based on first image signals captured by the detection head. The model store unit is for storing a predetermined model of a mechanical device to be detected. The pose calculator is for calculating an initial pose of the detection head based on the sensed signals. The second image processor is for adjusting the initial pose to a navigation pose until a difference between the first image and a second image calculated based on the navigation pose and the predetermined model falls in an allowable range. The navigation image calculator is for calculating a navigation image based on the navigation pose and the predetermined model. The display is for showing the navigation image.
In accordance with another embodiment of the invention, a method for navigating a detection head of a borescope is provided. The method includes receiving first image signals from the detection head and sensed signals from at least one sensor. The method includes calculating an initial pose of the detection head based on the sensed signals. The method includes calculating a first image based on the first image signals and an initial second image based on the initial pose and a predetermined model. The method includes calculating an initial difference between the first image and the initial second image. The method includes adjusting the initial pose to a navigation pose gradually until a difference between the first image and a second image calculated based on the navigation pose and the predetermined model falls in an allowable range. The method includes calculating a navigation image based on the predetermined model and the navigation pose. The method includes showing the navigation image.
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Unless defined otherwise, technical and scientific terms used herein have the same meaning as is commonly understood by one of ordinary skill in the art to which this invention belongs. The terms “first”, “second”, and the like, as used herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Also, the terms “a” and “an” do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Unless otherwise noted, such terms are merely used for convenience of description and are not limited to any one position or spatial orientation.
Referring to
In some non-limiting embodiments, the insertion tube 24 includes a detection head 242 at a distal end thereof. The detection head 242 may include a miniature video camera (not labeled) for capturing video or still images of the internal elements of the mechanical device 70. In some embodiments, the detection head 242 may include a light (not labeled) which makes it possible to capture video or still images deep within dark spaces of the mechanical device 70.
The hand-held operation apparatus 22 includes an operation part 222 (e.g., a joystick control) for at least controlling or steering the motion of the detection head 242. The hand-held operation apparatus 22 further includes a display 220 which may include a first display part 224 for showing a corresponding video or still image 313 of an internal element 501 captured by the miniature video camera, and a second display part 226 for showing a navigation image 3251 or 3252 of the detection head 242 in the mechanical device 70. In some embodiments, a construction 3253 of the insertion tube 24 and/or a navigation pose (e.g., position T and orientation R) of the detection head 242 are shown in the navigation image 3251. In some embodiments, only a point 3254 representing the detection head 242 and the information (T, R) of the detection head 242 are shown in the navigation image 3252. The navigation images 3251 and 3252 can be shown in two-dimensional or three-dimensional views. The position T and the orientation R are three-dimensional vectors, for example, T=[Tx′, Ty′, Tz′], R=[Rx′, Ry′, Rz′] in the spatial coordinate system (x′, y′, z′). Although a hand-held operation apparatus 22 is shown for purposes of example, any type of apparatus enabling the functions described herein may be used.
Usually, an accurate navigation pose of the detection head 242 in the mechanical device 70 to be detected is desired in order to predict potential failure at the detected position and provide reference for the operator's subsequent operation.
Referring to
As an example, the shape sensing cable 246 includes a cross-shaped flexible support rod 2462 and four sensing leads 2463 arranged respectively at the four corners of the support rod 2462. In one embodiment, the support rod 2462 may be made of, but is not limited to, glass fiber or carbon fiber. The shape of the support rod 2462 can be changed, and the number of the sensing leads 2463 can be adjusted in other embodiments.
Referring to
In some embodiments, each sensing point 2464 may include a strain gauge or another piezoelectric-material-based sensor. The sensor at each sensing point 2464 receives a strain change signal and generates sensed signals for calculating the shape of the insertion tube 24 based on appropriate algorithms.
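By way of a non-limiting illustration only, and not as the specific algorithm of this disclosure, the idea of recovering shape from bending strain can be sketched in two dimensions: each segment's curvature follows from its measured strain, and integrating the heading along the tube locates the tip. The function name `tip_position_2d`, the lead offset `d`, and the uniform segment length are assumptions introduced for this sketch.

```python
import math

def tip_position_2d(strains, d, segment_len):
    """2-D sketch: curvature of each segment from its bending strain
    (kappa = strain / d, where d is the sensing lead's offset from the
    neutral axis), with the heading integrated along the tube to
    estimate the detection-head position."""
    theta = x = y = 0.0
    for eps in strains:
        kappa = eps / d                   # bending strain -> segment curvature
        theta += kappa * segment_len      # heading change over this segment
        x += segment_len * math.cos(theta)
        y += segment_len * math.sin(theta)
    return x, y, theta
```

With all strains zero the sketch reduces to a straight tube, which is a quick sanity check of the integration.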
In other embodiments, the sensor 2465 includes an accelerometer and a gyroscope which are attached to the detection head 242. The pose of the detection head 242 can then be calculated from the sensed signals of the accelerometer and the gyroscope by implementing inertial navigation algorithms.
Referring to
For implementing the video or image viewing function, the processor 300 includes a detection head controller 302 and a first image processor 304. The detection head controller 302 is used to receive control signals from the operation part 222 and control the detection head 242 in the detection process, such as adjusting the imaging angle, forward direction, and lighting grade.
After capturing the image of the internal element 501 as shown in
The first image processor 304 further includes a first image calculator 305 for calculating a first image F1(x, y) 312 based on the first image signals 311. Herein, F1(x, y) is a function in the planar coordinate system (x, y). Referring to
Referring back to
The model store unit 306 is used to store a predetermined model 318 which is determined according to the detected mechanical device 70. Namely, the configuration of the predetermined model 318 is the same as the configuration of the detected mechanical device 70. The model store unit 306 may store many models corresponding to different kinds of mechanical devices to be detected. In some embodiments, the predetermined model 318 may be a two-dimensional or a three-dimensional model.
The pose calculator 307 is used to receive the sensed signals 316 from the at least one sensor 2465 and calculate an initial pose 317 of the detection head 242 based on the sensed signals 316. The initial pose 317 includes an initial position T1 and an initial orientation R1 of the detection head 242 in the detected device 70. Usually, the initial pose (T1, R1) 317 of the detection head 242 is not accurate enough, because error accumulates with the inserted length of the insertion tube 24 in the mechanical device 70. Therefore, the initial pose (T1, R1) 317 needs to be adjusted in order to navigate the detection head 242 more accurately.
The second image processor 314 is used to gradually adjust the initial pose (T1, R1) 317 to a navigation pose (Tnav, Rnav) 324 until a difference between the first image F1(x, y) 312 and a second image F2Tnav, Rnav (x, y) 322 falls in an allowable range. Herein the second image F2Tnav, Rnav (x, y) 322 is calculated based on the navigation pose (Tnav, Rnav) 324 and the predetermined model 318. In some embodiments, a second image F2Tnav, Rnav (x′, y′, z′) in the spatial coordinate system (x′, y′, z′) can be directly calculated based on the navigation pose (Tnav=[Tx′, Ty′, Tz′], Rnav=[Rx′, Ry′, Rz′]) and the three-dimensional predetermined model 318. Then, after a conversion from the spatial coordinate system (x′, y′, z′) to the planar coordinate system (x, y), the second image F2Tnav, Rnav (x, y) 322 in the planar coordinate system (x, y) can be calculated.
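One non-limiting way to sketch the conversion from the spatial coordinate system (x′, y′, z′) to the planar coordinate system (x, y) is a pinhole-camera projection of model points under the pose (T, R). The Euler-angle rotation order, the `focal` parameter, and the function names below are assumptions of this illustration, not details taken from the disclosure.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in radians (Z-Y-X order assumed here)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_to_planar(points_xyz, T, R, focal=1.0):
    """Transform model points from the spatial frame into the camera frame for
    pose (T, R), then pinhole-project them to the planar (x, y) frame."""
    rot = rotation_matrix(*R)
    cam = (np.asarray(points_xyz, float) - np.asarray(T, float)) @ rot  # world -> camera
    return focal * cam[:, :2] / cam[:, 2:3]                             # divide by depth
```

A point on the optical axis projects to the image center, and lateral offsets shrink with depth, which matches the expected pinhole behavior.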
In a more specific application, the second image processor 314 includes a second image calculator 308 and an image analysis unit 309. Referring to
In some embodiments, the image analysis unit 309 is used to calculate an initial difference E(T1, R1) between the first image F1(x, y) 312 and the initial second image F2T1, R1 (x, y) 3221. The image analysis unit 309 is further used to calculate an adjusted difference E(Tk+1, Rk+1) (k≧1) between the first image F1(x, y) 312 and the adjusted second image F2Tk+1, Rk+1 (x, y) 322, such as F2T2, R2 (x, y) 3222, F2T3, R3 (x, y) 3223 and F2T4, R4 (x, y) 3224. The image analysis unit 309 is further used to calculate a variation ΔEk between the initial difference E(T1, R1) and the adjusted difference E(Tk+1, Rk+1), and to determine whether the variation ΔEk falls in the allowable range. If the variation ΔEk falls outside the allowable range, the image analysis unit 309 continues to gradually adjust the initial pose (T1, R1) 317 to the adjusted pose (Tk+1, Rk+1) 323 until the variation ΔEk falls in the allowable range. Once the variation ΔEk falls in the allowable range, the corresponding adjusted pose (Tk+1, Rk+1) 323 is outputted as the navigation pose (Tnav, Rnav) 324, such as (T4, R4) shown in
In other embodiments, the image analysis unit 309 is used to determine whether the difference E(T1, R1) between the first image 312 and the initial second image 3221, or the difference E(Tk+1, Rk+1) between the first image 312 and the adjusted second image 3222, 3223, 3224, falls in the allowable range. If the difference E(T1, R1) or E(Tk+1, Rk+1) falls outside the allowable range, the image analysis unit 309 gradually adjusts the initial pose (T1, R1) 317 to the adjusted pose (Tk+1, Rk+1) 323 until the difference falls in the allowable range. Once the difference E(T1, R1) or E(Tk+1, Rk+1) falls in the allowable range, the initial pose (T1, R1) 317 or the corresponding adjusted pose (Tk+1, Rk+1) 323 is outputted as the navigation pose (Tnav, Rnav) 324, such as (T4, R4) shown in
The navigation image calculator 310 is used to receive the navigation pose 324 and the predetermined model 318, and then calculate a navigation image 325 based on the navigation pose 324 and the predetermined model 318. The illustrated navigation image 325 may be a two-dimensional image or a three-dimensional image. The navigation image 325 is then shown in the second display part 226 of the display 220 to predict potential failure at the detected location and provide reference information to the operator for subsequent operation. In some embodiments, the navigation pose 324 is also shown in the display 220 to provide more reference information.
Referring to
In some embodiments, at least one adjustment process is implemented in the second image processor 314. The difference E(Tk, Rk) between the first image F1 (x, y) 312 and the second image F2Tk, Rk (x, y) 322 can be calculated according to the following equation:
E(Tk, Rk) = Σn=1N |F1(x, y) − F2Tk, Rk(x, y)|n (k≧1)  (1),
wherein the difference E(Tk, Rk) is calculated as an accumulation of the error between each point 1, 2, 3, . . . , n of F1(x, y) and the corresponding point 1′, 2′, 3′, . . . , n′ of F2Tk, Rk (x, y). In other embodiments, E(Tk, Rk) is any function that can be used to describe the error between the first image F1(x, y) 312 and the second image F2Tk, Rk (x, y) 322.
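Equation (1) can be sketched in code as a plain accumulation of absolute per-point errors. This is a minimal illustration only; the helper name is hypothetical.

```python
import numpy as np

def image_difference(f1, f2):
    """Equation (1) sketch: accumulate the absolute per-point error between the
    captured first image F1(x, y) and the model-rendered second image F2(x, y)."""
    return float(np.sum(np.abs(np.asarray(f1, float) - np.asarray(f2, float))))
```

The value is zero only when the two images agree at every point, so driving it toward zero corresponds to aligning the rendered pose with the true pose.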
As an example, the first image F1(x, y) and the initial second image F2T1, R1 (x, y) 3221 are calculated as shown in the (a) part of
The image analysis unit 309 is used to adjust the initial pose (T1, R1) 317 to an adjusted pose (T2, R2) 323. The second image F2T1, R1 (x, y) 3221 can be re-calculated to F2T2, R2 (x, y) 3222 based on the adjusted pose (T2, R2) 323 as shown in the (b) part of
Then the image analysis unit 309 is used to adjust the initial pose (T1, R1) 317 to another adjusted pose (T3, R3) 323. The second image F2T1, R1 (x, y) 3221 can be re-calculated to F2T3, R3 (x, y) 3223 as shown in the (c) part of
Then the image analysis unit 309 is used to adjust the initial pose (T1, R1) 317 to another adjusted pose (T4, R4) 323. The second image F2T1, R1 (x, y) 3221 can be re-calculated to F2T4, R4 (x, y) 3224 as shown in the (d) part of
In some embodiments, the adjusted pose (Tk+1, Rk+1) 323 is calculated by adding a compensation position ΔTk and a compensation orientation ΔRk to the initial position T1 and the initial orientation R1 respectively as the following equations.
Tk+1=T1+ΔTk(k≧1) (2),
Rk+1=R1+ΔRk(k≧1) (3).
In some embodiments, the step length of at least one of ΔTk and ΔRk is fixed and the direction of the at least one of ΔTk and ΔRk is variable. For example, ΔT1=[0.005, −0.0005, 0.0005], ΔR1=[0.5°, −0.5°, 0.5°] and ΔT2=[0.005, −0.0005, −0.0005], ΔR2=[0.5°, −0.5°, −0.5°]. In some embodiments, ΔTk and ΔRk are variable. In some embodiments, ΔTk and ΔRk are calculated by a convergence algorithm, such as the Levenberg-Marquardt algorithm, for accelerating the convergence of the difference E(Tk, Rk) toward zero.
In some embodiments, the adjusted difference E(Tk+1, Rk+1) is always expected to be less than the initial difference E(T1, R1). If E(Tk+1, Rk+1) is larger than E(T1, R1), the adjustment is not in the right direction. In this case, the adjustment should change direction, for example, ΔTk=(0.0005, 0.0005, 0.0005) and ΔTk+1=(−0.0005, 0.0005, 0.0005). If E(Tk+1, Rk+1) is less than E(T1, R1) while ΔEk remains outside the allowable range, the adjustment is in the right direction, and it should continue or change the step length, for example, ΔTk=(−0.0005, 0.0005, 0.0005) and ΔTk+1=(−0.0001, 0.0001, 0.0001).
Referring to
At block 601, during a detection operation, first image signals 311 are received from the detection head 242, and sensed signals 316 are received from the at least one sensor 2465.
For implementing the video or image viewing function, steps 621 and 623 are further included. At block 621, a corresponding video or still image 313 is calculated based on the first image signals 311. At block 623, the corresponding video or still image 313 is shown in the display 220.
For implementing the navigation function, blocks 603 to 613 are included.
At block 603, an initial pose (T1, R1) 317 of the detection head 242 is calculated based on the sensed signals 316.
At block 605, a first image 312 is calculated based on the first image signals 311, and an initial second image 322 is calculated based on the initial pose (T1, R1) 317 and the predetermined model 318.
At block 607, an initial difference E (T1, R1) between the first image F1(x, y) 312 and the initial second image F2T1, R1 (x, y) 3221 is calculated.
At block 609, the initial pose (T1, R1) 317 is gradually adjusted to a navigation pose (Tnav, Rnav) 324 until a corresponding difference E(Tnav, Rnav) between the first image F1(x, y) 312 and the second image F2Tnav, Rnav (x, y) 322 falls in an allowable range.
At block 611, a navigation image 325 is calculated based on the predetermined model 318 and the navigation pose (Tnav, Rnav) 324.
At block 613, the navigation image 325 is shown on the display 220.
Referring to
At block 6091, the initial pose (T1, R1) 317 is adjusted to an adjusted pose (Tk+1, Rk+1) (k≧1) 323.
At block 6092, the adjusted difference E(Tk+1, Rk+1) is calculated based on the first image F1(x, y) 312 and the adjusted second image F2Tk+1, Rk+1 (x, y) 322. The adjusted second image F2Tk+1, Rk+1 (x, y) 322 is calculated based on the adjusted pose (Tk+1, Rk+1) 323.
At block 6093, a variation ΔEk between the adjusted difference E(Tk+1, Rk+1) and the initial difference E(T1, R1) is calculated.
At block 6094, it is determined whether the variation ΔEk falls in a predetermined range [El, Eh]. If not, the process goes back to block 6091; if yes, the process goes to block 6095.
At block 6095, the adjusted pose (Tk+1, Rk+1) 323 is outputted as the navigation pose (Tnav, Rnav) 324.
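Blocks 6091 to 6095 can be sketched as the following loop, in which the hypothetical callables `adjust_fn` and `difference_fn` stand in for the image analysis unit's adjustment and comparison steps; the loop exits when the variation ΔEk enters the allowable range.

```python
def navigate_by_variation(initial_pose, adjust_fn, difference_fn,
                          e_low, e_high, max_iter=500):
    """Sketch of blocks 6091-6095: adjust the pose until the variation dE
    between the adjusted difference and the initial difference falls in
    the allowable range [e_low, e_high]."""
    e_initial = difference_fn(initial_pose)
    pose = initial_pose
    for _ in range(max_iter):
        pose = adjust_fn(pose)                  # block 6091: adjust the pose
        dE = difference_fn(pose) - e_initial    # blocks 6092-6093: variation dE
        if e_low <= dE <= e_high:               # block 6094: in allowable range?
            break
    return pose                                 # block 6095: output navigation pose
```

For instance, with a scalar stand-in pose and a difference that shrinks by one per adjustment, the loop terminates exactly when the accumulated reduction reaches the allowable range.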
Referring to
At block 6096, it is determined whether the initial difference E(T1, R1) or the adjusted difference E(Tk+1, Rk+1) (k≧1) falls in a predetermined range [El, Eh]. If not, the process goes to block 6097; if yes, the process goes to block 6099.
At block 6097, the initial pose (T1, R1) 317 is adjusted to an adjusted pose (Tk+1, Rk+1) 323.
At block 6098, the adjusted difference E(Tk+1, Rk+1) is calculated based on the first image F1(x, y) 312 and the adjusted second image F2Tk+1, Rk+1 (x, y) 322. The adjusted second image F2Tk+1, Rk+1 (x, y) 322 is calculated based on the adjusted pose (Tk+1, Rk+1) 323. The process then goes back to block 6096.
At block 6099, the initial pose (T1, R1) 317 or the adjusted pose (Tk+1, Rk+1) 323 is outputted as the navigation pose (Tnav, Rnav) 324.
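Blocks 6096 to 6099 test the difference E itself, rather than the variation ΔEk, against the allowable range. A minimal sketch with the same kind of hypothetical `adjust_fn` and `difference_fn` callables follows.

```python
def navigate_by_difference(initial_pose, adjust_fn, difference_fn,
                           e_low, e_high, max_iter=500):
    """Sketch of blocks 6096-6099: output the pose once the difference E
    falls in the allowable range [e_low, e_high]; otherwise keep adjusting."""
    pose = initial_pose
    for _ in range(max_iter):
        e = difference_fn(pose)        # block 6098 (or the initial difference)
        if e_low <= e <= e_high:       # block 6096: difference in range?
            return pose                # block 6099: output navigation pose
        pose = adjust_fn(pose)         # block 6097: adjust the pose
    return pose
```

Because the range check precedes the adjustment, an initial pose that already satisfies the range is output unchanged, matching the "initial pose or adjusted pose" wording of block 6099.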
Since the initial pose (T1, R1) 317 is adjusted gradually, the corresponding second image 322 is adjusted accordingly, and the difference between the first image 312 and the second image 322 is reduced gradually. An accurate navigation image 325 can thus be achieved through the above method 600, and the operator can accurately identify the detection channel/point of interest by monitoring the navigation image 325 of the insertion tube 24 or the location of the detection head 242 in the detected mechanical device 70.
It is to be understood that a skilled artisan will recognize the interchangeability of various features from different embodiments and that the various features described, as well as other known equivalents for each feature, may be mixed and matched by one of ordinary skill in this art to construct additional systems and techniques in accordance with principles of this disclosure. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Further, as will be understood by those familiar with the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosures and descriptions herein are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Number | Date | Country | Kind |
---|---|---|---|
201410180922.5 | Apr 2014 | CN | national |