The present disclosure relates to an ultrasonic image sensor configured to generate an aggregate two-dimensional image from a plurality of scan locations, and to a method of operating a probe to generate an aggregate two-dimensional image from a plurality of scan locations.
Ultrasonic image sensors have been used in various material testing or measurement applications. For example, ultrasonic imaging has been used in non-destructive testing applications such as the testing of the properties of manufactured materials (e.g., testing for corrosion in aircraft wings). Ultrasonic imaging has further been used in medical imaging applications such as human soft tissue diagnosis.
Known ultrasonic image sensors used to perform inspection or testing are, however, limited to providing a static image for each separate scanning operation, each image being limited in area by the capabilities of the sensor used to perform the scanning operation. In the area of non-destructive inspection (NDI), it is common for the sensor or probe to be smaller than the area under inspection (e.g., an area of damage). For example, in the case of corrosion or welds in pipelines, or damage to an airplane fuselage, the damage can encompass several square inches, while the sensor used in the inspection has an inspection area that is only an inch or two in size.
An exemplary embodiment of the present disclosure provides an ultrasonic image sensor which includes an ultrasonic source configured to output ultrasound, and a probe for emitting the ultrasound onto a scan area. The probe is moveable relative to at least two adjacent scan locations of the scan area such that the ultrasound will be focused on each of the at least two scan locations as the probe is moved relative to the scan area to provide an array of scanned images. The exemplary ultrasonic image sensor also includes an ultrasonic, two-dimensional array receiver configured to receive ultrasound reflected from each of the at least two scan locations. In addition, the exemplary ultrasonic image sensor includes a processing unit configured to generate, for a first of the two scan locations, a two-dimensional image of the first scan location based on an intensity of the reflected ultrasound from the first scan location, and to generate an aggregate two-dimensional image for the first scan location which integrates plural two-dimensional images generated using reflected ultrasound of the at least two scan locations based on a position of the probe relative to the at least two scan locations, respectively.
An exemplary embodiment of the present disclosure provides a method of operating an ultrasonic image sensor, in accordance with the exemplary embodiments described above. The exemplary method includes outputting ultrasound from a probe onto a scan area, moving the probe relative to at least two adjacent scan locations of the scan area such that the ultrasound will be focused on each of the at least two scan locations as the probe is moved relative to the scan area to provide an array of scanned images, and receiving ultrasound reflected from each of the at least two scan locations. In addition, the exemplary method includes generating, for a first of the two scan locations, a two-dimensional image of the first scan location based on an intensity of the reflected ultrasound from the first scan location, and generating an aggregate two-dimensional image for the first scan location which integrates plural two-dimensional images generated using reflected ultrasound of the at least two scan locations based on a position of the probe relative to the at least two scan locations, respectively.
An exemplary embodiment of the present disclosure also provides a non-transitory computer-readable medium that has tangibly recorded thereon a computer program that, when executed, causes a processor of an ultrasonic image sensor to perform operations including: (i) outputting ultrasound from a probe onto each scan location of a scan area over which the image sensor is moved, such that the ultrasound will be focused on each of the at least two scan locations as the image sensor is moved relative to the scan area to provide an array of scanned images; (ii) receiving ultrasound reflected from each of the at least two scan locations; (iii) generating, for a first of the two scan locations, a two-dimensional image of the first scan location based on an intensity of the reflected ultrasound from the first scan location; and (iv) generating an aggregate two-dimensional image for the first scan location which integrates plural two-dimensional images generated using reflected ultrasound of the at least two scan locations based on a position of the probe relative to the at least two scan locations, respectively.
An exemplary embodiment of the present disclosure provides an ultrasonic image sensor which includes an ultrasonic source configured to output ultrasound, and a probe for emitting the ultrasound onto a scan area. The probe is moveable relative to at least two adjacent scan locations of the scan area such that the ultrasound will be focused on each of the at least two scan locations as the probe is moved relative to the scan area to provide an array of scanned images. In addition, the ultrasonic image sensor includes an ultrasonic, two-dimensional array receiver configured to receive ultrasound transmitted through each of the at least two scan locations. The exemplary ultrasonic image sensor also includes a processing unit configured to generate, for a first of the two scan locations, a two-dimensional image of the first scan location based on an intensity of the ultrasound transmitted through the first scan location, and to generate an aggregate two-dimensional image for the first scan location which integrates plural two-dimensional images generated using ultrasound transmitted through the at least two scan locations based on a position of the probe relative to the at least two scan locations, respectively.
Additional refinements, advantages and features of the present disclosure are described in more detail below with reference to exemplary embodiments illustrated in the drawings, in which:
In principle, identical or similarly functioning parts are provided with the same reference symbols in the drawings.
The image sensor 100 also includes an ultrasonic source 120 that is configured to output ultrasound 122 as acoustic energy. In accordance with an exemplary embodiment, the ultrasonic source 120 may be a source transducer for generating the ultrasound. The ultrasonic source 120 may be formed from a piezoelectric material. The piezoelectric material may be composed of ceramic or polymer, or composites thereof. Any suitable piezoelectric ceramic- or polymer-containing material can be utilized. The material should be capable of generating a pulse of acoustic energy in response to an electrical stimulus. As such, the ultrasonic source 120 is in electrical communication with a device which provides an electrical pulse thereto (not shown). Optionally, the piezoelectric polymer-containing material is flexible and thus conformable to the surface of an object being imaged. According to an exemplary embodiment, the piezoelectric polymer-containing material includes polyvinylidene difluoride (PVDF). According to another exemplary embodiment, the piezoelectric polymer-containing material includes a copolymer of polyvinylidene difluoride, such as polyvinylidene difluoride-trifluoroethylene (PVDF-TrFE). According to an exemplary embodiment, the piezoelectric ceramic-containing material includes lead zirconate titanate (PZT).
The ultrasonic source 120 of the present disclosure is capable of emitting broadband acoustic energy. For example, the ultrasonic source 120 is capable of emitting acoustic energy with a frequency band of 0.1-20 MHz. Through material selection and/or physical design, one or more transducers can be provided capable of emitting the above-mentioned band.
According to the illustrated embodiment, the ultrasonic source 120 is positioned proximate to a first longitudinal end 122A of the probe 110, and emits ultrasound along an imaging path AL which corresponds to a longitudinal axis of the probe 110.
In the example of
The image sensor 100 also includes an ultrasonic, two-dimensional array receiver 130 and a processing unit 140. The two-dimensional array receiver 130 is configured to receive ultrasound reflected from each scan location over which the probe 110 is moved. As shown in the exemplary embodiment of
The two-dimensional array receiver 130 includes piezoelectric material that is configured to convert ultrasound acoustic energy which is incident thereon to electrical signals which can then be utilized to generate an appropriate output, such as an image. That is, ultrasound acoustic energy reflected from one of the scan locations and incident on the piezoelectric material is converted into electrical signals that can be processed by the processing unit 140. The two-dimensional array receiver 130 may include the processing circuitry and image processing techniques, including data acquisition, digital signal processing, and video/graphics hardware and/or software, as disclosed in U.S. Pat. No. 5,483,963, the disclosure of which is incorporated by reference herein in its entirety. According to an exemplary embodiment, the two-dimensional array receiver 130 can include any number of piezoelectric arrays that are known in the art. An array of PZT detectors, as described in the above-mentioned U.S. Pat. No. 5,483,963, can be used to form an imaging array. As additional examples, arrays of piezoelectric polyvinylidene difluoride (PVDF) polymers described in U.S. Pat. No. 5,406,163 or U.S. Pat. No. 5,283,438, the disclosures of which are hereby incorporated by reference in their entirety, can also be used.
In accordance with an exemplary embodiment, the two-dimensional array receiver 130 includes a plurality of sensors arranged in n rows and n columns, where n is greater than or equal to two. For example, the two-dimensional array receiver 130 may include a plurality of sensors arranged in 120 rows by 120 columns, such that 14,400 independent ultrasound receivers convert received ultrasound to pixel voltages.
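By way of non-limiting illustration, the conversion of the per-receiver pixel voltages into a two-dimensional image frame can be sketched as follows. The function name, the row-major readout order, and the min-max normalization are illustrative assumptions rather than features of the disclosure:

```python
import numpy as np

N = 120  # rows and columns of the array receiver (n x n, with n >= 2)

def frame_from_voltages(voltages):
    """Arrange the N*N per-receiver pixel voltages into a 2-D frame.

    `voltages` is assumed to be a flat, row-major sequence of N*N
    voltages read out from the two-dimensional array receiver.
    """
    v = np.asarray(voltages, dtype=float).reshape(N, N)
    # Normalize voltages to [0, 1] intensity for display; a real
    # system would instead calibrate against reference amplitudes.
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
```

With a 120 by 120 receiver, a single readout of 14,400 voltages yields one 120 by 120 intensity frame.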
In accordance with an exemplary embodiment, the ultrasonic source 120 may also be a two-dimensional array source configured to output the ultrasound two-dimensionally.
In the illustrated example of
In accordance with an exemplary embodiment, the ultrasonic source 120 and two-dimensional array receiver 130 may be combined into a single transceiver, as disclosed in U.S. Pat. No. 8,662,395, the entire disclosure of which is hereby incorporated by reference. In this configuration, the focusing mechanism 150 may include an electronic beamformer as disclosed in U.S. Pat. No. 8,662,395 for focusing the ultrasound output from the transceiver onto one of the scan locations on which the probe is moved, and/or focus the ultrasound respectively reflected from each of the scan locations relative to which the probe is moved onto the transceiver.
As illustrated in
In the exemplary embodiment of
The image sensor 100 may also include a first acoustic lens 210, as well as an optional second acoustic lens 220. The first acoustic lens 210 and the second acoustic lens 220, if present, are movably mounted on guides or rails 260 such that their position can be changed along the longitudinal direction. A suitable mechanism, such as a motor 250, can be provided to adjust the longitudinal position of the first acoustic lens 210 and the second acoustic lens 220. According to the present disclosure, additional lenses may be included in the image sensor 100.
As illustrated in
The first acoustic lens 210 and the second acoustic lens 220, if present, act to focus acoustic energy onto the two-dimensional array receiver 130. As described above with respect to
Since both the ultrasonic source 120 and the two-dimensional array receiver 130 of the image sensor 100 of
In the above-described configuration where the ultrasonic source 120 and the two-dimensional array receiver 130 are combined into a single transceiver, the focusing mechanism 150 may include an electronic beamformer, as described above. The electronic beamformer may be configured to focus the ultrasound output from the transceiver onto one of the scan locations on which the probe is moved, and/or focus the ultrasound respectively reflected from each of the scan locations relative to which the probe is moved onto the transceiver.
If programmable logic is used, such logic may execute on a commercially available processing platform or a special purpose device. A person having ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computers linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device. For instance, at least one processor device and a memory may be used to implement the above described embodiments.
A processor device as discussed herein may be a single processor, a plurality of processors, or combinations thereof. Processor devices may have one or more processor “cores.” The terms “computer program medium,” “non-transitory computer readable medium,” and “computer usable medium” as discussed herein are used to generally refer to tangible media such as a removable storage unit 318, and a hard disk installed in hard disk drive 312.
Various embodiments of the present disclosure are described in terms of the functions of the processing unit 140. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the present disclosure using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter.
The processor device 304 may be a special purpose or a general purpose processor device. The processor device 304 may be connected to a communication infrastructure 306, such as a bus, message queue, network, multi-core message-passing scheme, etc. The network may be any network suitable for performing the functions as disclosed herein and may include a local area network (LAN), a wide area network (WAN), a wireless network (e.g., WiFi), a mobile communication network, a satellite network, the Internet, fiber optic, coaxial cable, infrared, radio frequency (RF), or any combination thereof. Other suitable network types and configurations will be apparent to persons having skill in the relevant art. The processing unit 140 may also include a main memory 308 (e.g., random access memory, read-only memory, etc.), and may also include a secondary memory 310. The secondary memory 310 may include the hard disk drive 312 and a removable storage drive 314, such as an optical disk drive, a flash memory, etc.
The removable storage drive 314 may read from and/or write to the removable storage unit 318 in a well-known manner. The removable storage unit 318 may include a removable storage medium that may be read by and written to by the removable storage drive 314. For example, if the removable storage drive 314 is a universal serial bus (USB) port, the removable storage unit 318 may be a portable flash drive. In one embodiment, the removable storage unit 318 may be a non-transitory computer readable recording medium.
In some embodiments, the secondary memory 310 may include alternative means for allowing computer programs or other instructions to be loaded into the processing unit 140, for example, the removable storage unit 318 and an interface 320. Examples of such means may include a program cartridge and cartridge interface (e.g., as found in video game systems), a removable memory chip (e.g., EEPROM, PROM, etc.) and associated socket, and other removable storage units 318 and interfaces 320 as will be apparent to persons having skill in the relevant art.
Data stored in the processing unit 140 (e.g., in the main memory 308 and/or the secondary memory 310) may be stored on any type of suitable computer readable media, such as optical storage (e.g., a compact disc, digital versatile disc, Blu-ray disc, etc.) or magnetic tape storage (e.g., a hard disk drive). The data may be configured in any type of suitable database configuration, such as a relational database, a structured query language (SQL) database, a distributed database, an object database, etc. Suitable configurations and storage types will be apparent to persons having skill in the relevant art.
The processing unit 140 may also include a communications interface 324. The communications interface 324 may be configured to allow software and data to be transferred between the processing unit 140 and external devices. Exemplary communications interfaces 324 may include a modem, a network interface (e.g., an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via the communications interface 324 may be in the form of signals, which may be electronic, electromagnetic, optical, or other signals as will be apparent to persons having skill in the relevant art. The signals may travel via a communications path 326, which may be configured to carry the signals and may be implemented using wire, cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, etc.
Computer program medium and computer usable medium may refer to memories, such as the main memory 308 and secondary memory 310, which may be memory semiconductors (e.g., DRAMs, etc.). These computer program products may be means for providing software to the processing unit 140. Computer programs (e.g., computer control logic) may be stored in the main memory 308 and/or the secondary memory 310. Computer programs may also be received via the communications interface 324. Such computer programs, when executed, may enable the processing unit 140 to implement the present methods as discussed herein. In particular, the computer programs, when executed, may enable the processor device 304 to implement the operative functions of the image sensor as discussed herein. Accordingly, such computer programs may represent controllers of the processing unit 140. Where the present disclosure is implemented, at least in part, using software, the software may be stored in a non-transitory computer readable medium and loaded into the processing unit 140 using the removable storage drive 314, the interface 320, the hard disk drive 312, or the communications interface 324. Lastly, the processing unit 140 may also include a display interface 302 that outputs display signals to a display unit 330, e.g., an LCD screen, plasma screen, LED screen, DLP screen, CRT screen, etc. The display unit 330 can be a separate component connected to the probe 110 of the image sensor 100.
Returning to
In addition, the processing unit 140 is configured to generate an aggregate two-dimensional image for the first scan location (e.g., scan location 1 in
In operation, the processing unit 140 of the image sensor is configured to generate a two-dimensional image of each scan location over which the probe 110 is moved, based on an intensity of the reflected ultrasound from that respective scan location. As the probe 110 is moved to additional scan locations, the processing unit 140 is configured to generate an aggregate two-dimensional image which integrates the respective two-dimensional images generated for two or more of the scan locations. In the example of
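By way of non-limiting illustration, the integration of the per-location images into an aggregate image can be sketched as follows, assuming each frame's pixel offset within the scan area has already been derived from the position information. The simple overwrite-on-overlap rule used here is a simplification; the score-based pixel selection described elsewhere in this disclosure refines it:

```python
import numpy as np

def stitch(frames_with_offsets, area_shape):
    """Paste each scan-location frame into the aggregate image at the
    pixel offset derived from the probe's position; later frames
    simply overwrite earlier ones where scan locations overlap."""
    aggregate = np.zeros(area_shape)
    for frame, (row0, col0) in frames_with_offsets:
        h, w = frame.shape
        aggregate[row0:row0 + h, col0:col0 + w] = frame
    return aggregate
```

For two overlapping scan locations, the second frame's pixels replace the first frame's pixels throughout the overlap region.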
In practice, the probe 110 is configured to be moved relative to at least one of the scan locations a plurality of times before the probe is moved to another scan location. For instance, the processing unit 140, in concert with the ultrasonic source 120 and two-dimensional array receiver 130, is configured to generate two-dimensional images for each scan location at a rate of 30 frames per second, for example. Thus, in operation, the processing unit 140 is configured to generate a new two-dimensional image for at least one of the scan locations each time the probe is moved relative to that scan location so as to generate a plurality of two-dimensional images for that scan location. For example, with reference to
As noted above, the processing unit 140 includes a memory unit (e.g., main memory 308, secondary memory 310). The memory unit is configured to store therein each of the two-dimensional images generated for a corresponding one of the scan locations and the generated aggregate, two-dimensional image. Thus, the processing unit 140 can replace any of the two-dimensional images with a stored image to integrate into the aggregate, two-dimensional image.
In accordance with an exemplary embodiment, the processing unit 140 is configured to implement a pixel placement algorithm in determining which two-dimensional image, or portion of a two-dimensional image, of a specific scan location to include in the aggregate, two-dimensional image of a scan area.
In the example of
In accordance with an exemplary embodiment, the processing unit 140 can be configured to select which pixels from which two-dimensional image for a given scan location to integrate into the aggregate, two-dimensional image, based on a score attributed to pixels in the corresponding images. For example, with reference to
The processing unit 140 first finds the center column and row of the active area in the Cscan frame (e.g., SL1). As used herein, the term “active area” means the area in the Cscan frame in which the ultrasound is detected. The processing unit 140 then assigns an initial score of zero for all pixels in the aggregate scan area. While the scan is active, a corresponding two-dimensional image is respectively generated for each of the scan locations SL1-SL6, and the position of each pixel in the scan locations SL1-SL6 is recorded for each individual two-dimensional image. For each pixel in each of the corresponding two-dimensional images SL1-SL6, the processing unit 140 then assigns a score based on that pixel's distance to the center row and column of the active area in the corresponding two-dimensional image SL1-SL6. For example, the pixel represented by the black dot in
The processing unit 140 then computes the position of the pixel in the scan area for which the aggregate two-dimensional image is to be generated, by using the position of the corresponding two-dimensional image SL1-SL6 as well as the position of the pixel in that image. Then, the processing unit 140 compares the pixel score in one of the images SL1-SL6 to the pixel score in another one of the images. If the pixel score is higher in one of the images (e.g., image SL2) than it is in another one of the images (e.g., image SL4), the processing unit 140 utilizes the pixel in image SL2 for generating the pixel in the corresponding location of the aggregate, two-dimensional image. In the case where the processing unit 140 has already generated the aggregate, two-dimensional image utilizing, for example, image SL2 and subsequently determines that the pixel in image SL4 has a higher score value than in image SL2, the processing unit 140 can replace the pixel in the aggregate, two-dimensional image with the corresponding pixel in image SL4.
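By way of non-limiting illustration, the scoring and replacement logic described above can be sketched as follows. The initial score of zero for all pixels in the aggregate scan area follows the description above; the reciprocal-distance score (higher for pixels nearer the center of the active area) and the assumption that the active-area center coincides with the frame center are illustrative choices, not features of the disclosure:

```python
import numpy as np

def pixel_score(row, col, center_row, center_col):
    # Higher score for pixels nearer the center of the active area;
    # a reciprocal Manhattan distance is used as an illustrative choice.
    dist = abs(row - center_row) + abs(col - center_col)
    return 1.0 / (1.0 + dist)

def integrate(frames_with_offsets, area_shape):
    """Build the aggregate image, keeping for each aggregate pixel the
    contribution with the highest score seen so far; a later frame
    replaces a stored pixel only if its pixel score is higher."""
    aggregate = np.zeros(area_shape)
    best = np.zeros(area_shape)  # initial score of zero for all pixels
    for frame, (row0, col0) in frames_with_offsets:
        # Center of the active area, assumed here to be the frame center.
        cr, cc = frame.shape[0] // 2, frame.shape[1] // 2
        for r in range(frame.shape[0]):
            for c in range(frame.shape[1]):
                s = pixel_score(r, c, cr, cc)
                ar, ac = row0 + r, col0 + c  # pixel position in scan area
                if s > best[ar, ac]:
                    best[ar, ac] = s
                    aggregate[ar, ac] = frame[r, c]
    return aggregate
```

Where two frames overlap, the pixel closer to the center of its own frame's active area wins, which favors the least distorted portion of each image.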
To illustrate the pixel placement algorithm in more detail,
A flat surface provides more flexibility in the calculation of the most desirable pixel for inclusion in the area scan.
According to an exemplary embodiment, the flat surface pixel selection algorithm can be refined by eliminating areas of distorted imaging. With reference to
In the example of
As shown in
The example of
Thus, in the example of
Further, in the example of
The processing unit 140 can provide various graphical effects to different features illustrated in an aggregate, two-dimensional image for a scan area. For example, with reference to
Accordingly, the processing unit 140, in generating the aggregate two-dimensional image, is configured to assign at least one of different intensities and different colors to different ranges of depth of objects contained in the scan area. The processing unit 140 is configured to generate the aggregate two-dimensional image to contain at least one of different intensities and different colors to represent the different ranges of depth of objects contained in the scan area.
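By way of non-limiting illustration, assigning colors to different ranges of depth can be sketched as a binned lookup. The specific depth ranges, units, and RGB colors below are illustrative assumptions only:

```python
# Illustrative mapping of depth ranges (in mm, assumed) to RGB colors.
DEPTH_COLORS = [
    (0.0, 2.0, (255, 0, 0)),    # shallow features: red
    (2.0, 5.0, (255, 255, 0)),  # intermediate depth: yellow
    (5.0, 10.0, (0, 0, 255)),   # deep features: blue
]

def color_for_depth(depth_mm):
    """Return the RGB color assigned to the depth range containing
    `depth_mm`, or gray for depths outside all configured ranges."""
    for lo, hi, rgb in DEPTH_COLORS:
        if lo <= depth_mm < hi:
            return rgb
    return (128, 128, 128)
```

The same binning scheme can equally drive per-range display intensities instead of colors.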
The processing unit 140 is also configured to determine a thickness from the probe 110 to at least one object contained in a corresponding one of at least two scan locations as the probe 110 is moved relative to the scan locations. The probe 110 is configured to be moved in a direction substantially parallel (e.g., longitudinal) to a surface of an object to be scanned. In addition, the probe 110 is configured to be held at a shear wave angle (e.g., a predetermined angle) relative to a surface of an object to be scanned, and to focus the ultrasound on at least one scan location at the shear wave angle, as disclosed in U.S. Pat. No. 7,370,534, the entire disclosure of which is hereby incorporated by reference in its entirety.
As described above, a positioning system is utilized in the present disclosure to determine the placement of the probe relative to different scan locations in a scan area, for use in generating the two-dimensional images for each scan location and in integrating the two-dimensional images into the aggregate, two-dimensional image.
For example, the image sensor 100 can include a wheeled position encoder attached to the probe 110, where the wheeled position encoder has at least one wheel configured to rotate across the scan area as the probe 110 is moved relative to the scan area. The processing unit 140 is configured to derive a position of the probe on the scan area based on an amount of rotation of the at least one wheel of the wheeled position encoder, and to generate the aggregate, two-dimensional image based on the derived position of the probe on the scan area.
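By way of non-limiting illustration, deriving the probe's travel from the amount of wheel rotation can be sketched as follows. The encoder resolution and wheel diameter are illustrative assumptions:

```python
import math

COUNTS_PER_REV = 1024      # encoder counts per wheel revolution (assumed)
WHEEL_DIAMETER_MM = 25.0   # encoder wheel diameter in mm (assumed)

def distance_travelled_mm(encoder_counts):
    """Convert accumulated encoder counts into linear travel of the
    probe across the scan area: counts -> revolutions -> arc length."""
    revolutions = encoder_counts / COUNTS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_MM
```

One full revolution of a 25 mm wheel thus corresponds to roughly 78.5 mm of probe travel, which the processing unit 140 can convert to a pixel offset for the aggregate image.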
As another example, the image sensor 100 can include a string position encoder attached to the probe 110, where the string position encoder has at least two strings attached to the probe 110 in perpendicular directions to one another, and at least two string encoders arranged in quadrature. The at least two string encoders are configured to generate Cartesian position information of the probe on the scan area based on a movement of the probe 110 relative to the scan area. The processing unit 140 is configured to derive a position of the probe 110 on the scan area based on the Cartesian position information generated by the at least two string encoders, and to generate the aggregate, two-dimensional image based on the derived position of the probe on the scan area.
As another example, the image sensor 100 can include a wireless position encoder attached to the probe 110, where the wireless position encoder includes at least two reflectors attached to the probe 110 on perpendicular sides of the probe 110, and at least two wireless sources arranged in quadrature and configured to output wireless signals toward the probe 110 and respectively receive reflected wireless signals from the at least two reflectors. The wireless position encoder is configured to generate Cartesian position information of the probe on the scan area based on a movement of the probe relative to the scan area. In addition, the processing unit 140 is configured to derive a position of the probe 110 on the scan area based on the Cartesian position information generated by the wireless position encoder, and to generate the aggregate, two-dimensional image based on the derived position of the probe on the scan area.
In accordance with an exemplary embodiment, the processing unit 140 is configured to generate the aggregate, two-dimensional image after the probe 110 is moved relative to a predetermined number of the scan locations in the scan area. The predetermined number of scan locations can be defined differently based on operator control.
In accordance with an exemplary embodiment, the processing unit 140 is configured to generate the aggregate, two-dimensional image after the probe 110 is moved over all scan locations in the scan area.
In the exemplary embodiment of
In the exemplary embodiment of
It is to be understood that notwithstanding the different positions of the ultrasonic source in the embodiments of
In addition to the exemplary image sensors as described above, the present disclosure also provides a method of operating an ultrasonic image sensor, in accordance with the exemplary embodiments described above. For example, the method of the present disclosure includes outputting ultrasound from a probe onto a scan area, moving the probe relative to at least two adjacent scan locations of the scan area such that the ultrasound will be focused on each of the at least two scan locations as the probe is moved relative to the scan area to provide an array of scanned images, determining the position of the probe relative to the scan locations in the scan area over which the probe is moved, generating position information indicating the position of the probe relative to each scan location, and receiving ultrasound reflected from each of the at least two scan locations. In addition, the exemplary method includes generating, for a first of the two scan locations, a two-dimensional image of the first scan location based on an intensity of the reflected ultrasound from the first scan location, and generating an aggregate two-dimensional image for the first scan location which integrates plural two-dimensional images generated using reflected ultrasound of the at least two scan locations based on the generated position information.
The present disclosure also provides a non-transitory computer-readable medium (e.g., removable storage unit 318, and a hard disk installed in hard disk drive 312 in
While the present disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The present disclosure is not limited to the exemplary embodiments described above. Other variations to the disclosed exemplary embodiments can be understood and effected by those skilled in the art in practicing the claimed disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” or “including” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
It will be appreciated by those skilled in the art that the present invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalency thereof are intended to be embraced therein.