The present disclosure relates to a method, a system, an apparatus and a computer program for inspecting, detecting, monitoring, analyzing or assessing assets using ultrasound imaging, including detecting, identifying, monitoring, analyzing or assessing aberrations in the assets.
Corrosion of metal assets is a serious problem in many industries, including, among others, construction, manufacturing, petroleum and transportation. In the petroleum industry, for instance, corrosion tends to be particularly pervasive and problematic since the industry depends heavily on carbon steel alloys for its metal structures such as pipelines, supplies, equipment, and machinery. The problem of corrosion in such industries can be extremely challenging and costly to assess and remediate due to the harsh and corrosive environments within which the metal structures must exist and operate. Age and the presence of corrosive materials, such as, for example, oxygen (O2), water (H2O), hydrogen sulfide (H2S), carbon dioxide (CO2), sulfates, carbonates, sodium chloride, potassium chloride, or microbes in oil and gas production can exacerbate the problem.
Because corrosion of metal assets can be a serious and costly problem to remediate, there has been a significant push in industries to replace metallic assets with nonmetallic alternatives that are resistant to corrosion, thereby cutting corrosion-related costs and increasing revenues. However, the industries have been resistant to such replacements due to the lack of a cost-effective inspection or failure detection technology that can reliably identify and localize aberrations in nonmetallic assets, including failures and mechanical deformations, such as, for example, surface microcracks, propagation of failure, fractures, liquid or gas leaks, among many others. Resultantly, both metallic and nonmetallic assets are commonly employed in the industries without a technology solution that can effectively or efficiently detect and evaluate aberrations in metallic or nonmetallic assets.
Since both metallic and non-metallic assets are commonly used in a variety of industries, there exists a great unfulfilled need for a cost-effective and reliable technology solution for inspecting, detecting, monitoring, analyzing or assessing aberrations in either or both metallic or nonmetallic assets.
The instant disclosure provides a cost-effective, reliable technology solution for inspecting, detecting, identifying, monitoring, analyzing or assessing aberrations in ultrasound images of either, or both, metallic or nonmetallic assets, such as, for example, assets used in the oil and gas industries. The technology solution includes a method, system, apparatus and computer program for inspecting, detecting, monitoring, analyzing or assessing assets using ultrasound imaging, including detecting, identifying, monitoring, analyzing or assessing aberrations in the assets.
According to a non-limiting embodiment of the solution, a computer-implemented method is provided for analyzing a sequence of noisy or incoherent ultrasound scan images of an asset comprising a composite material having internal defects or voids and diagnosing a health condition of a section of the asset. The method comprises: receiving, by an input-output interface, an ultrasound scan image of the section of the asset that contains noise or incoherence resulting from signal attenuation due to the composite material in the section of the asset; preprocessing, by a denoising unit, the ultrasound scan image to remove the noise or incoherence and output a denoised ultrasound image; analyzing, by a machine learning platform, the denoised ultrasound scan image to detect any aberrations in the section; evaluating, by the machine learning platform, any detected aberrations; generating, by the machine learning platform, a degree of health of the section of the asset based on any detected aberrations; and generating, by an image rendering unit, an image rendering signal to cause a computer resource asset to display the denoised ultrasound scan image on a display device.
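For illustration only, the claimed flow can be sketched in Python. The class and method names below (DenoisingUnit-style objects, MLPlatform, ImageRenderingUnit and the like) are hypothetical stand-ins, not components defined in this disclosure:

```python
# Minimal sketch of the claimed method flow; all object and method names
# are illustrative assumptions, not part of the disclosure.
def diagnose_section(raw_scan, denoiser, ml_platform, renderer, display):
    """Analyze one noisy or incoherent ultrasound scan of an asset section."""
    denoised = denoiser.preprocess(raw_scan)            # remove noise/incoherence
    aberrations = ml_platform.detect(denoised)          # detect any aberrations
    evaluations = [ml_platform.evaluate(a) for a in aberrations]
    health = ml_platform.degree_of_health(evaluations)  # diagnose the section
    renderer.render(denoised, aberrations, health, display)
    return health
```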
In the method, the denoising unit can comprise a machine learning model.
The method can comprise training or tuning the machine learning model by a computer-implemented process. The process can comprise: receiving raw ultrasound scan image data of a test section comprising the material having internal defects or voids; sending an image rendering signal to cause a computer resource asset to display an ultrasound scan image based on the raw ultrasound scan image data; and receiving a label corresponding to the ultrasound scan image, the label including an aberration type, an aberration location or an aberration dimension of each aberration on the test section, wherein the aberration type comprises a harmful or potentially harmful aberration.
In the method, the aberration type can comprise a benign aberration.
In the method, the computer-implemented process can comprise: building an ultrasound scan dataset that includes the label; or splitting the ultrasound scan dataset into a training dataset and a testing dataset; or training the machine learning model to segment an ultrasound scan image into conration category image blocks and nonration category image blocks; or training the machine learning model to assign a numerical value to one or more pixels in a conration category image block; or testing the machine learning model to determine performance of the model in detecting an aberration.
In the method, the numerical value can denote at least one of a location, a dimension or a severity level of an aberration.
In the method, the computer-implemented process can comprise determining completion of training of the machine learning model based on the determined performance and pushing the machine learning model into production.
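As a rough illustration of the dataset building and splitting recited above, the following sketch assumes the labeled scans are held as (image block, label) pairs; the split fraction and seed are arbitrary assumptions:

```python
import random

def build_and_split_dataset(labeled_blocks, train_fraction=0.8, seed=42):
    """Split labeled (image_block, label) pairs into training and testing sets.

    Each label is assumed to carry an aberration type, location and dimensions.
    The 80/20 split and fixed seed are illustrative choices only.
    """
    blocks = list(labeled_blocks)
    random.Random(seed).shuffle(blocks)
    cut = int(len(blocks) * train_fraction)
    return blocks[:cut], blocks[cut:]   # (training dataset, testing dataset)
```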
According to another non-limiting embodiment of the solution, a non-transitory computer readable storage medium is provided that contains computer program instructions for analysis of a sequence of noisy or incoherent ultrasound scan images of an asset comprising a composite material having internal defects or voids and diagnosis of a health condition of a section of the asset, the program instructions, when executed by a processor, causing the processor to: receive, by an input-output interface, an ultrasound scan image of the section of the asset that contains noise or incoherence resulting from signal attenuation due to the composite material in the section of the asset; preprocess, by a denoising unit, the ultrasound scan image to remove the noise or incoherence and output a denoised ultrasound image; analyze, by a machine learning platform, the denoised ultrasound scan image to detect any aberrations in the section; evaluate, by the machine learning platform, any detected aberrations; generate, by the machine learning platform, a degree of health of the section of the asset based on any detected aberrations; and generate, by an image rendering unit, an image rendering signal to cause a computer resource asset to display the denoised ultrasound scan image on a display device.
In the non-transitory computer readable storage medium, the denoising unit can comprise a machine learning model.
In the non-transitory computer readable storage medium, the program instructions, when executed by the processor, can cause the processor to train the machine learning model by a computer-implemented process. The computer-implemented process can comprise: receiving raw ultrasound scan image data of a test section comprising the material having internal defects or voids; sending an image rendering signal to cause a computer resource asset to display an ultrasound scan image based on the raw ultrasound scan image data; and receiving a label corresponding to the ultrasound scan image, the label including an aberration type, an aberration location or an aberration dimension of each aberration on the test section, wherein the aberration type comprises a harmful or potentially harmful aberration.
In the non-transitory computer readable storage medium, the aberration type can comprise a benign aberration.
In the non-transitory computer readable storage medium, the computer-implemented process can comprise: building an ultrasound scan dataset that includes the label; or splitting the ultrasound scan dataset into a training dataset and a testing dataset; or training the machine learning model to segment an ultrasound scan image into conration category image blocks and nonration category image blocks; or training the machine learning model to assign a numerical value to one or more pixels in a conration category image block.
In the non-transitory computer readable medium, the numerical value can denote at least one of a location, a dimension or a severity level of an aberration.
Additional features, advantages, and embodiments of the disclosure may be set forth or apparent from consideration of the detailed description and drawings. Moreover, it is to be understood that the foregoing summary of the disclosure and the following detailed description and drawings provide non-limiting examples that are intended to provide further explanation without limiting the scope of the disclosure as claimed.
The accompanying drawings, which are included to provide a further understanding of the disclosure, are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the detailed description serve to explain the principles of the disclosure. No attempt is made to show structural details of the disclosure in more detail than may be necessary for a fundamental understanding of the disclosure and the various ways in which it may be practiced.
The present disclosure is further described in the detailed description that follows.
The disclosure and its various features and advantageous details are explained more fully with reference to the non-limiting embodiments and examples that are described or illustrated in the accompanying drawings and detailed in the following description. It should be noted that features illustrated in the drawings are not necessarily drawn to scale, and features of one embodiment can be employed with other embodiments as those skilled in the art would recognize, even if not explicitly stated. Descriptions of well-known components and processing techniques may be omitted to not unnecessarily obscure the embodiments of the disclosure. The examples are intended merely to facilitate an understanding of ways in which the disclosure can be practiced and to further enable those skilled in the art to practice the embodiments of the disclosure. Accordingly, the examples and embodiments should not be construed as limiting the scope of the disclosure. Moreover, it is noted that like reference numerals represent similar parts throughout the several views of the drawings.
Assets such as slabs, pipes, pipelines, connectors, joints, tees, bends, valves, nozzles, tanks, and vessels, among other things, are commonly used in many industries like construction, manufacturing, petroleum and transportation. The assets tend to be made of either, or both, metallic or nonmetallic materials. Regardless of the material used in the asset, the asset can include an aberration that can lead to failure of the asset over time, which can occur at the location of the aberration or at a different location as a result of the aberration, such as, for example, at another asset that interacts with or is interdependent with the asset comprising the aberration.
The aberration can include either a harmful or potentially harmful aberration or a benign or harmless aberration. A harmful or potentially harmful aberration can include, for example, a defect, a crack, a hydrogen-induced-cracking (HIC) defect, a step-wise-cracking (SWC) defect, a blister, inner wall corrosion, a surface crack, a surface microcrack, a local thinned area, or any other defect type, including, for example, those specified in the Fitness-For-Service publication, API 579-1/ASME FFS-1, published jointly by The American Society of Mechanical Engineers and the American Petroleum Institute, June 2016. Among the questions API 579 seeks to answer are whether a particular asset can continue to operate and whether it should be de-rated, repaired or replaced. A harmful or potentially harmful aberration can lead to a fracture or leak, or a catastrophic failure in the asset, to name only a few potential conditions that can result over time due to the aberration. As noted earlier, an aberration can exist or develop over time in an asset comprising either metallic or nonmetallic materials.
On the other hand, a benign or harmless aberration can include, for example, an internal defect or void that is commonplace in composite material structures, such as, for example, oil or gas pipelines that include composite materials. Such aberrations do not result in damage or harm to the underlying structure, or to the performance or longevity of the structure.
The technology solution provided by this disclosure can effectively and efficiently inspect and analyze ultrasound scan images of either, or both, metallic or nonmetallic assets and detect, identify and assess aberrations in the assets, as well as predict failure or damage in the assets as a function of time. The technology solution includes a machine learning platform that can analyze, by a machine learning (ML) model, an ultrasound scan image of an asset, generate an aberration label for each aberration in a section of the asset, generate a section condition label for that section of the asset, and generate a diagnosis that indicates the degree of health of that section of the asset under inspection. The machine learning platform can analyze the ultrasound scan image and determine at least one of an aberration area ratio, a total number of aberrations and an aberration label for each aberration in the section. The machine learning platform can detect or predict and render each aberration with its respective aberration label, including an aberration type, location and dimensions. Each aberration label can include a determined or predicted location or dimensions of the aberration as a function of time, which can be based on a sequence of ultrasound scan images captured of the same section of the asset over time.
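For illustration, the aberration label and section condition label described above could be held in data structures along the following lines; the field choices are assumptions based only on the items this disclosure names:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AberrationLabel:
    aberration_type: str                  # e.g., "HIC defect", "SWC defect", "blister"
    location: Tuple[float, float, float]  # x, y, z Cartesian coordinates
    dimensions: Tuple[float, ...]         # e.g., height, width, length, depth

@dataclass
class SectionConditionLabel:
    aberration_area_ratio: float          # sum of aberration areas / section area
    total_aberrations: int                # total number of aberrations in the section
    aberration_labels: List[AberrationLabel] = field(default_factory=list)
```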
A non-limiting embodiment of the solution operates with ultrasonic testing (UT) scan images, such as, for example, those attained by transducer devices placed inside, around or near pipelines that use ultrasonic beams to inspect flaws caused by changes in pipe wall surfaces or pipe wall thickness. The UT images can include UT scans that are generated by, for example, pulse-echo transducer devices, pitch-catch transducer devices, phased array transducer devices, composite transducer array devices, or any other type of transducer device or technology capable of capturing ultrasound images of assets. The solution can analyze the UT scan images and detect or predict aberrations in the areas under observation, whether it be in metallic or nonmetallic assets, including, for example, assets containing composite materials, such as, for example, glass fiber-based composites, epoxy resin-based composites, or fiberglass-reinforced plastic (FRP) composites. The solution satisfies an urgent and unmet need for a mechanism that can effectively, efficiently and accurately predict damage or failure in assets, regardless of whether the assets are made of a metallic or nonmetallic material, such as, for example, a composite material. The solution can analyze UT images and detect an aberration in an area of an asset under observation in the images. The solution can, based on the characteristics or parameters of the aberration, predict failure or long-term damage to the asset that can result from the aberration.
In a non-limiting embodiment, the solution can work with UT scan image data, such as, for example, C-scan image data. The UT image data can include, for example, A-scan ultrasound image data, B-scan ultrasound image data, 0-degree advanced C-scan image data, angled C-scan image data, or D-scan ultrasound image data. The solution can be asset-material-agnostic. That is, the solution can be agnostic of the type of material under observation, and the solution need not be concerned with whether the images are from a metal or a composite material but can work well with either, so long as the UT images are clear. This embodiment of the solution can work especially well with UT images of assets containing metallic or high quality composite materials. However, the embodiment might provide less than optimal performance if the UT images are less clear, as can sometimes occur when investigating assets made of composite materials that are of lower quality and, resultantly, have many benign aberrations that, due to resulting signal attenuation, show up as noise in the UT images (for example, noisy UT image 503N).
In another non-limiting embodiment, the solution includes a denoising solution that can provide optimal performance for inspection of assets that contain composite materials, such as, for example, those commonly used in oil or gas industry pipelines. The denoising solution can be arranged to filter out noise that can result from benign aberrations, such as, for example, air pockets, blemishes or other benign aberrations that do not materially affect the asset or its health, performance or longevity. Since in many practical applications clear UT images of composite materials can be difficult to obtain, the denoising solution can operate to remove noise from such UT images (for example, noisy UT image 503N) and output clear, denoised UT images (for example, denoised UT image 503C) suitable for analysis.
Fitness for service engineering evaluation procedures have been used in industries such as oil and gas for a long time. In the petroleum industry, for example, the procedure is commonly known as Fitness-For-Service (or “FFS”); whereas in the gas pipeline industry the procedure is commonly known by the standard-setting body's publication ASME B31.G. The American Petroleum Institute (API) and the American Society of Mechanical Engineers (ASME) have jointly published a document they identified as API RP 579-1/ASME FFS-1, which summarizes a Fitness-For-Service assessment standard used by the oil and gas industries. The publication provides the refining and petrochemical industries with a compendium of consensus methods for assessing the structural integrity of equipment containing identified flaws or damage. The API RP 579 was written to be used in conjunction with the refining and petrochemical industries' existing codes for pressure vessels, piping and aboveground storage tanks (API 510, API 570 and API 653). The standardized Fitness-For-Service assessment procedures presented in API RP 579 provide technically sound consensus approaches that ensure the safety of plant personnel and the public while aging equipment continues to operate, and can be used to optimize maintenance and operation practices, maintain availability and enhance the long-term economic performance of plant equipment.
Ultrasound (UT) scan imaging is commonly used for non-destructive testing and evaluation, and structural health monitoring of structural assets in FFS assessments. Because of its excellent long-range diagnostic capability, ultrasound can be effective in detecting and assessing the condition of an asset for aberrations such as, for example, among other things, brittle fractures, cracks, crack-like flaws, metal loss, pitting corrosion, hydrogen blisters, HIC, SWC, weld misalignments, shell distortions, dents, gouges, or other damage, defects or flaws. However, in practical applications the UT scan images of a single asset under observation can include large numbers of aberrations, especially where the asset comprises a lower quality composite material, thereby necessitating highly trained human users to spend significant amounts of time to analyze each individual scan and characterize the aberration, quantify the characteristics or extent of the aberration and distinguish between different types of aberrations. This process can be extremely tedious, lengthy, resource-intensive, and prone to human error as inconsistencies can arise from human judgments of different operators. For example, UT images of damaged assets can contain a large number of aberrations, thereby making it extremely difficult and time-consuming for highly trained human users to analyze each individual UT image, characterize the aberration, quantify the extent of damage and distinguish between, for example, an HIC or SWC type of aberration. Hence, in mature field or plant operations that include large numbers of assets or span expansive geographical areas, the need for timely assessment of assets can quickly outpace available human resources, thereby risking catastrophic conditions where critical assets might fail if not timely replaced or repaired. The solution addresses such needs by providing a technology platform that can minimize or eliminate the need for human intervention in detecting and assessing aberrations.
The technology solution provided by this disclosure includes a fully-automated solution that can effectively and efficiently detect, monitor, identify, analyze or assess aberrations in assets, regardless of the scale or number of assets or amounts of UT images in need of analysis and assessment. The solution includes a machine learning platform that can implement a machine learning (ML) model to analyze large numbers of UT scan images and monitor, detect or identify aberrations in each section of an asset. The solution can, based on its analysis of the aberrations in a section of the asset, assess characteristics of each aberration in that section and determine or diagnose a degree of health or health condition of that section. The solution can generate an aberration label for each detected or predicted aberration in that section of the asset, including the aberration type (for example, is it an HIC or SWC?), location(s) (for example, x, y, z Cartesian coordinates) of the aberration and dimensions (for example, height, width, length, depth, diameter) of the aberration. The solution can generate a section condition label for that section, which can be based on each aberration label for that section. The section condition label can include an aberration area ratio and the total number of aberrations in that section, as well as each aberration label for that section. The machine learning platform can, by the ML model, analyze the UT images and assess aberrations in the asset under observation. The solution can predict an aberration over its entire life cycle, from its initial formation through its development, and ultimately the resultant damage or failure of the affected asset that might occur if not mitigated.
The solution can build or store a training dataset for the machine learning platform. The training dataset can be input to the machine learning platform to build the ML model, or to tune the ML model by updating parametric values in the model, including, for example, hyper-parameter tuning, depending on the input UT images. The solution can include a feedback mechanism to the machine learning platform to tune the model parameters as the solution operates on input UT images for an asset under observation. The feedback mechanism can include a label tuning command that is generated during interaction with an operator, such as, for example, a command signal from a graphic user interface (GUI).
The asset 10 can include a metallic or nonmetallic material, such as, for example, a low quality composite material used in pipelines or a very high quality composite material used in aerospace applications, or any other composite material used in assets such as those found in manufacturing, wastewater treatment, utilities, plants, factories, pipelines, or oil and gas industries. In the non-limiting example described below, the asset 10 comprises a pipe that can be scanned by an NDE transducer 20.
The NDE transducer 20 can include an ultrasound transducer device (not shown), such as, for example, a straight beam transducer, an angle beam transducer, a multi-element transducer, a delay line transducer, an immersion transducer, or any other type of transducer capable of emitting or capturing ultrasonic scan data of an area of the asset 10 under observation. The ultrasound transducer device (not shown) can be positioned on the NDE transducer 20 and arranged to scan the asset 10 one section at a time, for example, along its longitudinal axis (Y-axis) and transverse axis (X-axis), which in this example is around the diameter of the pipe, perpendicular to the Y-axis. The NDE transducer 20 can include a computing device or a communicating device. The ultrasound transducer device (not shown) can be arranged to use any combination of, for example, straight or direct beam ultrasound energy or angular-beam ultrasound energy. The NDE transducer 20 can be arranged to scan an area of the asset 10 under observation and capture a resultant sequence of UT scan images, including, for example, an ultrasound testing (UT) scan file for a unique section (or area) of the asset 10. The UT scan images can be stitched together by compositing the sequence of UT scan images to form a composite UT image of the asset 10. The NDE transducer 20 can be arranged to capture and record each UT scan image of a section of the asset 10 as a UT scan file, having a multidimensional array of pixels—for example, a two-dimensional (2D) image array or a three-dimensional (3D) image array of pixels. The NDE transducer 20 can include, or it can be arranged to communicate with, the technology solution provided by this disclosure, including, for instance, an aberration detection and assessment (ADS) system 100.
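As a minimal sketch of the stitching step, assuming each UT scan file arrives as a 2D numpy array of pixels and that consecutive frames tile cleanly along the scan axis (real stitching would also register and blend any overlaps):

```python
import numpy as np

def composite_ut_image(scan_frames):
    """Stitch a sequence of 2D UT scan frames into one composite image.

    Assumes each frame is an h x c array covering one section of the asset
    and that consecutive frames abut along the longitudinal (Y) axis.
    """
    return np.concatenate(list(scan_frames), axis=0)
```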
The ultrasound transducer device (not shown) can include a stand-alone device that can be positioned, for example, manually, to capture UT images of a section of the asset 10 as a function of time, or it can be included on a movable tool, such as, for example, the NDE transducer 20.
The ADS system 100 (or DADS system 400, described below) can analyze the UT scan images for a section 15 of the asset 10 and generate an image rendering signal to cause the results to be displayed on a display device.
An aberration label can be included in the image rendering signal for each aberration in the section 15. The display device can, in response to the image rendering signal, display each aberration label for the section 15.
The image rendering signal can include a section condition label for the section 15. The section condition label can be based on each determined aberration in the section 15. The section condition label can include an aberration area ratio, the total number of aberrations in the section 15, as well as the aberration label for each aberration in the section 15. The display device can, in response to the image rendering signal, display the section condition label for the section 15. The section condition label can additionally include, for example, the dimensions of the section 15, the physical location of the section 15, the material contained in the section 15, or any characteristic that can be utilized in assessing the location and condition of the section 15.
The annotation display regions 50B or 50C can include, for example, a list of aberration types that might exist in the particular type of asset 10 under observation. For instance, the list of aberrations in display region 50C for the section 15 can include, for example, “no defect”, “HIC defect”, “SWC defect”, “blister”, “inner wall corrosion”, “surface crack”, “local thinned area”, among others. The display regions 50B or 50C can include a list of asset types that can be investigated by the ADS system 100, such as, for example, a metallic oil pipeline, a composite nonmetallic oil pipeline, or a hybrid-composite-metallic oil pipeline having composite pipe with metallic joints. The display regions 50B or 50C can display the aberration label for each aberration on the section 15 and the section condition label for that section.
In this non-limiting example, the UT image of the section 15 can be rendered in the display region 50A, including all aberrations that are detected or predicted in the section 15, and an aberration label for each aberration that identifies, as determined by the ADS system 100, the type of aberration, its dimensions and location(s). The section condition label can also be rendered with the UT image, including the aberration area ratio and the total number of aberrations in the section 15. Each aberration can be rendered such that the displayed image accurately depicts or predicts the size, shape, and location of the aberration.
Referring to the non-limiting embodiment of the ADS system 100, the ADS system 100 can include a bus 105, a processor 110, a storage 120, a network interface 130, an input-output (IO) interface 140, a driver unit 150, an ADE stack 160, an image rendering unit 170 and an MTT unit 180.
Accordingly, through interaction with the computer 50 (or an operator via the IO interface 140), the ADS system 100 can receive commands or data, including, for example, annotations or label tuning commands that can be used to tune the ML model.
The ADE stack 160 can include a feature extraction unit 162, a classification unit 164, an aberration predictor 166, and a labeler unit 168. The ADE stack 160 can include a machine learning (ML) platform, including, for example, one or more feedforward or feedback neural networks. The ML platform can include, for example, an artificial neural network (ANN), a convolutional neural network (CNN), a deep convolutional neural network (DCNN), a recurrent convolutional neural network (RCNN), a Mask-RCNN, a deep convolutional encoder-decoder (DCED), a recurrent neural network (RNN), a neural Turing machine (NTM), a differential neural computer (DNC), a support vector machine (SVM), or a deep learning neural network (DLNN). The ML platform can include the ML model for the ADE stack 160. Alternatively, the ML platform can include the ADE stack 160, image rendering unit 170 and MTT unit 180.
The ADE stack 160 can analyze UT images of the asset 10 and detect, identify, assess or predict aberrations in the asset 10, as discussed above.
The processor 110 can include any of various commercially available computing devices, including, for example, a central processing unit (CPU), a graphic processing unit (GPU), a general-purpose GPU (GPGPU), a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a manycore processor, multiple microprocessors, or any other computing device architecture.
The ADS system 100 can include a non-transitory computer-readable storage medium that can hold executable or interpretable computer program code or instructions that, when executed by the processor 110 or one or more computer resource assets in the ADS system 100, cause the steps, processes or methods in this disclosure to be carried out. The computer-readable storage medium can be included in the storage 120.
The storage 120, including any non-transitory computer-readable media, can provide nonvolatile storage of data, data structures, and computer-executable instructions. The storage 120 can accommodate the storage of any data in a suitable digital format. The storage 120 can include one or more computing resources, such as, for example, program modules or software applications that can be used to execute aspects of the architecture included in this disclosure. The storage 120 can include a read-only-memory (ROM) 120A, a random-access-memory (RAM) 120B, a disk drive (DD) 120C, and a database (DB) 120D.
A basic input-output system (BIOS) can be stored in the non-volatile memory 120A, which can include a ROM, such as, for example, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM) or another type of non-volatile memory. The BIOS can contain the basic routines that help to transfer information between the computer resource assets in the ADS system 100, such as during start-up.
The RAM 120B can include a high-speed RAM such as static RAM for caching data. The RAM 120B can include, for example, a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous DRAM (SDRAM), a non-volatile RAM (NVRAM) or any other high-speed memory that can be adapted to cache data in the ADS system 100.
The DD 120C can include a hard disk drive (HDD), an enhanced integrated drive electronics (EIDE) drive, a solid-state drive (SSD), a serial advanced technology attachment (SATA) drive, or an optical disk drive (ODD). The DD 120C can be arranged for external use in a suitable chassis (not shown). The DD 120C can be connected to the bus 105 by a hard disk drive interface (not shown) or an optical drive interface (not shown). The hard disk drive interface (not shown) can include a Universal Serial Bus (USB) (not shown), an IEEE 1394 interface (not shown), or any other suitable interface for external applications. The DD 120C can include the computing resources for the ADE stack 160. The DD 120C can be arranged to store data relating to instantiated processes (including, for example, instantiated process name, instantiated process identification number and instantiated process canonical path), process instantiation verification data (including, for example, process name, identification number and canonical path), timestamps, and incident or event notifications.
The database (DB) 120D can be arranged to store UT images in digital format, including UT image frames 30.
The DB 120D can be arranged to be accessed by any of the computer resource assets 105 to 180. The DB 120D can be arranged to receive queries and, in response, retrieve specific records or portions of records based on the queries and send any retrieved data to the computer resource asset from which the query was received, or to another computer resource asset at the instruction of the originating computer resource asset. The DB 120D can include a database management system (DBMS) that can interact with the computer resource assets 105 to 180. The DBMS can be arranged to interact with computer resource assets outside of the ADS system 100, such as, for example, the computer 50.
One or more computing resources can be stored in the storage 120, including, for example, an operating system (OS), an application program, an application program interface (API), a program module, or program data. The computing resource can include an API such as, for example, a web API, a Simple Object Access Protocol (SOAP) API, a Remote Procedure Call (RPC) API, a Representational State Transfer (REST) API, or any other utility or service API. One or more of the computing resources can be cached in the RAM 120B as executable sections of computer program code or retrievable data.
The network interface 130 can be arranged to connect to a computer resource asset (for example, the computer 50) over a communication link, such as, for example, a wired or wireless network connection.
The IO interface 140 can receive commands or data from an operator or an external computer resource asset, including, for example, the ultrasound transducer device (not shown) included in the NDE transducer 20.
The driver unit 150 can include an audio driver 150A and a video driver 150B. The audio driver 150A can include a sound card, a sound driver (not shown), an interactive voice response (IVR) unit, or any other device that can render a sound signal on a sound production device (not shown), such as for example, a speaker (not shown). The video driver 150B can include a video card (not shown), a graphics driver (not shown), a video adaptor (not shown), or any other device necessary to render an image signal on a display device (not shown).
In the ADE stack 160, the feature extraction unit 162 can be arranged to extract features from the received UT image data for the asset 10. The feature extraction unit 162 can interact with the aberration predictor 166. The extracted features can be compared to model or healthy features for the same or similar asset as the asset 10. The feature extraction unit 162 can be arranged to extract features from sequences of UT image frames, so as to extract features for the asset under observation as a function of time. Features related to aberrations in the UT image data can be extracted using a pixel-by-pixel comparative analysis of the UT image data for the asset 10 under inspection with known or expected features (reference features), including reference features from a controlled or clean asset. For instance, features relating to a characteristic of an aberration, such as, for example, a dimension (for example, width, length, depth, height, radius, diameter), a location (for example, Cartesian coordinates x, y, z), or a shape (for example, a hair-line fracture, a pin-hole, or a circular indent) can be compared to the features of a corresponding characteristic of a non-damaged asset. This allows the ADE stack 160 to populate the DB 120D with historical data that can be used to train or tune the ML model to detect, identify, assess or predict aberrations that might exist or develop in the asset 10 and to generate a diagnosis of the degree of health of the asset 10.
In a non-limiting embodiment, the ADE stack 160 includes a CNN or DCNN, in which case the ADE stack 160 can analyze every pixel in the UT image data (for example, by the feature extraction unit 162), classify the image data (for example, by the classification unit 164) and make a prediction at every pixel (for example, by the aberration predictor 166) regarding the presence of an aberration. In this regard, the UT image data can be formatted by the feature extraction unit 162 into h×c pixel matrix data, where h is the number of rows of pixels in a pixel matrix and c is the number of columns of pixels in the same pixel matrix. After formatting the UT image data into h×c pixel matrices, the feature extraction unit 162 can filter (or convolve) each pixel matrix using an a×a pixel grid filter matrix, where a is greater than 1 but less than h or c. According to a non-limiting embodiment, a=2 pixels. The feature extraction unit 162 can slide and apply one or more a×a filter matrices (or grids) across all pixels in each h×c pixel matrix to compute dot products and detect patterns, creating convolved feature maps. The feature extraction unit 162 can slide and apply multiple filter matrices to each h×c pixel matrix to extract a plurality of feature maps of the UT image data for the asset 10 under inspection.
Once the feature maps are extracted, the feature maps can be moved to one or more rectified linear unit layers (ReLUs) in a CNN to locate the features. After the features are located, the rectified feature maps can be moved to one or more pooling layers to down-sample and reduce the dimensionality of each feature map. The down-sampled data can be output as multidimensional data arrays, such as, for example, a two-dimensional (2D) array or a three-dimensional (3D) array. The resultant multidimensional data arrays output from the pooling layers can be flattened (or converted) into single continuous linear vectors that can be forwarded to the fully connected layer. The flattened matrices from the pooling layer can be fed as inputs to the classification unit 164 or aberration predictor 166.
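The convolve-rectify-pool-flatten pipeline described above can be sketched as follows. This is a generic illustration in PyTorch; the layer counts, channel widths, input size and class count are assumptions, not the disclosure's actual network:

```python
import torch
import torch.nn as nn

class ADEStackSketch(nn.Module):
    """Illustrative CNN mirroring the described stages; sizes are assumptions."""
    def __init__(self, num_classes=7, h=64, c=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=2),   # a x a filter grids (a = 2)
            nn.ReLU(),                         # rectified linear unit layer
            nn.MaxPool2d(2),                   # down-sample the feature maps
            nn.Conv2d(16, 32, kernel_size=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        with torch.no_grad():                  # infer the flattened vector length
            n = self.features(torch.zeros(1, 1, h, c)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # single continuous linear vector
            nn.Linear(n, 128),
            nn.ReLU(),                         # hidden layer
            nn.Linear(128, num_classes),       # output layer over aberration classes
        )

    def forward(self, x):                      # x: (batch, 1, h, c) pixel matrices
        return self.classifier(self.features(x))
```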
The classification unit 164 can include a fully connected neural network layer, which can auto-encode the feature data from the feature extraction unit 162 and classify the image data. The classification unit 164 can include a fully connected layer that contains a plurality of hidden layers and an output layer. The output layer can output the classification data to the aberration predictor 166.
The aberration predictor 166 can be arranged to receive the resultant image cells and predict aberrations that might exist in the asset 10, including, for example, on an outer surface, in a wall portion, or an inner surface of the asset 10. The aberration predictor 166 can generate a confidence score for each image cell that indicates the likelihood that a bounding box includes an aberration. The aberration predictor 166 can interact with the classification unit 164 and perform bounding box classification, refinement and scoring based on the aberrations in the image represented by the UT image data. The aberration predictor 166 can determine location data such as, for example, x-y-z Cartesian coordinates with respect to the asset 10. The location data can be determined for the aberration and the bounding box. Dimensions (for example, height, width, length, depth, radius, diameter), shape, geospatial orientation (for example, angular position or attitude) and location of the aberration can be determined, and probability data that indicates the likelihood that a given bounding box contains or will develop the aberration can be determined by the aberration predictor 166. The aberration predictor 166 can be arranged to determine a prediction score that indicates the likelihood that an aberration exists or will develop over time on the asset. The prediction score can range from, for example, 0% to 100%, with 100% being a detected aberration, and 0% to 99.99% being a prediction that an aberration exists or will develop in a highlighted area on the asset 10.
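How such scores might be read out can be illustrated with a small helper; the cutoffs simply restate the 100% / below-100% convention in the preceding paragraph, and the dictionary representation of per-cell scores is an assumption:

```python
def interpret_scores(cell_scores):
    """Map per-cell prediction scores (0.0-1.0) to detections or predictions.

    Per the convention above, a score of 1.0 (100%) denotes a detected
    aberration; any lower positive score is a prediction that an aberration
    exists or will develop in that highlighted area.
    """
    results = []
    for cell_id, score in cell_scores.items():
        if score >= 1.0:
            results.append((cell_id, "detected", score))
        elif score > 0.0:
            results.append((cell_id, "predicted", score))
    return results
```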
In the ADE stack 160, the feature extraction unit 162, classification unit 164 and aberration predictor 166 can be implemented using one or more CNNs having a number of convolutional/pooling layers (for example, 1 or 2 convolutional/pooling layers) and a single fully connected layer, or it can be implemented using a DCNN having many convolutional/pooling layers (for example, 10, 12, 14, 20, 26, or more layers) followed by multiple fully connected layers (for example, two or more fully connected layers). The ADE stack 160 can include an RNN, such as, for example, a single stack RNN or a complex multi-stack RNN. The CNN can be applied to stratify the received UT image data into abstraction levels according to an image topology, and the RNN can be applied to detect patterns in the images over time. The ADE stack 160 can detect areas of interest and aberrations that might exist or develop over time in the asset 10, as well as capture the creation or evolution of the aberration as it develops over time.
The labeler unit 168 can be arranged to (for example, together with the feature extraction unit 162, classification unit 164, and aberration predictor 166) receive and analyze UT image data, and detect, identify, assess or predict an aberration and its location in the asset 10. The ADE stack 160 can analyze sequences of UT images of a section or the entire asset 10 captured by the NDE transducer 20.
The ADE stack 160 can interact with the image rendering unit 170, which can be arranged to generate image rendering commands or data that can be used by, or cause, a computer resource asset, such as, for example, the computer 50, to render and display UT images and labels on a display device.
The MTT unit 180 can be arranged to interact with the machine learning platform to train the ML model using a training dataset, in which case the training dataset can be received from an external source (not shown) or created by the ADS system 100, as described below with respect to the training process 200.
Referring to the training process 200, a UT image frame can be received and partitioned into a plurality of image blocks (Step 205).
All the image blocks can be rendered, for example, by the image rendering unit 170, on a display device to display the original UT image from which they were derived (Step 210). The image rendering unit 170 can include a computing device or, as previously noted, a computer resource that can be executed by the processor 110. The UT image frame can be rendered locally on the display device (not shown) via the IO interface 140 or driver unit 150, or communicated to the computer 50, where the image frame can be rendered on the display device of the computer 50.
In Step 225, the annotated image blocks can be separated into two category groups—that is, conration category and nonration category image blocks. The conration category comprises all image blocks that were selected by the user as containing a confirmed aberration (“conration”). The nonration category comprises all image blocks that were confirmed and selected by the user as not containing any aberration (“nonration”)—in other words, image blocks that are confirmed to correspond to only healthy parts of the asset under observation. For all image blocks that are determined to be nonration category (or healthy) image blocks (YES at Step 230), metadata can be generated for each such image block identifying it as a nonration category image block (Step 235) and the image block can be labeled by associating the metadata with the image block or embedding the metadata in the image block (Step 240). The labeled nonration category image blocks can be stored (Step 270), for example, in the storage 120.
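The separation at Step 225 can be pictured as a simple partition; the (block, flag) pairing below is an assumed representation of the user's annotations:

```python
def separate_blocks(annotated_blocks):
    """Partition annotated image blocks into conration/nonration groups.

    Each entry is assumed to be (image_block, has_confirmed_aberration).
    Nonration (healthy) blocks get identifying metadata and are stored;
    conration blocks are routed back to the user for aberration data.
    """
    conration = [b for b, flagged in annotated_blocks if flagged]
    nonration = [b for b, flagged in annotated_blocks if not flagged]
    return conration, nonration
```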
On the other hand, all image blocks that are determined to be conration category image blocks (NO at Step 230) can be identified as containing confirmed aberrations and the user can be prompted to provide aberration-specific data for each such image block (Step 245). The conration category image blocks can be identified by, for example, highlighting each aberration on the display device (for example, aberrations 52, 54, 56).
As seen in the example, the GUI can prompt the user to select, for each highlighted aberration, an aberration type, such as, for example, an HIC defect, an SWC defect, a blister, inner wall corrosion, a surface crack, or a local thinned area.
The GUI can be arranged to receive additional aberration-specific parameters for each aberration, including, for example, dimensions (for example, height, width, length, depth, radius, diameter) and location (for example, x, y, or z Cartesian coordinates). The GUI can be arranged to allow the user to operate a cursor (for example, using a mouse or stylus) to mark a plurality of points on the display screen, for example, to delineate the boundary or dimensions of an aberration.
The annotations made by the user for each aberration can be communicated from the GUI to the MTT unit 180, which can generate corresponding metadata for each annotated aberration.
The generated metadata can include indexing data for each aberration, which can identify each conration category image block that contains a portion of the aberration. The generated metadata can include section indexing data for each asset under observation, including, for example, the aberration area ratio and the number of aberrations, as a function of time, for a section (for example, section 15) of the asset.
The aberration area ratio can be determined by the MTT unit 180 by summing the total area of each aberration in a section of the asset, determining the total area of that section, and dividing the resultant sum of aberration areas by the total area of the section. The number of aberrations can be determined by the MTT unit 180 by adding the number of aberrations that appear in that same section of the asset. For example, the Defect-Area-Ratio and Number of Defects can be measured during the classification stage at the classification unit 164.
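In code form, the two section-level quantities reduce to the following; the units are whatever area units the scan metadata uses:

```python
def aberration_area_ratio(aberration_areas, section_area):
    """Sum of the individual aberration areas divided by the section's total area."""
    return sum(aberration_areas) / section_area

def number_of_aberrations(aberration_areas):
    """Count of aberrations appearing in the same section."""
    return len(aberration_areas)

# Example: two aberrations of 2.0 and 3.5 area units in a 100-unit section
# give a ratio of 0.055 and a count of 2.
```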
Each conration category image block can be labeled or stored with its corresponding metadata (Step 260). A determination can be made whether all conration category image blocks have been labeled in the UT image frame (Step 265). If it is determined that all conration category image blocks have been labeled (YES at Step 265), then all the labeled conration category image blocks can be stored with the nonration category image blocks for the UT image frame (Step 270), otherwise (NO at Step 265) the user can be prompted to enter annotations for any unlabeled conration category image blocks remaining, which can be used as, or to update, parametric values in the ML model (Step 245). The labeled UT image frame, including all conration and nonration category image blocks with metadata, can be stored in the storage 120.
A determination can be made, for example, by the MTT unit 180, whether a sufficient number of labeled UT image frames has been accumulated to build the training dataset.
The training dataset, which includes an accumulation of labeled UT scan images, can be used to create a training database in DB 120D.
The ML model in the ADS system 100 can include the latest modelling parameters, which can be used, for example, by the aberration predictor 166, to predict aberrations and aberration types in the section of asset under observation (Step 320), based on the extracted features and object classifications. The aberration predictor 166 can use historical UT image data for the section of asset under observation (for example, section 15) to predict the location or dimensions of each aberration as a function of time.
On the basis of the section condition label information, including each aberration label, a degree of health condition of the section can be determined, for example, by the labeler unit 168.
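One way the diagnosis could be derived from the section condition label is sketched below, pairing with the SectionConditionLabel structure illustrated earlier; the numeric limits are illustrative assumptions, not values from this disclosure (in practice they would come from an applicable standard such as API 579):

```python
def degree_of_health(section_label, ratio_limit=0.05, count_limit=10):
    """Map a section condition label to a coarse health grade (illustrative).

    `section_label` is assumed to expose aberration_area_ratio and
    total_aberrations attributes; thresholds are placeholders only.
    """
    if section_label.aberration_area_ratio == 0.0:
        return "healthy"
    if (section_label.aberration_area_ratio < ratio_limit
            and section_label.total_aberrations < count_limit):
        return "acceptable - monitor"
    return "degraded - repair or replace"
```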
The labeled UT image data, including the raw UT image data and all annotations provided for that UT image, can be communicated, for example, by the image rendering unit 170, and the UT image rendered and displayed with a corresponding section condition label and an aberration label for each aberration (Step 330). The labeled UT image can be rendered, for example, on a computer resource asset operated by a field crew and displayed on a display device, so that members of the field crew can utilize information learned from the labeled UT image to identify or schedule tasks relating to the assets under observation, including, for example: repairing or replacing a section of the asset that has been damaged or is likely to become damaged or fail; or placing the section of the asset on a watch list, so as to monitor one or more aberrations over their respective life cycles.
Alternatively, in place of a field crew, the solution can be automated and the remediation or monitoring tasks can, instead, be performed by an automated tool (not shown), such as, for example, a robot, in which case the tool can be arranged to receive the labeled UT image data and schedule or execute remediation or monitoring tasks for the section of asset under observation based on the labeled UT image data, including the diagnosed degree of health condition of the section and section condition label.
After the UT image data is rendered by the GUI on the display device, an operator can review the displayed labels and provide feedback, such as, for example, a label tuning command, which can be used to further tune the parametric values of the ML model.
By carrying out the process 300, the ADS system 100 (or DADS system 400, described below) can detect, identify, assess and label aberrations in the asset under observation and diagnose the degree of health of each section, with minimal or no human intervention.
As noted previously, the ADS system 100 can analyze individual UT images or a plurality of UT scan images from the same section of the asset taken at different times. In the latter instance, the ADS system 100 can track individual aberrations across different UT scans (taken at different times), thereby tracking changes in location, dimensions or shape of the aberration over longer periods of time, such as, for example, months, years, or decades. The ultrasound scans can include 0-degree AUT C-scans. The ADS system 100 can facilitate or perform, for example, (1) assessing the fitness for service of an asset under observation in near real time using, for example, API 579, (2) determining an inspection frequency for a section of the asset or the entire asset, or (3) identifying or scheduling any needed maintenance activity to address the specific aberration being observed.
The ADS system 100 can operate with a variety of types of UT scan images, including conventional or advanced UT images. The ADE stack 160 can detect each aberration, classify the aberration and quantify the dimensions of the aberration for different types of aberrations. The ADE stack 160 can analyze tens, hundreds, thousands or more UT images efficiently and effectively to timely identify and evaluate aberrations, including the most dangerous or largest defects that might exist or develop in assets, and generate a diagnosis for the degree of health condition of a section or the entire asset.
While the ADS system 100 and processes 200 or 300 can be agnostic of the material under observation and can operate with a variety of ultrasound scan image types, the system and processes can operate especially well with clear C-scan UT images, including 0-degree advanced UT (AUT) C-scans. However, where the material under observation is a material like the composite materials frequently employed in oil or gas industry pipelines as of the date of this disclosure, the received UT images can be less than optimal and, therefore, challenging to analyze for aberrations. In those instances, clear AUT C-scan images can be obtained directly or indirectly through, for example, creation by post-processing of "noisy" or incoherent data, as will be understood by those skilled in UT image data processing.
For instance, when an ultrasound scan image is analyzed and assessed according to the process 300, a noisy or incoherent input image can obscure or mimic aberrations and degrade the quality of the analysis. The embodiments described below, including the DADS system 400, address such images.
The DADS system 400 can work with ultrasound C-scans, 0-degree advanced ultrasound (AUT) C-scans, angled AUT C-scans (that is, having an angle greater or less than 0 degrees), conventional ultrasound scan images or other types of ultrasound scan images. The DADS system 400 can analyze UT images that are not entirely clear or that are of lower quality or resolution than, for example, 0-degree AUT C-scan images. As described below, the DADS system 400 includes a denoising unit 190 for this purpose.
The denoising unit 190, which can include a computing device or a computer resource that is executable on the processor 110 as one or more computer resource processes, can preprocess and denoise each UT scan image of an asset comprising a composite material to output a denoised and clear UT image (for example, denoised UT image 503C).
After the UT scan images are denoised by the denoising unit 190, the image data can be analyzed to detect or predict aberrations and evaluate the aberrations in the same manner as discussed above with respect to the process 300.
The denoising unit 190 can include an ML platform, such as, for example, an ANN, a CNN, a DCNN, an RCNN, a Mask-RCNN, a DCED, an RNN, an NTM, a DNC, an SVM, a DLNN, or any combination of the foregoing. The denoising unit 190 can be included in the machine learning platform of the ADS system 100.
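As one hedged illustration of such a denoiser, a small deep convolutional encoder-decoder (DCED) in PyTorch might look like the following; the architecture, channel widths and activation choices are assumptions, not the disclosure's model:

```python
import torch
import torch.nn as nn

class DenoiserSketch(nn.Module):
    """Illustrative DCED denoiser: encode a noisy scan, decode a clean one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),                     # pixel intensities in [0, 1]
        )

    def forward(self, noisy_scan):            # (batch, 1, H, W), H and W divisible by 4
        return self.decoder(self.encoder(noisy_scan))

# Training would pair a noisy UT image (e.g., 503N) with its clear counterpart
# (e.g., 503C) and minimize a reconstruction loss such as nn.MSELoss().
```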
In an alternative embodiment, the denoising unit 190 can be combined with or integrated in the ADE stack 160. For example, in the non-limiting embodiment where the ADE stack 160 comprises computing resources that are executable by the processor 110 to perform the processes 200, 300 or 500, the denoising unit 190 can be included as an additional computing resource in the ADE stack 160.
An important reason that nonmetallic initiatives in industries such as oil and gas have been slow to replace metallic assets with nonmetallic alternatives is the lack of a fast, safe and cost-effective testing solution that can provide timely assessments of the quality and condition of composite assets—that is, assets comprising composite materials. While inspection technologies such as radiography or thermography can be effective, they have not been practical due to their significant costs. Other technologies, such as electro-capacitive tomography, are under development but are not sufficiently mature to be viable alternatives. Ultrasound testing (UT) technologies, on the other hand, are fast, safe and cost-effective, but they have been ineffective and unusable in industries such as oil and gas. An important reason that UT technologies have been ineffective or unusable in such industries is the industries' use of lower quality polymers in making the composite assets, which typically contain large numbers of internal defects or voids that cause significant signal attenuation, thereby rendering most UT images of composite assets noisy, incoherent and, resultantly, unusable. The solution provided by this disclosure, including the DADS system 400, allows for use of conventional ultrasound inspection technologies to investigate and evaluate composite assets, including those made of lower quality polymers that typically include large amounts of aberrations such as defects or voids.
The solution, including the DADS system 400, can operate with conventional UT images of assets containing composite materials, such as, for example, composite slabs, pipes or pipelines, tees, joints, bends, valves, nozzles, or vessels, to name a few, thereby enabling their inspection and evaluation. The solution can process UT images of (low quality) composite assets received from tried and tested non-destructive testing technologies to produce clear ultrasound C-scan images from "noisy" UT images. The denoising unit 190 can be arranged to analyze a UT image frame, identify or detect benign aberrations and filter such aberrations from the UT image frame to output a clear UT image frame of comparable or higher quality than traditional 0-degree AUT C-scan images of metallic assets.
Referring to the process 500A for building a baseline dataset, a test section 501 comprising a composite material can be created (Step 505) and altered to include one or more artificial aberrations, such as, for example, holes 502 machined into the test section.
In an alternative embodiment, an experimental methodology, such as, for example, that used for tensile testing, fatigue testing, accelerated aging, among others, can be used to create or induce the artificial aberration that can form or develop in the asset to be investigated.
Alternatively, an expected geometry of an artificial aberration can be determined based on, for example, a geometry described in the literature or simulated using finite element modelling, as will be understood by those skilled in the art.
Once alteration of the test section 501 is complete (Step 510), such as, for example, where machining of the holes is completed, the dimensions of each artificial aberration can be measured (Step 515), which in the case of the section 501 includes measuring the location, diameter and depth of each hole 502 using, for example, a profilometer. The measurement values (including location, height, width, length, depth, diameter, radius, angle) for each artificial aberration can be stored (Step 520), such as, for example, in the storage 120. The test section 501 can then be scanned and the resultant UT scan data saved with the stored measurement values (Step 525).
A determination can be made whether the baseline dataset is complete (Step 530). If it is determined that the baseline dataset is incomplete (NO at Step 530), such as, for example, where UT scan data is needed for additional artificial aberrations, then another test section 501 can be created (Step 505) and the process 500A repeated; otherwise (YES at Step 530) all saved UT scan data for the completed baseline dataset can be exported (Step 535), such as, for example, for long term storage in DB 120D or for use by the process 500B (shown in
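By way of non-limiting illustration, the following Python sketch shows one way the Step 515/520 measurements for each hole 502 could be recorded and the completed baseline dataset exported (Step 535); the field names and the CSV layout are assumptions of this sketch, not the disclosed storage format of the storage 120 or the DB 120D.

import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AberrationRecord:
    section_id: str     # identifier of the test section 501
    x_mm: float         # measured location of the hole 502
    y_mm: float
    diameter_mm: float  # e.g., from a profilometer reading
    depth_mm: float

def export_baseline(records, path):
    # Write all saved measurements to a CSV file for long-term storage.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AberrationRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

export_baseline([AberrationRecord("S-501", 12.5, 40.0, 6.35, 3.2)], "baseline.csv")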
Referring to
Once the dataset is curated (in Step 555), it can be split into a training dataset and a testing dataset (Step 560). The training dataset can then be used to train the ML model in, for example, the ADS system 100 (shown in
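By way of non-limiting illustration, the following Python sketch shows one way the Step 560 split could be performed, assuming the curated dataset is a list of (UT frame, label) pairs; the 80/20 ratio and the seeded shuffle are assumptions of this sketch.

import random

def split_dataset(samples, train_fraction=0.8, seed=42):
    # Shuffle a copy so the curated ordering is preserved for later reuse.
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (training dataset, testing dataset)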
The testing dataset can be applied to the ML model to test the model's performance (Step 570), and the ML model can be caused to render a UT image based on the testing dataset (Step 575). Based on the performance of the ML model, a determination can be made whether training of the ML model is complete (Step 580), for example, by comparing the rendered UT image, including labels for each aberration in the UT image, to the original UT image and labels. If the rendered UT image, including machine-generated labels, mimics the original UT image and labels within an acceptable range (YES at Step 580), then it can be determined that the model has been successfully trained (Step 585); otherwise (NO at Step 580) the process 500B can return and repeat from Step 550, including tuning of the parametric values of the ML model.
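By way of non-limiting illustration, the following Python sketch shows one way the Step 580 comparison could be scored, using intersection-over-union (IoU) between the machine-generated label mask and the original label mask; the IoU metric and the 0.9 acceptance threshold are assumptions of this sketch, not limitations of the disclosure.

import numpy as np

def labels_match(rendered_mask, original_mask, min_iou=0.9):
    # Boolean masks mark the pixels labeled as aberrations in each image.
    inter = np.logical_and(rendered_mask, original_mask).sum()
    union = np.logical_or(rendered_mask, original_mask).sum()
    iou = inter / union if union else 1.0  # two empty masks trivially match
    return iou >= min_iou                  # YES at Step 580 if within range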
Once the model is complete (Step 585), the model can be pushed into production (Step 590), such as, for example, in the ADE stack 160 (shown in
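By way of non-limiting illustration, the following Python sketch shows one way a Step 590 hand-off could be performed, serializing the trained model so a production environment such as the ADE stack 160 could load it; the file name and the use of pickle are assumptions of this sketch.

import pickle

def push_to_production(model, path="model_v1.pkl"):
    # Serialize the trained ML model for deployment.
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_in_production(path="model_v1.pkl"):
    # Restore the model inside the production service.
    with open(path, "rb") as f:
        return pickle.load(f)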
The terms “a,” “an,” and “the,” as used in this disclosure, mean “one or more,” unless expressly specified otherwise.
The term “aberration,” as used in this disclosure, means an abnormality, an anomaly, a deformity, a malformation, a defect, a fault, a delamination, an airgap, a dent, a scratch, a crack, a hole, a discoloration, or an otherwise damaged portion or area of an asset that could have a negative or undesirable effect on the performance, durability, or longevity of the asset 10.
The term “backbone,” as used in this disclosure, means a transmission medium that interconnects one or more computing devices or communicating devices to provide a path that conveys data signals and instruction signals between the one or more computing devices or communicating devices. The backbone can include a bus or a network. The backbone can include Ethernet TCP/IP. The backbone can include a distributed backbone, a collapsed backbone, a parallel backbone or a serial backbone.
The term “bus,” as used in this disclosure, means any of several types of bus structures that can further interconnect to a memory bus (with or without a memory controller), a peripheral bus, or a local bus using any of a variety of commercially available bus architectures. The term “bus” can include a backbone.
The term “communicating device,” as used in this disclosure, means any hardware, firmware, or software that can transmit or receive data packets, instruction signals, data signals or radio frequency signals over a communication link. The communicating device can include a computer or a server. The communicating device can be portable or stationary.
The term “communication link,” as used in this disclosure, means a wired or wireless medium that conveys data or information between at least two points. The wired or wireless medium can include, for example, a metallic conductor link, a radio frequency (RF) communication link, an infrared (IR) communication link, or an optical communication link. The RF communication link can include, for example, WiFi, WiMAX, IEEE 802.11, DECT, 0G, 1G, 2G, 3G, 4G, or 5G cellular standards, or Bluetooth. A communication link can include, for example, an RS-232, RS-422, RS-485, or any other suitable serial interface.
The terms “computer,” “computing device,” or “processor,” as used in this disclosure, means any machine, device, circuit, component, or module, or any system of machines, devices, circuits, components, or modules that are capable of manipulating data according to one or more instructions. The terms “computer,” “computing device” or “processor” can include, for example, without limitation, a processor, a microprocessor (μC), a central processing unit (CPU), a graphic processing unit (GPU), an application specific integrated circuit (ASIC), a general purpose computer, a super computer, a personal computer, a laptop computer, a palmtop computer, a notebook computer, a desktop computer, a workstation computer, a server, a server farm, a computer cloud, or an array or system of processors, μCs, CPUs, GPUs, ASICs, general purpose computers, super computers, personal computers, laptop computers, palmtop computers, notebook computers, desktop computers, workstation computers, or servers.
The terms “computing resource” or “computer resource,” as used in this disclosure, means software, a software application, a web application, a web page, a computer application, a computer program, computer code, machine executable instructions, firmware, or a process that can be arranged to execute on a computing device or a communicating device.
The term “computing resource process,” as used in this disclosure, means a computing resource that is in execution or in a state of being executed on an operating system of a computing device. Every computing resource that is created, opened or executed on or by the operating system can create a corresponding “computing resource process.” A “computing resource process” can include one or more threads, as will be understood by those skilled in the art.
The terms “computer resource asset” or “computing resource asset,” as used in this disclosure, means a computing resource, a computing device or a communicating device, or any combination thereof.
The term “computer-readable medium,” as used in this disclosure, means any non-transitory storage medium that participates in providing data (for example, instructions) that can be read by a computer. Such a medium can take many forms, including non-volatile media and volatile media. Non-volatile media can include, for example, optical or magnetic disks and other persistent memory. Volatile media can include dynamic random-access memory (DRAM). Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. The computer-readable medium can include a “cloud,” which can include a distribution of files across multiple (e.g., thousands of) memory caches on multiple (e.g., thousands of) computers.
Various forms of computer readable media can be involved in carrying sequences of instructions to a computer. For example, sequences of instruction (i) can be delivered from a RAM to a processor, (ii) can be carried over a wireless transmission medium, or (iii) can be formatted according to numerous formats, standards or protocols, including, for example, WiFi, WiMAX, IEEE 802.11, DECT, 0G, 1G, 2G, 3G, 4G, or 5G cellular standards, or Bluetooth.
The term “database,” as used in this disclosure, means any combination of software or hardware, including at least one computing resource or at least one computer. The database can include a structured collection of records or data organized according to a database model, such as, for example, but not limited to, at least one of a relational model, a hierarchical model, or a network model. The database can include a database management system application (DBMS). The at least one application may include, but is not limited to, a computing resource such as, for example, an application program that can accept connections to service requests from communicating devices by sending back responses to the devices. The database can be configured to run the at least one computing resource, often under heavy workloads, unattended, for extended periods of time with minimal or no human direction.
The terms “including,” “comprising” and their variations, as used in this disclosure, mean “including, but not limited to,” unless expressly specified otherwise.
The term “network,” as used in this disclosure means, but is not limited to, for example, at least one of a personal area network (PAN), a local area network (LAN), a wireless local area network (WLAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a global area network (GAN), a broadband area network (BAN), a cellular network, a storage-area network (SAN), a system-area network, a passive optical local area network (POLAN), an enterprise private network (EPN), a virtual private network (VPN), the Internet, or the like, or any combination of the foregoing, any of which can be configured to communicate data via a wireless and/or a wired communication medium. These networks can run a variety of protocols, including, but not limited to, for example, Ethernet, IP, IPX, TCP, UDP, SPX, IRC, HTTP, FTP, Telnet, SMTP, DNS, ARP, or ICMP.
The term “server,” as used in this disclosure, means any combination of software or hardware, including at least one computing resource or at least one computer, to perform services for connected communicating devices as part of a client-server architecture. The at least one server application can include, but is not limited to, a computing resource such as, for example, an application program that can accept connections to service requests from communicating devices by sending back responses to the devices. The server can be configured to run the at least one computing resource, often under heavy workloads, unattended, for extended periods of time with minimal or no human direction. The server can include a plurality of computers configured to work together, with the at least one computing resource being divided among the computers depending upon the workload. For example, under light loading, the at least one computing resource can run on a single computer. However, under heavy loading, multiple computers can be required to run the at least one computing resource. The server, or any of its computers, can also be used as a workstation.
The term “transmission” or “transmit,” as used in this disclosure, means the conveyance of data, data packets, computer instructions, or any other digital or analog information via electricity, acoustic waves, light waves or other electromagnetic emissions, such as those generated with communications in the radio frequency (RF) or infrared (IR) spectra. Transmission media for such transmissions can include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor.
The terms “UT scan image” or “UT image,” as used in this disclosure, means an ultrasound image of an asset or a section of an asset under observation, such as, for example, an ultrasound scan or ultrasound image captured or recorded by a pulse-echo transducer device, pitch-catch transducer device, phased array transducer device, composite transducer array device, or any other type of transducer device or technology capable of capturing or recording ultrasound images or scans of the asset or section of asset under observation.
The term “UT image frame,” as used in this disclosure, means ultrasound image data for an area or section under observation of an asset under inspection, comprising image data that can be rendered as a one-dimensional image (for example, single line with varying brightness), two-dimensional image (as seen in
Devices that are in communication with each other need not be in continuous communication with each other unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
Although process steps, method steps, or algorithms may be described in a sequential or a parallel order, such processes, methods and algorithms may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described in a sequential order does not necessarily indicate a requirement that the steps be performed in that order; some steps may be performed simultaneously. Similarly, if a sequence or order of steps is described in a parallel (or simultaneous) order, such steps can be performed in a sequential order. The steps of the processes, methods or algorithms described in this specification may be performed in any order practical.
When a single device or article is described, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described, it will be readily apparent that a single device or article may be used in place of the more than one device or article. The functionality or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality or features.
The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes can be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the invention encompassed by the present disclosure, which is defined by the set of recitations in the following claims and by structures and functions or steps which are equivalent to these recitations.