Aspects of the present disclosure relate to anomaly detection, and, more specifically, to performing image-based anomaly detection using feature matching models.
Early identification of defects in equipment or components is important for ensuring the safety and reliability of airline operations. Non-Destructive Inspection (NDI) is a preferred method for this purpose because it can evaluate a component's properties without causing any damage to the original part. For example, when inspecting a component within an aircraft, such as a wing panel, NDI techniques may deploy ultrasonic waves into the component to detect faults within the material.
The present disclosure provides a method in one aspect, the method including receiving a plurality of normal images, generating a plurality of patch features by processing each of the plurality of normal images, generating a coreset comprising one or more coreset samples, where the one or more coreset samples are selected from the plurality of patch features, receiving a test image, generating one or more test patch features by processing the test image, and generating an anomaly score by comparing the one or more test patch features with the one or more coreset samples.
Other aspects of this disclosure provide one or more non-transitory computer-readable media containing, in any combination, computer program code that, when executed by the operation of a computer system, performs operations in accordance with one or more of the above methods, as well as systems comprising one or more computer processors and one or more memories containing computer-executable instructions that, when executed by the one or more computer processors, perform operations in accordance with one or more of the above methods.
So that the manner in which the above-recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example aspects, some of which are illustrated in the appended drawings.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially used in other aspects without specific recitation.
The present disclosure relates to anomaly detection, and more specifically, to detecting anomalies using models trained with a feature matching approach.
Many non-destructive inspection (NDI) techniques produce a vast amount of highly complex data that is challenging to evaluate. For example, ultrasonic techniques may generate images or other visual representations of the internal structure of a component, but faults or other concerns within such data are often difficult or impossible to visually detect. The present disclosure provides methods and techniques for detecting anomalies in images obtained from NDI scanning or other similar techniques.
In some aspects, before performing anomaly detection on test images, feature matching techniques may be incorporated to pre-train a model for anomaly detection using a normal dataset. As used herein, the normal dataset may consist of images (or other visual representations) obtained through NDI or other scanning methods that depict a component without any defects. The pre-training may ensure that the model establishes a clear baseline of what constitutes a “normal” state for the component, allowing the model to distinguish anomalies within the test images more efficiently and accurately. In some aspects, to further improve the accuracy and reliability of the anomaly detection models, noisy data within the normal dataset may be identified and removed early in the training process. The present disclosure introduces methods and techniques for denoising the normal dataset, at the image level, the patch level, or both, before incorporating feature matching techniques for pre-training the anomaly detection models. By implementing the two-tiered denoising process, in some aspects, the anomaly detection model can capture the inherent patterns or characteristics of the normal dataset more accurately, and thus improve the model's efficiency and precision in detecting anomalies.
In the illustrated example, a set of normal images 105 (e.g., depicting a standard or normal aircraft wing panel or other component without defects) is provided to an anomaly detection model 115 for pretraining. Upon receiving the normal images 105, the model 115 performs feature extraction to generate a coreset 120. In some aspects, the coreset 120 may comprise one or more coreset samples selected from a large collection of patch features extracted from normal images. Each patch feature may capture specific patterns and/or characteristics of the normal images. These coreset samples may be selected based on their representativeness and significance within the overall normal dataset, and may serve as baselines for anomaly detection. As illustrated, the coreset 120 is stored in a memory bank 125 or a database, and can be quickly accessed by the anomaly detection model 115 when a test image is received.
In the illustrated example, a test image 110 (e.g., depicting an aircraft wing panel with potential defects) is introduced for inspection. The anomaly detection model 115 processes the test image 110, extracting patch features from the image and comparing them to the coreset 120 stored in the memory bank 125. The anomaly detection model 115 then identifies the nearest coreset samples aligned with the test patch features. Based on the degree of deviation/dissimilarity between the test patch features and the nearest coreset samples, the anomaly detection model 115 determines the likelihood of the presence of anomalies within the test image (e.g., cracks or corrosion in the wing panel). The deviation may be measured by various distance metrics, such as Euclidean distance or cosine similarity, or other predefined criteria. In some aspects, the larger the deviation, the higher the likelihood that the test image is abnormal and/or contains anomalies.
As illustrated, the anomaly detection model 115 generates anomaly outputs 130 for the test image. In some aspects, the anomaly outputs 130 may include a determination indicating whether the test image 110 is normal (e.g., the wing panel within the test image is normal) or contains anomalies (e.g., the wing panel within the test image has defects). In some aspects, the anomaly outputs 130 may also include an anomaly score, which quantifies the likelihood that the test image contains or depicts anomalies. In some aspects, the anomaly outputs 130 may include a visual representation of the test image (e.g., a heat map), with different colors highlighting areas with varying likelihood of containing anomalies. For example, areas, patches, and/or pixels with a high likelihood of containing anomalies may be highlighted in red, whereas areas, patches, and/or pixels with a moderate likelihood of containing anomalies may be highlighted in green or yellow, and areas, patches, and/or pixels with a low likelihood of containing anomalies may be highlighted in blue.
In some aspects, the normal images 105 and the test image 110 (e.g., for an aircraft wing panel) may be generated using advanced techniques such as Non-Destructive Inspection (NDI). When NDI is used to inspect a component (e.g., an aircraft wing panel or an aircraft engine), ultrasonic waves may be emitted by a testing component. As these ultrasonic waves travel through the component, they may interact with the internal structures or materials of the component. For example, the ultrasonic waves may be reflected, refracted, or absorbed by the internal structures or materials. Based on the received signals (received by the testing component or by a different component), images (e.g., normal images 105 and test image 110) depicting or representing the internal structure of the component may be generated. The received signals may refer to the returning ultrasonic waves following the interactions with the internal structures or materials. Variations in the received signals, which may be caused by inconsistencies or potential defects (e.g., cracks, corrosion, deformations) within the components, may be reflected as visual variances in these images.
The anomaly detection model 115 may include various computing modules, each configured for a different task, such as the feature extraction module 315 of
In the illustrated example, a plurality of normal images 205 are received for pretraining an anomaly detection model (e.g., 115 of
As illustrated, the image-level denoising begins by dividing the normal images 205 into three folds or groups, 205-1, 205-2, and 205-3. Each fold contains an equal number of images. The three folds of normal images are then used for cross-validation.
In the first iteration of the cross-validation process, fold 1 (205-1) is assigned as the validation dataset, and fold 2 (205-2) and fold 3 (205-3) are grouped together as the training dataset. During the first iteration, fold 2 and fold 3 are provided to train the outlier detection model 210 (e.g., PaDim). During the training phase, the model 210 assumes that the images within the training dataset are normal, and learns the common patterns and features from these images. Once the training phase ends, the trained outlier detection model 210 is deployed to process the images from fold 1 within the validation dataset. During the validation phase, the trained model 210 compares the patterns and features of a test image (e.g., from fold 1 in the validation dataset) with what it has learned from the training dataset. Based on the comparison, the model 210 generates an outlier probability (also referred to in some aspects as an outlier probability score) for each image within the validation dataset. The value of the outlier probability indicates how strongly an image deviates from what the model has learned as normal based on the training dataset. A high outlier probability (e.g., above a defined threshold) may suggest that the image is ambiguous or potentially anomalous. At the end of the first iteration, a set of outlier probabilities 215-1 for images in fold 1 is generated.
As illustrated, in the second iteration of the cross-validation process, fold 1 (205-1) and fold 3 (205-3) are combined as the training dataset, and fold 2 (205-2) is designated as the validation dataset. Following a similar procedure as the first iteration, the outlier detection model 210 is trained on the images from fold 1 and fold 3. The model 210, again, learns the patterns and features from the training dataset, and recognizes them as “normal” patterns for outlier detection. During the validation phase, the trained model 210 processes each image from fold 2 (205-2), and assigns an outlier probability to each image based on its deviation from the learned normal patterns. A set of outlier probabilities 215-2 for the images in fold 2 is generated at the end of the second iteration.
In the third iteration, fold 1 (205-1) and fold 2 (205-2) are aggregated and assigned as the training dataset, and fold 3 (205-3) is set as the validation dataset. The outlier detection model 210 is trained with the combined images from fold 1 and fold 2, and then deployed on the images from fold 3 to generate a set of outlier probabilities 215-3. With the third iteration complete, the outlier probabilities for each image in all three folds (e.g., 205-1, 205-2 and 205-3) have been generated.
The illustrated example, which divides the normal images into three groups or folds, is provided merely for conceptual clarity. In some aspects, the normal images may be divided into any number (e.g., N) of folds or groups. The cross-validation process may be iterated any number (e.g., N) of times to obtain a full list of outlier probabilities for all images within the normal dataset.
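By way of a non-limiting sketch, the N-fold cross-validation described above may be expressed in Python as follows. The detector interface (model_factory, fit, score) is an illustrative assumption standing in for PaDim or any comparable outlier detection model; only scikit-learn's KFold splitter is a real library call.

```python
# A minimal sketch of the N-fold cross-validation, assuming a generic detector
# with fit()/score() methods. `model_factory` and the detector interface are
# illustrative stand-ins for PaDim or a comparable model, not a real library API.
import numpy as np
from sklearn.model_selection import KFold

def image_outlier_probabilities(images, model_factory, n_folds=3):
    """Assign every normal image an outlier probability via cross-validation."""
    probs = np.empty(len(images), dtype=float)
    kfold = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train_idx, val_idx in kfold.split(images):
        detector = model_factory()                        # fresh detector per iteration
        detector.fit(images[train_idx])                   # training folds assumed normal
        probs[val_idx] = detector.score(images[val_idx])  # higher = more outlying
    return probs
```

Training a fresh detector in each iteration ensures that every image is scored by a model that never saw it during training, which is what allows the outlier probabilities to flag noisy members of the nominally normal dataset.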
In the illustrated example, following the cross-validation process, the outlier probabilities 215-1, 215-2 and 215-3 from all three iterations are concatenated into a single list 220. The list 220 includes all images from the normal dataset 205 and their corresponding outlier probabilities. The images are then arranged in descending order based on their outlier probabilities, where those with the highest outlier probabilities are placed at the front of the sorted list 225.
In the illustrated example, an established criterion is applied to the sorted list 225, in order to filter out noisy data from the normal dataset. For example, in some aspects, the criterion may call for the removal of the top n % of images with the highest outlier probabilities (e.g., 230). The value n can be adjusted based on specific requirements of the denoising system, the quality and characteristics of the normal dataset, and other relevant factors. By applying the criterion, images that are most likely to be ambiguous or potentially anomalous are removed. The remaining images 235 are then provided for subsequent pretraining of the anomaly detection model. In some aspects, the system may set a threshold for the outlier probability. Images whose probabilities are equal to or exceed the threshold (e.g., 230) may be removed from further processing.
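Both filtering variants described above (percentage-based and threshold-based) may be sketched as follows; the function name and its defaults are illustrative assumptions rather than part of any established API.

```python
import numpy as np

def filter_noisy_images(images, probs, top_fraction=None, threshold=None):
    """Remove the top fraction of images by outlier probability, or every image
    whose probability is equal to or exceeds a fixed threshold."""
    images = np.asarray(images)
    order = np.argsort(probs)[::-1]                  # descending, as in sorted list 225
    if top_fraction is not None:
        drop = set(order[: int(len(images) * top_fraction)])
    else:
        drop = {i for i, p in enumerate(probs) if p >= threshold}
    keep = [i for i in range(len(images)) if i not in drop]
    return images[keep]                              # remaining images 235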
In the illustrated example, a dataset of normal images 305 (which may correspond to the filtered normal images 235 of
In the illustrated example, the segmented images (e.g., 310-1) are passed through a feature extraction module 315 to extract high-dimensional feature maps (e.g., from 320-1-1 to 320-1-N) with dimensions of (C×H′×W′). These high-dimensional feature maps are then processed by a dimension reduction module 325 to generate feature maps with a reduced dimension of (C×H×W) (where H is less than H′ and/or W is less than W′) (e.g., from 330-1-1 to 330-1-N). A total of (N×n) feature maps 330 are generated for the normal dataset (e.g., from 330-1-1 to 330-n-N). In some aspects, the feature extraction module 315 may include one or more convolutional layers followed by activation functions (e.g., a convolutional neural network (CNN) or similar deep learning model), which are designed to capture hierarchical patterns and features within the normal images. In some aspects, the dimension reduction module 325 may include one or more pooling layers (e.g., max pooling or average pooling), which are configured to reduce the spatial dimensions of the feature maps derived from the normal images.
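As a hedged illustration of the feature extraction module 315 and the dimension reduction module 325, the following PyTorch sketch pairs a small convolutional stack with average pooling. The channel count and layer depth are placeholder assumptions, as the disclosure does not prescribe a specific architecture.

```python
# Placeholder architecture, assuming PyTorch; the disclosure specifies only
# convolutional layers with activations (module 315) followed by pooling
# (module 325), so the channel count and layer depth here are assumptions.
import torch
import torch.nn as nn

class PatchBackbone(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.extract = nn.Sequential(                # module 315: (3, H', W') -> (C, H', W')
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.reduce = nn.AvgPool2d(kernel_size=2)    # module 325: halves H' and W'

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.reduce(self.extract(x))          # (B, C, H, W)
```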
In the illustrated example, the feature maps 330 with reduced dimensionality are provided to a patch feature generation module 335, which compiles patch features for each position across the segmented images. Given that each feature map 330 contains (H×W) positions, and each of the N normal images is divided into n subsections, a total of (n×H×W) positions are identified. For each position within the normal images, a set of patch features (e.g., from 340-1 to 340-P) with a dimension of (N×C) is generated.
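The regrouping performed by the patch feature generation module 335 may be sketched as a tensor reshape, assuming the (N×n) reduced feature maps are stacked image-major; the function and variable names are illustrative assumptions.

```python
import torch

def build_patch_feature_sets(maps: torch.Tensor, N: int, n: int) -> torch.Tensor:
    """Regroup (N*n, C, H, W) feature maps into (n*H*W, N, C): one (N, C) set of
    patch features per spatial position, assuming maps are stacked image-major."""
    _, C, H, W = maps.shape
    maps = maps.reshape(N, n, C, H, W)
    # Bring (subsection, row, column) to the front, then flatten the positions.
    return maps.permute(1, 3, 4, 0, 2).reshape(n * H * W, N, C)
```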
Turning to
In the illustrated example, the patch feature denoising module 355 aggregates the outlier scores for all patch features (from all sets 340) to create a list 360. The patch features are then sorted in descending order based on their corresponding outlier scores (e.g., depicted by the sorted list 365). A filter criterion is then applied to the list 365 to perform patch-level noise elimination. The criterion may be percentage-based (such as removing the top n % of patch features with the highest outlier scores) or threshold-based (such as removing patch features whose outlier scores are equal to or exceed a defined threshold). Following the implementation of the filter criterion, patch features 370 having higher outlier scores (e.g., that are more likely to be noisy) are removed, and the refined or filtered patch features 375 are then provided to a coreset subsampling module 380 for coreset selection.
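A minimal sketch of this patch-level denoising, assuming the pooled variant in which the patch features from all sets 340 are scored together using a density-based local outlier factor (LOF) method, may read as follows. scikit-learn's LocalOutlierFactor is a real estimator; the function name and the 5% default are assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def denoise_patch_features(features: np.ndarray, top_fraction: float = 0.05):
    """features: (num_patches, C). Returns the kept features and their scores."""
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(features)
    scores = -lof.negative_outlier_factor_           # ~1 = inlier, >>1 = outlier
    keep = np.argsort(scores)[: int(len(features) * (1.0 - top_fraction))]
    return features[keep], scores[keep]
```

The per-set variant described below would simply call such a function once for each set of patch features rather than once over the pooled matrix.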
In some aspects, instead of pooling the patch features from all sets together, the patch features may be projected individually for each set. For example, the patch feature projection module 345 may project the patch feature set (e.g., 340-1) for a specific position into a feature space 350, and an outlier score may be computed for each patch feature within the specific set (e.g., 340-1). Based on the sorted outlier scores, the patch feature denoising module 355 may apply a filter criterion to further refine the patch features within the specific set (e.g., 340-1). For example, the module 355 may remove the top n % of patch features with the highest scores within the set (e.g., 340-1). The process may be repeated for each set of patch features (e.g., from 340-1 to 340-P), to ensure each one is individually analyzed and denoised. In some aspects, an outlier in one set may be a normal feature in another given the varied characteristics of each set. By processing each set of patch features individually, the patch feature denoising component 355 may focus on more localized noise elimination (e.g., denoising patch features for a specific position). This approach may improve the accuracy and efficacy of the noise removal process.
In the illustrated example, after the patch-level denoising, the refined or filtered patch features 375 are transmitted to a coreset subsampling module 380 for coreset selection. The coreset selection process aims to identify a subset of data points (or patch features) that can effectively and accurately represent the overall patterns or characteristics of the normal dataset (e.g., normal images 305). Various methods may be used for coreset selection, including but not limited to unsupervised machine learning (ML) models (e.g., k-means clustering) or greedy algorithms. In some aspects, such as when the k-means clustering model is utilized, the coreset subsampling module 380 may categorize the filtered patch features 375 into several clusters, and select the centroids of each respective cluster as the coreset samples.
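The k-means variant of coreset subsampling mentioned above may be sketched as follows; the number of clusters k and the random seed are illustrative assumptions, and a greedy farthest-point selection would be a drop-in alternative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_coreset(features: np.ndarray, k: int = 512) -> np.ndarray:
    """features: (num_patches, C) -> coreset of k centroid vectors (k, C)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    return km.cluster_centers_                       # coreset samples 390-1
```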
In the illustrated example, the selected coreset samples (collectively referred to as a coreset 390-1), along with their corresponding outlier scores 390-2, are stored in a memory bank 385 (which may correspond to the memory bank 125 of
In the illustrated example, a test image 405 is provided for inspection. As illustrated, the test image depicts an aircraft wing panel that may or may not have any defects (e.g., cracks, corrosion, deformations, holes, or punctures). The test image 405 is divided into multiple subsections or patches (e.g., from 410-1 to 410-n). In some aspects, the number of subsections in the test image corresponds to the number of subsections into which the normal images are divided. For example, if the normal images are segmented into 10 subsections, the test image may also be divided into 10 subsections for consistency in comparison. As illustrated, a total of (1×n) segmented test images are generated.
In the illustrated example, feature maps (e.g., from 415-1 to 415-n) for each of the segmented test images are generated. In some aspects, the segmented test images may first pass through a feature extraction module (e.g., 315 of
In the illustrated example, the test feature maps (e.g., from 415-1 to 415-n) are then provided to a patch feature generation module (e.g., 335 of
In the illustrated example, after the nearest coreset samples 430 are identified for each patch feature 420, the nearest neighbor search module 425 generates an anomaly score 435 for each subsection 410 and/or for the test image 405. For example, in some aspects, the module 425 may first compare each patch feature (e.g., 420-1) with its closest coreset sample (e.g., 430) to generate a distance (e.g., Euclidean distance or cosine similarity). For each patch feature 420, the distance may then be weighted by the outlier score 390-2 of its corresponding coreset sample. These weighted distances are then aggregated to produce the final anomaly score 435 for the test image 405. In some aspects, the anomaly score may range from 0 to 1. The higher the anomaly score, the higher the likelihood that the test image contains anomalies (e.g., representing that the wing panel within the test image is abnormal and contains potential defects).
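A rough sketch of this scoring procedure, assuming Euclidean distance and max-aggregation, may read as follows; both the direction of the outlier-score weighting and the aggregation rule are left open by the description above and are treated here as design choices.

```python
import numpy as np

def anomaly_score(test_feats, coreset, coreset_scores):
    """test_feats: (P, C); coreset: (K, C); coreset_scores: (K,)."""
    dists = np.linalg.norm(test_feats[:, None, :] - coreset[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)                   # closest coreset sample per patch
    weighted = dists[np.arange(len(test_feats)), nearest] * coreset_scores[nearest]
    return float(weighted.max())                     # image-level anomaly score 435
```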
In some aspects, the anomaly score 435 may be compared with a defined threshold. If the anomaly score is equal to or exceeds the threshold, it suggests that the test image contains anomalies, and an inspection decision may be generated indicating that the wing panel within the test image is abnormal and/or contains potential defects. If the score falls below the threshold, it suggests that the test image is normal, and an inspection decision may be generated stating that the wing panel within the test image is normal. In some aspects, upon determining that the test image 405 contains anomalies (e.g., the score surpasses the threshold), a visual representation of the test image may be generated with areas of concern highlighted (e.g., areas or patches with a high probability of containing anomalies are marked as red). These highlighted regions provide a clear and direct view of potential defect locations on the wing panel, and can help inspectors or relevant maintenance personnel to focus their attention on these specific areas for further evaluation and analysis.
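The highlighted view described above may be rendered, for example, with a standard colormap in which red marks high likelihoods and blue marks low likelihoods; the grid shape and the choice of colormap are assumptions of this sketch.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_heatmap(patch_scores: np.ndarray, grid_shape: tuple) -> None:
    """Draw per-patch anomaly likelihoods; red = high, blue = low."""
    plt.imshow(patch_scores.reshape(grid_shape), cmap="jet", vmin=0.0, vmax=1.0)
    plt.colorbar(label="anomaly likelihood")
    plt.title("Inspection heat map")
    plt.show()
```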
As illustrated, the input window 505 of the anomaly detection model provides a streamlined interface for users. The input window 505 includes an “Upload” button 515, a “Reset” button 520, and an “Inspection” button 525. The “Upload” button 515 allows users to upload their test images into the system. Once clicked, in some aspects, the system may open a file explorer window from which the user can choose an image file. Once a test image is uploaded, the image may be displayed within the input window (e.g., 535), which allows the users to visually confirm the image they have uploaded. Once the image is uploaded and confirmed, users may click on the “Inspection” button 525 to initiate the anomaly detection process. The results will be displayed in the output window 510. The “Reset” button 520 offers a simple way for users to clear the uploaded test images and any related results. Below the three buttons, the input window also includes a block 530 that displays the name or version of the model being used.
As illustrated, the output window 510 is designed to display the anomaly detection results to users. The first block 540 in the output window 510 is used to display the inspection result. For example, depending on the model's assessment, the block 540 may include text such as “Inspection Result: Abnormal” if anomalies are detected within the test image. If the test image is determined to be normal, the block 540 may include text such as “Inspection Result: Normal.” The second block 545 in the output window 510 shows the anomaly score generated for the uploaded test image. For example, the second block 545 may include “Inspection Score: 0.4323.” The anomaly score displayed within the second block 545 may provide users with a direct insight into the degree to which the test image deviates from the normal dataset. The third block 550 in the output window 510 displays the name or version of the model being used. Below the third block 550, the fourth block 555 displays the test image where areas with a high probability of containing anomalies 560 are highlighted. In some aspects, areas with varying probabilities within the test image may be highlighted using different colors. For example, areas with a high probability 560 may be marked in red, areas with a moderate probability may be colored green or yellow, and areas with a low probability may be colored blue.
In some aspects, users may upload multiple images collectively. For example, after clicking the “Upload” button 515 within the input window 505, a file explorer window is presented, allowing users to choose a batch of images from their files. Depending on system capabilities, the uploaded images may be accessed and processed either sequentially (handling one image at a time), or in parallel (processing multiple images simultaneously). Once the processing is complete, in some aspects, the system may aggregate and display the final results collectively in the output window 510.
In some aspects, the example method 600 may be performed by one or more computing devices, such as the anomaly detection model 115 as illustrated in
At block 605, a computing system (e.g., 115 of
At block 610, the computing system evaluates each normal image (e.g., 205 of
At block 615, the computing system checks if every image within the received set of normal images (e.g., 205 of
At block 620, the computing system performs image-level denoising based on the generated outlier probability scores for images within the received normal dataset. In some aspects, the computing system may first arrange the images according to their outlier probability scores in descending order. Images with relatively high scores may be placed at the front of the list (e.g., 225 of
At block 625, after image-level denoising, the cleaned or refined normal images (e.g., 235 of
At block 630, the computing system processes the refined normal images (e.g., 305 of
At block 635, the computing system calculates outlier scores for each patch feature using a density-based approach (e.g., LOF). The LOF algorithm measures the local density deviation of a patch feature from its neighbors. A LOF score for a patch feature that is significantly larger than 1 suggests that this patch feature is in a sparser region (within the feature space) compared to its neighbors, which may indicate that the patch is an outlier. In contrast, a LOF score that is closer to 1 indicates that the patch feature has a similar density to its surrounding neighbors. After the calculation, each patch feature is associated with a respective outlier score that represents its deviation from the norm.
At block 640, the computing system performs patch-level denoising. In some aspects, the computing system may first sort all patch features in descending order based on their corresponding outlier scores. Patch features at the front of the ordered list (e.g., 365 of
At block 645, the denoised patch features (e.g., 375 of
At block 650, the computing system saves the identified coreset samples (e.g., 390-1 of
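Composing the hypothetical helpers sketched earlier in this description, the pretraining flow of blocks 605 through 650 may be outlined as follows; segment() and the convention for attaching outlier scores to k-means centroids are assumptions not specified by the method itself.

```python
import numpy as np

def pretrain(images, model_factory, backbone, n_sections):
    probs = image_outlier_probabilities(images, model_factory)      # blocks 605-615
    clean = filter_noisy_images(images, probs, top_fraction=0.05)   # block 620
    # segment() is a hypothetical helper returning (len(clean)*n, 3, H', W') patches.
    maps = backbone(segment(clean, n_sections))                     # blocks 625-630
    sets = build_patch_feature_sets(maps, len(clean), n_sections)
    flat = sets.reshape(-1, sets.shape[-1]).detach().numpy()        # pool all sets
    kept, lof_scores = denoise_patch_features(flat)                 # blocks 635-640
    coreset = build_coreset(kept)                                   # block 645
    # Block 650: pair each coreset sample with the LOF score of its nearest kept
    # patch feature (one plausible convention; the method leaves this open).
    nearest = np.linalg.norm(kept[None] - coreset[:, None], axis=-1).argmin(axis=1)
    return coreset, lof_scores[nearest]
```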
In some aspects, the example method 700 may be performed by one or more computing devices, such as the anomaly detection model 115 as illustrated in
At block 705, a computing system (e.g., 115 of
At block 710, the computing system divides the test image into multiple sections (e.g., from 410-1 to 410-n). The segmentation is optional and may allow the system to perform more granular and localized anomaly detection by assessing each smaller section of the test image independently. In some aspects, the test image may be divided in the same manner and into the same number of sections as the normal images during the pretraining phase. By doing so, it may ensure consistency in the subsequent feature extraction and nearest neighbor search processes.
At block 715, the computing system extracts patch features from the test image. For example, in some aspects, the computing system may first pass the test image or its segmented sections through a feature extraction module (e.g., 315 of
At block 720, the computing system accesses the memory bank (e.g., 385 of
At block 725, for each patch feature, the computing system computes the distance between the feature and its nearest coreset sample. Various distance metrics, such as Euclidean distance or cosine similarity, may be used based on the specific implementation.
At block 730, the computing system aggregates the computed distances and generates an anomaly score (e.g., 435 of
At block 735, the computing system compares the anomaly score (e.g., 435 of
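Similarly, the inference flow of blocks 705 through 735 may be outlined by reusing the same hypothetical helpers; the 0.5 threshold is an assumption for illustration only.

```python
def inspect_image(test_image, backbone, coreset, coreset_scores, n_sections,
                  threshold=0.5):
    maps = backbone(segment(test_image, n_sections))           # blocks 710-715
    feats = build_patch_feature_sets(maps, 1, n_sections)      # a single test image
    flat = feats.reshape(-1, feats.shape[-1]).detach().numpy()
    score = anomaly_score(flat, coreset, coreset_scores)       # blocks 720-730
    decision = "Abnormal" if score >= threshold else "Normal"  # block 735
    return decision, score
```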
At block 805, a computing system (e.g., 115 of
At block 810, the computing system generates a plurality of patch features (e.g., 340 of
At block 815, the computing system generates a coreset (e.g., 390-1 of
At block 820, the computing system receives a test image (e.g., 405 of
At block 825, the computing system generates one or more test patch features (e.g., 420 of
At block 830, the computing system generates an anomaly score (e.g., 435 of
In some aspects, the computing system may further divide each of the plurality of normal images (e.g., 305 of
In some aspects, the computing system may further generate an outlier probability for each of the plurality of normal images, comprising dividing the plurality of normal images into a plurality of groups (e.g., 205-1, 205-2, and 205-3 of
In some aspects, the computing system may, upon determining that one or more normal images from the plurality of normal images have outlier probabilities that meet a first criteria, generate a subset of normal images (e.g., 235 of
In some aspects, each respective coreset sample, within the one or more coreset samples, may be stored into a database (e.g., 385 of
In some aspects, the computing system may further divide the test image (e.g., 405 of
As illustrated, the computing device 900 includes a CPU 905, memory 910, storage 915, one or more network interfaces 925, and one or more I/O interfaces 920. In the illustrated aspect, the CPU 905 retrieves and executes programming instructions stored in memory 910, as well as stores and retrieves application data residing in storage 915. The CPU 905 is generally representative of a single CPU and/or GPU, multiple CPUs and/or GPUs, a single CPU and/or GPU having multiple processing cores, and the like. The memory 910 is generally included as being representative of a random access memory. Storage 915 may be any combination of disk drives, flash-based storage devices, and the like, and may include fixed and/or removable storage devices, such as fixed disk drives, removable memory cards, caches, optical storage, network attached storage (NAS), or storage area networks (SAN).
In some aspects, I/O devices 935 (such as keyboards, monitors, etc.) are connected via the I/O interface(s) 920. Further, via the network interface 925, the computing device 900 can be communicatively coupled with one or more other devices and components (e.g., via a network, which may include the Internet, local network(s), and the like). As illustrated, the CPU 905, memory 910, storage 915, network interface(s) 925, and I/O interface(s) 920 are communicatively coupled by one or more buses 930.
In the illustrated aspect, the memory 910 includes an image denoising component 950, a patch feature generation component 955, a patch feature denoising component 960, a coreset subsampling component 965, and a nearest neighbor search component 970. Although depicted as discrete components for conceptual clarity, in some aspects, the operations of the depicted components (and others not illustrated) may be combined or distributed across any number of components. Further, although depicted as software residing in memory 910, in some aspects, the operations of the depicted components (and others not illustrated) may be implemented using hardware, software, or a combination of hardware and software.
In the illustrated aspect, the image denoising component 950 is configured to remove noisy images from the received normal dataset before they are provided for pretraining an anomaly detection model (e.g., 115 of
In the illustrated example, the patch feature generation component 955 is configured to process the received images (e.g., normal images 305 of
In the illustrated example, the patch feature denoising component 960 is designed to further refine the patch features extracted from the normal images (e.g., by removing noisy patch features) before they are provided for coreset subsampling. In some aspects, before performing denoising, the patch feature denoising component 960 may first generate an outlier score for each patch feature using a density-based method (e.g., LOF). With the LOF approach, the patch feature denoising component 960 may first project the patch features into a shared feature space (e.g., 350 of
In the illustrated aspect, the coreset subsampling component 965 selects coreset samples from the refined set of patch features (e.g., 375 of
In the illustrated aspect, the nearest neighbor search component 970 is configured to measure the similarity between a test image (received during model inference) (e.g., 405 of
In the illustrated example, the storage 915 may include the coreset samples 975, the outlier scores 980 associated with these coreset samples, and the historical anomaly scores 985 generated for test images. In some aspects, the aforementioned data may be saved in a remote database (e.g., 125 of
In the current disclosure, reference is made to various aspects. However, it should be understood that the present disclosure is not limited to specific described aspects. Instead, any combination of the following features and elements, whether related to different aspects or not, is contemplated to implement and practice the teachings provided herein. Additionally, when elements of the aspects are described in the form of “at least one of A and B,” it will be understood that aspects including element A exclusively, including element B exclusively, and including element A and B are each contemplated. Furthermore, although some aspects may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given aspect is not limiting of the present disclosure. Thus, the aspects, features, and advantages disclosed herein are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
As will be appreciated by one skilled in the art, aspects described herein may be embodied as a system, method or computer program product. Accordingly, aspects may take the form of an entirely hardware aspect, an entirely software aspect (including firmware, resident software, micro-code, etc.) or an aspect combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects described herein may take the form of a computer program product embodied in one or more computer readable storage medium(s) having computer readable program code embodied thereon.
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to aspects of the present disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the block(s) of the flowchart illustrations and/or block diagrams.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other device to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the block(s) of the flowchart illustrations and/or block diagrams.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process such that the instructions which execute on the computer, other programmable data processing apparatus, or other device provide processes for implementing the functions/acts specified in the block(s) of the flowchart illustrations and/or block diagrams.
The flowchart illustrations and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart illustrations or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order or out of order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While the foregoing is directed to aspects of the present disclosure, other and further aspects of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.