The present invention relates generally to the field of defect detection in product manufacturing, particularly biological equipment, such as sample cartridges configured for analysis of a fluid sample.
In recent years, there has been considerable development in the field of biological testing devices that facilitate manipulation of a fluid sample within a sample cartridge for biological testing by polymerase chain reaction (PCR). One notable development in this field is the GeneXpert sample cartridge by Cepheid. The configuration and operation of these types of cartridges can be further understood by referring to U.S. Pat. No. 6,374,684 entitled “Fluid Control and Processing System,” and U.S. Pat. No. 8,048,386 entitled “Fluid Processing and Control.” While these sample cartridges represent a considerable advancement in the state of the art, as with any precision instrument, there are certain challenges in regard to manufacturing of the sample cartridge. In particular, the assembly of multiple components in a precision, pressurized instrument can occasionally result in defects that cause the sample cartridge to leak or to be unable to maintain the internal pressure needed for successful operation.
Conventional systems for manufacturing sample cartridges utilize a series of manufacturing processes and steps, some of which can introduce defects into the sample cartridge, for example defects that cause the sample cartridge to leak or to be unable to maintain internal pressure, commonly known as seal failures. Certain steps, such as welding of a lid apparatus onto a cartridge body and film sealing of reagents in the cartridge, can introduce sealing defects that are difficult to detect. Existing approaches to detecting these defects include various seal testing approaches and visual inspections; however, these approaches often utilize destructive methods and/or occasionally fail to identify all defects. Further, these approaches often test only a number of cartridges in a given lot, and if defects are detected, the entire lot is scrapped, which causes considerable waste of cartridges to ensure defective cartridges do not reach the end user. Additionally, current inspection methods are not always consistent due to human error. Accordingly, there exists a need for improved defect detection methods that are non-destructive, that do not require extensive testing post-manufacture, and that are not prone to the human errors associated with visual inspection methods. There is a further need for such defect detection methods to be automated.
Accordingly, to achieve these goals there is a need to develop high-accuracy models by which the product and manufacturing process can be analyzed to detect defects during manufacturing. Given the complexity and breadth of data relating to a given product and its associated manufacturing processes, machine learning is a useful tool to develop such models; however, the development of high-accuracy models presents considerable challenges, particularly given the limited data sets associated with product defects. Thus, there is a need for robust, high-accuracy models for defect detection, and for methodologies to develop these highly sensitive and specific models so as to ensure low false negatives (e.g. not missing defects, so that all defects are detected) while at the same time ensuring low false positives (e.g. not over-rejecting good products).
In one aspect, the invention pertains to methods of training a model for detection of product defects. Such training methods can include a combination of: supervised transfer learning through auxiliary tasks, and a combination of supervised and unsupervised learning.
In another aspect, such model training methods can include steps of: performing supervised transfer learning on a plurality of data sets from acceptable products, where the plurality of data sets include expert labels; performing active learning on a plurality of data sets including both acceptable products and fail products having defects and identifying anomalies; providing additional expert labels for the identified anomalies; and performing supervised transfer learning with the expert labels on both the acceptable products and the fail products having defects. In some embodiments, the recited steps are associated with a first task, and the method is repeated for a second task that is more specific than the first task; this approach can be repeated, each time with a more specific task. In some embodiments, the first task comprises determination of thresholds for upper and lower range limits of the product. The thresholds can pertain to any of: feature maps, activations, predictions, and operational parameters. In some embodiments, the first task is identification of a product feature, and the second task is identification of an attribute of that feature. In some embodiments, the product is a sample cartridge configured for analyzing a sample. The first task can be identification of a feature and the second task can be identification of an attribute of the feature. The feature can be any of a chimney, a weld between a lid and cartridge body, or a film seal on the lid. In some embodiments, the method repeats the same steps when additional data sets from both acceptable products and fail products having defects become available. The model can be configured for use within automated defect detection of a sample cartridge during manufacture.
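By way of illustration only, the following is a minimal sketch of the four recited training steps applied to synthetic feature vectors, using off-the-shelf scikit-learn components (a logistic-regression classifier for the supervised steps and an isolation forest standing in for the anomaly-identifying active-learning step). The synthetic data, the expert_label() helper, and the warm-start reuse of the auxiliary model are illustrative assumptions, not a specification of the claimed method.

```python
# Minimal sketch of the recited training steps (assumptions: synthetic data,
# scikit-learn components standing in for the production models).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Step 1: supervised transfer learning on data sets from acceptable products
# (here: an auxiliary task with expert labels on a product feature).
X_pass = rng.normal(0.0, 1.0, size=(500, 8))            # e.g. weld/image features
y_feature = (X_pass[:, 0] > 0).astype(int)               # hypothetical expert labels
aux_model = LogisticRegression().fit(X_pass, y_feature)

# Step 2: active learning / anomaly identification on mixed (pass + fail) data.
X_mixed = np.vstack([rng.normal(0.0, 1.0, size=(480, 8)),
                     rng.normal(3.0, 1.0, size=(20, 8))])   # rare fail-like points
detector = IsolationForest(random_state=0).fit(X_pass)
anomaly_idx = np.where(detector.predict(X_mixed) == -1)[0]

# Step 3: obtain additional expert labels for the identified anomalies only.
def expert_label(x):             # placeholder for a human/SME labelling step
    return int(x.mean() > 1.5)   # 1 = fail, 0 = pass (synthetic ground truth)

y_anomaly = np.array([expert_label(X_mixed[i]) for i in anomaly_idx])

# Step 4: supervised transfer learning with expert labels on pass and fail
# products (warm_start crudely reuses the auxiliary model's solution).
X_train = np.vstack([X_pass, X_mixed[anomaly_idx]])
y_train = np.concatenate([np.zeros(len(X_pass), dtype=int), y_anomaly])
aux_model.set_params(warm_start=True, class_weight="balanced")
defect_model = aux_model.fit(X_train, y_train)
print("predicted fail fraction:", defect_model.predict(X_mixed).mean())
```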
In still another aspect, the training methods can include steps of: performing a product classification step utilizing image data from acceptable products, where the image data includes expert labels; performing an experimental failure classification step on a second set of image data that includes both acceptable products and fail products having experimental defects and identifying anomalies; and performing a production failure classification step on a third set of image data that includes standard products including both acceptable products and products with standard production defects. In some embodiments, the product classification step develops a relational algorithm, which is transferred to and updated in the experimental failure classification, which in turn is transferred to and updated in the production failure classification step. In some embodiments, the method further utilizes a pre-trained classifier configured to identify generic features from an image data set, where a relational algorithm associated therewith is transferred to and updated by the cartridge classifier. In some embodiments, the method can include inducing the experimental defects in select products so as to increase the data sets associated with fail products having defects. In some embodiments, the product can be a sample cartridge configured for analyzing a biological sample. The failure classification can pertain to a feature of a lid of the sample cartridge, where the feature includes any of: a chimney, a weld between the lid and a cartridge body, a film seal on the lid, or any combination thereof. In some embodiments, the model is configured for use within an automated defect detection of the sample cartridge during manufacture.
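For illustration, a sketch of the staged transfer is shown below, assuming a torchvision ResNet-18 backbone whose learned weights (the relational algorithm) are carried from the product classification step to the experimental failure classification step and then to the production failure classification step; the architecture, class counts, and omitted training loops are assumptions rather than requirements of the method.

```python
# Sketch of staged transfer (assumptions: torchvision ResNet-18 backbone,
# binary heads; data loaders and training loops are omitted).
import torch.nn as nn
from torchvision.models import resnet18

def make_classifier(source: nn.Module, n_classes: int) -> nn.Module:
    """Build a new classifier initialized from a previous stage's backbone."""
    model = resnet18(weights=None)
    state = {k: v for k, v in source.state_dict().items() if not k.startswith("fc.")}
    model.load_state_dict(state, strict=False)          # transfer backbone weights only
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # fresh task-specific head
    return model

# Stage 1: product (cartridge) classification on expert-labelled images of acceptable products.
product_clf = resnet18(weights=None)
product_clf.fc = nn.Linear(product_clf.fc.in_features, 2)
# ... train product_clf on the first image data set ...

# Stage 2: experimental failure classification, initialized from stage 1.
exp_failure_clf = make_classifier(product_clf, n_classes=2)
# ... fine-tune on images of acceptable products and induced experimental defects ...

# Stage 3: production failure classification, initialized from stage 2.
prod_failure_clf = make_classifier(exp_failure_clf, n_classes=2)
# ... fine-tune on standard production images (pass + standard production defects) ...
```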
In yet another aspect, the invention pertains to a defect detection module operably and communicatively coupled to an automation control system of a product manufacturing line, the defect detection module including: a communication unit that is communicatively coupled to a control unit and/or one or more sensors so as to receive one or more data sets regarding the product and/or a manufacturing process; and a processing unit having a memory with programmable instructions recorded thereon, where the instructions include a model configured for determining pass or fail of the product based on the one or more data sets, where the model is developed by a combination of supervised transfer learning through auxiliary tasks and a combination of supervised and unsupervised learning. In some embodiments, the module is developed utilizing any of the training methods described herein.
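A purely structural sketch of such a module follows; the class and method names are hypothetical, and the threshold lambda merely stands in for a trained model.

```python
# Structural sketch of the defect detection module (all names are hypothetical
# illustrations, not a specification of the module).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DefectDetectionModule:
    model: Callable[[Dict[str, float]], bool]     # trained model: data set -> fail?
    received: List[Dict[str, float]] = field(default_factory=list)

    def on_data(self, data_set: Dict[str, float]) -> str:
        """Communication unit callback: receives sensor/control-unit data sets."""
        self.received.append(data_set)
        return "FAIL" if self.model(data_set) else "PASS"

# Example: a trivial threshold rule standing in for the trained ML model.
module = DefectDetectionModule(model=lambda d: d.get("weld_power", 0.0) > 1.2)
print(module.on_data({"weld_power": 1.5, "weld_force": 0.8}))   # -> FAIL
```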
In one aspect, the invention pertains to a method of training a model for automated defect detection of a product in a manufacturing line. In some embodiments, the method is fully automated after training of a model by machine learning. The method can include steps of: obtaining one or more images of the product from one or more angles; performing image labelling to obtain expert labels of one or more defects of the product; and performing training and validation of a model utilizing machine/deep learning using the image labels. In some embodiments, performing image labelling includes receiving input from visual inspection of the images by one or more human experts. The images can be presented to the one or more human experts for image labelling via a web-based interface such that the human expert is located remotely from the manufacturing line. In some embodiments, the training includes building a binary classification and a multi-class classification. In some embodiments, the binary classification is pass/fail and the multi-class classification is a type of defect. In some embodiments, training can further include determining metrics for false negatives and/or false positives. In some embodiments, training further includes utilizing consensus voting where multiple labels of a given product differ. In some embodiments, the method further includes performing a digital review to resolve conflicts in the expert labels. Preferably, the images are obtained automatically by robotics within the manufacturing line. In some embodiments, the method includes controlling a robotic inspection system to automatically obtain the one or more images, automatically detect defects in real-time, and robotically remove any defective cartridges from the manufacturing line.
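As a sketch of the classification structure described above, the following assumes a small shared feature extractor feeding a binary pass/fail head and a multi-class defect-type head; the layer sizes and number of defect types are illustrative assumptions.

```python
# Sketch of a shared-backbone model with a binary pass/fail head and a
# multi-class defect-type head (layer sizes and defect count are assumptions).
import torch
import torch.nn as nn

class InspectionModel(nn.Module):
    def __init__(self, n_defect_types: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a CNN feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pass_fail = nn.Linear(16, 2)                  # binary classification head
        self.defect_type = nn.Linear(16, n_defect_types)   # multi-class head

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        return self.pass_fail(feats), self.defect_type(feats)

model = InspectionModel()
binary_logits, type_logits = model(torch.randn(2, 3, 128, 128))
print(binary_logits.shape, type_logits.shape)   # torch.Size([2, 2]) torch.Size([2, 4])
```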
In another aspect, the invention pertains to an automated defect detection method of a product in a manufacturing line. The method can include steps of: obtaining one or more images of the product from one or more angles by a robotic setup; analyzing the images with a model configured for defect detection, where the model is trained using labels from one or more human experts based on visual inspection of prior images; determining one or more defective cartridges among one or more cartridges in the production line via the model; and removing the one or more defective cartridges from the manufacturing line via robotics. In some embodiments, the model accesses binary classifications and multi-class classifications of the cartridges. In some embodiments, the images are obtained automatically by robotics within the manufacturing line. The method can further include steps of: robotically positioning the product at differing angles to obtain a plurality of images at differing angles; determining product defects from the plurality of images in real-time; and robotically removing any defective cartridges from the manufacturing line. In some embodiments, the product is an assay cartridge with an attached reaction vessel (e.g. a “GX tube”), and the one or more defects pertain to the reaction vessel.
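A runtime sketch of this inspection loop is given below; the RobotArm and Camera classes are hypothetical stand-ins for the robotics and vision interfaces, and the is_defective() function stands in for the trained model's per-cartridge decision.

```python
# Runtime sketch of the automated inspection loop; RobotArm and Camera are
# hypothetical stand-ins, and the defect rule is synthetic for illustration.
from typing import List
import random

class RobotArm:                               # placeholder robotics interface
    def position(self, cartridge_id: int, angle: int) -> None: ...
    def remove(self, cartridge_id: int) -> None:
        print(f"cartridge {cartridge_id} removed from line")

class Camera:                                 # placeholder vision interface
    def capture(self) -> List[float]:
        return [random.random() for _ in range(8)]   # stand-in image features

def is_defective(views: List[List[float]]) -> bool:
    # Placeholder for the trained model's pass/fail decision over all views.
    return max(max(view) for view in views) > 0.98

robot, camera = RobotArm(), Camera()
for cartridge_id in range(5):
    views = []
    for angle in (0, 90, 180, 270):           # image the cartridge from several angles
        robot.position(cartridge_id, angle)
        views.append(camera.capture())
    if is_defective(views):                   # real-time decision per cartridge
        robot.remove(cartridge_id)
```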
In yet another aspect, the invention pertains to an automated inspection system setup for defect inspection of a product in a manufacturing line. The inspection system can include: a robotics system configured for handling a product in a manufacturing line; a vision system configured for obtaining one or more images of the product; and a robotics controller communicatively coupled with the robotics system. The robotics controller and the vision system can include a processor communicatively coupled with a memory having instructions recorded thereon, the instructions configured with a defect detection model, wherein the model is trained using labels from one or more human experts based on visual inspection of prior images. In some embodiments, the robotics include a universal robotic arm that includes a gripper hand and that is configured to pick up and position the product at one or more angles for the one or more images obtained by the vision system. In some embodiments, the vision system includes: a light, an optics lens, and an image capture device. In some embodiments, the setup can include a teach module configured to display the one or more prior images to one or more human experts and receive one or more inputs of the expert labels for use in training the model by machine/deep learning. In some embodiments, the product is an assay cartridge with an attached reaction vessel, and the one or more defects pertain to the reaction vessel.
The present invention relates generally to models for automated defect detection during manufacturing, in particular, defect detection for biological equipment, such as sample cartridges configured for analysis of a fluid sample. Specifically, the invention pertains to training methodologies by which a model can be developed for defect detection. Flowcharts of such defect detection methods using these models are shown in
In one aspect, the invention pertains to developing models for use in automated defect detection during manufacturing of a product, such as a sample cartridge. To illustrate one practical application of this invention, these concepts are described with respect to an exemplary sample cartridge, as shown in
An exemplary use of such a sample cartridge with a reaction vessel for analyzing a biological fluid sample is described in commonly assigned U.S. Pat. No. 6,818,185, entitled “Cartridge for Conducting a Chemical Reaction,” filed May 30, 2000, the entire contents of which are incorporated herein by reference for all purposes. Examples of the sample cartridge and associated instrument module are shown and described in U.S. Pat. No. 6,374,684, entitled “Fluid Control and Processing System” filed Aug. 25, 2000, and U.S. Pat. No. 8,048,386, entitled “Fluid Processing and Control,” filed Feb. 25, 2002, the entire contents of which are incorporated herein by reference in their entirety for all purposes. Various aspects of the sample cartridge can be further understood by referring to U.S. Pat. No. 6,374,684, which described certain aspects of a sample cartridge in greater detail. Such sample cartridges can include a fluid control mechanism, such as a rotary fluid control valve, that is connected to the chambers of the sample cartridge. Rotation of the rotary fluid control valve permits fluidic communication between chambers and the valve so as to control flow of a biological fluid sample deposited in the cartridge into different chambers in which various reagents can be provided according to a particular protocol as needed to prepare the biological fluid sample for analysis. To operate the rotary valve, the cartridge processing module comprises a motor such as a stepper motor typically coupled to a drive train that engages with a feature of the valve to control movement of the valve in coordination with movement of the syringe, thereby resulting in movement of the fluid sample according to the desired sample preparation protocol. The fluid metering and distribution function of the rotary valve according to a particular sample preparation protocol is demonstrated in U.S. Pat. No. 6,374,684.
In one aspect, the defect detection methods utilized herein utilize a model developed through machine learning that is based on data associated with the manufactured product and/or associated manufacturing processes. In some embodiments, the data includes information from one or more external sensors disposed at one or more locations along the manufacturing production line of the product (e.g. image data, sound data) or operational parameters received from sensors or control units. The external sensors can include, but are not limited to, any of IR cameras, RGB cameras, high-resolution cameras, ultrasonic microphones, or any combination thereof. In some embodiments, the IR and RGB cameras are configured to obtain a thermal distribution of the lid during the ultrasonic welding or of the film during or after heat sealing of the film. In other embodiments, the data includes parameters of manufacturing equipment during a process, for example, power, travel, force amplitude and frequency of a welder. While conventional machine learning is well suited for analyzing well-defined features/attributes of large data sets, it is ill suited and inefficient for limited data sets, such as those associated with product defects. For this reason, conventional approaches still rely heavily on destructive testing and human inspection.
Given the breadth and complexity of the data, it is advantageous to utilize a machine learning (ML) model to determine a relationship between one or more characteristics or parameters of the data sets from the external sensors and a cartridge defect. By utilizing an ML model, subtle variations in temperature distribution, image matching or an audio trace can be associated with defects that could otherwise not be detected by visual inspection or standard testing approaches. However, given the limited amount of data associated with defects (which are relatively infrequent), conventional machine learning techniques are not well suited for developing models for defect detection. Thus, the training methodologies described herein can be used to develop such models to allow for automated defect detection in real-time during manufacturing.
In one aspect, the automated process uses ML models that associate one or more parameters or characteristics of a manufacturing process and/or product component with a particular defect. Training methods by which such models can be developed are discussed further below in reference to
As shown in
As shown in
Training supervised deep learning models requires a large, labelled data set to determine the various parameters in the different layers of the deep neural network. The optimal dataset for training and validating classifier models contains a comparable number of labelled examples for each class to ensure that every class is adequately learnt. However, in failure detection models, such as those described above, there is a considerable challenge of having to deal with incomplete labels (e.g., many unlabeled data points and only a few labelled data points), since manual failure inspection is often done only on a random subset of the entire data to save costs. The expensive failure labels come from process experts and cannot easily be obtained by labelling services that are offered for simple tasks (e.g. human in picture yes/no). At the same time, in a well-established manufacturing process, failures are much less likely than pass cases. For example, in current manufacturing procedures, cartridge defects are relatively rare, occurring in less than 1% of cartridges, such that data sets on the defects are quite limited. This results in unbalanced datasets with few, underrepresented fail cases, and thus difficulty in adequately learning the decision boundary between pass and fail due to high uncertainty in the sparse failure space. Accordingly, conventional supervised deep learning techniques are not well suited for failure data sets for defect detection models.
To overcome the above noted challenges, model development can utilize supervised transfer learning with auxiliary tasks to enable efficient labelling by combining a large set of labelled data that is easier to obtain for a simpler auxiliary task compared to the production failure process. In addition, the methods make use of a large set of unlabeled data for the production failure process, using anomaly detection to flag outliers as potential failures and selectively obtain expert labels for the anomalies. This allows efficient expert labelling with more focus on failure labels even though they are rare in the production process. The supervised model is then retrained with the anomalies and the corresponding expert labels. When new data becomes available from the production process, an iterative cycle of supervised learning, unsupervised anomaly detection and supervised integration of feedback from labelled anomalies can be applied to improve the supervised model over time. The improved model can then distinguish better between fail and pass cases and triggers fewer anomalies in the unsupervised mode in the future.
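By way of example, this iterative cycle can be sketched as follows on synthetic data, with a random-forest classifier as the supervised model, prediction scores used to flag anomalies in each new production batch, and an expert_label stub standing in for process-expert review; all names, thresholds and batch sizes are illustrative assumptions.

```python
# Sketch of the iterative improvement cycle over successive production batches
# (synthetic data; `expert_label` stands in for process-expert review).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
expert_label = lambda x: int(x[0] > 2.0)          # hypothetical SME decision

# Initial supervised model trained on the (mostly pass) labelled data available.
X = np.vstack([rng.normal(0, 1, (300, 5)), rng.normal(3, 1, (10, 5))])
y = np.concatenate([np.zeros(300, dtype=int), np.ones(10, dtype=int)])
model = RandomForestClassifier(random_state=0).fit(X, y)

for batch in range(3):                            # new unlabeled production data arrives
    X_new = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (5, 5))])
    proba = model.predict_proba(X_new)[:, 1]
    # Flag uncertain or fail-leaning predictions as anomalies for expert review.
    flagged = np.where(proba > 0.2)[0]
    y_flagged = np.array([expert_label(x) for x in X_new[flagged]])
    # Supervised integration of expert feedback: retrain with the labelled anomalies.
    X = np.vstack([X, X_new[flagged]])
    y = np.concatenate([y, y_flagged])
    model = RandomForestClassifier(random_state=0).fit(X, y)
    print(f"batch {batch}: {len(flagged)} anomalies sent for expert labelling")
```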
In one aspect, the improved deep learning methodology is based on two linked core ideas: (1) supervised transfer learning through auxiliary tasks; and (2) combination of supervised and unsupervised learning. Each of these is described in detail further below.
Regarding the first aspect, supervised transfer learning through auxiliary tasks uses easy to obtain labels from non-experts for more generic auxiliary tasks, such as detecting a cartridge. A non-expert labeler can be given instructions to create cartridge labels (e.g. yes/no). For example, as shown in the schematic flowchart 500 in
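A minimal sketch of this idea is shown below, assuming a small convolutional backbone trained first on the generic auxiliary task (cartridge present yes/no) with non-expert labels and then reused, with its parameters frozen, for the more specific defect task; the layer sizes and the omitted training loops are illustrative assumptions.

```python
# Sketch of auxiliary-task transfer: learn a generic, easy-to-label task first,
# then reuse the backbone features for the harder defect task.
import torch
import torch.nn as nn

backbone = nn.Sequential(                     # shared feature extractor
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
aux_head = nn.Linear(8, 2)                    # auxiliary task: cartridge yes/no
# ... train backbone + aux_head on a large set of non-expert labels ...

defect_head = nn.Linear(8, 2)                 # specific task: defect yes/no
for p in backbone.parameters():
    p.requires_grad = False                   # keep the transferred features fixed
optimizer = torch.optim.Adam(defect_head.parameters(), lr=1e-3)
# ... fine-tune defect_head on the smaller set of expert defect labels ...

x = torch.randn(4, 3, 64, 64)
print(defect_head(backbone(x)).shape)         # torch.Size([4, 2])
```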
Regarding the second aspect, the combination of supervised and unsupervised learning is applied within the supervised model for multiple auxiliary tasks (see
In some embodiments, the next step, process 720, can include determining thresholds for upper/lower range limits (e.g. on feature maps, activations, predictions, etc.) or for uncertainty in predictions of the supervised model. For manufacturing defect detection implementations, this process can include determining thresholds: on “pass” cases only, to differentiate from fail and novel data; and on “pass” and “fail” cases, to differentiate from novel data. In some embodiments, the next step, process 730, can include retraining the model with expert labels for detected anomalies based on the determined thresholds. In some embodiments, the process 700 is iterative such that step 730 returns to step 710 in order to handle multiple tasks; for example, this process 731 can include moving to a next auxiliary task (e.g. another task on a same level, or moving from generic to specific) or repeating the same task when more production data becomes available.
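For example, threshold determination on “pass” cases only can be sketched as follows, where percentile-based upper and lower range limits are computed on a model confidence and a mean activation, and later violations are flagged as anomalies for expert labelling; the quantities, distributions and percentiles are illustrative assumptions.

```python
# Sketch of threshold-based anomaly flagging: upper/lower range limits are
# learned from "pass" cases only, then violations are flagged as anomalies.
import numpy as np

rng = np.random.default_rng(2)
pass_confidence = rng.beta(8, 2, 1000)        # model confidences on pass cases
pass_activation = rng.normal(0.5, 0.1, 1000)  # e.g. mean feature-map activation

lo_c, hi_c = np.percentile(pass_confidence, [0.5, 99.5])
lo_a, hi_a = np.percentile(pass_activation, [0.5, 99.5])

def is_anomaly(confidence: float, activation: float) -> bool:
    """Flag cases outside the pass-only range limits for expert labelling."""
    return not (lo_c <= confidence <= hi_c and lo_a <= activation <= hi_a)

print(is_anomaly(0.85, 0.52))   # typical pass-like case, expected False
print(is_anomaly(0.30, 0.95))   # out-of-range case, expected True
```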
This process ensures that both labeled and unlabeled data are used to improve the supervised model. Since a higher rate of anomalies for failure cases (e.g. higher rate of range violations/uncertainty etc.) is expected, the approach focuses on labelling more of the rare failure labels. For each auxiliary task (from generic on the left to specific on the right in
A method to detect a manufacturing defect of a container associated with an assembly step, the method comprising:
The labeled data set (e.g., a labeled image data set, a labeled container data set, or both) can be associated with a generic auxiliary task and the unlabeled data set can be associated with a failure classifier. The labeled data set can include an equal distribution of classifier results.
One, some, or all of the steps can be performed multiple times, such that each new generic auxiliary task is derived from a previous failure classifier.
The supervised model can be retrained by labeling the anomaly in an unlabeled experimental failure data set using the failure features extracted from the labeled container data set. The unlabeled experimental failure data set can include an unequal distribution of the classifier results. The retrained model can label an anomaly in an unlabeled production failure data set using the failure features extracted from the unlabeled experimental failure data set. The unlabeled production failure data set can include an unequal distribution of the classifier results. Generating the retrained model can be re-performed with each collection of production data.
One or more methods can be performed via a non-transitory computer-readable medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the method. The method can be used to detect manufacturing defects of a container, cartridge, or storage vessel in real time. Additionally, the output communicates a status, a result, information, instructions, or combinations thereof to a user or a device. The output can be auditory, visual, haptic, or combinations thereof. The output can be, for example, a screen, a speaker, an interface, or the like.
Supervised transfer learning is commonly used in the data science community. Additionally, process knowledge is used to carefully define auxiliary tasks that are easier to label and for which more balanced labeled data is obtained. Combining unsupervised and supervised learning is typically done by training an unsupervised Autoencoder first and then transferring the features to a supervised model for further supervised training. Also, unsupervised clustering is often applied, followed by transforming the unsupervised problem into a supervised one using the found groups (e.g. cluster labels). In some embodiments, the improved model training methodology starts with Supervised Learning and uses anomaly detection to identify anomalies, improving the supervised model via anomaly expert labels. Active learning is commonly used in supervised models to identify uncertainty in the model predictions. The improved methodology uses uncertainty in the predictions in addition to ranges of activations etc. to further distinguish the normal space from anomalies. Over/under sampling, augmentation and weighted loss functions are possible countermeasures for unbalanced data. This methodology can apply these, but additionally ensures more balanced datasets for the more generic auxiliary task(s) in Transfer Learning. In this methodology, all of these individual solutions can be combined and tailored to a specific failure detection process and corresponding data sets. Alternatively, one could label more data on the production process itself and apply only Supervised Learning. However, this would be costly, since the labels are determined by process experts and failures are rare. If all data, or a large subset, is labelled, many pass cases have to be labeled in order to label a sufficient number of fail cases at the same time.
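As a brief illustration of these countermeasures, the following sketch computes inverse-frequency class weights for a weighted loss and performs simple random oversampling of the rare fail class; the roughly 1% fail rate and the sample sizes are illustrative assumptions.

```python
# Sketch of unbalanced-data countermeasures: class weights for a weighted loss,
# and simple random oversampling of the rare fail class.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(3)
y = np.array([0] * 990 + [1] * 10)            # ~1% fail rate, as in production

# Weighted loss: inverse-frequency class weights (usable e.g. as class_weight
# in scikit-learn estimators or as weights in a cross-entropy loss).
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], np.round(weights, 2))))   # fail class weighted ~99x pass

# Oversampling: resample fail indices with replacement to rebalance a batch.
fail_idx = np.where(y == 1)[0]
oversampled = rng.choice(fail_idx, size=200, replace=True)
balanced_idx = np.concatenate([np.where(y == 0)[0][:200], oversampled])
print(np.bincount(y[balanced_idx]))              # [200 200]
```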
Semi-supervised learning had been expected to be the solution to combine labelled and unlabeled data. However, in reality, semi-supervised learning might only outperform supervised learning for small datasets. According to some researchers, transfer learning from pre-trained models outperforms semi-supervised learning most of the time. Supervised Transfer Learning is a well-established concept in the Machine/Deep Learning community. However, conventional approaches need extensive process knowledge to identify the auxiliary tasks that are relevant for the problem at hand. To identify these auxiliary tasks, Deep Learning experts have to gain process knowledge or work closely together with process experts, which can be cumbersome, time-consuming and costly. The proposed improved training model concepts described herein can be applied to supervised models with multiple input sources (e.g., images, audio, operational parameters, simple and complex machine data such as scalars, vectors and matrices etc.) and multiple output sources (e.g. labels from different inspection procedures). In some embodiments, these concepts can also be extended to time series of multiple or individual input sources. While the training model concepts described herein have been described with respect to defect detection for sample cartridges, it is appreciated that these concepts can be used to develop models for any defect detection for any manufactured product, or even more broadly to develop any model seeking to identify characteristics from limited data sets where conventional machine learning techniques are found lacking.
Another example implementation of these inventive concepts in a manufacturing process is in the open cartridge assembly. Currently, at the end of the open cartridge assembly lines (e.g. MiniCAL/Jidoka lines), humans inspect each cartridge manually in real-time, for example with their hands and eyes using a magnifying glass, to identify various defects according to a visual inspection protocol (e.g. D20766) before the passed parts are transferred to the next manufacturing step at ROBAL, and failed parts are transferred to scrap bins. Implementing the concepts described above allows for full automation of the open cartridge inspection process using a cartridge manipulator to move each cartridge into multiple sensors' field of view to capture visual data from multiple perspectives and with different measurement techniques, and to detect defects automatically from the captured data with a Machine/Deep Learning algorithm.
In one aspect, the problems solved by the inventive concepts include the following. The current manual visual inspection process cannot quantify missed defects or over-rejects of good cartridges for minor cosmetic defects, since inspections rely only on the decision of a single inspector under time pressure without additional validation of the passed/failed Open Cartridges before they move to the next manufacturing step. On the one hand, failed Open Cartridges could be transferred to the ROBAL line, waste further material and personnel resources and, in the worst case, result in a field failure at the customer site. After the ROBAL manufacturing step, only a percentage of cartridges (e.g. a random sampling subset) undergoes seal failure testing and/or functional testing, such that potential failures can escape into the field. Although rare, there are occasional reports of failed cartridges in the field, which can delay a diagnosis for the patient. Therefore, it is desirable that quality inspection processes along the entire production process chain, for both individual and assembled parts, be improved to further improve reliability and patient care. The improvements described herein allow for improved product quality, which improves reliability and customer experience and ensures prompt diagnosis for patients. On the other hand, the manual visual inspection process is exhausting for human personnel, since personnel must perform repetitive inspections under challenging conditions (e.g., bright lights, noise, time pressure), which often leads to exhaustion and discomfort up to and including work-related overuse injuries. This fatigue can lead to human errors that allow visible defects to remain undetected by current inspection protocols.
The invention herein addresses these problems in several ways. Using transfer learning concepts, an automated Open Cartridge Inspection (“OCI”) process has been developed, which uses a cartridge manipulator (e.g. a robot arm with a custom gripper tailored to the cartridge, and/or a work cell in production with a conveyer belt/indexer to track the cartridge). This approach enables a standardized workflow with optimal process conditions (e.g. images from the same perspectives/light conditions for all cartridges). The process can use high-resolution sensors (e.g. RGB cameras, 2D laser scanners) to capture small variations on the outer cartridge surface. From the captured sensor data, an image sequence from different cartridge positions, such as from the cartridge foot bottom to detect clocking defects or from both sides of the reaction vessel, can be analyzed to detect defects (e.g. film wrinkles and dents) that might influence the sensitive PCR reaction result. Utilizing the machine/deep learning algorithm in the analysis allows more consistent decision making as compared to humans. For training and validating the machine/deep learning algorithms, cartridge inspection (e.g., physical inspection by a Process Engineer with their hands) is combined with image labeling (e.g., digital labeling by multiple inspectors by selecting one or multiple defects for each recorded image sequence in a web browser via a web-based interface). In some embodiments, this includes building binary classification algorithms (e.g. configured to predict PASS/FAIL for each cartridge) and multi-class classification models (e.g. configured to predict PASS or one to multiple defects for each cartridge). Metrics can be defined for false negatives (e.g., missed defects) and false positives (e.g., over-rejects). Consensus voting (e.g., majority vote, such as in simple cases where 3 out of 4 image labels agree) can be used to obtain more robust labels as ground truth for the algorithm. In some embodiments, a digital review process can be used to resolve conflicts for difficult annotation cases (e.g., where multiple image labelers disagree). In some embodiments, the conflicts can be resolved by a process engineer as a Subject Matter Expert (“SME”), and the expert knowledge can also be integrated into the system as a digital training resource that allows providing virtual trainings to associates before they are involved in real decision making (e.g., new associates/new defects). While there might be more conflicts for early-stage Machine/Deep Learning models, these conflicts will reduce over time once the algorithm matures by retraining using the SME's feedback. The inspectors do not need to make decisions in real-time in the manufacturing line, which typically involves challenging conditions and time pressure. Rather, the inspectors can annotate the images for training/validating the algorithms from an arbitrary computer with a standard web browser after the data has been recorded automatically. In some embodiments, there may be a specialized software application for use by the SME on a remote computer or tablet. This capability improves the working conditions for the personnel and allows dynamic resource allocation of inspectors. In some embodiments, for a more objective ground truth, functional test results can be integrated to separate major defects (e.g., defects more likely to result in functional failures) from minor cosmetic defects (e.g. defects less likely to result in functional failures).
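A brief sketch of the consensus-voting and metric computations is given below on synthetic labels; the majority-vote rule, the tie-handling convention, and the ground-truth values are illustrative assumptions.

```python
# Sketch of consensus voting over multiple image labelers and of the false
# negative / false positive metrics (labels and ground truth are synthetic).
import numpy as np

# Rows = cartridges, columns = independent labelers (1 = FAIL, 0 = PASS).
labels = np.array([[1, 1, 1, 0],      # 3 of 4 agree -> FAIL by majority
                   [0, 0, 0, 0],      # unanimous PASS
                   [1, 0, 1, 0]])     # tie -> escalate to digital SME review
votes = labels.sum(axis=1)
consensus = np.where(votes > labels.shape[1] / 2, 1,
                     np.where(votes < labels.shape[1] / 2, 0, -1))  # -1 = conflict
print("consensus:", consensus)        # [ 1  0 -1]

# False negatives (missed defects) and false positives (over-rejects).
ground_truth = np.array([1, 0, 1, 0, 1])
predicted    = np.array([0, 0, 1, 1, 1])
fn_rate = np.mean((ground_truth == 1) & (predicted == 0)) / np.mean(ground_truth == 1)
fp_rate = np.mean((ground_truth == 0) & (predicted == 1)) / np.mean(ground_truth == 0)
print(f"false negative rate: {fn_rate:.2f}, false positive rate: {fp_rate:.2f}")
```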
The approach described herein is a marked improvement over other previously proposed solutions to address the same problems for several reasons. First, this approach provides a tailored solution to the manufacturing processes (e.g., robot fingers tailored to the cartridge, camera/cartridge manipulators tailored to a specific manufacturing process with constrained space, and sensors tailored for specific defects, such as RGB cameras and 2D laser scanners). Second, this approach allows for setting up a new automated inspection process by integration of hardware/software for cartridge manipulation. These can include use of any of: a Vision System, a robotic arm, Machine/Deep Learning models and data infrastructure, image annotation, robust labeling using consensus voting (e.g. take only labels where 3 out of 4 labelers agree as easy inspections), digital review/training of image annotators (e.g., to resolve more difficult inspections and enable more consistent decision-making over time), or any combination thereof. Third, conventional systems often use classical computer vision, which depends on specific defects, requires more domain knowledge to extract defect features by a human SME, and does not generalize as well as Machine/Deep Learning, which learns defect features automatically.
This implementation differs from previous manufacturing solutions in at least the following ways. First, this pertains to a different manufacturing step (OCI performed at the end of the MiniCAL line rather than in-line with ROBAL). Second, this implementation uses cartridge manipulation with robotics to present the cartridge to the camera in different positions (e.g. the underside from the cartridge foot bottom, the GX tube from one or more angles). In the final production solution, a custom work cell can be used instead to move the cartridge along a conveyer belt with an indexer to track the cartridge. Third, this implementation uses deep learning tools (e.g. Cognex Vidi or other suitable software), machine learning tools (e.g. DataRobot) and an image annotation tool (e.g. Labelbox).
Features of the above-described implementation can be further understood by referring to
In the foregoing specification, the invention is described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Various features, embodiments and aspects of the above-described invention can be used individually or jointly. Further, the invention can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. It will be recognized that the terms “comprising,” “including,” and “having,” as used herein, are specifically intended to be read as open-ended terms of art. Any references to publication, patents, or patent applications are incorporated herein by reference in their entirety for all purposes.
This application is a Non-Provisional of and claims the benefit of priority of U.S. Provisional Application No. 63/374,314 filed Sep. 1, 2022, which is incorporated herein by reference in its entirety. This application is generally related to U.S. Non-Provisional Application ______ entitled “SEAL FAILURE DETECTION SYSTEMS AND METHODS,” [Atty Docket No. 85430-1406031-018210US] and U.S. Provisional Application entitled “FAULTY UNIT DETECTION SYSTEM AND METHODS,” [Atty Docket No. 85430-1406037-018410US]; filed concurrently herewith, the entire contents of each are incorporated herein by reference for all purposes.