The present invention relates to test procedures carried out on user operated test devices, such as point of care and self-test devices, for example lateral flow or other rapid test devices.
Testing for various indications using single use, relatively inexpensive devices is a growing part of medical practice, as well as of other fields of activity. These may be, for example, lateral flow or other rapid test devices, intended for point of care, professional or at home testing. The tests may be for use with samples such as blood, saliva, mucus or urine. Typical tests include those for specific infective agents or antibodies, metabolites, specific molecules, or combinations of these.
Such tests generally involve a series of actions which the user is required to undertake. In the case of a blood test, for example, the steps may include lancing a finger to obtain a suitable drop of blood, placing the blood in a specific location on a device, operating the device to deliver the sample to a location, and releasing buffer or a reagent into a specific location. Some of these steps may be manually carried out by the user using a kit of parts supplied with the device, or may be effected using mechanical or electronic components in the test device.
It is known to use imaging to determine or at least indicate a result from such tests. For example, a lateral flow test will generally have a test line, which if present indicates a positive test for the respective attribute, and a control line that indicates that a valid test has occurred. Imaging can be used to read these lines, apply appropriate software processing to determine (for example) that a sufficient intensity relative to some reference has been reached, and thereby indicate the outcome. Some commercially available disposable devices, for example the Clearblue Digital Pregnancy Indicator device, use on-board electronics to assess the control and test line in a lateral flow type device, and provide an indication of the result—in this case, whether the urine tested indicates that the user is pregnant.
U.S. Pat. No. 9,903,857 to Polwart et al discloses the use of a camera in a portable device, such as a mobile phone, in order to image a testing apparatus and determine a test result, for example using the intensity of test and control lines.
British patent application GB2569803 by Novarum DX LTD discloses the use of an imaging frame in a process for imaging a test structure, in order to determine a test outcome value by analysis of the target measurement region.
However, the accuracy of the test outcome, whether electronically or visually determined, is highly dependent upon whether the correct procedures have been followed by the user, relative to the correct test protocol. The test result is not reliable unless each step has been carried out by the user in a correct manner, and in the correct order.
It is an object of the present invention to facilitate automated assistance and/or verification for carrying out a test procedure with a user operated test device.
In a first broad form, the present invention provides a method using imaging to confirm that at least a part of a test procedure has been correctly performed. In another broad form, the present invention provides a method using imaging to guide a user in the correct performance of a step, or to confirm that a user has correctly performed an intermediate step in a procedure.
According to one aspect, the present invention provides a method for verifying the correct operation of a procedure on a test unit, using an imaging device, including the steps of:
According to another aspect, the present invention provides a test verification system, including a test unit, and an associated software application adapted to be loaded on an imaging device, wherein the software application is adapted to guide a test process which is to be carried out by a user using the test unit, and to capture images from the imaging device for processing;
wherein at one or more stages in the test process, the software application is adapted to direct the user to capture an image of the test unit; process the captured image so as to identify one or more predetermined features of the test unit; analyse the identified features to determine whether they meet predetermined requirements; and to provide an indication to the user that the procedure is verified or not verified, responsive to the determination.
Other aspects of the present invention include a stored software application adapted to operatively carry out the method, and a test unit including a link or code to connect to and associate with a stored software application.
Implementations of the invention allow for a verification that at least some of the correct procedure for operating a test unit has been complied with, based on the correct positioning of the device and samples.
Illustrative embodiments of the present invention will be described in connection with the accompanying figures, in which:
The present invention will be described with reference to a particular example of a testing device, using a lateral flow type blood test. However, the general principle of the present invention is applicable to a wide variety of test types, both for medical and other purposes.
The present invention may be applied, for example, to tests such as lateral flow biochemical and immunological tests; chemical reagent tests; or any other type of user conducted test. The tests may be medical or for other purposes, for example veterinary, agricultural, environmental or other purposes. The test result may be intended to be read using a conventional visual inspection, for example test and control lines; using optical systems (whether on board or external, such as a smartphone) for determining the test outcome; or using electrochemical or other systems to determine the result. The present invention should not be interpreted as limited to any specific type of test or the objective of those tests.
In the context of medical tests, the present invention could be applied to any kind of sample testing which can be assessed using the kind of user operated devices discussed, for example testing of blood (including serum, plasma), urine, sputum, mucus, saliva or other body samples. It is noted that while the discussion is primarily in the context of a user operated test, the present invention is also applicable to tests intended for a professional, laboratory or hospital environment.
It will be appreciated that a fluid such as a buffer or reagent may be added to the test unit after the sample is delivered, or at the same time. In some applications, the sample may be pre-mixed with a buffer, reagent or other fluid prior to the mixed sample being delivered to the test device.
The present invention will generally be implemented in conjunction with a portable electronic device, for example a tablet or smartphone, with a suitable application (app) loaded in a conventional way. For example, a customer may purchase a test for home use, which includes a QR code linking to a website at which the appropriate app can be downloaded in the manner appropriate for the particular device, for example at an app store for an Android® or Apple® device. However, the present invention could be implemented using conventional PCs or other devices and systems if desired. The term imaging device is intended to be understood broadly as including personal computers, smartphones and other camera phones, tablets, and any other device adapted to take digital images.
In a preferred implementation, the QR or other code is associated with the specific test unit, so that the test unit is identified and the test verification can be specifically linked to a particular test unit on a particular day, and potentially thereby to the medical records for a specific user or patient. It may also allow a more generally directed app to select the specific steps, frames and guidance appropriate for that specific test unit.
It will be appreciated that there are devices and systems, for example as described in the prior art references above, which disclose the necessary systems to optically (or otherwise) read a specific test result. The present invention can in principle be implemented in conjunction with any result determining system. For example, the present invention could be implemented in conjunction with a device that itself automatically determines a test outcome, not associated with the app or functionality in the portable device. In other implementations, the result determining process could form part of the process in the portable device/app, and the same or related images could be used both to determine the test outcome, and to verify, guide or confirm that the correct protocol has been followed.
The present implementation will be described with particular reference to the Galileo system test device, https://atomodiagnostics.com/galileo-rdt-platform/. However, it will be understood that this is merely one illustrative example of a test unit with which the present invention may be employed.
Considering
The general operation of the device is as follows. The user (who may be a person self-testing, or a medical professional or other person assisting) first cleans the intended lancing site, typically a finger, for example with a disinfecting wipe. The lancet 24 has a spring loaded release, which lances the finger (or other site) and then retracts to a non-operative position. A blood droplet is then expressed by the user.
The device is then positioned so that blood collection tube 21 engages the blood droplet, which is taken up into the blood collection tube 21 by capillary action. It is important that the tube is fully filled, so that the correct sample size is taken.
The BCU 20 is then rotated, to move to the delivery position, in which the blood collection tube 21 engages the test strip via the sample port 11. Contact allows blood to flow from the blood collection tube 21 onto the test strip.
The user then applies a reagent solution through the sample port 11. The lateral flow device then operates conventionally, to produce (for example) a control line and a test line in the result window.
It is axiomatic that the procedure needs to be followed correctly in order to obtain a valid result. If the procedure is not correctly carried out in a material way, then regardless of the status of the test and control lines, no valid result can be produced.
However, in
This implementation of the present invention provides an app, for example for a smartphone, in which a user is guided through the process, and as a result of images taken with the smartphone camera, the correct final status of the device can be verified, and hence provide an indication that the test result (whatever that is) is the result of a correctly performed process.
At step 30, the verification stage starts. The app prompts 31 the user to take an image of the test, at a stage when it is completed. This could be while the waiting time is still running, as part of the process for determining the test result itself, or a separate process undertaken shortly before or after the test result image is captured.
At this stage, the app opens the camera function and the camera viewer loads up on screen. In one implementation, the user is provided with a paper calibration device which includes the outline of the device, and potentially other visual features to assist the image processing software to align and correctly orient the image captured. The card preferably includes sections of one or more known colours, to provide a reference for the image processing software.
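The colour reference sections on the card can be used to correct the captured image for lighting conditions. The following is a minimal illustrative sketch (not taken from the source): the function names, the patch values and the per-channel gain approach are all assumptions for the purpose of illustration.

```python
# Illustrative sketch: colour-correcting an image using a reference patch of
# known colour printed on the calibration card. A per-channel gain is derived
# from how the known patch actually photographed, then applied to all pixels.

def colour_correction_gains(measured_patch, reference_colour):
    """Per-channel gains mapping the measured patch colour onto its known value."""
    return tuple(ref / meas for ref, meas in zip(reference_colour, measured_patch))

def apply_gains(pixel, gains):
    """Apply the per-channel gains to one RGB pixel, clamping to 0-255."""
    return tuple(min(255, round(value * gain)) for value, gain in zip(pixel, gains))

# Assumed example: the card's white patch (printed as 255, 255, 255)
# photographs under warm lighting as (250, 240, 200).
gains = colour_correction_gains((250, 240, 200), (255, 255, 255))
corrected = apply_gains((200, 192, 160), gains)
```

Correcting every pixel with the same gains gives the image processing software a consistent colour basis before thresholds are applied.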
At 34, the app prompts the user to position the test device at a certain position on the card, and align the camera in a particular way, for example as a plan view, relative to the test device. Once that is correct, the image may be captured, either by the user triggering the camera function, or by the software recognising that there is sufficient alignment and taking the image.
In an alternative implementation, the app includes an augmented reality function, in which a skeleton or outline (or other guiding image) of the test device is overlaid on the camera viewing screen 35. This guides the user to try and align the camera to correspond to the virtual image projected on the viewing screen. Once sufficient alignment is present, the camera may be manually triggered by the user, or automatically by the app 36.
In either case, once sufficient alignment is detected by the app, in a preferred form a change in colour or similar visual indication is provided to confirm this to the user.
At step 37, as will be explained in more detail in relation to
At step 38, the app notifies (for example via a display) whether or not the test procedure is valid, and optionally (as discussed above) also delivers the test result after image analysis of the test and control lines (in this example). The app may also advise a central authority or local system, such as a system in a physician's office, of the result of the test. This may be loaded into an electronic patient record, either local or centralised, so as to record that a valid test procedure was undertaken.
The flowchart then has alternative pathways. On the left, at 81 image analysis using pixel properties is undertaken, and at 82 the image is segmented to identify areas with different characteristics.
At 83, the area of interest in the image (which is known in advance) is identified using the predetermined co-ordinates for the sample port, which are defined relative to the aligned skeleton. The area can then be identified in the segmented image.
At 84, pixel properties are analysed for the region of interest. These may include intensity, colour (red, green, blue), and luminescence, and compared with predetermined thresholds. These are selected according to the specific factor being verified, so for example to distinguish between a section of test which has received blood and an unaltered test.
This enables a determination of whether the step has been completed, and the app can correspondingly advise the user whether the step has been correctly completed.
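The pixel-property comparison described above can be sketched as follows. This is an illustrative toy, not the source's implementation; the red-dominance ratio, the minimum fraction and the pixel values are assumed thresholds chosen only to show the shape of the check.

```python
# Sketch of the threshold check at 84: classify a region of interest as
# "blood present" when enough of its pixels are strongly red relative to
# their green and blue channels. All thresholds are assumed values.

RED_DOMINANCE = 1.5   # red must exceed 1.5x the larger of green and blue
MIN_FRACTION = 0.2    # at least 20% of ROI pixels must qualify

def is_blood_pixel(pixel):
    r, g, b = pixel
    return r > RED_DOMINANCE * max(g, b, 1)

def region_has_blood(roi_pixels):
    """roi_pixels: flat list of (r, g, b) tuples from the segmented region."""
    hits = sum(1 for p in roi_pixels if is_blood_pixel(p))
    return hits / len(roi_pixels) >= MIN_FRACTION

# An unused sample port photographs as near-white; a used one shows an
# irregular red area occupying part of the region.
unused = [(230, 228, 225)] * 100
used = [(180, 40, 35)] * 30 + [(230, 228, 225)] * 70
```

In practice the thresholds would be tuned per test unit and lighting conditions, as the text notes, according to the specific factor being verified.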
The right hand pathway starts at step 85 with image analysis using pixels, but using artificial intelligence/machine learning tools. At step 86, the image is segmented, and at step 87, areas of interest in the image are identified using features recognition.
At 88, the correct delivery of the sample is verified using the AI/ML tools, comparing the identified area in the image with existing images of accurate and inaccurate tests. The result is then advised to the user in step 90.
The image is captured at 40 and the image processing function is commenced at step 41. The first function is edge detection 42, so that the edges of the test unit are identified. Once the edges have been identified (noting that the shape is known), the specific features of interest can be identified at 43.
In this example, there are four aspects that are important. First, whether it appears that blood has been delivered to the sample port. If no blood is present, then the test cannot be valid. At step 44, the pixels corresponding to the sample port are examined, to determine whether blood is present. This feature should have an irregular red area, compared to an unused sample port which will be uncoloured. This may be determined by, for example, colour, contrast, intensity, or a combination of these. At 45, a determination is made automatically whether a sample was delivered to the sample port.
The second aspect is determining whether the BCU 20 has been moved from the initial position to the delivery position. If it remains unmoved, or is even in some other position, then it is likely that the correct amount of blood has not been delivered, even if the indication at 45 is positive.
The software compares the detected BCU position relative to the body of the test unit with the known correct position at 46, and determines at 47 whether the BCU is in the correct position.
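One simple way to implement this position comparison is to compute the centroid of the pixels identified as the BCU and test its distance from the known delivery position, both expressed in the coordinate frame of the aligned device outline. The following sketch is illustrative only; the coordinates, tolerance and function names are assumptions, not the source's implementation.

```python
# Sketch of the BCU position check at 46-47: compare the centroid of the
# detected BCU pixels against the known delivery position. Coordinates and
# tolerance are assumed values in the aligned outline's coordinate frame.

DELIVERY_POSITION = (120.0, 45.0)  # expected BCU centroid (assumed)
TOLERANCE = 8.0                    # allowed deviation in pixels (assumed)

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def bcu_in_delivery_position(bcu_pixels):
    cx, cy = centroid(bcu_pixels)
    ex, ey = DELIVERY_POSITION
    return ((cx - ex) ** 2 + (cy - ey) ** 2) ** 0.5 <= TOLERANCE
```

Because the edges of the test unit are detected first, the comparison is insensitive to where the device sits in the photograph as a whole.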
Similarly, the third aspect is to determine whether the lancet has been fired. In this device, the lancet retracts and has a different appearance after it has been operated, compared to the ready state. The lancet position is determined at 46, and a determination is made at 51 as to whether the image is consistent with the lancet having been fired.
The fourth aspect is to determine if there appears to be blood present at any other location, for example in the test/control window 12. This would indicate that the user has misunderstood the process and put their blood sample in the wrong place, and even if the other indications are positive, this will still indicate an invalid test.
At 48, the software examines the pixels away from the sample port, in order to determine if their characteristics are consistent with blood being present. As the edges of the test unit are known, and the sample port location is known, pixels corresponding to blood at other locations indicate incorrect operation. This check could be limited to positions where blood will definitely invalidate the test, for example the test/control window, since a well developed test and control line may nonetheless appear there. The result is determined at 49.
At step 50, if all four conditions are met, the test is determined to be valid. Otherwise, it is determined to be invalid.
Edge detection is a common technique in image recognition software, which compares the properties of each pixel with those of its neighbouring pixels. The first step in edge detection is noise reduction. This may conveniently be implemented by a filter, such as a Gaussian blur.
The next step is to calculate the intensity gradient of the pixel values, for example using an operator such as the Sobel kernel, which estimates the magnitude and direction of the gradient at each pixel.
A suitable edge detection algorithm can then be applied. For example, this could be Sobel edge detection, Canny edge detection, Prewitt edge detection or Roberts edge detection.
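The noise-reduction and gradient steps above can be sketched in miniature on a grayscale image represented as a list of rows. The 3x3 kernels are the standard Gaussian and Sobel operators; the image values are illustrative, and a full detector such as Canny would add non-maximum suppression and hysteresis thresholding on top of this.

```python
# Minimal sketch of the edge-detection steps: Gaussian smoothing followed by
# Sobel gradient estimation, applied at interior pixels of a tiny image.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
GAUSSIAN = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # divide by 16 when applied

def convolve_at(image, kernel, row, col):
    """Apply a 3x3 kernel centred on (row, col); interior pixels only."""
    return sum(kernel[i][j] * image[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

def gaussian_blur_at(image, row, col):
    return convolve_at(image, GAUSSIAN, row, col) / 16

def gradient_magnitude(image, row, col):
    gx = convolve_at(image, SOBEL_X, row, col)
    gy = convolve_at(image, SOBEL_Y, row, col)
    return (gx * gx + gy * gy) ** 0.5

# A vertical edge: dark region on the left, bright on the right.
image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 255, 255],
         [0, 0, 255, 255]]
```

Pixels on the edge produce a large gradient magnitude, while pixels in a flat region produce zero, which is what allows the outline of the test unit to be traced.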
Image segmentation is the process of identifying and dividing the image into different types of areas. This is a well established technique in image processing.
Image segmentation can take place by utilising tools such as
The present implementation preferably uses graphical segmentation, since the present implementation of the invention controls how the user captures the image, by either loading a skeleton of the device on the screen or asking the user to use a stencil. It can therefore be definitively determined which part of the image will be the region of interest.
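Because the device outline is aligned to a known skeleton, the region of interest can simply be cropped at predetermined coordinates rather than searched for. A minimal sketch follows; the coordinates and names are illustrative assumptions.

```python
# Sketch of graphical segmentation: the sample port's location is known in
# advance relative to the aligned outline, so its region of interest can be
# cropped directly from fixed coordinates. Values are assumed for illustration.

SAMPLE_PORT_ROI = (2, 1, 4, 3)  # (top, left, bottom, right), exclusive bounds

def crop_region(image, roi):
    top, left, bottom, right = roi
    return [row[left:right] for row in image[top:bottom]]

# A toy grayscale image as rows of pixel values.
image = [[10, 11, 12, 13, 14],
         [20, 21, 22, 23, 24],
         [30, 31, 32, 33, 34],
         [40, 41, 42, 43, 44]]
```

The cropped pixels can then be passed directly to the pixel-property analysis described above, avoiding any general-purpose object search.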
If the artificial intelligence (AI)/machine learning (ML) approach is utilised, it will again be understood that there are many possible approaches to this task. One approach is to use a deep learning method, such as a convolutional neural network (CNN). A CNN is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and thereby differentiate one image from another.
The architecture is similar to the connectivity pattern of neurons in the brain, and was inspired by the organization of the visual cortex. A CNN is composed of an input layer, an output layer, and many hidden layers in between.
A typical structure of the hidden layers is as follows:
These operations are repeated many thousands of times to create a model. A typical training set for such networks is of the order of 5000 images.
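The basic hidden-layer operations of such a network (convolution, non-linear activation, pooling) can be sketched in miniature. This is an illustrative toy in pure Python, not the trained network contemplated above; the kernel and input values are assumed, and a real model would learn its weights from the training images.

```python
# Toy sketch of one CNN hidden-layer pass: a 2x2 'valid' convolution,
# a ReLU activation, and 2x2 max pooling. All values are assumed.

def relu(x):
    return max(0.0, x)

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (no padding) of one channel with a 2x2 kernel."""
    out = []
    for r in range(len(image) - 1):
        row = []
        for c in range(len(image[0]) - 1):
            row.append(sum(kernel[i][j] * image[r + i][c + j]
                           for i in range(2) for j in range(2)))
        out.append(row)
    return out

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2, halving each spatial dimension."""
    return [[max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

# A 5x5 toy input and an assumed 2x2 kernel.
image = [[r * 5 + c for c in range(5)] for r in range(5)]
feature = conv2d_valid(image, [[1, 0], [0, -1]])
activated = [[relu(v) for v in row] for row in feature]
pooled = max_pool_2x2(activated)
```

Stacking many such layers, with learned rather than fixed kernels, is what allows the network to compare a captured image against its training set of accurate and inaccurate tests.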
The photo shows an outline or skeleton of the test unit displayed on the screen of a smartphone.
It will be appreciated that the software functions of the present invention may be carried out in processing terms in a variety of ways. For example, part of the processing may be carried out locally on the smartphone or similar device, and part in a server or other remote system or cloud service contacted by the software application as part of its operation. In other implementations, the entire processing could occur in a local imaging device. In another implementation, the local device may be a thin client and simply provide an interface and capture images, the remainder of processing happening at a remote server or in a cloud service.
In an alternative implementation, a test card such as shown in
While the example described is focussed on determining whether the final status of a test device appears correct and consistent with a test protocol, the present invention may equally be applied to intermediate stages of the test, to verify that the user has performed them correctly, and in some cases to then allow the user to be guided to correct their error.
In the test device used in the example, the blood collection tube 21 on the blood collection unit is adapted to take up a specific volume of blood. For some tests, it is critical that the correct blood volume is used. If the tube is not fully filled, the volume will not be sufficient.
Referring to
Referring to
In this case, the feature of interest is the BCU. At 70, BCU fill is determined by looking at the pixel intensity inside the structure identified as the BCU. If the BCU is sufficiently full at 71, then the test, or at least this specific step, is determined to be valid and complete. If the determination at 71 is no, then the user is prompted to take an action, for example to take up additional blood into the blood collection tube.
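The fill determination described above can be sketched by sampling pixel intensities along the tube's axis and estimating the filled fraction. The intensity threshold, required fraction and sample values below are assumptions for illustration only, not the source's parameters.

```python
# Sketch of the BCU fill check at 70-71: blood-filled sections of the
# collection tube photograph darker than empty sections, so the filled
# fraction can be estimated from grayscale samples along the tube.

FILLED_INTENSITY = 100   # assumed: filled pixels are darker than this
FULL_FRACTION = 0.95     # assumed: tube must be at least 95% filled

def fill_fraction(tube_intensities):
    """tube_intensities: grayscale values sampled along the tube axis."""
    filled = sum(1 for v in tube_intensities if v < FILLED_INTENSITY)
    return filled / len(tube_intensities)

def tube_sufficiently_full(tube_intensities):
    return fill_fraction(tube_intensities) >= FULL_FRACTION
```

If the check fails, the app can prompt the user to take up additional blood, as described, rather than simply rejecting the test.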
It will be understood that the present invention may be broadly applied to verify whether any number of steps or procedures have been correctly carried out, whether in a point of care situation, home use, or a laboratory or other professional setting. The image capture and processing aspects of various implementations of the present invention may be used to provide a form of verification of tests and procedures.
For example, in the blood test context discussed above, the present invention may be applied to verify:
While the examples described are primarily with reference to blood, it will be appreciated that the principles are equally applicable to samples such as saliva, mucus or urine, or to samples mixed with a buffer or other liquid, for example post nasal swabs for COVID-19 or other respiratory conditions.
In that case, the system may be adjusted appropriately to allow for detection of less strongly coloured samples. For example, a buffer may be used with a coloured component to facilitate detection, a colour change could occur in the test strip in response to the deposit of a sample, or a specific colour or colour combination may be selected using image processing within the camera device, specific to the intended sample (or its interaction with the test strip).
It will be understood that in some implementations, the present invention may be applied to verify one or more intermediate steps in a test process, or to provide confirmation or even guidance to rectify incomplete or incorrect stages. It may be employed to verify one or more aspects of the final stage of a test process. In other implementations, both intermediate stages and final outcomes may be verified.
Number | Date | Country | Kind |
---|---|---|---|
2021902844 | Sep 2021 | AU | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/AU2022/051076 | 9/2/2022 | WO |