The subject matter disclosed herein generally relates to the technical field of machines and methods that facilitate analysis of test strips, including software-configured variants of such machines and improvements to such variants, and to the technologies by which such machines become improved compared to other machines that facilitate analysis of test strips. Specifically, the present disclosure addresses systems and methods to facilitate image capture of test strips for analysis.
Lateral Flow Assay (LFA) is a type of paper-based platform used to detect the concentration of an analyte in a liquid sample. LFA test strips are cost-effective, simple, rapid, and portable tests (e.g., contained within LFA testing devices) that have become popular in biomedicine, agriculture, food science, and environmental science, and have attracted considerable interest for their potential to provide rapid diagnostic results directly to patients. LFA-based tests are widely used in hospitals, physicians' offices, and clinical laboratories for qualitative and quantitative detection of specific antigens and antibodies, as well as for products of gene amplification. LFA tests have widespread and growing applications (e.g., in pregnancy tests, malaria tests, COVID-19 antibody tests, COVID-19 antigen tests, or drug tests) and are well-suited for point-of-care (POC) and at-home applications.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
Example methods (e.g., algorithms) facilitate the image capture of diagnostic test strips (e.g., LFA test strips), and example systems (e.g., machines configured by special-purpose software) are configured to perform such image capture. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
Diagnostic tests may be performed using LFA test strips, which usually have a designated control line region and a test line region. The diagnostic test strip may be included in a diagnostic test carrier, which may take the form of an LFA test cassette (e.g., the test cassette 100 shown in FIG. 1).
However, qualitative assessment by a human HCP may be subjective and error prone, particularly for faint lines that are difficult to visually identify. Instead, quantitative assessment of line presence or absence, such as by measuring line intensity or other indicator of line strength, may be more desirable for accurate reading of faint test result lines. Fully or partially quantitative approaches directly quantify the intensity or strength of the test result line or can potentially determine the concentration of the analyte in the sample based on the quantified intensity or other quantified strength of the test result line. Dedicated hardware devices that acquire images of LFA test strips typically include image processing software to perform colorimetric analysis to determine line strength, and often rely upon control of dedicated illumination, exclusion of external lighting, perfect alignment of a built-in camera and the LFA test cassette being imaged, and expensive equipment and software to function properly. More flexible and less expensive approaches may be beneficial.
The methods and systems discussed herein describe a technology for capturing images of LFA test cassettes that include a diagnostic test result region using standard cameras, such as a smartphone camera; the captured images may then automatically be read and analyzed, locally or remotely, using computer-based reading and analysis techniques. The methods and systems discussed herein allow a user to capture a high-quality image of a test cassette using standard image capture hardware (such as a commodity phone camera) in the presence of ambient lighting or standard light sources (such as a flash or flashlight adjacent to a phone camera), without the need for dedicated specialized hardware. The methods and systems discussed herein allow a layperson to capture a suitably aligned image of a test cassette using only a smartphone with a camera, with the help of a test positioning card disclosed herein.
Once a high-quality and aligned image of a test cassette is acquired, the methods and systems described herein also allow automatic quality control to be performed on the acquired test cassette image, to ensure that the test strip/result well is well lit, is of sufficiently high resolution, and is properly oriented before the test strip image is analyzed to determine test results. If the image quality is not sufficient, the end user is appropriately warned and prompted to recapture the image.
In one example, once the image has been captured, the test strip is analyzed as described in U.S. Provisional Patent Application No. 63/049,213 filed Jul. 8, 2020 entitled “Neural Network Analysis of LFA Test Strips,” the disclosure of which is incorporated herein as if specifically set forth.
In use of the test cassette 100, a liquid biological sample is placed in the sample well 104 (and thus onto one end of the test strip), which then flows through the housing 102 along the test strip 108, thereby to expose the test strip 108 to the biological sample as is known in the art. Exposure of the test strip 108 to the biological sample will cause one or more visual result markers to appear on the test strip 108 in the result well 106, depending on the nature of the test and the contents of the biological sample. In a typical implementation, the test strip 108 includes a control line 110, which becomes visible when the biological sample reaches it after passing through the test area of the test strip 108, regardless of the contents of the biological sample.
The fiducial markers 204 in the illustrated example are four QR codes arranged in a rectangle around the outline 206. While four QR codes are shown, it will be appreciated that all or many of the objectives herein may be met by the use of a different type of fiducial marker or a different number of fiducial markers. For example, the relative inclination of the test positioning card 200 with respect to a computing device 700 that is directed at the test positioning card 200 can be determined from an image of the test positioning card 200 (generated by the computing device 700) that includes three spaced-apart fiducial markers.
Since the fiducial markers 204 are of a known size and have a known relationship, the position of the test positioning card 200 (and hence of the test cassette 100, which is presumably positioned within the outline 206) with respect to the computing device 700 can be assessed from the size of each fiducial marker 204 in an image frame or from the distances between the fiducial markers 204 in the image frame. By checking that the distances between the fiducial markers, or the sizes of the fiducial markers, are within a certain tolerance, it can be determined that the test positioning card 200 is sufficiently close to the computing device 700 and sufficiently parallel to the plane of the image frame (or the plane of the image sensor being used to capture an image of the test positioning card 200).
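As a minimal illustrative sketch of such a distance check, assuming the marker centers have already been detected as (x, y) pixel coordinates and that fiducial marker 204a sits at the top left with fiducial marker 204b below it (consistent with the comparisons described later); the function name and thresholds are hypothetical, not from the disclosure:

```python
# Sketch only: assess imaging distance from detected fiducial-marker
# centers. The marker layout and all thresholds here are assumptions.

def card_distance_status(markers, frame_height, min_span=0.55, max_span=0.95):
    """The vertical span between two vertically separated markers (204a
    top left, 204b bottom left), as a fraction of the frame height,
    serves as a proxy for the card-to-camera distance."""
    span = abs(markers["204b"][1] - markers["204a"][1]) / frame_height
    if span < min_span:
        return "too_far"    # prompt the user to move the device closer
    if span > max_span:
        return "too_close"  # prompt the user to move the device away
    return "ok"
```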
The fiducial markers 204a to 204d may also be distinct from each other (e.g. include different data or information), which permits the orientation of the test positioning card 200 to be determined from the relative positions of the distinct fiducial markers 204 in an image frame. For example, if the fiducial marker 204a is detected in any relative position other than the top left in the image frame, appropriate steps may be taken, such as providing an output on a display of the computing device 700 prompting a user to rotate the card appropriately. Alternatively, if necessary (since ultimately it is the orientation of the test cassette 100 in the image frame that is important and not the orientation of the test positioning card 200), the computing device may automatically flip the acquired image before performing any analysis. Instead of or in addition to visual prompts or error messages as discussed herein, the computing device 700 may for example provide audible prompts or error messages.
Furthermore, since the fiducial markers 204 are in a known arrangement, the rotational alignment of test positioning card 200 to the image sensor or image frame can also be determined and appropriate steps taken if the test positioning card 200 is misaligned from the image sensor. That is, if the orientation of the test positioning card 200 is generally correct, with fiducial marker 204a in the top left position, but the test positioning card 200 is tilted to the left or the right in the image frame, an output can be provided on a display of the computing device 700 prompting a user to straighten the card.
As will be discussed in more detail below, the capture of a still image including the test cassette 100 occurs when the test positioning card 200 is positioned appropriately with respect to the computing device 700 and the test cassette 100 is appropriately positioned on the test positioning card 200. One of the control parameters is the distance of the test positioning card 200 from the computing device 700. If the test positioning card 200 is too close, the image frame may not be in focus, and if the test positioning card 200 is too far away, the part of the image frame corresponding to the test strip 108 may not be of sufficient resolution or quality so as to permit analysis thereof.
The spacing between the fiducial markers 204 on the test positioning card 200 is selected so that if the computing device 700 is too close to the test positioning card 200, not all of the fiducial markers 204 will appear in the image frame. In such a case, a user prompt may be generated instructing the user to move the computing device 700 further away from the test positioning card 200. In this regard, the aspect ratio of the rectangular arrangement of the fiducial markers 204 is chosen to correspond generally to the aspect ratio of the image frame, or to the aspect ratio of a smartphone display screen, or the aspect ratio of a display window in a smartphone app, e.g. 9:16 or 3:4 in portrait mode. By providing this general aspect ratio correspondence between the test positioning card 200 and the display of the image of the test positioning card on the computing device 700, a natural visual alignment cue is provided to a user. That is, it is logical for the user to position the computing device 700 such that the four fiducial markers 204 appear generally in the four corners of the display or display window of the computing device 700.
Whether or not the test positioning card 200 is too far away from the computing device 700 can be determined from the distance(s) between two or more fiducial markers 204, or from the size of one or more of the fiducial markers 204. If the distance between two identified fiducial markers 204 (e.g. fiducial marker 204a and fiducial marker 204b) is shorter than a predetermined threshold, indicating that the test positioning card is further from the computing device 700 than is preferred, a user prompt can be generated instructing the user to bring the computing device 700 closer to the test positioning card 200.
Similarly, if the distance between two identified fiducial markers 204 (e.g. fiducial marker 204a and fiducial marker 204b) is larger than a predetermined threshold, indicating that the computing device 700 is closer to the test positioning card 200 than is preferred, a user prompt can be generated instructing the user to move the computing device 700 further from the test positioning card 200.
The outline 206 provides a visual guide to the user for positioning and aligning the test cassette 100 on the test positioning card 200. As can be seen, the length and width of the outline 206 defines a generally rectangular shape that corresponds to the outline or perimeter of the test cassette 100, into which a user can place a test cassette 100. The computing device 700 can then perform test cassette recognition on the image frame to detect a test cassette 100 placed on the test positioning card 200, and the correct positioning and alignment of the test cassette 100 on the test positioning card 200 can be verified. If a test cassette is not detected, or the test cassette 100 is misaligned, appropriate warnings can be generated to alert the user.
Orientation of the test cassette 100 in the image frame can be determined using known image processing techniques. In one example, positioning of the test cassette 100 with respect to the test positioning card 200 is determined by computing the angle of a primary axis of the detected test cassette 100 with respect to a primary axis of the test positioning card 200, determined from the fiducial markers 204. In the event that the test cassette 100 is not correctly positioned or aligned within specified tolerances, a user prompt can be provided on a display screen of the computing device 700, instructing the user to align the test cassette 100 correctly on the test positioning card 200.
Whether or not the computing device 700 used to image the test positioning card 200 is oriented correctly, i.e. is relatively parallel to the test positioning card 200 and the test cassette 100, can also be determined from the locations of three or more fiducial markers 204 in the image stream observed by the computing device 700. For example, standard image processing and computer vision techniques may be used to determine the camera pose, and thereby the pose of the computing device 700, by determining a homography transform between the detected locations of four fiducial markers and the known dimensions of the test positioning card 200. The determined pose of the camera or the computing device 700 with respect to the test positioning card 200 can then be used to guide the user to correctly position the computing device 700 to achieve appropriate frontal imaging of the test cassette 100.
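One possible sketch of such a pose computation uses OpenCV's solvePnP with the known physical layout of the card's markers; the card dimensions, point ordering, and camera intrinsics below are assumptions for illustration:

```python
import cv2
import numpy as np

# Sketch only: estimate camera pose relative to the card from the four
# detected marker centers. The card dimensions below are hypothetical.
CARD_POINTS_MM = np.array([[0.0,   0.0,   0.0],    # 204a: top left
                           [120.0, 0.0,   0.0],    # 204c: top right
                           [0.0,   180.0, 0.0],    # 204b: bottom left
                           [120.0, 180.0, 0.0]],   # 204d: bottom right
                          dtype=np.float32)

def estimate_card_pose(image_points, camera_matrix, dist_coeffs=None):
    """image_points: 4x2 array of marker centers, ordered as above.
    Returns (pitch_deg, roll_deg, tvec) or None on failure."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(
        CARD_POINTS_MM, np.asarray(image_points, dtype=np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    # One common ZYX Euler decomposition of the rotation matrix.
    pitch = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[2, 1], rot[2, 2])))
    roll = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    return float(pitch), float(roll), tvec
```

The resulting tilt angles can then drive the on-screen prompts that guide the user toward frontal imaging.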
In the illustrated example, the test positioning card 200 is white in color, which prevents the image of the test cassette 100 that is captured from being washed out or overexposed as a result of automatic exposure algorithms trying to compensate for a dark background. The computing device 700 normally has a flash or flashlight, which may be activated whenever a still image of the test cassette 100 is captured. This may be done to provide consistent lighting of the test cassette 100 as well as to reduce or eliminate any shadows that might be present in ambient lighting.
As shown, the test positioning card 200 may include various text or symbolic instructions, such as instructions 208 to “Scan the test with this side up” and instructions 210 to “Place the rapid test here.”
The software application typically provides a user interface on the display of the computing device 700, including an option to capture an image of diagnostic test results. Upon selection of the image capture option by the user, the software application will set any appropriate camera options, modes or lens selection. In one example, the software application will select a standard camera (if the computing device 700 has multiple cameras) and specify a standard mode with no zoom or other image enhancements, to try and ensure that images captured by different computing devices 700 are as uniform as possible. An example of a computing device is the computing device 700 described below with reference to FIG. 7.
Upon activation of the image capture mode in the software application, a feed of images captured by an image input component is provided to the software application. The image feed typically comprises a sequence of image frames and in one example comprises a video stream received from the camera (or other image input component) of the computing device 700. The image feed is shown on the display of the computing device 700 to assist the user in aiming the computing device 700 at the test positioning card 200. At this point, the user will likely have positioned a test cassette 100 on a test positioning card 200 and pointed the camera of the computing device 700 at the test cassette 100. The method starts at operation 302 with the software application determining if a fiducial marker 204 (e.g. a QR code) is detected in the image feed and if this is one of the fiducial markers that is associated with the test positioning card 200. The software application will continue trying to detect a fiducial marker 204 associated with the test positioning card 200 until one is detected or until the user cancels the image capture mode. If an inappropriate fiducial marker is detected by the software application, an appropriate error message can be provided to the user.
Once an appropriate fiducial marker 204 is detected in operation 302, the method passes to operation 304 where the software application determines whether or not at least three fiducial markers 204 are found in the image feed. If less than three fiducial markers 204 are found, the software application displays (operation 306) an error message on a display of the computing device 700 that instructs a user to position the computing device 700 so that all the fiducial markers 204 on the test positioning card 200 are visible, or to move the computing device 700 further from the test positioning card 200. The software application then continues determining whether or not at least three fiducial markers 204 are found in the image feed in operation 304 until at least three fiducial markers 204 are detected or the user cancels the image capture mode.
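As one concrete illustration of this detection loop, OpenCV's multi-QR detector can locate and decode the card's codes in each frame; the payload prefix used to recognize the card's own codes is a hypothetical convention, not from the disclosure:

```python
import cv2

# Sketch only: detect and decode multiple QR codes in a frame and keep
# those whose payload marks them as belonging to the test positioning
# card. The "CARD" payload prefix is a hypothetical convention.
detector = cv2.QRCodeDetector()

def find_card_markers(frame, expected_prefix="CARD"):
    ok, texts, points, _ = detector.detectAndDecodeMulti(frame)
    if not ok:
        return {}
    markers = {}
    for text, quad in zip(texts, points):
        if text.startswith(expected_prefix):
            center = quad.mean(axis=0)  # quad is a 4x2 array of corners
            markers[text] = (float(center[0]), float(center[1]))
    return markers

# Frames are scanned until at least three card markers are found;
# otherwise the user is prompted to reframe (operations 304-306).
```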
Once the software application detects at least three appropriate fiducial markers 204 in operation 304, the method passes to operation 308 and operation 310 where the software application measures an imaging distance based on the size of one or more fiducial markers or on the distance(s) between two or more fiducial markers. Since the field of view of standard cameras included in portable computing devices such as smartphones tends to be fairly consistent, the imaging distance can be measured as a percentage or fraction of an image dimension. For example, the difference between the y axis values of the locations of fiducial marker 204a and fiducial marker 204b in the image can be compared to the image height. If a fiducial marker 204 is too small or the distance between two fiducial markers 204 is too small compared to a predetermined threshold, the test in operation 310 fails and the software application displays an error message on the display of the computing device 700 at operation 312, indicating that the computing device 700 is too far from the test positioning card 200 and/or that the user needs to move the computing device 700 closer to the test positioning card 200.
It will be appreciated that determining that the test positioning card 200 is not too far away from the computing device 700 may be accomplished in many different ways, including for example determining an image distance between two fiducial markers using both x and y values of the locations of the respective fiducial markers, or by verifying that the test positioning card 200 is sufficiently aligned before making a determination along either the x or the y dimension. For example, the alignment of the test positioning card 200 can be verified by comparing the y values of the locations of fiducial marker 204a and fiducial marker 204c and verifying that the difference between the two y values is below a certain threshold.
The particular thresholds for checking the positioning of the test positioning card 200 are not overly critical, are a matter of design choice, and acceptable values can readily be determined for a particular implementation by experimentation.
It is typically only necessary for the software application to check whether or not the computing device 700 is too far from the test positioning card 200, since if the computing device 700 is too close to the test positioning card 200, the test at operation 304 for at least three visible fiducial markers will fail. However, in other implementations, a check can also be included to restrict close-up imaging of the test cassette 100 as discussed above. The software application continues the measurement of operation 308 and determination of operation 310 until the computing device 700 is at an appropriate distance from the test positioning card 200 or the user cancels the image capture mode.
Once the software application has determined that the computing device 700 is at a sufficient distance in operation 310, the software application determines (at operation 314) the correspondence between the plane of the test positioning card 200 and the plane of the images in the image feed (i.e. the relative pose of the camera used to perform imaging). That is, the plane of the test positioning card is compared to the image plane or a relevant plane of the computing device 700. This is done by the software application comparing the distances between at least three of the fiducial markers 204 with each other, or by comparing the relative sizes of at least three of the fiducial markers 204 with each other, or by computing the angle formed between two sides of a quadrilateral shape (four fiducial markers detected) or triangle (three fiducial markers detected) formed by joining the detected locations of the fiducial markers in the imaging plane.
As for operation 310, the distances or sizes can be determined with respect to an image dimension. In one example, the pitch of the test positioning card 200 with respect to the computing device 700 can be assessed by comparing the horizontal distance between fiducial marker 204a and fiducial marker 204c and the horizontal distance between fiducial marker 204b and fiducial marker 204d. If the difference is greater than a predetermined threshold, the test positioning card 200 is not sufficiently parallel to the computing device 700. Similarly, the roll of the card can be assessed by comparing the vertical distance between fiducial marker 204a and fiducial marker 204b and the vertical distance between fiducial marker 204c and fiducial marker 204d. If the difference is greater than a predetermined threshold, the test positioning card 200 is again not sufficiently parallel to the computing device 700. Also, the software application can determine the camera pose (roll, pitch and relative location) from the locations of the fiducial markers 204 on the test positioning card 200 using standard image processing techniques.
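A sketch of this opposite-edge comparison follows, under the same assumed marker layout as above (204a/204c on top, 204b/204d on the bottom); the function name and tolerance are illustrative:

```python
# Sketch only: the card is treated as sufficiently parallel when opposite
# edges of the marker rectangle have nearly equal lengths in the image.
def card_parallel_enough(markers, tol=0.10):
    a, b = markers["204a"], markers["204b"]   # left edge endpoints
    c, d = markers["204c"], markers["204d"]   # right edge endpoints
    top = abs(c[0] - a[0])       # horizontal span of the top edge
    bottom = abs(d[0] - b[0])    # horizontal span of the bottom edge
    left = abs(b[1] - a[1])      # vertical span of the left edge
    right = abs(d[1] - c[1])     # vertical span of the right edge
    pitch_off = abs(top - bottom) / max(top, bottom)
    roll_off = abs(left - right) / max(left, right)
    return pitch_off <= tol and roll_off <= tol
```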
If the computing device 700 and the test positioning card 200 are not sufficiently parallel as determined in operation 316 then the software application displays an error message on the display of the computing device 700 at operation 318, indicating that the computing device 700 is not parallel to the test positioning card 200.
The software application continues the measurement of operation 314 and determination of operation 316 until the computing device 700 is sufficiently parallel to the test positioning card 200 or the user cancels the image capture mode.
Once the software application has determined that the computing device 700 is sufficiently parallel in operation 316, the software application attempts to detect the presence of a test cassette 100 in the image frame(s) in operation 320. This is done by visual object recognition and comparison performed by the software application in a known manner or by a machine learning scheme that has been trained on an image set with identified test cassettes 100. If a test cassette 100 is not detected in operation 320 then the test in operation 322 fails and the software application displays an error message on the display of the computing device 700 at operation 324, indicating that a test cassette 100 is not present and prompting a user to place the test cassette 100 within the outline 206. The software application continues the attempted detection of a test cassette 100 in operation 320 and determination of operation 322 until the test cassette 100 is detected or the user cancels the image capture mode.
Once the software application has detected a test cassette 100 in the image frame(s) in operation 322, the software application determines in operation 326 whether or not the detected test cassette 100 is positioned within boundaries defined with respect to the fiducial markers 204. This is done in one example by ensuring that each corner of the test cassette 100 (detected using corner detection image processing techniques) is within the outer corners of the fiducial markers 204, although it will be appreciated that different boundaries could be specified. In this regard, the outline 206 typically defines a stricter boundary than the boundary used by the software application in operation 326. This permits some variation in successful placement that is not strictly within the outline 206, which primarily serves as a visual guide for a user. If the test cassette 100 is not within the boundaries used in the test of operation 326 then the software application displays an error message on the display of the computing device 700 at operation 328, indicating that the test cassette 100 is incorrectly positioned, and prompting a user to place the test cassette 100 within the outline 206. The software application continues the attempted detection of a correctly positioned test cassette 100 in operation 326 until the test cassette 100 is positioned correctly or the user cancels the image capture mode.
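A sketch of this corner-in-boundary test, assuming both the cassette corners and the marker corner points have been detected (names are illustrative):

```python
# Sketch only: operation 326 treats the cassette as correctly placed when
# every detected cassette corner lies inside the rectangle spanned by the
# outermost corners of the four fiducial markers.
def cassette_in_bounds(cassette_corners, marker_quads):
    xs = [p[0] for quad in marker_quads for p in quad]
    ys = [p[1] for quad in marker_quads for p in quad]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return all(x_min <= x <= x_max and y_min <= y <= y_max
               for x, y in cassette_corners)
```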
After the software application has verified that the test cassette 100 is correctly located on the test positioning card 200 as in operation 326, the software application determines in operation 330 whether or not the detected test cassette 100 is vertically positioned. This is done in one example by comparing the angle of the test cassette 100 in the image frame, determined from the detected corners of the test cassette 100, with either the vertical axis of the test positioning card 200 or the y axis of the image frame. If the difference in the two angles is not within an acceptable tolerance (for example, +/−30 degrees), then the software application displays an error message on the display of the computing device 700 at operation 332, indicating that the test cassette 100 is not vertical and prompting a user to adjust or straighten the test cassette 100. The software application continues the attempted detection of a correctly aligned test cassette 100 in operation 330 until the test cassette 100 is positioned correctly or the user cancels the image capture mode.
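A sketch of this verticality test, assuming two corners along one long side of the cassette have been detected; the ±30 degree tolerance follows the example above:

```python
import math

# Sketch only: operation 330 compares the cassette's long-axis angle with
# vertical; the +/-30 degree tolerance follows the example in the text.
def cassette_is_vertical(top_corner, bottom_corner, tol_deg=30.0):
    dx = bottom_corner[0] - top_corner[0]
    dy = bottom_corner[1] - top_corner[1]
    angle = math.degrees(math.atan2(dx, dy))  # 0 degrees == vertical
    return abs(angle) <= tol_deg
```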
Once the software application has detected that the test cassette 100 in the image is sufficiently vertical in operation 330, the software application then proceeds to an image capture step at operation 334, in which a still image is captured from the image feed. In one example, to reduce the possibility that requiring additional input or manipulation of the computing device 700 by the user will alter the positions of the computing device 700, test positioning card 200 and test cassette 100, the software application may automatically capture the image once the requirements in the flowchart of FIG. 3 are met.
If the image is captured automatically, the software application may just examine the continual stream of video frames from the camera until it observes one that meets the acceptance criteria, and then use that frame instead of taking a photo or initiating a separate image capture step.
To facilitate reasonably consistent image capture across different computing devices 700 and under different circumstances, e.g. of ambient lighting, the software application typically enables a flash of the computing device 700 but otherwise allows the computing device 700 to set the exposure for the capture of the image of the test cassette 100 and test positioning card 200. In one example, the flash of the computing device 700 is set (forced on) by the software application to go off at the time of the image capture. In another example, the flash/flashlight is illuminated constantly for some or all of the time, for example if the software application is going to capture a frame from the video feed automatically as soon as all of the requirements are met.
It will also be appreciated that while the determinations in FIG. 3 are shown and described in a particular sequence, one or more of the determinations may be performed in a different order, combined, or performed concurrently.
After a still image of the test cassette 100 positioned on the test positioning card 200 has been captured at operation 334, the method continues as shown in FIG. 5.
The software application typically provides a user interface on the display of the computing device 700, including an option to capture an image of diagnostic test results. Upon selection of the image capture option by the user, the software application will set any appropriate camera options, modes or lens selection. In one example, the software application will select a standard camera (if the computing device 700 has multiple cameras) and specify a standard mode with no zoom or other image enhancements, to try and ensure that images captured by different computing devices 700 are as uniform as possible. An example of a computing device is the computing device 700 described below with reference to FIG. 7.
Upon activation of the image capture mode in the software application, a feed of images captured by an image input component is provided to the software application. The image feed typically comprises a sequence of image frames and in one example comprises a video stream received from the camera (or other image input component) of the computing device 700. The image feed is shown on the display of the computing device 700 to assist the user in aiming the computing device 700 at a test cassette 100. The method starts at operation 402 with the software application determining if a test cassette 100 can be detected in the image feed. If a test cassette 100 is not detected by the software application at operation 404, an appropriate error message (e.g. “no test cassette detected”) can be provided to the user at operation 406.
The detection performed by the software application in operation 404 includes the detection or extraction of image features that are characteristic to the test cassette 100. For example, the edges/sides of the test cassette 100 or the corners of the test cassette 100 in the image frame may be detected using standard image processing techniques. Additionally, markings may be provided on the test cassette 100, including fiducial markers (if space permits) or other characteristic markings to enable the test cassette 100 to be detected and to allow its position in an image frame to be determined.
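One classical (non-machine-learning) sketch of such feature extraction uses edge detection and a fitted rotated rectangle; the Canny thresholds here are assumptions, and a trained detector could equally be used:

```python
import cv2
import numpy as np

# Sketch only: extract the cassette's outline and corner points from a
# frame by finding the largest edge contour and fitting a rotated
# rectangle to it. Threshold values are assumptions.
def detect_cassette_corners(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    rect = cv2.minAreaRect(largest)   # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)        # 4x2 array of corner points
```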
Once the software application has detected the test cassette 100 in operation 404, the method passes to operation 408 and operation 410 where the software application measures an imaging distance based on the image features of the test cassette 100 as extracted from the image frame. In one example, the distance in the image frame between two detected corners can be determined, to determine a height or width or diagonal measurement. Since the field of view of standard cameras included in portable computing devices such as smartphones tends to be fairly consistent, the imaging distance can be measured as a percentage or fraction of an image dimension.
If the measured distance is too large or too small compared to a predetermined threshold, the test in operation 410 fails and the software application displays an error message on the display of the computing device 700 at operation 412, indicating that the computing device 700 is either too near or too far from the test cassette 100 and/or that the user needs to move the computing device 700 further from or closer to the test cassette 100.
It will be appreciated that determining that the test cassette 100 is an appropriate distance from the computing device 700 may be accomplished in many different ways, including for example by determining an image distance between two or more corners using both x and y values of the locations of the corners in the image, or by verifying that the test cassette 100 is sufficiently aligned before making a determination along either the x or the y dimension. For example, the alignment of the test cassette 100 can be verified by comparing the y values of either the two lowest or the two highest corners in the image and verifying that the difference between the two y values is below a certain threshold, or by determining an angle of a detected edge of the test cassette 100.
The particular thresholds for checking the positioning of the test cassette 100 are not overly critical, are a matter of design choice, and acceptable values can readily be determined for a particular implementation by experimentation.
The software application continues the measurement of operation 408 and determination of operation 410 until the computing device 700 is at an appropriate distance from the test cassette 100 or the user cancels the image capture mode.
Once the software application has determined that the computing device 700 is at a sufficient distance in operation 410, the software application determines (at operation 414) the correspondence between the plane of the test cassette 100 and the plane of the images in the image feed (i.e. the relative pose of the camera used to perform imaging). That is, the plane of the test cassette 100 is compared to the image plane or a relevant plane of the computing device 700. This is done by the software application comparing the distances between at least three of the detected image features (e.g. corners) of the test cassette 100 with each other, or by computing the angle formed between two sides of a quadrilateral shape (four image features detected) or triangle (three image features detected) formed by joining the detected locations of the image features in the imaging plane.
As for operation 410, the distances can be determined with respect to an image dimension. In one example, the pitch of the test cassette 100 with respect to the computing device 700 can be assessed by comparing the distance between the two lower corners of the test cassette 100 with the distance between the two upper corners. If the difference is greater than a predetermined threshold, the test cassette 100 is not sufficiently parallel to the computing device 700. Similarly, the roll of the test cassette 100 can be assessed by comparing the distance between the two left corners of the test cassette 100 with the distance between the two right corners. If the difference is greater than a predetermined threshold, the test cassette 100 is again not sufficiently parallel to the computing device 700. Also, the software application can determine the camera pose (roll, pitch and relative location) from the locations of image features on the test cassette 100 using standard image processing techniques.
If the computing device 700 and the test cassette 100 are not sufficiently parallel as determined in operation 416 then the software application displays an error message on the display of the computing device 700 at operation 418, indicating that the computing device 700 is not parallel to the test cassette 100.
The software application continues the measurement of operation 414 and determination of operation 416 until the computing device 700 is sufficiently parallel to the test cassette 100 or the user cancels the image capture mode.
Once the software application has determined that the computing device 700 is sufficiently parallel in operation 416, the software application determines in operation 420 whether or not the detected test cassette 100 is vertically positioned. This is done in one example by comparing the angle of the test cassette 100 in the image frame, determined from the detected corners of the test cassette 100, with the y axis of the image frame. If the difference is not within an acceptable tolerance (for example, +/−30 degrees), then the software application displays an error message on the display of the computing device 700 at operation 422, indicating that the test cassette 100 is not vertical and prompting a user to adjust or straighten the test cassette 100. The software application continues the attempted detection of a correctly aligned test cassette 100 in operation 420 and operation 422 until the test cassette 100 is positioned correctly or the user cancels the image capture mode.
Once the software application has detected that the test cassette 100 in the image is sufficiently vertical in operation 420, the software application then proceeds to an image capture step at operation 424, in which a still image is captured from the image feed. In one example, to reduce the possibility that requiring additional input or manipulation of the computing device 700 by the user will alter the relative positions of the computing device 700 and test cassette 100, the software application may automatically capture the image once the requirements in the flowchart of FIG. 4 are met.
If the image is captured automatically, the software application may just examine the continual stream of video frames from the camera until it observes one that meets the acceptance criteria, and then use that frame instead of taking a photo or initiating a separate image capture step.
To facilitate reasonably consistent image capture across different computing devices 700 and under different circumstances, e.g. of ambient lighting, the software application typically enables a flash of the computing device 700 but otherwise allows the computing device 700 to set the exposure for the capture of the image of the test cassette 100. In one example, the flash of the computing device 700 is set (forced on) by the software application to go off at the time of the image capture. In another example, the flash/flashlight is illuminated constantly for some or all of the time, for example if the software application is going to capture a frame from the video feed automatically as soon as all of the requirements are met.
It will also be appreciated that while the determinations in FIG. 4 are shown and described in a particular sequence, one or more of the determinations may be performed in a different order, combined, or performed concurrently.
After a still image of the test cassette 100 has been captured at operation 424, the method continues as shown in FIG. 5.
The method commences at operation 502 with the software application determining whether the dimensions and other parameters (e.g. the resolution) of the captured image are correct. This is an additional verification. Because of the checks that have been performed as described above with reference to FIG. 3 and FIG. 4, the dimensions and other parameters of the captured image are not normally expected to be incorrect, and this operation serves as a safeguard.
If the image dimension is correct at operation 502, the method proceeds at operation 506 where the software application attempts to detect the sample well 104 of the test cassette 100 using known object recognition techniques or by a machine learning scheme that has been trained on an image set with identified sample wells. This includes determining the location and dimensions of the sample well 104 if detected. If the sample well 104 is not detected at operation 506, the software application displays an error message on the display of the computing device 700 at operation 508 and prompts the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If a sample well 104 is detected at operation 506, the method proceeds at operation 510 where the software application attempts to detect the result well 106 of the test cassette 100 using known object recognition techniques or by a machine learning scheme that has been trained on an image set with identified result wells. Once detected, the location of the result well 106 is known or a relevant location parameter can be determined. If the result well 106 is not detected at operation 510, the software application displays an error message on the display of the computing device 700 at operation 512 and prompts the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If a result well 106 is detected at operation 510, the vertical positions in the captured still image of the result well 106 and the sample well 104 are compared at operation 514. If the sample well 104 is above the result well 106 in the image, the image is flipped at operation 516 so that the result well 106 is above the sample well 104.
If the sample well 104 is below the result well 106 in the image, the method proceeds at operation 518 where the software application determines the height of the result well 106 and determines whether or not the height of the result well 106 is within acceptable limits. Because of the checks that have been performed as described above with reference to FIG. 3 and FIG. 4, the height of the result well 106 is normally expected to fall within these limits; if it does not, the software application displays an error message on the display of the computing device 700 and prompts the user to recapture the image.
If an acceptable height for the result well 106 is verified at operation 518, the method proceeds at operation 522 where the software application checks if the captured image is underexposed, or if a relevant portion (e.g. the region of interest in the result well 106) is underexposed. Any suitable method of checking image exposure may be used, but in one example the RGB values of the pixels in the region of interest are compared to a predetermined threshold. For example, if more than a certain percentage of pixels (e.g. 30%) have RGB values below 60, then the region of interest is deemed to be underexposed.
If the comparison in operation 522 reveals that the captured image is underexposed, then the software application displays an error message on the display of the computing device 700 at operation 512 indicating that the image is underexposed and prompting the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If it is determined that the captured image is not underexposed at operation 522, the method proceeds at operation 524 where the software application checks if the captured image is overexposed. Any suitable method of checking image exposure may be used, but in one example the RGB values of the pixels in the region of interest are compared to a predetermined threshold. For example, if more than a certain percentage of pixels (e.g. 30%) have RGB values above 240, then the region of interest is deemed to be overexposed. If the comparison in operation 524 reveals that the captured image is overexposed, then the method proceeds to operation 526, where the software application attempts to detect a control line 110 in the captured image. This detection is performed in the region of interest of the detected result well 106 or of the test strip 108.
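A sketch of the exposure tests of operations 522 and 524 follows, using the example figures from the text (30% of pixels, channel values below 60 or above 240); treating "RGB values below 60" as "all three channels below 60" is an interpretation here, not something the text specifies:

```python
import numpy as np

# Sketch only: exposure checks on the result well's region of interest
# (an HxWx3 RGB array). Thresholds follow the examples in the text.
def exposure_status(roi_rgb, frac=0.30, low=60, high=240):
    pixels = roi_rgb.reshape(-1, 3)
    dark_fraction = np.all(pixels < low, axis=1).mean()
    bright_fraction = np.all(pixels > high, axis=1).mean()
    if dark_fraction > frac:
        return "underexposed"
    if bright_fraction > frac:
        return "overexposed"
    return "ok"
```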
If a control line 110 is detected in operation 526, then the software application displays an error message on the display of the computing device 700 at operation 528 indicating that the captured image is overexposed and prompting the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If a control line 110 is not detected in operation 526, then the software application displays an error message on the display of the computing device 700 at operation 528 indicating that a control line 110 has not been detected. In such a case, the test cassette 100 may not have been exposed to a sample and the software application may prompt the user, at operation 530, to recapture the image or to confirm that the test cassette 100 has been used correctly before prompting the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If it is determined in operation 524 that the captured image is not overexposed, the method proceeds to operation 532, where the software application attempts to detect a control line 110 in the captured image. If a control line 110 is not detected in operation 532, then the software application displays an error message on the display of the computing device 700 at operation 528 indicating that a control line 110 has not been detected. In such a case, the test cassette 100 may not have been exposed to a sample and the software application may prompt the user to recapture the image or to confirm that the test cassette 100 has been used correctly before prompting the user to recapture the image. The software application may provide instructions to the user indicating how better to take the image, and may also return automatically, after a short delay, to operation 302 in FIG. 3.
If a control line 110 is detected at operation 532, the captured image is output at operation 534. In some cases, the captured image is processed and analyzed by the software application to interpret the test strip 108 using known techniques for analyzing and interpreting diagnostic test strips. In other cases, the captured image may be stored locally or remotely for later processing, or may be transmitted to a remote server for analysis. In other cases, the test strip 108 may be cropped from the captured image and similarly analyzed, stored or transmitted as appropriate or desired.
The operating system 612 manages hardware resources and provides common services. The operating system 612 includes, for example, a kernel 614, services 616, and drivers 622. The kernel 614 acts as an abstraction layer between the hardware and the other software layers. For example, the kernel 614 provides memory management, Processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 616 can provide other common services for the other software layers. The drivers 622 are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 622 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.
The libraries 610 provide a low-level common infrastructure used by the applications 606. The libraries 610 can include system libraries 618 (e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 610 can include API libraries 624 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 610 can also include a wide variety of other libraries 628 to provide many other APIs to the applications 606.
The frameworks 608 provide a high-level common infrastructure that is used by the applications 606. For example, the frameworks 608 provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks 608 can provide a broad spectrum of other APIs that can be used by the applications 606, some of which may be specific to a particular operating system or platform.
In an example embodiment, the applications 606 may include a home application 636, a contacts application 630, a browser application 632, a book reader application 634, a location application 642, a media application 644, a messaging application 646, a game application 648, and a broad assortment of other applications such as a third-party application 640. The applications 606 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 606, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 640 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 640 can invoke the API calls 650 provided by the operating system 612 to facilitate functionality described herein.
The computing device 700 may include Processors 704, memory 706, and I/O components 702, which may be configured to communicate with each other via a bus 740. In an example embodiment, the Processors 704 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another Processor, or any suitable combination thereof) may include, for example, a Processor 708 and a Processor 712 that execute the instructions 710. The term "Processor" is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as "cores") that may execute instructions contemporaneously. Although FIG. 7 shows multiple Processors, the computing device 700 may include a single Processor with a single core, a single Processor with multiple cores (e.g., a multi-core Processor), multiple Processors with a single core, multiple Processors with multiple cores, or any combination thereof.
The memory 706 includes a main memory 714, a static memory 716, and a storage unit 718, all accessible to the processors 704 via the bus 740. The main memory 714, the static memory 716, and the storage unit 718 store the instructions 710 embodying any one or more of the methodologies or functions described herein. The instructions 710 may also reside, completely or partially, within the main memory 714, within the static memory 716, within machine-readable medium 720 within the storage unit 718, within at least one of the processors 704 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the computing device 700.
The I/O components 702 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 702 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 702 may include many other components that are not shown in FIG. 7.
In further example embodiments, the I/O components 702 may include biometric components 730, motion components 732, environmental components 734, or position components 736, among a wide array of other components. For example, the biometric components 730 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 732 include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components 734 include, for example, one or more cameras, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 736 include location sensor components (e.g., a GPS receiver Component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication may be implemented using a wide variety of technologies. The I/O components 702 further include communication components 738 operable to couple the computing device 700 to a network 722 or devices 724 via respective coupling or connections. For example, the communication components 738 may include a network interface Component or another suitable device to interface with the network 722. In further examples, the communication components 738 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 724 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
Moreover, the communication components 738 may detect identifiers or include components operable to detect identifiers. For example, the communication components 738 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 738, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (e.g., main memory 714, static memory 716, and/or memory of the processors 704) and/or storage unit 718 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 710), when executed by processors 704, cause various operations to implement the disclosed embodiments.
The instructions 710 may be transmitted or received over the network 722, using a transmission medium, via a network interface device (e.g., a network interface Component included in the communication components 738) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 710 may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices 724. A machine readable medium can comprise a transmission medium or a storage medium. A machine readable medium can comprise a non-transitory storage medium.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
Although an overview of the inventive subject matter has been described with reference to specific examples, various modifications and changes may be made to these examples without departing from the broader scope of examples of the present disclosure. Such examples of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.
The examples illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other examples may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various examples is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various examples of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of examples of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application is a national phase entry of International Application No. PCT/US2021/056670, titled "IMAGE CAPTURE FOR DIAGNOSTIC TEST RESULTS," and filed Oct. 26, 2021, which claims the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 63/115,889, filed Nov. 19, 2020, entitled "IMAGE CAPTURE FOR DIAGNOSTIC TEST RESULTS," the entire content of which applications are incorporated herein by reference in their entireties as if explicitly set forth.