Prior approaches to analyzing cells and cellular morphology from samples such as blood samples can be less than ideal in at least some respects. For example, prior clinical standards for the review and analysis of blood samples can be based on a compromise between what would be ideal and what can be achieved by a person manually reviewing slides. This can lead to a failure to detect rare cell types and morphology structures, which can lead to a flawed diagnosis in at least some instances. Also, the statistical sampling of prior approaches can be less than ideal because of the limited number of cells that can be analyzed, and in at least some instances diagnoses are made without statistical significance.
Although efforts have been made to improve and/or automate the analysis of cells, the prior approaches have typically analyzed fewer cells and cellular structures than would be ideal, such that the prior automated approaches suffer from shortcomings that are similar to the manual approaches in at least some respects. These shortcomings can be related to the area scanned and the rate at which samples can be scanned at sufficient resolution. Also, the number of cells that can be analyzed at a sufficient rate to be used in a clinical setting may be less than ideal. Work in relation to the present disclosure suggests that the prior approaches to scanning hematology samples may not scan appropriate areas for the type of cells and cellular structure to be analyzed, which may result in the scan taking longer than would be ideal.
In light of the above, it would be desirable to provide improved approaches to analyzing cells that can provide less time-consuming scans of samples and a more accurate analysis of samples to detect diseases and blood conditions. Ideally, an appropriate area would be scanned to provide a sufficient number of cells and cellular structures to decrease the scan time and increase the sensitivity of the analysis and provide statistical significance for the analysis of cell types, morphology and diseases in at least some instances.
The presently disclosed systems, methods and apparatuses provide improved scanning and analysis of hematology samples such as blood samples. In some embodiments, a first image is acquired and an area of the sample to be scanned at a high resolution is determined from the first image in order to decrease the amount of time to scan the sample, which can lead to an improved diagnosis. In some embodiments, patient data is received as input to determine the area of the sample to scan. While the patient data may comprise any suitable patient data, in some embodiments the patient data comprises one or more of prior diagnostic data or prior blood sample analysis such as a complete blood count, patient symptoms, a diagnosis, flow cytometry data, or other data. In some embodiments, the area scanned is dynamically adjusted in response to the classification of cellular structures, such as cellular structures associated with a rare cell type or disease. The dynamic adjustment to the scan area may occur at any suitable time, such as after the scanning of the sample has started and prior to completion of the scanning of the sample at a resolution suitable to determine and classify cellular structures. This approach can promote scanning of areas that are more likely to have relevant cell data and decreased scan times of other areas.
In some embodiments, a microscope system for detecting a scan area within hematology slides in digital microscopy comprises a scanning apparatus to scan a hematology sample, and a processor coupled to the scanning apparatus and a memory. The processor may be configured to execute instructions which cause the system to receive a first image of the sample at a first resolution and determine a scan area of the sample to scan in response to the first image. The instructions may further cause the system to scan the scan area to generate an image of the scan area at a second resolution greater than the first resolution and classify a plurality of cells from the image of the scan area into cell data comprising a plurality of cell parameters. The instructions may also cause the microscope system to output the cell data.
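The two-pass workflow described above can be sketched in code. This is a minimal illustrative sketch only: the names `determine_scan_area`, `scan_and_classify`, and `ScanResult`, and the brightness-threshold heuristic, are hypothetical stand-ins and do not appear in the disclosure.

```python
# Hypothetical sketch of the two-pass scan pipeline: a low-resolution first
# image selects regions, which are then "scanned" at higher resolution and
# classified. All names and heuristics are illustrative, not from the patent.

from dataclasses import dataclass, field

@dataclass
class ScanResult:
    scan_area: list                      # (x, y, w, h) regions selected for high-res scanning
    cell_data: dict = field(default_factory=dict)

def determine_scan_area(first_image):
    """Pick regions of interest from a low-resolution preview image.

    A trivial nonzero-pixel threshold stands in for the body/monolayer/
    feathered-edge detection described in the disclosure."""
    regions = []
    for y, row in enumerate(first_image):
        for x, px in enumerate(row):
            if px > 0:                   # nonzero preview pixel: assume cells present
                regions.append((x, y, 1, 1))
    return regions

def scan_and_classify(first_image, classify):
    """Determine the scan area, then classify each high-resolution region."""
    result = ScanResult(scan_area=determine_scan_area(first_image))
    for region in result.scan_area:
        label = classify(region)         # stand-in for high-res scan + classifier
        result.cell_data[label] = result.cell_data.get(label, 0) + 1
    return result

# Toy usage: a 3x3 "preview" with two occupied pixels.
preview = [[0, 1, 0], [0, 0, 0], [1, 0, 0]]
res = scan_and_classify(preview, classify=lambda r: "RBC")
```

The key design point mirrored here is that region selection runs on the cheap first image, so the expensive high-resolution pass touches only the selected regions.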
All patents, applications, and publications referred to and identified herein are hereby incorporated by reference in their entirety and shall be considered fully incorporated by reference even though referred to elsewhere in the application.
A better understanding of the features, advantages and principles of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, and the accompanying drawings of which:
The following detailed description provides a better understanding of the features and advantages of the inventions described in the present disclosure in accordance with the embodiments disclosed herein. Although the detailed description includes many specific embodiments, these are provided by way of example only and should not be construed as limiting the scope of the inventions disclosed herein.
Although reference is made to the scanning of hematology samples, embodiments of the present disclosure will find application in many fields where structures such as cells are analyzed, for example bone marrow aspirates, cytology, body fluid samples, histopathology, etc.
The presently disclosed systems, methods and apparatuses are well suited for combination with prior approaches to scanning and analyzing samples such as hematology samples. For example, the optical scanning apparatus may comprise one or more components of a conventional microscope with a sufficient numerical aperture, or a computational microscope as described in U.S. patent application Ser. No. 15/775,389, filed on Nov. 10, 2016, entitled “Computational microscopes and methods for generating an image under different illumination conditions,” published as US20190235224. The system may comprise one or more components of an autofocus system, for example as described in U.S. Pat. No. 10,705,326, entitled “Autofocus system for a computational microscope”. While the system may comprise any suitable user interface and data storage, in some embodiments, the system comprises one or more components for data storage and user interaction as described in U.S. Pat. No. 10,935,779, entitled “Digital microscope which operates as a server”. The system may comprise one or more components of an autoloader for loading slides, for example as described in U.S. patent application Ser. No. 16/875,665, filed on May 15, 2020, entitled “Multi/parallel scanner”. The system may comprise one or more components for selectively scanning areas of a sample, for example as described in U.S. patent application Ser. No. 16/875,721, filed on May 15, 2020, entitled “Accelerating digital microscopy scans using empty/dirty area detection,” published as US20200278530. The system may comprise a grid with a known pattern to facilitate image reconstruction, for example as described in U.S. Pat. No. 10,558,029, entitled “System for image reconstruction using a known pattern”.
Image capture device 102 may be used to capture images of sample 114. The term "image capture device" as used herein generally refers to a device that records the optical signals entering a lens as an image or a sequence of images. The optical signals may be in the near-infrared, infrared, visible, or ultraviolet spectrums. Examples of an image capture device include a CCD camera, a CMOS camera, a color camera, a photo sensor array, a video camera, a mobile phone equipped with a camera, a webcam, a preview camera, a microscope objective and detector, etc. Some embodiments may comprise only a single image capture device 102, while other embodiments may comprise two, three, or even four or more image capture devices 102. In some embodiments, image capture device 102 may be configured to capture images in a defined field-of-view (FOV). Also, when microscope 100 comprises several image capture devices 102, image capture devices 102 may have overlap areas in their respective FOVs. Image capture device 102 may have one or more image sensors (not shown in
In some embodiments, microscope 100 comprises focus actuator 104. The term “focus actuator” as used herein generally refers to any device capable of converting input signals into physical motion for adjusting the relative distance between sample 114 and image capture device 102. Various focus actuators may be used, including, for example, linear motors, electrostrictive actuators, electrostatic motors, capacitive motors, voice coil actuators, magnetostrictive actuators, etc. In some embodiments, focus actuator 104 may comprise an analog position feedback sensor and/or a digital position feedback element. Focus actuator 104 is configured to receive instructions from controller 106 in order to make light beams converge to form a clear and sharply defined image of sample 114. In the example illustrated in
However, in other embodiments, focus actuator 104 may be configured to adjust the distance by moving stage 116, or by moving both image capture device 102 and stage 116. Microscope 100 may also comprise controller 106 for controlling the operation of microscope 100 according to the disclosed embodiments. Controller 106 may comprise various types of devices for performing logic operations on one or more inputs of image data and other data according to stored or accessible software instructions providing desired functionality. For example, controller 106 may comprise a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, cache memory, or any other types of devices for image processing and analysis such as graphic processing units (GPUs). The CPU may comprise any number of microcontrollers or microprocessors configured to process the imagery from the image sensors. For example, the CPU may comprise any type of single- or multi-core processor, mobile device microcontroller, etc. Various processors may be used, including, for example, processors available from manufacturers such as Intel®, AMD®, etc. and may comprise various architectures (e.g., x86 processor, ARM®, etc.). The support circuits may be any number of circuits generally well known in the art, including cache, power supply, clock and input-output circuits. Controller 106 may be at a remote location, such as a computing device communicatively coupled to microscope 100.
In some embodiments, controller 106 may be associated with memory 108 used for storing software that, when executed by controller 106, controls the operation of microscope 100. In addition, memory 108 may also store electronic data associated with operation of microscope 100 such as, for example, captured or generated images of sample 114. In one instance, memory 108 may be integrated into the controller 106. In another instance, memory 108 may be separated from the controller 106.
Specifically, memory 108 may refer to multiple structures or computer-readable storage mediums located at controller 106 or at a remote location, such as a cloud server. Memory 108 may comprise any number of random-access memories, read only memories, flash memories, disk drives, optical storage, tape storage, removable storage and other types of storage.
Microscope 100 may comprise illumination assembly 110. The term “illumination assembly” as used herein generally refers to any device or system capable of projecting light to illuminate sample 114.
Illumination assembly 110 may comprise any number of light sources, such as light emitting diodes (LEDs), an LED array, lasers, or lamps configured to emit light, for example a halogen lamp, an incandescent lamp, or a sodium lamp. For example, illumination assembly 110 may comprise a Kohler illumination source. Illumination assembly 110 may be configured to emit polychromatic light. For instance, the polychromatic light may comprise white light.
In some embodiments, illumination assembly 110 may comprise only a single light source. Alternatively, illumination assembly 110 may comprise four, sixteen, or even more than a hundred light sources organized in an array or a matrix. In some embodiments, illumination assembly 110 may use one or more light sources located at a surface parallel to sample 114 to illuminate sample 114. In other embodiments, illumination assembly 110 may use one or more light sources located at a surface perpendicular or at an angle to sample 114.
In addition, illumination assembly 110 may be configured to illuminate sample 114 in a series of different illumination conditions. In one example, illumination assembly 110 may comprise a plurality of light sources arranged in different illumination angles, such as a two-dimensional arrangement of light sources. In this case, the different illumination conditions may comprise different illumination angles. For example,
Although reference is made to computational microscopy, the presently disclosed systems and methods are well suited for use with many types of microscopy and microscopes such as one or more of a high definition microscope, a digital microscope, a scanning digital microscope, a 3D microscope, a phase imaging microscope, a phase contrast microscope, a dark field microscope, a differential interference contrast microscope, a light-sheet microscope, a confocal microscope, a holographic microscope, or a fluorescence-based microscope.
In some embodiments, image capture device 102 may have an effective numerical aperture (“NA”) of at least 0.8. In some embodiments, the effective NA corresponds to a resolving power of the microscope that has the same resolving power as an objective lens with that NA. Image capture device 102 may also have an objective lens with a suitable NA to provide the effective NA, although the NA of the objective lens may be less than the effective NA of the microscope. For example, the imaging apparatus may comprise a computational microscope to reconstruct an image from a plurality of images captured with different illumination angles as described herein, in which the reconstructed image corresponds to an effective NA that is higher than the NA of the objective lens of the image capture device. In some embodiments with conventional microscopes, the NA of the microscope objective corresponds to the effective NA of the images. The lens may comprise any suitable lens such as an oil immersion lens or a non-oil immersion lens.
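As a rough consistency check on the effective NA figure quoted above, the Abbe diffraction limit d = λ/(2·NA) relates numerical aperture to resolving power. The wavelength of 550 nm (green light) assumed below is illustrative and not stated in the disclosure.

```python
# Abbe diffraction-limit estimate for an effective NA of 0.8.
# The 550 nm illumination wavelength is an assumption for the example.

def abbe_limit_nm(wavelength_nm, na):
    """Smallest resolvable feature spacing, d = lambda / (2 * NA), in nm."""
    return wavelength_nm / (2.0 * na)

d = abbe_limit_nm(550, 0.8)   # ~344 nm at NA 0.8
```

The result (about 344 nm) falls inside the 200–500 nm optical resolution range discussed elsewhere in this disclosure, which is why an effective NA of at least 0.8 is a sensible floor for this application.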
The dynamic adjustment to the scan area as described herein may occur at any suitable time, such as after the scanning of the sample has started and prior to completion of the scanning of the sample at a resolution suitable to determine and classify cellular structures. This approach can promote scanning of areas that are more likely to have relevant cell data and decreased scan times of other areas. In some embodiments, a first image is generated at a first resolution to determine the area to scan at a second resolution greater than the first resolution, and after scanning of the area at the second resolution has been initiated, the area scanned at the second resolution is adjusted during the scan of the area and prior to completion of the scanning of the sample.
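The mid-scan adjustment described above can be sketched as a region queue that grows while scanning is in progress. The `classify` and `expand` callbacks below are hypothetical stand-ins for the classifier and the area-adjustment policy; neither name comes from the disclosure.

```python
# Illustrative sketch of dynamic scan-area adjustment: regions classified so
# far can enqueue additional regions (e.g., neighbours of a rare-cell hit)
# before the scan completes. Callback names are invented for the example.

from collections import deque

def dynamic_scan(initial_regions, classify, expand):
    """Scan queued regions, letting results enlarge the queue mid-scan.

    classify(region) -> label; expand(region, label) -> extra regions to add."""
    queue = deque(initial_regions)
    seen = set(initial_regions)
    results = {}
    while queue:
        region = queue.popleft()
        label = classify(region)
        results[region] = label
        for extra in expand(region, label):
            if extra not in seen:        # avoid re-scanning the same area
                seen.add(extra)
                queue.append(extra)
    return results

# Toy run: region 0 yields a "rare" hit, which enqueues region 1 mid-scan.
out = dynamic_scan(
    [0],
    classify=lambda r: "rare" if r == 0 else "normal",
    expand=lambda r, lab: [r + 1] if lab == "rare" else [],
)
```

Because expansion happens inside the scan loop, areas likely to hold relevant cells are visited without a second full pass, matching the stated goal of decreased scan times for other areas.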
Consistent with disclosed embodiments, microscope 100 may comprise, be connected with, or in communication with (e.g., over a network or wirelessly, e.g., via Bluetooth) user interface 112. The term “user interface” as used herein generally refers to any device suitable for presenting a magnified image of sample 114 or any device suitable for receiving inputs from one or more users of microscope 100.
Microscope 100 may also comprise or be connected to stage 116. Stage 116 comprises any horizontal rigid surface where sample 114 may be mounted for examination. Stage 116 may comprise a mechanical connector for retaining a slide containing sample 114 in a fixed position. The mechanical connector may use one or more of the following: a mount, an attaching member, a holding arm, a clamp, a clip, an adjustable frame, a locking mechanism, a spring or any combination thereof. In some embodiments, stage 116 may comprise a translucent portion or an opening for allowing light to illuminate sample 114. For example, light transmitted from illumination assembly 110 may pass through sample 114 and towards image capture device 102. In some embodiments, stage 116 and/or sample 114 may be moved using motors or manual controls in the XY plane to enable imaging of multiple areas of the sample.
As illustrated in
As described herein, in some embodiments the scanning apparatus may comprise an optical microscope configured with a substantially fixed illumination light source to capture a plurality of images of the sample. Optionally, in some embodiments, the scanning apparatus may sequentially acquire the plurality of images from different areas of the sample.
As described herein, in some embodiments the scanning apparatus may comprise a computational microscope configured to vary a light source with a plurality of illumination angles to capture a plurality of images of the sample. Optionally, in some embodiments the plurality of images may be processed to generate a high resolution image of the area.
The sample, which may be a hematology sample or blood sample, may include various particular components. For example, the hematology sample may comprise a body, a monolayer of cells and a feathered edge.
The systems described herein may receive additional signals that may aid in scan area detection described further below. For example, microscope 100 may receive patient data prior to scanning sample 114. The patient data may include various types of data that may be relevant to scanning sample 114. For example, the patient data may comprise one or more of flow cytometry data from a flow cytometer or a complete blood count (“CBC”) from a CBC machine. The flow cytometry data may comprise a platelet count.
In some embodiments, the patient data may comprise one or more of a complete blood count (“CBC”), a white blood cell (“WBC”) count, a WBC differential count, a red blood cell (“RBC”) count, or a platelet count. Optionally, in some embodiments the WBC differential count may comprise relative amounts of neutrophils, eosinophils, basophils, lymphocytes and monocytes.
In some embodiments, the patient data may comprise prior diagnostic data of the patient. For example, the prior diagnostic data may comprise the WBC differential count. In some embodiments, the WBC differential count may comprise one or more cell types outside a normal range.
In some embodiments, the patient data may correspond to an abnormal cell type. In some embodiments, the patient data may correspond to an anemia of the patient.
At step 220, one or more of the systems described herein may receive, with a processor, a first image of the sample at a first resolution. For example, microscope 100 (e.g., controller 106), may receive a first image of sample 114 at a first resolution.
The first image may, in some embodiments, prioritize fast acquisition over high resolution. For example, the first image may comprise one or more of a preview image, a webcam image, or an image from the scanning apparatus.
In some embodiments, the first image may comprise a plurality of first images captured over different fields of view. For instance, the plurality of first images may comprise no more than two images. Optionally, the plurality of first images may comprise fewer than 5 images, optionally fewer than 10 images, optionally fewer than 20 images, or optionally fewer than 50 images.
Turning back to
Microscope 100 may determine the scan area based on various attributes relating to sample 114 as may be detected from the first image. For example, when sample 114 comprises a body, a monolayer of cells and a feathered edge, microscope 100 may select the scan area in response to one or more of a location of the body, a location of the monolayer of cells, or a location of the feathered edge.
In some embodiments, microscope 100 may determine the scan area in response to one or more of identified locations of cells in the first image, a density of cells in an area of the first image, or relative densities of cells at different areas of the first image.
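The region detection described in the two preceding paragraphs can be sketched with a simple density heuristic: in a blood smear, the body is densest, the monolayer intermediate, and the feathered edge sparsest. The thresholds and function name below are invented for illustration and are not part of the disclosure.

```python
# Illustrative sketch of locating smear regions from a preview image by
# relative cell density per preview column. Thresholds are assumptions.

def label_regions(column_densities, body_min=0.8, mono_min=0.3):
    """Label each preview column as 'body', 'monolayer', or 'feathered_edge'."""
    labels = []
    for d in column_densities:
        if d >= body_min:
            labels.append("body")          # densest region of the smear
        elif d >= mono_min:
            labels.append("monolayer")     # cells spread roughly one deep
        else:
            labels.append("feathered_edge")  # sparsest, at the smear's tail
    return labels

# Toy usage: densities falling off from the body toward the feathered edge.
labels = label_regions([0.95, 0.9, 0.5, 0.4, 0.1])
```

A real implementation would operate on the first image's pixel data rather than precomputed densities, but the selection logic, choosing the scan area from relative densities at different areas of the first image, follows the same shape.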
The scan area may be determined to meet particular requirements. For example, the scan area may comprise at least 0.4 cm2 and an optical resolution of the image of the scan area may be within a range from about 200 nm to about 500 nm. Optionally, the optical resolution may be within a range from about 200 nm to about 400 nm.
In some embodiments, microscope 100 may dynamically adjust the scan area to scan in response to cell data from the image of the scan area, as will be described further below.
In some embodiments, when patient data is received prior to scanning the sample, the scan area may be determined using the patient data. For example, when the patient data comprises the WBC differential count, microscope 100 may determine the scan area in response to cell counts of the WBC differential count. In other examples, when the WBC differential count comprises one or more cell types outside a normal range, microscope 100 may determine an area of the sample having an increased likelihood of presence for the one or more cell types outside the normal range.
In some embodiments, when the patient data corresponds to an anemia of the patient, microscope 100 may increase the scan area to detect one or more of tear drop cells (dacrocytes) or schistocytes.
In some embodiments, microscope 100 may define the scan area to classify a plurality of platelets for a platelet count and platelet morphology. In some embodiments, the scan area may comprise a feathered edge of the sample. In some embodiments, the scan area may comprise the feathered edge of the sample in response to a low platelet count.
In some embodiments, the scan area may comprise a scan area to classify a plurality of WBCs for a WBC differential count. In some embodiments, the scan area may comprise a scan area to classify a plurality of RBCs for an RBC count.
In some embodiments, when the patient data corresponds to an abnormal cell type, microscope 100 may adjust the scan area in response to the abnormal cell type.
In some embodiments, the scan area may comprise an area to detect parasites.
In some embodiments, microscope 100 may receive additional criteria for determining the scan area by way of a user input. For example, microscope 100 may receive a user input that may correspond to a type of cell to analyze. In such embodiments, microscope 100 may determine the scan area in response to the user input. Further, the type of cell to analyze may comprise one or more of a red blood cell count, a platelet count, a platelet morphology, a WBC count, a WBC differential count, a bone marrow megakaryocyte count, or a parasite detection.
In some embodiments, microscope 100 may scan a plurality of samples in an automated mode and microscope 100 may enter a manual mode of operation to receive a user input. For example, a user of microscope 100 may use user interface 112 to enter user inputs.
Returning to
In some embodiments, the scan area may comprise at least 0.4 cm2 and an optical resolution of the image of the scan area may be within a range from about 200 nm to about 500 nm. Optionally, the optical resolution may be within a range from about 200 nm to about 400 nm. In some embodiments, a pixel resolution of the image of the scan area may be within a range from about 100 nm to about 250 nm and optionally within a range from about 100 nm to about 200 nm.
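A back-of-envelope calculation shows the data volume implied by the figures above. The square-pixel assumption and the chosen pitches are illustrative.

```python
# Pixel budget for a 0.4 cm^2 scan area at the pixel pitches quoted above
# (square pixels assumed; purely illustrative arithmetic).

def pixel_count(area_cm2, pixel_nm):
    """Number of pixels covering area_cm2 at a given pixel pitch in nm."""
    area_nm2 = area_cm2 * (1e7) ** 2      # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2
    return area_nm2 / pixel_nm ** 2

coarse = pixel_count(0.4, 250)   # 0.4 cm^2 at 250 nm pitch: ~0.64 gigapixels
fine = pixel_count(0.4, 100)     # 0.4 cm^2 at 100 nm pitch: ~4 gigapixels
```

Gigapixel-scale images per slide explain why restricting the high-resolution pass to a determined scan area, rather than the whole slide, materially reduces scan time.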
At step 250 one or more of the systems described herein may classify, with the processor, a plurality of cells from the image of the scan area into cell data comprising a plurality of cell parameters for the plurality of cells. For example, microscope 100 (e.g., controller 106) may classify a plurality of cells from the image of the scan area into cell data that may comprise a plurality of cell parameters for the plurality of cells.
As described above, in some embodiments microscope 100 may adjust the scan area. For example, as microscope 100 receives and/or analyzes data, microscope 100 may accordingly adjust the scan area. In some embodiments, microscope 100 may repeat one or more steps of method 200 and/or perform one or more steps of method 200 in parallel.
In some embodiments, microscope 100 may dynamically adjust the scan area such that the scan area may be updated while microscope 100 performs one or more steps of method 200. For example, microscope 100 may dynamically adjust the scan area in response to a gradient of cells in the area.
In some embodiments, microscope 100 may dynamically adjust the scan area from a first area scanned with the scanning apparatus to a second area not yet scanned with the scanning apparatus in response to the cell data. For instance, the first area may not overlap with the second area. Optionally in some examples, a gap may extend between the first area and the second area such that the scanning apparatus may skip scanning the sample between the first area and the second area. For instance, microscope 100 may skip scanning the gap (e.g., by controlling and/or moving one or more of focus actuator 104, stage 116, image capture device 102, etc.).
In some embodiments, microscope 100 may classify the plurality of cell parameters during the scan of the scan area and microscope 100 may dynamically adjust the scan of the scan area in response to the plurality of cell parameters. Optionally in some examples, the plurality of cell parameters may comprise a plurality of cell types.
In some embodiments, microscope 100 may classify the plurality of cell parameters with an artificial intelligence (AI) algorithm during the scan of the scan area. For instance, the AI algorithm may comprise one or more of a statistical classifier or a neural network classifier. In some embodiments, microscope 100 may process at least 10 classifiers in parallel with each other and with the scanning of the scan area.
The artificial intelligence used to classify the parameters may be configured in any suitable way in accordance with the present disclosure. In some embodiments, the artificial intelligence may comprise a neural network classifier, e.g. a convolutional neural network, or a machine learning classifier, for example. The classifier may include one or more models such as a neural network, a convolutional neural network, decision trees, support vector machines, regression analysis, Bayesian networks, and/or training models. The classifier may be configured to classify the at least 10 parameters as described herein with any of the aforementioned approaches. In some embodiments, the classifier may comprise a convolutional neural network with several cascaded layers for detection and segmentation. In some embodiments, the classifier may comprise a binary classification parameter or a multi-level classification parameter. The various steps may be performed sequentially or in parallel. For example, cell types may be classified, and then cellular morphology parameters classified based on a cell type. In some embodiments, a plurality of parameters may be classified and output to determine cell type, and additional parameters may be selected and classified based on cell type. In some embodiments, groups of cells or parameters may be classified, and these groups may further be classified into subgroups, which may be used to classify other subgroups. In some embodiments, cellular structures may be segmented to provide segmented cellular images. Alternatively, cells and parameters may be classified without segmentation. In some embodiments, combinations of logical operations may be performed on the output parameters to determine additional parameters to classify and associated processes, such as logical operations related to detected morphology structures. 
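The cascaded, type-dependent classification described above, classify the cell type first, then evaluate only the morphology parameters relevant to that type, can be sketched as follows. The parameter tables, feature names, and threshold are invented for the example; a real system would use trained classifiers rather than these hand-written rules.

```python
# Minimal sketch of cascaded classification: a first stage picks the cell
# type, and the type selects which morphology parameters the second stage
# evaluates. All labels, features, and thresholds here are assumptions.

MORPHOLOGY_BY_TYPE = {
    "RBC": ["size", "shape", "central_pallor"],
    "WBC": ["nucleus_lobes", "granularity"],
    "platelet": ["clumping"],
}

def classify_cell(features):
    """Stand-in first-stage classifier operating on a toy feature dict."""
    if features.get("has_nucleus"):
        return "WBC"
    # Mature RBCs and platelets lack a nucleus; separate them by size.
    return "platelet" if features.get("diameter_um", 0) < 4 else "RBC"

def classify_parameters(features):
    """Second stage: evaluate only the parameters relevant to the type."""
    cell_type = classify_cell(features)
    params = {p: features.get(p) for p in MORPHOLOGY_BY_TYPE[cell_type]}
    return cell_type, params

cell_type, params = classify_parameters(
    {"has_nucleus": False, "diameter_um": 7.0, "size": "normal"}
)
```

Selecting parameters by cell type keeps the per-cell work small, which matters when many classifiers run in parallel with the scan itself.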
A person of ordinary skill in the art of artificial intelligence will recognize many adaptations and variations for classifying and determining parameters in accordance with the present disclosure.
In some embodiments, microscope 100 may detect and count a number of red blood cells over at least a portion of the scan area and microscope 100 may adjust the scan area in response to the number of detected red blood cells. Optionally in some examples, the number may comprise a number per unit area and optionally microscope 100 may adjust the scan area in response to a number of non-overlapping red blood cells.
In some embodiments, microscope 100 may detect and count a number of white blood cells over at least a portion of the scan area and microscope 100 may adjust the scan area in response to the number of detected white blood cells. Optionally in some examples, the number may comprise a number per unit area.
In some embodiments, the cell data may comprise one or more of a cell type, a rare cell type, a density of cells of a cell type, a number of cells of a cell type, or a target number of cells of a cell type. For instance, the rare cell type may comprise one or more of a blast cell, a plasma cell, a myelocyte or a promyelocyte, a circulating lymphoma cell, an immature cell, a hairy cell, a binucleated cell (“buttocks cell”), a Sezary cell, or a cup-like blast. In some embodiments, microscope 100 may increase the scan area in response to the rare cell type. Optionally in some examples, the rare cell type may comprise an abnormal cell type.
In some embodiments, the plurality of cell parameters may comprise a parameter corresponding to a size of a cell and microscope 100 may adjust the scan area to scan a feathered edge of the sample in response to the size of the cell. In some embodiments, the cell may comprise a distance across greater than 20 µm and optionally microscope 100 may adjust the scan area from a monolayer or a body of the sample to the feathered edge in response to the size of the cell.
In some embodiments, microscope 100 may adjust the scan area to an edge of the sample in response to a platelet count below a threshold value. In some embodiments, microscope 100 may detect clumped platelets and optionally the plurality of cell parameters may comprise a clumped platelet parameter.
In some embodiments, microscope 100 may increase the scan area in response to a WBC count below a threshold value and optionally microscope 100 may decrease the scan area in response to the WBC count above a threshold value.
In some embodiments, the plurality of cell parameters may comprise one or more of a total WBC count or a count of a type of WBC and microscope 100 may adjust the scan area in response to the one or more of the total WBC count or the count of the type of WBC. In some embodiments, microscope 100 may increase the scan area in response to the one or more of the total WBC count or the count of the type of WBC below a threshold value.
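The count-driven adjustments in the preceding paragraphs reduce to a simple policy: widen the scan area while a running count is below its target, and narrow it once the target is exceeded. The doubling/halving factors and area bounds below are assumptions for illustration; the disclosure does not specify particular values.

```python
# Hedged sketch of threshold-driven scan-area adjustment. Growth/shrink
# factors and the area bounds are illustrative assumptions only.

def adjust_scan_area(area_cm2, count, target, max_area=1.0, min_area=0.1):
    """Return an updated scan area (cm^2) given a running cell count."""
    if count < target:
        return min(area_cm2 * 2.0, max_area)   # too few cells seen: widen
    return max(area_cm2 * 0.5, min_area)       # enough cells: narrow

a = adjust_scan_area(0.4, count=50, target=100)   # below target: area doubles
b = adjust_scan_area(0.4, count=500, target=100)  # above target: area halves
```

The same policy applies per cell type: a WBC count below its threshold widens the area, while an ample count lets the scanner finish sooner over a smaller area.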
Returning to
In some embodiments, when the cell data is output to user interface 112, the cell data may comprise one or more of cell statistics, cell counts, cell populations, cell types, parasites, or a digital scan image. In some embodiments, the digital scan image may be presented with a size and resolution suitable for the user to select a cell in the image and present data for the cell in response to the user selecting the cell.
In some embodiments, microscope 100 (e.g., user interface 112), may present an image of the clumped platelets to the user to verify detection and classification of the clumped platelets. In some embodiments, microscope 100 may present additional data and/or image to the user in response to user inputs, and may further verify detection, classification, and/or other analyzed data based on user input.
As described herein, microscope 100 may perform the steps of method 200 sequentially in any order and/or in parallel and may repeat steps as needed. For example, microscope 100 may repeat certain steps in response to analyzed data and/or user inputs, such as for dynamically adjusting the scan area.
Computing system 610 broadly represents any single or multi-processor computing device or system capable of executing computer-readable instructions. Examples of computing system 610 include, without limitation, workstations, laptops, client-side terminals, servers, distributed computing systems, handheld devices, or any other computing system or device. In its most basic configuration, computing system 610 may include at least one processor 614 and a system memory 616.
Processor 614 generally represents any type or form of physical processing unit (e.g., a hardware-implemented central processing unit) capable of processing data or interpreting and executing instructions. In certain embodiments, processor 614 may receive instructions from a software application or module. These instructions may cause processor 614 to perform the functions of one or more of the example embodiments described and/or illustrated herein.
System memory 616 generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or other computer-readable instructions. Examples of system memory 616 include, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, or any other suitable memory device. Although not required, in certain embodiments computing system 610 may include both a volatile memory unit (such as, for example, system memory 616) and a non-volatile storage device (such as, for example, primary storage device 632, as described in detail below). In one example, one or more of steps from
In some examples, system memory 616 may store and/or load an operating system 640 for execution by processor 614. In one example, operating system 640 may include and/or represent software that manages computer hardware and software resources and/or provides common services to computer programs and/or applications on computing system 610. Examples of operating system 640 include, without limitation, LINUX, JUNOS, MICROSOFT WINDOWS, WINDOWS MOBILE, MAC OS, APPLE'S IOS, UNIX, GOOGLE CHROME OS, GOOGLE'S ANDROID, SOLARIS, variations of one or more of the same, and/or any other suitable operating system.
In certain embodiments, example computing system 610 may also include one or more components or elements in addition to processor 614 and system memory 616. For example, as illustrated in
Memory controller 618 generally represents any type or form of device capable of handling memory or data or controlling communication between one or more components of computing system 610. For example, in certain embodiments memory controller 618 may control communication between processor 614, system memory 616, and I/O controller 620 via communication infrastructure 612.
I/O controller 620 generally represents any type or form of module capable of coordinating and/or controlling the input and output functions of a computing device. For example, in certain embodiments I/O controller 620 may control or facilitate transfer of data between one or more elements of computing system 610, such as processor 614, system memory 616, communication interface 622, display adapter 626, input interface 630, and storage interface 634.
As illustrated in
As illustrated in
Additionally or alternatively, example computing system 610 may include additional I/O devices. For example, example computing system 610 may include I/O device 636. In this example, I/O device 636 may include and/or represent a user interface that facilitates human interaction with computing system 610. Examples of I/O device 636 include, without limitation, a computer mouse, a keyboard, a monitor, a printer, a modem, a camera, a scanner, a microphone, a touchscreen device, variations or combinations of one or more of the same, and/or any other I/O device.
Communication interface 622 broadly represents any type or form of communication device or adapter capable of facilitating communication between example computing system 610 and one or more additional devices. For example, in certain embodiments communication interface 622 may facilitate communication between computing system 610 and a private or public network including additional computing systems. Examples of communication interface 622 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, and any other suitable interface. In at least one embodiment, communication interface 622 may provide a direct connection to a remote server via a direct link to a network, such as the Internet. Communication interface 622 may also indirectly provide such a connection through, for example, a local area network (such as an Ethernet network), a personal area network, a telephone or cable network, a cellular telephone connection, a satellite data connection, or any other suitable connection.
In certain embodiments, communication interface 622 may also represent a host adapter configured to facilitate communication between computing system 610 and one or more additional network or storage devices via an external bus or communications channel. Examples of host adapters include, without limitation, Small Computer System Interface (SCSI) host adapters, Universal Serial Bus (USB) host adapters, Institute of Electrical and Electronics Engineers (IEEE) 1394 host adapters, Advanced Technology Attachment (ATA), Parallel ATA (PATA), Serial ATA (SATA), and External SATA (eSATA) host adapters, Fibre Channel interface adapters, Ethernet adapters, or the like. Communication interface 622 may also allow computing system 610 to engage in distributed or remote computing. For example, communication interface 622 may receive instructions from a remote device or send instructions to a remote device for execution.
In some examples, system memory 616 may store and/or load a network communication program 638 for execution by processor 614. In one example, network communication program 638 may include and/or represent software that enables computing system 610 to establish a network connection 642 with another computing system (not illustrated in
Although not illustrated in this way in
As illustrated in
In certain embodiments, storage devices 632 and 633 may be configured to read from and/or write to a removable storage unit configured to store computer software, data, or other computer-readable information. Examples of suitable removable storage units include, without limitation, a floppy disk, a magnetic tape, an optical disk, a flash memory device, or the like. Storage devices 632 and 633 may also include other similar structures or devices for allowing computer software, data, or other computer-readable instructions to be loaded into computing system 610. For example, storage devices 632 and 633 may be configured to read and write software, data, or other computer-readable information. Storage devices 632 and 633 may also be a part of computing system 610 or may be a separate device accessed through other interface systems.
Many other devices or subsystems may be connected to computing system 610. Conversely, all of the components and devices illustrated in
The computer-readable medium containing the computer program may be loaded into computing system 610. All or a portion of the computer program stored on the computer-readable medium may then be stored in system memory 616 and/or various portions of storage devices 632 and 633. When executed by processor 614, a computer program loaded into computing system 610 may cause processor 614 to perform and/or be a means for performing the functions of one or more of the example embodiments described and/or illustrated herein. Additionally or alternatively, one or more of the example embodiments described and/or illustrated herein may be implemented in firmware and/or hardware. For example, computing system 610 may be configured as an Application Specific Integrated Circuit (ASIC) adapted to implement one or more of the example embodiments disclosed herein.
Client systems 710, 720, and 730 generally represent any type or form of computing device or system, such as example computing system 610 in
As illustrated in
Servers 740 and 745 may also be connected to a Storage Area Network (SAN) fabric 780. SAN fabric 780 generally represents any type or form of computer network or architecture capable of facilitating communication between a plurality of storage devices. SAN fabric 780 may facilitate communication between servers 740 and 745 and a plurality of storage devices 790(1)-(N) and/or an intelligent storage array 795. SAN fabric 780 may also facilitate, via network 750 and servers 740 and 745, communication between client systems 710, 720, and 730 and storage devices 790(1)-(N) and/or intelligent storage array 795 in such a manner that devices 790(1)-(N) and array 795 appear as locally attached devices to client systems 710, 720, and 730. As with storage devices 760(1)-(N) and storage devices 770(1)-(N), storage devices 790(1)-(N) and intelligent storage array 795 generally represent any type or form of storage device or medium capable of storing data and/or other computer-readable instructions.
In certain embodiments, and with reference to example computing system 610 of
In at least one embodiment, all or a portion of one or more of the example embodiments disclosed herein may be encoded as a computer program and loaded onto and executed by server 740, server 745, storage devices 760(1)-(N), storage devices 770(1)-(N), storage devices 790(1)-(N), intelligent storage array 795, or any combination thereof. All or a portion of one or more of the example embodiments disclosed herein may also be encoded as a computer program, stored in server 740, run by server 745, and distributed to client systems 710, 720, and 730 over network 750.
As detailed above, computing system 610 and/or one or more components of network architecture 700 may perform and/or be a means for performing, either alone or in combination with other elements, one or more steps of an example method for bone marrow aspirate analysis.
As described herein, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each comprise at least one memory device and at least one physical processor.
The term “memory” or “memory device,” as used herein, generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices comprise, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In addition, the term “processor” or “physical processor,” as used herein, generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors comprise, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor. The processor may comprise a distributed processor system, e.g. running parallel processors, or a remote processor such as a server, and combinations thereof.
Although illustrated as separate elements, the method steps described and/or illustrated herein may represent portions of a single application. In addition, in some embodiments one or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as the method step.
In addition, one or more of the devices described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
The term “computer-readable medium,” as used herein, generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media comprise, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
A person of ordinary skill in the art will recognize that any process or method disclosed herein can be modified in many ways. The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed.
The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or comprise additional steps in addition to those disclosed. Further, a step of any method as disclosed herein can be combined with any one or more steps of any other method as disclosed herein.
The processor as described herein can be configured to perform one or more steps of any method disclosed herein. Alternatively or in combination, the processor can be configured to combine one or more steps of one or more methods as disclosed herein.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and shall have the same meaning as the word “comprising.”
The processor as disclosed herein can be configured with instructions to perform any one or more steps of any method as disclosed herein.
It will be understood that the terms “first,” “second,” “third,” etc. may be used herein to describe various layers, elements, components, regions or sections without referring to any particular order or sequence of events. These terms are merely used to distinguish one layer, element, component, region or section from another layer, element, component, region or section. A first layer, element, component, region or section as described herein could be referred to as a second layer, element, component, region or section without departing from the teachings of the present disclosure.
As used herein, the term “or” is used inclusively to refer to items in the alternative and in combination.
As used herein, like characters such as numerals refer to like elements.
The present disclosure includes the following numbered clauses.
Embodiments of the present disclosure have been shown and described as set forth herein and are provided by way of example only. One of ordinary skill in the art will recognize numerous adaptations, changes, variations and substitutions without departing from the scope of the present disclosure. Several alternatives and combinations of the embodiments disclosed herein may be utilized without departing from the scope of the present disclosure and the inventions disclosed herein. Therefore, the scope of the presently disclosed inventions shall be defined solely by the scope of the appended claims and the equivalents thereof.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 63/114,827, filed Nov. 17, 2020, entitled “DETECTING SCAN AREA WITHIN HEMATOLOGY SLIDES IN DIGITAL MICROSCOPY,” which is incorporated, in its entirety, by this reference. The subject matter of the present application is related to U.S. patent application Ser. No. 15/775,389, filed on Nov. 10, 2016, entitled “Computational microscopes and methods for generating an image under different illumination conditions”, published as US20190235224, U.S. Pat. No. 10,705,326, entitled “Autofocus system for a computational microscope”, and U.S. Pat. No. 10,935,779, entitled “Digital microscope which operates as a server”, U.S. patent application Ser. No. 16/875,665, filed on May 15, 2020, entitled “Multi/parallel scanner”, U.S. patent application Ser. No. 16/875,721, filed on May 15, 2020, entitled “Accelerating digital microscopy scans using empty/dirty area detection”, published as US20200278530, U.S. Pat. No. 10,558,029, entitled “System for image reconstruction using a known pattern”, the entire disclosures of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IL2021/051366 | 11/17/2021 | WO |
Number | Date | Country
---|---|---
63114827 | Nov 2020 | US