The present disclosure relates generally to digital microscopy and/or computational microscopy and, more specifically, to systems and methods for accelerating digital microscopy by detecting sample artifacts and empty areas of the sample.
Microscopy is used in many applications and settings to analyze samples, such as hospitals, labs, and clinics. A large volume of slides may need to be read at a microscope facility, and the throughput of such systems can be less than ideal. Commercially available microscopes, such as whole slide imaging (WSI) devices, often comprise a single scanning microscope that relies primarily on accurate mechanical scanning and high-quality objective lenses. Recently, computational microscopy has been proposed as a way of improving the resolution of optical images. With computational microscopy, a plurality of images of the sample can be obtained and processed to improve resolution and quality.
With these prior approaches to microscopy, the throughput may still be limited by the speed at which the scanning microscope can scan a single slide, and by the computational time to generate the image in at least some instances. The prior approaches may less than ideally allocate microscope and processing resources and may sample and process more sample data than would be ideal. This can result in delays and less than ideal throughput.
Some facilities such as pathology labs and hospitals may scan several microscope samples, and the throughput of the prior systems can be less than ideal. For example, some samples such as frozen samples, may need to be read quickly, while other samples may be less time sensitive. In addition, some samples may be read while a patient is in surgery to determine how to treat the patient surgically. Also, the samples obtained from tissue and other objects may contain artifacts or empty regions that are not helpful in evaluating the sample. For example, with some tissue samples such as needle biopsies and microtomes, the sample on the microscope slide can be distributed unevenly.
The prior approaches to microscopy may scan more of the sample than would be ideal. For example, regions that contain artifacts or empty space are generally not helpful in evaluating the sample. Examples of artifacts include particulate matter, dust, dirt, debris, and smudges. The prior approaches to microscopy can scan and process these regions with artifacts and empty space with resources similar to other regions that contain useful sample material, resulting in less than ideal throughput for the output images.
In light of the above, it would be desirable to have improved methods and apparatus for increasing microscope imaging throughput at a facility. Ideally, such improved microscope systems would overcome at least some of the aforementioned limitations of the prior approaches.
As will be described in greater detail below, the instant disclosure describes various systems and methods for improving throughput of microscopes such as computational microscopes. A microscope comprising an illumination assembly, an image capture device, and a processor can be configured to selectively identify regions of a sample comprising artifacts or empty space. By selectively identifying regions of the sample that have artifacts or empty space, the amount of time and resources used to generate an image of the sample can be decreased substantially while providing high resolution for appropriate regions of the output image. The processor can be configured to change the imaging process in response to regions of the sample that comprise artifacts or empty space. The imaging process may comprise a higher resolution process to output higher resolution portions of the computational image for sample regions comprising valid sample material, and a lower resolution process to output lower resolution portions of the computational image for sample regions comprising artifacts or empty space.
In an aspect, a microscope comprises an illumination assembly, an image capture device, and a processor. The illumination assembly can be operable to illuminate a sample under observation of the microscope. The image capture device can be operable to capture an initial image set of the illuminated sample. The processor can be coupled to the image capture device and configured with instructions to identify an area of the sample that comprises at least one of artifact or empty space, and to determine a process for generating a computational image of the sample in response to identifying the area.
Features from any of the embodiments described herein may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.
Throughout the drawings, identical reference characters and descriptions indicate similar, but not necessarily identical, elements. While the exemplary embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
The presently disclosed microscope methods and apparatus can be used to measure many types of samples and generate computational images. The microscope can be configured to measure regions that comprise artifacts or are empty or dirty, for example. In some embodiments, the images may comprise a computational image, although the present disclosure will find applications in other fields. By selectively identifying regions of the sample that are less useful, the speed of the imaging process can be increased. The methods and apparatus disclosed herein are well suited for use with one or more components of prior systems. For example, the microscope methods and apparatus disclosed herein can be readily incorporated into prior systems, for example with a software upgrade.
An artifact as described herein, such as particulate matter, e.g., dust or debris, may be located away from a focal plane of the microscope or within it. The processor can be configured with instructions to determine whether specific artifacts, portions, or areas of the processed images originate from locations away from the focal plane, for example by exhibiting different shifts among an image set in response to different illumination angles of the illumination beam.
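As an illustrative sketch only (not the disclosed implementation; the function names, the FFT-based phase correlation, and the one-pixel shift threshold are assumptions for illustration), the differential-shift test can be approximated by estimating the apparent translation of a region between two members of the image set captured under different illumination angles, and flagging regions whose shift exceeds a small threshold as originating away from the focal plane:

```python
import numpy as np

def estimate_shift(patch_a, patch_b):
    """Estimate the integer (dy, dx) translation of patch_b relative to
    patch_a using FFT-based phase correlation. The patches are the same
    sample region imaged under two different illumination angles."""
    f = np.fft.fft2(patch_b) * np.conj(np.fft.fft2(patch_a))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = patch_a.shape
    # Wrap circular shifts into a signed range.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def off_focal_plane(patch_a, patch_b, max_shift=1):
    """Flag a region as originating away from the focal plane when its
    apparent shift between illumination angles exceeds max_shift pixels
    (an illustrative threshold, not a value from the disclosure)."""
    dy, dx = estimate_shift(patch_a, patch_b)
    return abs(dy) > max_shift or abs(dx) > max_shift
```

In-focus sample features shift little between illumination angles, while out-of-plane debris exhibits a parallax-like displacement, which is the cue this sketch tests for.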
Image capture device 102 may be used to capture images of sample 114. The term “image capture device” as used herein generally refers to a device that records the optical signals entering a lens as an image or a sequence of images. The optical signals may be in the near-infrared, infrared, visible, and ultraviolet spectrums. Examples of an image capture device comprise a CCD camera, a CMOS camera, a photo sensor array, a video camera, a mobile phone equipped with a camera, a webcam, a preview camera, a microscope objective and detector, etc. Some embodiments may comprise only a single image capture device 102, while other embodiments may comprise two, three, or even four or more image capture devices 102. In some embodiments, image capture device 102 may be configured to capture images in a defined field-of-view (FOV). Also, when microscope 100 comprises several image capture devices 102, image capture devices 102 may have overlap areas in their respective FOVs. Image capture device 102 may have one or more image sensors (not shown in
In some embodiments, microscope 100 comprises focus actuator 104. The term “focus actuator” as used herein generally refers to any device capable of converting input signals into physical motion or changing ray convergence for adjusting the relative distance between sample 114 and image capture device 102. Various focus actuators may be used, including, for example, linear motors, electrostrictive actuators, electrostatic motors, capacitive motors, voice coil actuators, magnetostrictive actuators, liquid lenses, etc. In some embodiments, focus actuator 104 may comprise an analog position feedback sensor and/or a digital position feedback element. Focus actuator 104 is configured to receive instructions from controller 106 in order to make light beams converge to form a clear and sharply defined image of sample 114. In the example illustrated in
However, in other embodiments, focus actuator 104 may be configured to adjust the distance by moving stage 116, or by moving both image capture device 102 and stage 116. Microscope 100 may also comprise controller 106 for controlling the operation of microscope 100 according to the disclosed embodiments. Controller 106 may comprise various types of devices for performing logic operations on one or more inputs of image data and other data according to stored or accessible software instructions providing desired functionality. For example, controller 106 may comprise a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, cache memory, or any other types of devices for image processing and analysis such as graphic processing units (GPUs). The CPU may comprise any number of microcontrollers or microprocessors configured to process the imagery from the image sensors. For example, the CPU may comprise any type of single- or multi-core processor, mobile device microcontroller, etc. Various processors may be used, including, for example, processors available from manufacturers such as Intel®, AMD®, etc. and may comprise various architectures (e.g., x86 processor, ARM®, etc.). The support circuits may be any number of circuits generally well known in the art, including cache, power supply, clock and input-output circuits. Controller 106 may be at a remote location, such as a computing device communicatively coupled to microscope 100.
In some embodiments, controller 106 may be associated with memory 108 used for storing software that, when executed by controller 106, controls the operation of microscope 100. In addition, memory 108 may also store electronic data associated with operation of microscope 100 such as, for example, captured or generated images of sample 114. In one instance, memory 108 may be integrated into the controller 106. In another instance, memory 108 may be separated from the controller 106.
Specifically, memory 108 may refer to multiple structures or computer-readable storage mediums located at controller 106 or at a remote location, such as a cloud server. Memory 108 may comprise any number of random access memories, read only memories, flash memories, disk drives, optical storage, tape storage, removable storage and other types of storage.
Microscope 100 may comprise illumination assembly 110. The term “illumination assembly” as used herein generally refers to any device or system capable of projecting light to illuminate sample 114.
Illumination assembly 110 may comprise any number of light sources, such as light emitting diodes (LEDs), LED arrays, lasers, and lamps configured to emit light, such as a halogen lamp, an incandescent lamp, or a sodium lamp. In one embodiment, illumination assembly 110 may comprise only a single light source. Alternatively, illumination assembly 110 may comprise four, sixteen, or even more than a hundred light sources organized in an array or a matrix. In some embodiments, illumination assembly 110 may use one or more light sources located at a surface parallel to sample 114 to illuminate it. In other embodiments, illumination assembly 110 may use one or more light sources located at a surface perpendicular or at an angle to sample 114. Illumination assembly 110 may comprise other optical elements, such as lenses, mirrors, diffusers, active or passive phase elements, intensity elements, etc.
In addition, illumination assembly 110 may be configured to illuminate sample 114 in a series of different illumination conditions. In one example, illumination assembly 110 may comprise a plurality of light sources arranged in different illumination angles, such as a two-dimensional arrangement of light sources. In this case, the different illumination conditions may comprise different illumination angles. For example,
Consistent with disclosed embodiments, microscope 100 may comprise, be connected with, or in communication with (e.g., over a network, via dedicated connection (e.g., HDMI, VGA, RGB, Coaxial) or wirelessly, e.g., via Bluetooth or WiFi) user interface 112. The term “user interface” as used herein generally refers to any device suitable for presenting a magnified image of sample 114 or any device suitable for receiving inputs from one or more users of microscope 100.
Microscope 100 may also comprise or be connected to stage 116. Stage 116 comprises any horizontal rigid surface where sample 114 may be mounted for examination. Stage 116 may comprise a mechanical connector for retaining a slide containing sample 114 in a fixed position. The mechanical connector may use one or more of the following: a mount, an attaching member, a holding arm, a clamp, a clip, an adjustable frame, a locking mechanism, a spring or any combination thereof. In some embodiments, stage 116 may comprise a translucent portion or an opening for allowing light to illuminate sample 114. For example, light transmitted from illumination assembly 110 may pass through sample 114 and towards image capture device 102. In some embodiments, stage 116 and/or sample 114 may be moved using motors or manual controls in the XY plane to enable imaging of multiple areas of the sample.
The processor can be configured with instructions to generate the computational image in accordance with regions corresponding to the identified areas of the sample comprising valid sample data, artifacts, or empty space. For example, the computational image may comprise a higher spatial resolving power at regions corresponding to valid portion 210, and lower spatial resolving power at regions outside valid portion 210. For example, regions of the computational image corresponding to regions 202 and 203 may comprise lower spatial resolving power. In some embodiments, the lower resolution image comprises the same pixel density as other regions, which can be generated by empty magnification or interpolation. These approaches can provide pixel resolution enhancement without increasing the spatial resolving power of that portion of the image, and the processor can be configured with appropriate instructions.
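A minimal sketch of empty magnification, assuming pixel replication as the upsampling rule (interpolation would be an equally valid choice; the function name and factor are illustrative): a low-resolution tile is expanded to the output pixel grid so the mosaic has uniform pixel density, even though no new spatial detail is resolved in that region.

```python
import numpy as np

def empty_magnify(tile, factor):
    """Upsample a low-resolution tile by pixel replication ("empty
    magnification"): pixel density matches the high-resolution regions,
    but the spatial resolving power of the tile is unchanged."""
    return np.repeat(np.repeat(tile, factor, axis=0), factor, axis=1)
```

For example, a 2x2 tile magnified by a factor of 3 yields a 6x6 block in which each source pixel covers a 3x3 patch of identical values.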
A processor of microscope 100, such as controller 106, tests whether an area of at least one image has artifact and/or empty space, at step 306. For example, the processor may scan sample 200 and identify areas 202 and/or 203 as empty or having artifact, e.g., having no discernible viewing interest. Alternatively or in combination, information from a plurality of images, such as a computational image or the appearance of artifacts in several images, can be used with step 306. In some embodiments, the processor is configured with instructions to search for valid data, and determines that the area is empty or contains artifacts if the amount of valid data found in the area is under a threshold amount or does not meet a defined criterion for valid data. For example, the processor can be configured with instructions to identify areas comprising artifact or empty space, instructions to identify valid data, or combinations thereof. The processor can be configured with instructions to test for artifacts and empty space, either separately or in combination.
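One way the threshold-based validity criterion could look in code (a sketch under stated assumptions: gradient magnitude as the "valid data" measure and both numeric thresholds are illustrative, not values from the disclosure):

```python
import numpy as np

def classify_area(area, grad_thresh=0.05, valid_fraction=0.02):
    """Classify an image area as "valid" or "empty/artifact" by the
    fraction of pixels whose local gradient magnitude exceeds a
    threshold -- a stand-in for the amount-of-valid-data criterion.
    A flat (empty) area or one with only a tiny speck of structure
    falls below the valid_fraction threshold."""
    gy, gx = np.gradient(area.astype(float))
    structured = np.hypot(gy, gx) > grad_thresh
    return "valid" if structured.mean() >= valid_fraction else "empty/artifact"
```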
The area or regions of the sample under test can be provided in step 306 in many ways and in some embodiments without steps 302 and 304. For example, the area under test can be provided to the processor and processed to determine whether the area has valid data, artifact or empty space. Any source of image data as described herein can be used to perform the test at step 306.
At step 308 the processor determines whether an area has artifact and/or empty space.
The test for artifact and/or empty data can be configured in many ways, either alternatively or in combination. For example, the test can be performed on a composite image or a computational image, or by analyzing similarities or differences between images (e.g., a portion which may look like valid data in a single image or some of the images may not be present in other images and may be interpreted as an artifact). Also, the test can be performed on any one or more images of the image set, or any portion of the one or more images of the image set, or other image data, for example. This test can be configured to determine when there is valid data, artifacts, or empty space. Also, the testing can be configured to provide statistical data, such as a probability that the area or region comprises valid data, artifacts, or empty space, as described herein. The probability can be used to suggest that the tested area or region comprises valid data, artifacts, or empty space. Also, additional or alternative metrics to probability can be used, such as analysis of spatial frequencies, to test the area or region. In this regard, the test can determine whether the area potentially has artifact. At step 308, the “yes” or “no” test can be performed based on a statistical or other analysis to alter the process as described herein.
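The spatial-frequency analysis mentioned above could, for instance, score an area by how much of its spectral energy lies away from DC; empty regions concentrate energy at low frequencies, while tissue and sharp artifacts do not. This is a hedged sketch: the metric, the normalized cutoff of 0.1, and the function name are all assumptions for illustration, not the disclosed test.

```python
import numpy as np

def high_freq_ratio(area, cutoff=0.1):
    """Fraction of spectral energy above a normalized spatial-frequency
    cutoff. A flat, empty area scores 0.0; textured content scores high.
    The score can feed the yes/no decision of step 308."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(area - area.mean()))) ** 2
    h, w = area.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized radius
    total = spec.sum()
    if total == 0:
        return 0.0  # perfectly flat area: no structure at any frequency
    return float(spec[r > cutoff].sum() / total)
```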
If the area comprises artifact or empty space, the processor may direct microscope 100 to skip the area and/or only partially process the area to reduce computational imaging at step 310. This process can be employed during the acquisition of images with the image capture device, or later during the image reconstruction process, and combinations thereof. If the area does not comprise artifact or empty space, the area can be processed at high resolution at step 314. Although a high resolution process is shown, other processes can be performed either alternatively or in combination. The process may improve other aspects of the image related to image quality, such as quality improvement, aberration correction, computational refocusing, contrast enhancement, distortion correction, color enhancement, registration, or removal of identified elements of the data. In some embodiments, the removed identified elements of the data comprise one or more of artifact, dust, or empty space.
The processor may also flag the areas having artifact and/or empty space for subsequent imaging at step 312. For example, as microscope 100 initiates more in-depth computational imaging of the sample, those areas of the sample that are of little or no interest may be flagged such that microscope 100 forgoes any additional processing of those areas.
At a step 316, the imaging process moves to the next area of the sample.
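The per-area flow of steps 306 through 316 can be summarized with the following sketch, in which the classification test and the two processing routines are supplied as callables (all names are illustrative; this is one way to organize the workflow, not the disclosed implementation):

```python
def scan_sample(areas, classify, process_high_res, process_partial):
    """Drive the per-area workflow: test each area (steps 306/308),
    process valid areas at high resolution (step 314), and flag and
    only partially process empty or artifact areas (steps 310/312).
    The loop itself advances to the next area (step 316)."""
    flagged, results = [], {}
    for idx, area in enumerate(areas):
        if classify(area) == "valid":
            results[idx] = process_high_res(area)
        else:
            flagged.append(idx)  # flagged so later passes can skip it
            results[idx] = process_partial(area)
    return results, flagged
```

The flagged list corresponds to step 312: areas of little interest are recorded so that subsequent, more in-depth computational imaging can forgo them.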
In some embodiments, the microscope processor can be configured with only one mode, and the mode selection described herein is optional, e.g., a single mode that tests for empty or dirty areas while processing full resolution images, or a PE mode without switching to a full resolution mode.
The processor may then computationally process the image at step 414 to generate a computational image. Generally, a computational image is an image where at least a part of the image was created using a computational process. For example, a computational process may include resolution enhancement and/or quality improvement, such as aberration correction, computational refocusing, contrast enhancement, distortion correction, color enhancement, registration, and/or removing certain elements of the data such as debris, dust, and/or empty space. Some processes that may be applied to areas that are empty or dirty (e.g., areas 202 and 203 of
In this regard, the processor of microscope 100 may perform additional and/or computational imaging processes if the area being observed is also being tested, at step 416. Step 416 is an optional step, and can depend on other aspects of the workflow and process 400 and other processes and methods as described herein. For example, step 416 can be performed when the microscope has identified an area as having relevant data and partially constructed the image as part of step 404. Alternatively, step 416 can be skipped, for example when the process comprises the full resolution mode and process 414 has generated the computational image.
Thereafter, the processor may adjust the testing criteria used for testing particular area, at step 418. This may allow the processor to change modes from full resolution mode to PE mode. For example, as the processor is testing an area of the image, the processor may deem the area as either empty or dirty before proceeding to a subsequent area. As a subsequent area may likely be empty or dirty as well, the processor may switch to the PE mode at step 420 for the subsequent area. Conversely, if the processor encounters valid data (e.g., a portion of the image occupied by region 201 of
Microscope 100 may move on to the next area, at step 422, and return to step 402. If there is no reason to change the testing criteria (e.g., because the current mode of microscope 100 is likely to be used in a subsequent area), the processor may direct microscope 100 to simply move on to the next area, at step 422.
Returning to step 402, if the processor determines that microscope 100 is operating in the PE mode, the processor may direct microscope 100 to perform an imaging process, at step 404. For example, the processor may form a partial image of an area and then test whether the area has debris and/or empty space, at step 406. In some embodiments, the partial or full imaging process may be limited to the computational process, while other processes such as image illumination and acquisition continue. This partial imaging may reduce the computational complexity and thus reduce the number of computations used in the imaging. Then, the processor may determine whether the area includes relevant data or not, at step 408.
If the area does include relevant data, the processor may direct microscope 100 to operate in full resolution mode and generate a computational process image, at step 414. Otherwise, the processor may partially process the image or even skip over the entire area, at step 410.
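The mode-switching behavior of steps 402 through 422 can be sketched as a small state machine (illustrative only; the mode names, outcomes, and the rule that the next area inherits the expectation set by the current one are simplifying assumptions about the described heuristic):

```python
def run_adaptive(areas, classify):
    """Sketch of the adaptive loop: in PE mode the processor partially
    images and tests an area first (steps 404-410), skipping it if it
    lacks relevant data; in full resolution mode it reconstructs the
    area directly (step 414). The mode for the next area follows what
    the current area contained (steps 418/420)."""
    mode, log = "full", []
    for area in areas:
        if mode == "pe":
            # Partial imaging plus test; only valid areas get a full image.
            outcome = "full_image" if classify(area) == "valid" else "skipped"
        else:
            outcome = "full_image"
        log.append((mode, outcome))
        # An empty or dirty area suggests its neighbor is likely the same.
        mode = "pe" if classify(area) != "valid" else "full"
    return log
```

Running this on a slide whose areas alternate between tissue and empty space shows the processor paying the full-resolution cost only once per empty run before dropping into PE mode.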
Any of the steps of method 400 can be combined with any method step corresponding to a block of workflow 300 as described herein. Although workflow 300 and method 400 are described as sequences of steps, in some embodiments various concurrent iterations may result in steps being stalled, omitted, repeated, and/or performed in a different order. The steps disclosed herein are optional, e.g., steps 302 and 304, and can be performed in any order.
As detailed above, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each comprise at least one memory device and at least one physical processor.
The term “memory” or “memory device,” as used herein, generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices comprise, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
In addition, the term “processor” or “physical processor,” as used herein, generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors comprise, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
Although illustrated as separate elements, the method steps described and/or illustrated herein may represent portions of a single application. In addition, in some embodiments one or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as the method step.
In addition, one or more of the devices described herein may transform data, physical devices, and/or representations of physical devices from one form to another. For example, one or more of the devices recited herein may receive image data of a sample to be transformed, transform the image data, output a result of the transformation to determine a 3D process, use the result of the transformation to perform the 3D process, and store the result of the transformation to produce an output image of the sample. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
The term “computer-readable medium,” as used herein, generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media comprise, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or comprise additional steps in addition to those disclosed.
The processor as disclosed herein can be configured to perform any one or more steps of a method as disclosed herein.
The preceding description has been provided to enable others skilled in the art to best utilize various aspects of the exemplary embodiments disclosed herein. This exemplary description is not intended to be exhaustive or to be limited to any precise form disclosed. Many modifications and variations are possible without departing from the spirit and scope of the instant disclosure. The embodiments disclosed herein should be considered in all respects illustrative and not restrictive. Reference should be made to the appended claims and their equivalents in determining the scope of the instant disclosure.
Unless otherwise noted, the terms “connected to” and “coupled to” (and their derivatives), as used in the specification and claims, are to be construed as permitting both direct and indirect (i.e., via other elements or components) connection. In addition, the terms “a” or “an,” as used in the specification and claims, are to be construed as meaning “at least one of.” Finally, for ease of use, the terms “including” and “having” (and their derivatives), as used in the specification and claims, are interchangeable with and have the same meaning as the word “comprising.” Also, as used herein the term “multiple” encompasses a “plurality” and refers to two or more.
This disclosure also includes the following numbered clauses:
Clause 1. A microscope, comprising:
Clause 2. The microscope of clause 1, wherein:
Clause 3. The microscope according to any of clauses 1 to 2, wherein:
Clause 4. The microscope according to any of clauses 1 to 3, wherein:
Clause 5. The microscope according to any of clauses 1 to 4, wherein:
Clause 6. The microscope according to any of clauses 1 to 5, wherein:
Clause 7. The microscope according to any of clauses 1 to 6, further comprising:
Clause 8. The microscope according to any of clauses 1 to 7, wherein:
Clause 9. The microscope according to any of clauses 1 to 8, wherein:
Clause 10. The microscope according to any of clauses 1 to 9, further comprising:
Clause 11. The microscope according to any of clauses 1 to 10, wherein:
Clause 12. The microscope according to any of clauses 1 to 11, wherein:
Clause 13. The microscope according to any of clauses 1 to 12, wherein:
Clause 14. The microscope according to any of clauses 1 to 13, wherein:
Clause 15. The microscope according to any of clauses 1 to 14, wherein:
Clause 16. The microscope according to any of clauses 1 to 15, further comprising:
Clause 17. The microscope according to any of clauses 1 to 16, wherein:
Clause 18. The microscope according to any of clauses 1 to 17, wherein:
Clause 19. The microscope according to any of clauses 1 to 18, wherein:
Clause 20. The microscope according to any of clauses 1 to 19, wherein:
Clause 21. The microscope according to any of clauses 1 to 20, wherein:
Clause 22. The microscope according to any of clauses 1 to 21, wherein:
Clause 23. The microscope according to any of clauses 1 to 22, wherein:
Clause 24. The microscope according to any of clauses 1 to 23, wherein:
Clause 25. The microscope according to any of clauses 1 to 24, wherein:
Clause 26. The microscope according to any of clauses 1 to 25, wherein:
Clause 27. The microscope according to any of clauses 1 to 26, wherein:
Clause 28. The microscope according to any of clauses 1 to 27, wherein:
Clause 29. The microscope of any one of clauses 1 to 28, wherein:
While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
This application is a continuation of U.S. patent application Ser. No. 16/875,721, filed May 15, 2020, now U.S. Pat. No. 11,409,095, issued Aug. 9, 2022, which is a bypass continuation of International Application No. PCT/IL2018/051253, filed Nov. 20, 2018, published as WO 2019/097524, on May 23, 2019, and claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/588,658, filed Nov. 20, 2017, the disclosures of which are incorporated, in their entirety, by this reference.
Number | Name | Date | Kind |
---|---|---|---|
5671085 | Gustafsson | Sep 1997 | A |
6084991 | Sampas | Jul 2000 | A |
6804011 | Kuechel | Oct 2004 | B2 |
8253789 | Aizaki | Aug 2012 | B2 |
8565503 | Eichhorn | Oct 2013 | B2 |
9817224 | Zheng | Nov 2017 | B2 |
9824259 | Liebel | Nov 2017 | B2 |
10169852 | Putman | Jan 2019 | B1 |
10558029 | Leshem | Feb 2020 | B2 |
10705326 | Small | Jul 2020 | B2 |
11409095 | Madar | Aug 2022 | B2 |
20140226003 | Phaneuf | Aug 2014 | A1 |
20170261741 | Stoppe | Sep 2017 | A1 |
20180348500 | Naaman, III | Dec 2018 | A1 |
20180373016 | Leshem, III | Dec 2018 | A1 |
20190072751 | Rainbolt | Mar 2019 | A1 |
20190101736 | Chen | Apr 2019 | A1 |
20190235224 | Small | Aug 2019 | A1 |
20190384962 | Hayut | Dec 2019 | A1 |
20200041780 | Na'Aman | Feb 2020 | A1 |
20200278362 | Hayut | Sep 2020 | A1 |
20200278530 | Madar | Sep 2020 | A1 |
20200302144 | Leshem | Sep 2020 | A1 |
Number | Date | Country |
---|---|---|
2014044360 | Mar 2014 | JP |
2013015740 | Jan 2013 | WO |
2014075764 | May 2014 | WO |
2017081539 | May 2017 | WO |
2017081540 | May 2017 | WO |
2017081541 | May 2017 | WO |
2017081542 | May 2017 | WO |
2018078447 | May 2018 | WO |
2018078448 | May 2018 | WO |
2019077610 | Apr 2019 | WO |
2019097523 | May 2019 | WO |
2019097524 | May 2019 | WO |
2020129064 | Jun 2020 | WO |
2021095037 | May 2021 | WO |
Entry |
---|
Gryanik, A., et al., “Automatic Image Quantification for Structural Analysis of in vitro Dermal Samples,” Biomed Tech; 57 (Supp. 1):494-497 (2012). |
International Search Report and Written Opinion for International Application No. PCT/IL2018/051253, 17 pages (dated Mar. 5, 2019). |
Number | Date | Country | |
---|---|---|---|
20220350129 A1 | Nov 2022 | US |
Number | Date | Country | |
---|---|---|---|
62588658 | Nov 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16875721 | May 2020 | US |
Child | 17812374 | US | |
Parent | PCT/IL2018/051253 | Nov 2018 | US |
Child | 16875721 | US |