This disclosure relates generally to digital imaging, and in particular but not exclusively, relates to classification of objects within image data.
When an image of an object is captured by a camera, a user may want to enhance the image by adjusting contrast of the image, accentuating certain features of the image, etc. when analyzing the object captured within the camera's field of view. For example, an image may be enhanced to distinguish between different mineral/soil compositions (e.g., in aerial or ground photography), to highlight blood flow (e.g., oxygenated and non-oxygenated blood), to perform emotion recognition and/or detection, to distinguish anatomy in surgery (e.g., to distinguish between benign and malignant tissue), etc. In medical imaging in particular, enhancing medical images, for example to distinguish between benign and malignant tissue, increases the safety and expands the reach of surgical interventions.
Specifically, during laparoscopic surgery, a surgeon inserts an endoscope into a small incision of a patient. The endoscope illuminates and captures images of a surgical area in real-time while the surgeon performs the surgery through the same or other incision, typically without physically viewing the area being operated on. A significant problem with such surgeries and the imaging used during such surgeries is that of low contrast between tissue(s) of interest and other tissue(s). For example, in a surgical procedure to excise cancerous tissue, the surgeon wants high contrast between the cancerous tissue and the healthy tissue so that the cancerous tissue can be removed while causing minimal or no damage to surrounding healthy tissue. The problem is even more acute in surgery close to nerve tissue, where high contrast between adipose, cancerous, or other tissue and nerve tissue is needed to prevent inadvertent contact with, or removal of, nerve tissue and the resulting damage to the nervous system.
One solution to distinguishing between different tissue types during surgical procedures includes injecting fluorescent dyes into a patient, where different dyes bind to different tissue types. When imaged with illumination from an excitation laser, the dye fluoresces, and the desired tissue appears much brighter in captured image data. To obtain high contrast between multiple tissues, a specific dye with a different excitation wavelength is needed for each type of tissue, a different excitation laser is needed for each tissue type, and a separate image must be captured for each tissue type. This approach for distinguishing between tissue types, however, typically requires the use of toxic dyes having specific fluorescence that have not been approved by the Food and Drug Administration for use in humans, and the development of such dyes is a long, costly process. Furthermore, specific wavelength excitation lasers are needed and images must be captured for each dye/tissue combination. Thus, to obtain multiclass contrast between different types of tissues, the number of lasers and the number of images scales linearly with the number of tissue types to be differentiated.
Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles being described.
Embodiments of an apparatus, system, and process for performing multiclass classification of objects within images using a programmable light source are described herein. In embodiments, a programmable light source illuminates a scene and a camera captures image data of the illuminated scene. In one embodiment, the programmable light source and the camera are integrated into an endoscope for use during, for example, laparoscopic surgery being performed by a medical professional.
In one embodiment, image data captured by the camera is provided to a trained machine learning model, such as a trained neural network, that analyzes color values of individual pixels of the image data to classify the individual pixels as belonging to one or more different classes. In embodiments, the different classes can be different classes of tissue imaged during a medical procedure. Furthermore, as discussed in greater detail below, the color values of the individual pixels are a function of the spectrum of illumination provided by the programmable light source, spectral reflectivities of tissue(s) being imaged, and color functions of a filter of the camera (e.g., Bayer filter color functions). The machine learning model may be trained prior to use using multispectral imaging and samples of classes of tissue to determine a spectrum of illumination of the programmable light source that optimally distinguishes between the classes of tissue. Furthermore, during use, the color values of the individual pixels are provided to the trained machine learning model to distinguish the classes of tissue on a per-pixel basis. Beneficially, hyperspectral quality imaging is used to train the machine learning model before use, but a much less bulky and much faster discrete imaging system is utilized during use to classify different tissue types based on color pixel values.
In embodiments, based on the classification of the pixels of the captured image data, the different classes of image data can be distinguished when rendered to a user, such as a surgeon performing a medical procedure. For example, a rendered image of a scene can include color overlays that visually distinguish between different tissue types. In embodiments, the surgeon can provide feedback, such as highlighting areas believed to be certain tissue types. This feedback data can be provided back to the machine learning model for training a new machine learning model in real time and/or refining an existing machine learning model.
The description below refers to an endoscope with a camera and a programmable light source used during medical procedures for distinguishing between different classes of tissue. However, the presently described embodiments are not limited to endoscopes and/or medical procedures. Rather, the embodiments and techniques described herein are flexible, and can be used for performing multiclass classification in a wide class of imaging applications in which types of objects within image data are to be classified. Nevertheless, for the sake of clarity, the description below focuses on multi-class tissue type classification.
Some portions of the detailed description that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “applying”, “illuminating”, “capturing”, “analyzing”, “rendering”, “determining”, “identifying”, “generating”, “measuring”, “using”, “receiving”, “providing”, or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
Endoscope 110 is responsible for capturing images of a scene with camera 114. Camera 114 may include a lens 116 and an image sensor 118. The lens 116 of camera 114 allows light to pass from outside of endoscope 110 (e.g., a scene) to the image sensor 118 of camera 114. In one embodiment, the image sensor is a Bayer image sensor that includes a color arrangement of red, green, and blue color filters on a pixel array of the image sensor 118. Thus, the image sensor 118 captures the light filtered through the color filters on a grid of photosensitive pixels and conveys red, green, or blue image information for each pixel to image processing device 120.
Image processing device 120 is a computer processing system, such as a desktop computer, laptop computer, tablet computer, mobile telephone, or purpose-built computing device, which includes a processor, memory, communication interfaces, and other components typically found in computer processing systems. One embodiment of such a computer processing system is discussed in greater detail below.
In one embodiment, endoscope 110 utilizes programmable light source(s) 112 to illuminate a scene during a medical procedure.
In one embodiment, camera 114 captures the light generated by the programmable light source(s) 112 and reflected from the different tissues 180 in the scene using image sensor 118. In one embodiment, image sensor 118 includes a Bayer filter having a color filter array arranged over a grid of photosensors. Thus, every pixel of the camera's 114 sensor 118 captures a function of the red, green, or blue light reflected from a scene, with each photosensor recording a different weighted integral of the incident spectrum (e.g., Red=∫I(λ)R(λ)r(λ)dλ, Green=∫I(λ)R(λ)g(λ)dλ, and Blue=∫I(λ)R(λ)b(λ)dλ). That is, the recorded red, green, or blue values at each pixel are a function of the spectrum of the programmable light source(s) 710 (e.g., I(λ)), the reflectivity of the tissue being imaged at each pixel 720 (e.g., R(λ)), and the Bayer filter color functions for the associated pixel filter color (e.g., one of r(λ), g(λ), or b(λ)).
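The recorded-color model above can be made concrete with a short numerical sketch. This is a minimal illustration, assuming spectra discretized on a uniform wavelength grid; the grid resolution, the stand-in spectra, and the function name recorded_value are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

# Discretized wavelength grid in nanometers (resolution is illustrative).
wavelengths = np.linspace(400, 700, 31)
d_lambda = wavelengths[1] - wavelengths[0]

def recorded_value(illumination, reflectivity, color_function):
    """Approximate a weighted integral such as Red = ∫ I(λ)R(λ)r(λ)dλ
    with a Riemann sum over the discretized wavelength grid."""
    return float(np.sum(illumination * reflectivity * color_function) * d_lambda)

# Illustrative stand-in spectra (not measured data):
I = np.ones_like(wavelengths)                     # illumination spectrum I(λ)
R = np.exp(-((wavelengths - 600.0) / 50.0) ** 2)  # tissue reflectivity R(λ)
r = np.exp(-((wavelengths - 620.0) / 40.0) ** 2)  # Bayer red color function r(λ)

red = recorded_value(I, R, r)  # the value a red-filtered pixel would record
```

In this discretized form, the integral each photosensor records reduces to a weighted sum over wavelength bins, which is how the model above would typically be evaluated in practice.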
In one embodiment, endoscope controller 125 generates visualization 182 of the tissue types in real time during a medical procedure based on the determined classification probabilities, which is rendered on display 130. In one embodiment, the visualization 182 of each tissue type during surgery includes a color overlay selectively displayed over each pixel of captured image data on display 130. The color overlay image can include a different color for each type of tissue rendered in real time over the captured image data for a surgeon or other medical professional. That is, for each classifiable tissue type, a different color is assigned, and a probability mapping of each tissue type to pixels of a captured image can govern the opacity of the color over the image to, in embodiments, distinguish between different tissue types and provide an indication of the likely accuracy of the determined classification. Thus, different color overlays are created for each tissue type at each pixel of captured image data to differentiate regions of tissue type, increase contrast between tissue types, warn a medical professional using the endoscope of the location of specific types of tissue, etc. These color overlays can be rendered to create a single visualization 182 where different types of tissue are conspicuously differentiated based on the boundaries of the color overlays. Furthermore, each color overlay can be toggled on or off by a surgeon or medical professional in real time as image data is captured by endoscope 110.
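The probability-driven overlays described above might be blended as in the following sketch, in which per-pixel overlay opacity is set by the classification probability. All names, colors, and data here are hypothetical stand-ins; real probability maps would come from the machine learning model analysis.

```python
import numpy as np

def apply_overlays(rgb_image, prob_maps, class_colors):
    """Alpha-blend one color overlay per tissue class onto an image,
    with per-pixel opacity driven by the classification probability.

    rgb_image:    (H, W, 3) float array in [0, 1]
    prob_maps:    dict of class name -> (H, W) probability array
    class_colors: dict of class name -> length-3 RGB color tuple
    """
    out = rgb_image.astype(float).copy()
    for name, probs in prob_maps.items():
        color = np.asarray(class_colors[name], dtype=float)
        alpha = probs[..., None]  # higher probability -> more opaque overlay
        out = (1.0 - alpha) * out + alpha * color
    return out

# Illustrative usage with random stand-in data:
h, w = 64, 64
image = np.random.rand(h, w, 3)
probs = {"nerve": np.random.rand(h, w), "adipose": np.random.rand(h, w)}
colors = {"nerve": (1.0, 1.0, 0.0), "adipose": (0.0, 0.5, 1.0)}
visualization = apply_overlays(image, probs, colors)
```

Toggling an individual overlay on or off, as described above, would amount to including or excluding its entry from the probability-map dictionary before blending.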
In one embodiment, the image processing device 250 includes endoscope controller 252 having an endoscope interface 255, a light engine controller 260, a machine learning model analysis engine 265, a display rendering engine 290, an optional machine learning model training engine 270, and a store of machine learning models 280.
The image processing device 250, in embodiments, can be implemented in a computing device, such as a desktop computer, laptop computer, tablet computer, purpose-built computing appliance, video game console, mobile telephone, as well as other computing devices. Endoscope interface 255 is responsible for communicably coupling the endoscope controller 252 of image processing device 250 with endoscope 210 to enable endoscope controller 252 to control the programmable light source(s) 212 of endoscope 210, as well as to receive color pixel data captured by camera 214 of endoscope 210.
In one embodiment, light engine controller 260 is responsible for configuring the programmable light source(s) 212 of endoscope 210. In one embodiment, programmable light source(s) 212 include individually configurable light sources that illuminate a scene (e.g., discrete wavelength laser light sources with modulatable power levels). In one embodiment, light engine controller 260 configures each of the programmable light source(s) 212 to illuminate the scene at a corresponding power level. In one embodiment, the power level for each of the programmable light source(s) 212 is determined by light engine controller 260 from a trained machine learning model (MLM) used by MLM analysis engine 265 when performing multi-class tissue type classification. A machine learning model, which is trained according to the discussion below, is accessed by endoscope controller 252 to determine power weighting values corresponding to power weighting functions found by, for example, neural network optimization, to program the intensities/power levels applied to the one or more programmable light source(s) 212 by light engine controller 260.
Camera 214 captures an image of the scene as illuminated by programmable light source(s) 212. More specifically, the light reflected from one or more types of tissue in the scene passes through lens 216 and onto sensor 218. Sensor 218, in one embodiment, includes a Bayer color filter that has a matrix of red, green, and blue color pixels. The color values captured by each pixel are provided to endoscope controller 252 for MLM analysis engine 265 to analyze using a selected machine learning model corresponding to the machine learning model used to configure the programmable light source(s) 212.
Prior to use of a machine learning model by MLM analysis engine 265, endoscope controller 252 may obtain one or more MLMs trained by a remote system using the techniques discussed herein, and store them in MLM data store 280. Additionally, or alternatively, MLM training engine 270 may perform a machine learning process to locally train one or more machine learning models for performing multi-class tissue type classification or further refine an already trained MLM. In embodiments, each machine learning model utilized by MLM analysis engine 265 enables differentiation between a set of two or more types of tissue.
In one embodiment, where endoscope controller 252 locally performs machine learning model training, MLM training engine 270 is communicably coupled with a hyperspectral imager (not shown) via hyperspectral imager (HSI) interface 275. In this embodiment, the hyperspectral imager is responsible for supplying training data to the MLM training engine 270 in the form of measured reflectance of one or more tissue samples in response to a plurality of different discrete illumination wavelengths. In one embodiment, MLM training engine 270 uses this training data to train a machine learning model, such as a deep neural network.
In one embodiment, the machine learning model is trained as discussed below.
During training, responsive to the input sample reflection data 602 and based on a machine learning process, the light engine intensities 612 layer of activation functions iteratively refines the weightings applied to the wavelengths of the illumination source (e.g., programmable light source(s) 212), the RGB color value 614 activation functions (e.g., computed red, green, or blue color values as a function of the light engine intensities 612 and the input sample reflection 602) are iteratively refined, and the zero or more additional layers of MLM activation functions are iteratively refined, to minimize training error of the expected probability vector associated with a tissue of known type. That is, with each training iteration, the machine learning process used by MLM training engine 270 adjusts the activation functions and weightings of each layer at each node of the neural network machine learning model 610 to minimize the error, so as to obtain a desired output having a probability of 1 for the actual tissue type of the sample being imaged (e.g., the known tissue type) and a probability of 0 for all other tissue types (e.g., classifications not being imaged).
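As one way to picture this training arrangement, the following is a minimal sketch, assuming PyTorch as the framework and illustrative layer sizes: a nonnegative per-wavelength weight vector plays the role of the light engine intensities 612 layer, a linear layer stands in for the RGB color value 614 layer, and cross-entropy loss drives the known tissue class toward probability 1 and all others toward 0. The class name, variable names, and dimensions are hypothetical, not taken from this disclosure.

```python
import torch
import torch.nn as nn

N_WAVELENGTHS = 64  # number of discrete bands in the sample reflection input
N_CLASSES = 4       # number of tissue types to distinguish

class TissueClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # One learned nonnegative weight per wavelength: the analogue of the
        # light engine intensities 612 layer, whose trained values would later
        # program the power levels of the programmable light source(s).
        self.light_intensities = nn.Parameter(torch.rand(N_WAVELENGTHS))
        # Stand-in for the RGB color value 614 layer: maps the illuminated
        # spectrum to three color-like values.
        self.to_rgb = nn.Linear(N_WAVELENGTHS, 3)
        # Zero or more additional layers ending in per-class scores.
        self.classifier = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, N_CLASSES))

    def forward(self, reflectance):
        illuminated = reflectance * torch.relu(self.light_intensities)
        return self.classifier(self.to_rgb(illuminated))

model = TissueClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Cross-entropy drives the known tissue type toward probability 1
# and all other tissue types toward probability 0.
loss_fn = nn.CrossEntropyLoss()

# One illustrative training iteration on random stand-in data:
reflectance = torch.rand(16, N_WAVELENGTHS)  # measured sample reflections
labels = torch.randint(0, N_CLASSES, (16,))  # known tissue types
optimizer.zero_grad()
loss = loss_fn(model(reflectance), labels)
loss.backward()
optimizer.step()
```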
In embodiments, machine learning models, such as neural network machine learning model 610, are scalable to any number of categories/types of tissue. After a sufficient number of tissue samples have been imaged at a sufficient number of wavelengths by a hyperspectral imager, and MLM training engine 270 has performed a corresponding number of iterative refinements of the deep neural network machine learning model 610, the resulting model can be stored in MLM data store 280 for later use by MLM analysis engine 265.
In embodiments, however, the deep neural network machine learning model 610 may alternatively be trained by a remote system (not shown) and not MLM training engine 270. In this embodiment, the remote system provides endoscope controller 252 with the trained machine learning model for use during medical procedures where distinguishing between the tissue types in the model is desired. In embodiments, the remote system can provide a set of procedure-specific trained machine learning models, a set of different multi-class classifiers for distinguishing between different groupings of tissue type, etc. for storage in MLM data store 280. As another example, MLM analysis engine 265 can request and receive a specific multi-class tissue type classification model on a procedure-by-procedure basis.
In another embodiment, MLM training engine 270 can receive training data from endoscope 210. In embodiments, this training data can be received in response to the training of a new machine learning model or in response to a refinement of an existing machine learning model. In this embodiment, MLM training engine 270 initially receives highlighted, annotated, or otherwise differentiated areas of image data captured by endoscope 210 (e.g., a surgeon's real-time annotations of image data during a medical procedure). For example, a cursor or touch input device can provide input for a display system coupled with an image processing device, which is displaying real-time image data of a scene captured by endoscope 210. In embodiments, the differentiated areas are received in real-time with annotations for each area and the corresponding believed tissue type. MLM training engine 270 instructs light engine controller 260 to individually iterate through each of the discrete programmable light source(s) 212, illuminating the scene with each light source one at a time and storing the corresponding image in a memory accessible to MLM training engine 270. The set of images corresponding to the different light sources yields partial spectral reflectivities for each pixel of captured image data, as in the sketch below. In one embodiment, the partial spectral reflectivities are fed into the machine learning model for iteratively training a machine learning model, such as a deep neural network machine learning model, as discussed above. In one embodiment, a new machine learning model may be trained using the partial spectral reflectivities. In another embodiment, an existing trained machine learning model (e.g., a model trained using hyperspectral imaging) may be refined using real-time surgical annotations.
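The per-source capture loop might look as follows. The controller and camera interfaces (set_powers, capture) are hypothetical stand-ins, since this disclosure does not specify an API; the stub classes exist only to make the sketch self-contained.

```python
import numpy as np

class StubLightController:
    """Hypothetical stand-in for a light engine controller interface."""
    def set_powers(self, powers):
        self.powers = powers

class StubCamera:
    """Hypothetical stand-in camera returning a random 8x8 frame."""
    def capture(self):
        return np.random.rand(8, 8)

def capture_partial_reflectivities(light_controller, camera, n_sources):
    """Illuminate the scene with each discrete light source in turn (all
    others off) and stack the captured frames, yielding partial spectral
    reflectivities for every pixel of captured image data."""
    frames = []
    for source in range(n_sources):
        light_controller.set_powers(
            [1.0 if s == source else 0.0 for s in range(n_sources)])
        frames.append(camera.capture())
    return np.stack(frames, axis=0)  # shape: (n_sources, H, W)

stack = capture_partial_reflectivities(StubLightController(), StubCamera(), 8)
```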
After training a machine learning model using one of the techniques discussed above, or after acquiring a trained machine learning model, endoscope controller 252 utilizes the trained machine learning model for performing real time multi-class tissue type classification during medical procedures, and generating appropriate visualizations. In one embodiment, MLM analysis engine 265 performs a pixel-wise machine learning model analysis in order to classify each pixel as belonging to one of a plurality of different classes of tissue types. In embodiments, color information captured by each pixel is fed into an appropriate intermediate layer of the machine learning model being used by MLM analysis engine 265 (e.g., layer 614 of machine learning model 610). The intermediate layer of the machine learning model takes as input the color value of a Bayer color filter pixel, performs a series of machine learning model computations using the color value as an input, and outputs a probability vector with probabilities that the pixel belongs to one of the various potential classes of tissue. That is, during a medical procedure, the recorded red, green, or blue color value for each pixel of sensor 218 is fed by MLM analysis engine 265 into the appropriate intermediate layer of a trained machine learning model (e.g., the RGB color value layer 614 of the trained deep neural network 610).
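A per-pixel inference step of this kind might be sketched as follows, again assuming PyTorch. Only the layers downstream of the intermediate RGB color value layer are evaluated at run time; the classifier_head here is a hypothetical stand-in for those trained layers, not weights from this disclosure.

```python
import torch
import torch.nn as nn

def classify_pixel(classifier_head, rgb_value):
    """Feed one pixel's recorded color value into the trained model at the
    intermediate RGB layer: only the layers downstream of that layer are
    evaluated, yielding a per-tissue-type probability vector."""
    with torch.no_grad():
        logits = classifier_head(torch.tensor(rgb_value, dtype=torch.float32))
        return torch.softmax(logits, dim=-1)

# Hypothetical stand-in for the trained layers downstream of the RGB color
# value layer (in practice these weights would come from the trained MLM,
# e.g., the `classifier` sub-network in the training sketch above).
head = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 4))
probabilities = classify_pixel(head, [0.42, 0.17, 0.09])  # one pixel's RGB
```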
In embodiments, MLM analysis engine 265 performs machine learning model based analysis of color information captured for each pixel of image data in real time as image data is captured by camera 214, and provides the analysis results to display rendering engine 290. Display rendering engine 290 receives the analysis results, including the probability vectors for each pixel of image data. In one embodiment, the probability vectors are utilized to generate one or more visualization(s) that enhance captured image data rendered on a display device (not shown) coupled with image processing device. The visualization(s) can include enhancing contrast between two regions having tissues of different types (e.g., enhancing contrast between nerve and cancer tissue), generating color overlays for different classes of tissue types (e.g., color overlays over fat, muscle, and intestinal tissue), generating a notice for unknown zones of tissue type (e.g., a zone for which a probability vector is not able to predict any tissue type of interest within a threshold degree of accuracy), generating warnings when a medical tool encroaches on a zone having a specific tissue type (e.g., when a medical tool approaches a threshold distance to nerve tissue), etc. The visualizations are generated in real time or near real time as image data is captured by camera 214. Thus, the visualizations provide valuable real-time feedback and tissue type differentiation to medical professionals during a medical procedure.
Referring now to one embodiment of a process for performing multi-class tissue type classification, processing logic begins by selecting a trained machine learning model for performing the classification.
Processing logic configures one or more programmable light source(s) of an imaging system based on a weighting of light source(s) in the machine learning model (processing block 304). In one embodiment, the weightings correspond with weightings extracted from the selected machine learning model. For example, the weightings can correspond with weightings applied to light engine intensities as the result of the training of the selected machine learning model (e.g., weightings applied to activation functions 612). In embodiments, processing logic powers each programmable light source according to its respective weighting so that the combined weightings applied to the programmable light sources match the weightings extracted from the machine learning model.
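One plausible way to turn trained light-engine weightings into per-source power levels is sketched below; the rectification, the normalization scheme, and the max_power_mw parameter are illustrative assumptions rather than details from this disclosure.

```python
import torch

def extract_power_levels(light_intensities, max_power_mw=100.0):
    """Convert trained light-engine-intensity weights into per-source power
    levels, normalized so the strongest source runs at maximum power. The
    rectification, normalization, and max_power_mw are illustrative."""
    weights = torch.relu(light_intensities.detach())  # no negative power
    levels = weights / weights.max().clamp(min=1e-8) * max_power_mw
    return levels.tolist()

# Illustrative usage with stand-in trained weights for 8 discrete lasers:
trained_weights = torch.rand(8)
power_levels_mw = extract_power_levels(trained_weights)
```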
Processing logic illuminates a scene with the configured light source(s) (processing block 306). Processing logic then captures an image of the scene including one or more types of tissue depicted in the image data (processing block 308). As discussed herein, the light source illuminates the scene by shining one or more laser lights at discrete illumination wavelengths and configured power levels. An image sensor, such as a Bayer image sensor having an array of photosensors with a color filter grid overlay, then captures the light reflected from the tissue(s) in the scene.
For each pixel of captured image data (the pixels corresponding to pixels of an image filter of a camera), processing logic inputs color information from the captured image data into an intermediate layer of the machine learning model to classify that pixel as belonging to one of a plurality of different tissue types (processing block 310). In one embodiment, the intermediate layer is an intermediate layer of a trained neural network machine learning model that performs activation function calculations on the pixel's color value, the results of which are in turn used by the remaining layers of the trained neural network machine learning model to compute a probability vector associated with that pixel. As discussed herein, the probability vector includes probabilities that the pixel belongs to each potential type of tissue. When the probability exceeds a threshold value (e.g., 85%, 90%, 99%, etc.), processing logic concludes that the pixel of image data has captured image data of the corresponding tissue type. This can be repeated for each pixel so that the entire image, or a subset of an image, can be classified according to tissue type captured therein.
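The thresholding step might be implemented as in this short sketch; the class names are hypothetical, and the 0.85 default mirrors one of the example thresholds above.

```python
import numpy as np

def classify_with_threshold(prob_vector, class_names, threshold=0.85):
    """Return the tissue type whose probability exceeds the threshold, or
    None when no class is confident enough (an 'unknown' pixel)."""
    best = int(np.argmax(prob_vector))
    return class_names[best] if prob_vector[best] >= threshold else None

classes = ["nerve", "adipose", "muscle", "cancerous"]  # hypothetical classes
print(classify_with_threshold(np.array([0.91, 0.04, 0.03, 0.02]), classes))
```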
Processing logic utilizes the classifications to render a visualization of the captured image data including a visual differentiation between tissues determined to belong to different types (processing block 312). In embodiments, the differentiation between tissue types in the rendered visualization can include contrast enhancement between tissue types of interest. In another embodiment, color overlays may be generated for each tissue type of interest, with the opacity of each color overlay adjusted to account for the computed probability (e.g., the higher the probability, the more opaque a region is rendered in the visualization). In embodiments, the color overlays and/or contrast enhancements can be selectively activated and/or deactivated by the medical professional(s) performing the procedure. Furthermore, different visualizations can be stepped through, such as by turning on/off individual color overlays, enhancing contrast of selected types of tissue, etc. Each of the visualizations, and the control over them, provides a medical professional with improved imaging of a scene, which is valuable since medical professionals typically perform medical procedures with endoscopes without directly viewing the scene being operated on.
Referring now to one embodiment of a process for training a machine learning model for multi-class tissue type classification, processing logic begins by initiating training of the machine learning model.
Processing logic measures a spectral reflectance of a tissue sample of a tissue type of interest at one of a plurality of discrete illumination wavelengths to determine reflectance of the tissue sample as a function of that illumination wavelength (processing block 404). In one embodiment, the spectral reflectance corresponds with measurements taken for a plurality of discrete illumination wavelengths generated by a hyperspectral imager. In another embodiment, the spectral reflectance corresponds with partial spectral reflectivity measurements taken for each programmable light source of an endoscope.
Processing logic inputs the measured spectral reflectance for the tissue type into an MLM training process for the machine learning model (processing block 406). As discussed herein, the input is used to iteratively train the machine learning model by inputting the measured spectral reflectance values into the machine learning model for a known tissue type. The training methods are discussed in greater detail above.
Processing logic then determines if there are more wavelengths for which spectral reflectance data is desired (processing block 408) and if there are more tissue samples of the same and/or different tissue types (processing block 410). If either of these processing blocks is true, the process returns to processing block 404 to measure spectral reflectance and further train the machine learning model, as summarized in the sketch below. In embodiments, the machine learning model is iteratively trained using a plurality of different wavelengths of illumination of each tissue sample and a plurality of different samples of the same and/or different tissue types. However, when there is no more training data to be obtained and input into the machine learning model training process, the process ends.
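The overall flow of processing blocks 404-410 can be summarized in a loop like the following sketch; the measure and training_step callables are hypothetical stand-ins for the hyperspectral imager (or per-light-source capture) and the MLM training process, since this disclosure does not specify an API.

```python
import random

def train_model(training_step, measure, tissue_samples, wavelengths):
    """Sketch of processing blocks 404-410: for every tissue sample and
    every discrete illumination wavelength, measure spectral reflectance
    (block 404) and feed it, with the known tissue type, into the MLM
    training process (block 406); the loops realize blocks 408 and 410."""
    for sample_id, tissue_type in tissue_samples:    # block 410: more samples?
        for wavelength in wavelengths:               # block 408: more wavelengths?
            reflectance = measure(sample_id, wavelength)         # block 404
            training_step(wavelength, reflectance, tissue_type)  # block 406

# Illustrative usage with stand-in callables:
train_model(
    training_step=lambda wl, refl, label: None,  # e.g., one optimizer step
    measure=lambda sid, wl: random.random(),     # stand-in reflectance reading
    tissue_samples=[("sample1", "nerve"), ("sample2", "adipose")],
    wavelengths=range(400, 701, 10),
)
```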
The computer system used with the embodiments discussed herein includes a bus 515 for communicating information, one or more processor(s) 510 coupled with bus 515 for processing information, a memory 550 (e.g., random access memory) coupled to bus 515 for storing information and instructions to be executed by processor(s) 510, a read only memory 520 coupled to bus 515 for storing static information and instructions for processor(s) 510, and a data storage device 525 coupled to bus 515 for storing information and instructions.
The system may further be coupled to a display device 570, such as a light emitting diode (LED) display, a liquid crystal display (LCD), etc., coupled to bus 515 through bus 565 for displaying information to a computer user, such as a medical professional utilizing the image processing system during a medical procedure. An alphanumeric input device 575, including alphanumeric and other keys, may also be coupled to bus 515 through bus 565 for communicating information and command selections to processor 510. An additional user input device is cursor control device 580, such as a mouse, a trackball, stylus, or cursor direction keys coupled to bus 515 through bus 565 for communicating direction information and command selections to processor 510, and for controlling cursor movement on display device 570.
Another device, which may optionally be coupled to computer system 500, is a communication device 590 for accessing other nodes of a distributed system via a network. The communication device 590 may include any of a number of commercially available networking peripheral devices such as those used for coupling to an Ethernet, token ring, Internet, or wide area network. The communication device 590 may further be a null-modem connection, or any other mechanism that provides connectivity between the computer system 500 and the outside world. Note that any or all of the components of this system and associated hardware may be used in various embodiments of the present invention.
It will be appreciated by those of ordinary skill in the art that any configuration of the system may be used for various purposes according to the particular implementation. The control logic or software implementing the present invention can be stored in memory 550, data storage device 525, or other storage medium locally or remotely accessible to processor(s) 510.
It will be apparent to those of ordinary skill in the art that the systems, methods, and processes described herein can be implemented as software stored in memory 550 or read only memory 520 and executed by processor(s) 510. This control logic or software may also be resident on an article of manufacture comprising a computer readable medium having computer readable program code embodied therein, readable by the data storage device 525, for causing the processor(s) 510 to operate in accordance with the methods and teachings herein.
The present invention may also be embodied in a handheld or portable device, such as a tablet computer system, laptop computer system, smartphone, smart glasses, etc., containing a subset of the computer hardware components described above. For example, the handheld device may be configured to contain only the bus 515, the processor(s) 510, and memory 550 and/or 525. The handheld or portable device may also be configured to include a set of buttons or input signaling components with which a user may select from a set of available options. The handheld or portable device may also be configured to include an output apparatus such as a liquid crystal display (LCD) for displaying information to a user of the handheld or portable device. Conventional methods may be used to implement such a handheld or portable device. The implementation of the present invention for such a device would be apparent to one of ordinary skill in the art given the disclosure of the present invention as provided herein.
The present invention may also be embodied in a special purpose appliance including a subset of the computer hardware components described above. For example, the appliance may include a processor(s) 510, a data storage device 525, a bus 515, and memory 550, and only rudimentary communications mechanisms, such as a small touch-screen that permits the user to communicate in a basic manner with the device. In general, the more special-purpose the device is, the fewer of the elements need be present for the device to function.
The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or otherwise.
A tangible machine-readable storage medium includes any mechanism that provides (i.e., stores) information in a non-transitory form accessible by a machine (e.g., a computer, network device, personal digital assistant, any device with a set of one or more processors, etc.). For example, a machine-readable storage medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.).
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
This application is a continuation of U.S. patent application Ser. No. 17/011,420, filed on Sep. 3, 2020, which is a continuation of U.S. patent application Ser. No. 15/445,121, filed Feb. 28, 2017, which is hereby incorporated by reference in its entirety.