The present application is a U.S. National Stage Application filed under 35 U.S.C. § 371(a) claiming the benefit of and priority to International Patent Application No. PCT/CN2019/105368, filed Sep. 11, 2019, the entire disclosure of which is incorporated by reference herein.
The present disclosure relates to devices, systems, and methods for color restoration in images, and more particularly, to color restoration in dehazed images during surgical procedures.
Endoscopes are introduced through an incision or a natural body orifice to observe internal features of a body. Conventional endoscopes are used for visualization during endoscopic or laparoscopic surgical procedures. During electrosurgical procedures, it is possible for haze to be generated when the surgical instrument is used, for example, to treat tissue with electrosurgical energy during the surgery. Thus, the image acquired by the endoscope may include this haze. The haze may obscure features of the surgical site and delay the surgical procedure while surgeons wait for the haze to clear. Other procedures may experience similar issues where smoke is present during the capture of an image. Accordingly, there is interest in improving imaging technology.
The present disclosure relates to devices, systems, and methods for color restoration in images. In accordance with aspects of the present disclosure, a method for color restoration in images includes accessing an image of an object and processing the image based on an image processing operation to provide a processed image, where the image processing affects color of the object. The method further includes determining color adjustment parameters using a trained neural network, where an input to the trained neural network is based on the image and the processed image, restoring color in the processed image based on the color adjustment parameters to produce a color-restored image, and displaying the color-restored image on a display device. The color restoration technique described herein can be applied to images resulting from image processing other than dehazing, as well.
In an aspect of the present disclosure, the image processing operation may include a dehazing operation to dehaze the image. The dehazing operation includes: determining a dark channel matrix of the image, estimating an atmospheric light component for the image, determining a transmission map based on the atmospheric light component and the dark channel matrix, and dehazing the image based on the transmission map to provide the processed image.
In another aspect of the present disclosure, the image may be an RGB image, and the processed image may be an RGB processed image.
In an aspect of the present disclosure, determining the color adjustment parameters may include: converting the RGB image to an HSV image, converting the RGB processed image to an HSV processed image, subtracting the HSV image from the HSV processed image to provide an HSV difference image, inputting the HSV difference image to the trained neural network, and obtaining an HSV adjustment image as an output of the trained neural network, the HSV adjustment image including the color adjustment parameters. Restoring color in the processed image may include adding a hue channel and a saturation channel of the HSV adjustment image to the HSV processed image to provide an HSV color-restored image, and converting the HSV color-restored image to RGB to provide the color-restored image.
In yet another aspect of the present disclosure, the method may further include training the neural network. The training includes: accessing an RGB haze-free image dataset having haze-free images, accessing an RGB haze dataset having images of haze on a dark background, combining the RGB haze-free image dataset with the RGB haze dataset to provide an RGB hazy image dataset, dehazing images in the RGB hazy image dataset to provide an RGB dehazed image dataset, converting the RGB dehazed image dataset, the RGB hazy image dataset, and the RGB haze-free image dataset from RGB images to HSV images to provide an HSV dehazed image dataset, an HSV hazy image dataset, and an HSV haze-free image dataset, respectively, determining a difference between images in the HSV dehazed image dataset and corresponding images in the HSV hazy image dataset to provide an HSV difference image dataset, and providing the HSV difference image dataset as a training input to the neural network.
In a further aspect of the present disclosure, training the neural network may further include decreasing a loss function. The loss function may be based on at least a portion of the HSV difference image dataset.
In an aspect of the present disclosure, the loss function is further based on a ground truth, the ground truth being based on a difference between an image of the HSV haze-free image dataset and a corresponding image of the HSV hazy image dataset.
In a further aspect of the present disclosure, the method may further include combining the RGB haze-free image dataset with the RGB haze dataset by determining a weighted combination using the formula: (image in the RGB haze dataset) * coeff + (image in the RGB haze-free image dataset) * (1 − coeff), where coeff is a value between 0 and 1.
In yet another aspect of the present disclosure, the neural network may include a convolutional neural network and/or a fully connected neural network.
In a further aspect of the present disclosure, the convolutional neural network may include: a first convolution layer having outputs, a first rectified linear unit configured to receive the outputs of the first convolution layer, a middle convolution layer configured to receive outputs of the first rectified linear unit, a middle rectified linear unit configured to receive outputs of the middle convolution layer, a last convolution layer configured to receive outputs of the middle rectified linear unit, and a last rectified linear unit configured to receive outputs of the last convolution layer. The middle convolution layer and the middle rectified linear unit are configured to iterate for a number of iterations.
In accordance with aspects of the present disclosure, a system for color restoration in images includes a display device, a processor, and a memory storing instructions. The instructions, when executed by the processor, cause the system to: access an image of an object, process the image based on an image processing operation to provide a processed image, wherein the image processing affects color of the object, and determine color adjustment parameters using a trained neural network. An input to the trained neural network is based on the image and the processed image. The instructions further cause the system to: restore color in the processed image based on the color adjustment parameters to produce a color-restored image and display the color-restored image on the display device.
In yet a further aspect of the present disclosure, the image processing operation may include a dehazing operation to dehaze the image. The instructions, when performing the dehazing operation, further cause the system to: determine a dark channel matrix of the image, estimate an atmospheric light component for the image, determine a transmission map based on the atmospheric light component and the dark channel matrix, and dehaze the image based on the transmission map to provide the processed image.
In yet another aspect of the present disclosure, the image may be an RGB image, and the processed image may be an RGB processed image.
In a further aspect of the present disclosure, the instructions, when determining the color adjustment parameters, may further cause the system to: convert the RGB image to an HSV image, convert the RGB processed image to an HSV processed image, subtract the HSV image from the HSV processed image to provide an HSV difference image, input the HSV difference image to the trained neural network, and obtain an HSV adjustment image as an output of the trained neural network, the HSV adjustment image including the color adjustment parameters. Restoring color in the processed image includes: adding a hue channel and a saturation channel of the HSV adjustment image to the HSV processed image to provide an HSV color-restored image, and converting the HSV color-restored image to RGB to provide the color-restored image.
In yet a further aspect of the present disclosure, the instructions, when training the neural network, may further cause the system to: access an RGB haze-free image dataset having haze-free images, access an RGB haze dataset having images of haze on a dark background, combine the RGB haze-free image dataset with the RGB haze dataset to provide an RGB hazy image dataset, dehaze images in the RGB hazy image dataset to provide an RGB dehazed image dataset, convert the RGB dehazed image dataset, the RGB hazy image dataset, and the RGB haze-free image dataset from RGB images to HSV images to provide an HSV dehazed image dataset, an HSV hazy image dataset, and an HSV haze-free image dataset, respectively, determine a difference between images in the HSV dehazed image dataset and corresponding images in the HSV hazy image dataset to provide an HSV difference image dataset, and provide the HSV difference image dataset as a training input to the neural network.
In yet another aspect of the present disclosure, training the neural network may further include decreasing a loss function, the loss function being based on at least a portion of the HSV difference image dataset.
In a further aspect of the present disclosure, the loss function may be further based on a ground truth, the ground truth being based on a difference between an image of the HSV haze-free image dataset and a corresponding image of the HSV hazy image dataset.
In an aspect of the present disclosure, combining the RGB haze-free image dataset with the RGB haze dataset includes determining a weighted combination using the formula: (image in the RGB haze dataset) * coeff + (image in the RGB haze-free image dataset) * (1 − coeff), where coeff is a value between 0 and 1.
In another aspect of the present disclosure, the neural network may include a convolutional neural network and/or a fully connected neural network.
In a further aspect of the present disclosure, the convolutional neural network may include: a first convolution layer having outputs, a first rectified linear unit configured to receive outputs of the first convolution layer, a middle convolution layer configured to receive outputs of the first rectified linear unit, a middle rectified linear unit configured to receive outputs of the middle convolution layer, a last convolution layer configured to receive outputs of the middle rectified linear unit, and a last rectified linear unit configured to receive outputs of the last convolution layer. The middle convolution layer and the middle rectified linear unit may iterate twenty times.
Further details and aspects of various embodiments of the present disclosure are described in more detail below with reference to the appended figures.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
Embodiments of the present disclosure are described herein with reference to the accompanying drawings.
Further details and aspects of exemplary embodiments of the disclosure are described in more detail below with reference to the appended figures. Any of the above aspects and embodiments of the disclosure may be combined without departing from the scope of the disclosure.
Embodiments of the presently disclosed devices, systems, and methods of treatment are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views. As used herein, the term “distal” refers to that portion of a structure that is farther from a user, while the term “proximal” refers to that portion of a structure that is closer to the user. The term “clinician” refers to a doctor, nurse, or other care provider and may include support personnel.
The present disclosure is applicable where images of a surgical site are captured. Endoscope systems are provided as an example, but it will be understood that such description is exemplary and does not limit the scope and applicability of the present disclosure to other systems and procedures.
The present disclosure is described in the context of an endoscope system 1 that generally includes an endoscope having an objective lens 36 and an image sensor 32, a video system 30, and a display device 40. An image of a surgical site is captured via the objective lens 36, forwarded to the image sensor 32, and communicated to the video system 30 for processing and for display on the display device 40.
The following description refers to various flow and block diagrams, including various blocks described in an ordered sequence. However, those skilled in the art will appreciate that one or more blocks of the flow or block diagrams may be performed in a different order, repeated, and/or omitted without departing from the scope of the present disclosure. The description below refers to various actions or tasks performed by the video system 30, but those skilled in the art will appreciate that the video system 30 is exemplary. In various embodiments, the disclosed operations can be performed by another component, device, or system. In various embodiments, the video system 30 or other component/device performs the actions or tasks via one or more software applications executing on a processor. In various embodiments, at least some of the operations can be implemented by firmware, programmable logic devices, and/or hardware circuitry. Other implementations are contemplated to be within the scope of the present disclosure.
In various embodiments, an imaging device controller 450 is provided and includes a processor 452 connected to a memory 454.
In various embodiments, the memory 454 can be random access memory, read-only memory, magnetic disk memory, solid-state memory, optical disc memory, and/or another type of memory. In various embodiments, the memory 454 can be separate from the imaging device controller 450 and can communicate with the processor 452 through communication buses of a circuit board and/or through communication cables such as serial ATA cables or other types of cables. The memory 454 includes computer-readable instructions that are executable by the processor 452 to operate the imaging device controller 450. In various embodiments, the imaging device controller 450 may include a network interface 540 to communicate with other computers or a server.
The operations described below can be performed by the systems described above. An exemplary operation for reducing haze in a captured image will now be described.
Initially, at step 502, the operation accesses an image of a surgical site. The image can be captured via the objective lens 36 and forwarded to the image sensor 32 of the endoscope system 1. The term “image” as used herein may include still images or moving images (for example, video). In various embodiments, the captured image is communicated to the video system 30 for processing. For example, during an endoscopic procedure, a surgeon may cut tissue with an electrosurgical instrument, and haze such as smoke or fog may be generated during this cutting. When the image is captured, it may include the haze. Haze is generally caused by a turbid medium (such as particles or water droplets) in the atmosphere, which can be the enclosed atmosphere in the body cavity of a patient. The irradiance received by the objective lens 36 from a scene point is attenuated along the line of sight, and this incoming light is mixed with ambient light (air-light) reflected into the line of sight by atmospheric particles such as smoke. The haze thereby degrades the image, causing it to lose contrast and color fidelity.
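For context only, dehazing operations of the kind described below are commonly derived from the atmospheric scattering model used in the He et al. reference cited below, in which a hazy image I is modeled in terms of the haze-free scene radiance J, the transmission T, and the atmospheric light A:

I(x) = J(x)·T(x) + A·(1 − T(x))

Under this model, dehazing amounts to estimating A and T from the image and solving for J.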
At step 504, the operation dehazes the image to reduce the haze in the image. A dehazing operation will be described in more detail below.
The following describes an exemplary dehazing operation performed on an image 600 of a surgical site. The image 600 includes a plurality of pixels, and each pixel has red (R), green (G), and blue (B) color components, each having an intensity value.
In accordance with aspects of the present disclosure, the image 600 can include haze, and the video system 30 can perform a dehazing operation to reduce the haze in the image 600.
The dehazing operation determines a dark channel matrix of the image 600. The dark channel I_DARK(x) of a pixel x is determined over a patch Ω(x) of pixels centered at x, as follows:
I_DARK(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} I_c(y) )
where y denotes a pixel of the patch Ω(x), c denotes a color component, and I_c(y) denotes the intensity value of the color component c of pixel y. Thus, the dark channel of a pixel is the outcome of two minimum operations across two variables c and y, which together determine the lowest color component intensity value among all pixels of a patch. In various embodiments, the video system 30 can calculate the dark channel of a pixel x by acquiring the lowest color component intensity value for every pixel in the patch Ω(x) and then finding the minimum value among all of those values.
For example, consider a 3×3 pixel area Ω(x1) 602 centered at a pixel x1 of the image 600.
In this example, for the top left pixel in the pixel area Ω(x1) 602, the R component may have an intensity of 1, the G component may have an intensity of 3, and the B component may have an intensity of 6. In this example, the R component has the minimum intensity value (a value of 1) of the RGB components for that pixel.
The minimum color component intensity value of each of the pixels in the pixel area Ω(x1) 602 would be determined in the same manner, and the dark channel of the pixel x1 is the smallest of those values.
If, in this example, the smallest of the minimum color component intensity values in the pixel area is 0, the dark channel of the pixel x1 would have an intensity value of 0 for this exemplary 3×3 pixel area Ω(x1) 602 centered at x1. In this manner, the dark channel can be determined for each pixel of the image 600, and the dark channels of all pixels form the dark channel matrix for the image 600.
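As an illustration only, the following is a minimal sketch of the dark channel computation described above, using NumPy. The function name dark_channel, the default 3×3 patch size, and the edge padding at image borders are assumptions made for the sketch, not details of the present disclosure.

import numpy as np

def dark_channel(image: np.ndarray, patch_size: int = 3) -> np.ndarray:
    """Dark channel matrix of an H x W x 3 RGB image array."""
    # Inner minimum: lowest color component intensity for every pixel.
    min_rgb = image.min(axis=2)
    # Pad with edge values so border pixels still have a full patch.
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # Outer minimum over the patch centered at pixel (i, j).
            dark[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return dark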
In various embodiments, the dehazing operation involves estimating what is referred to herein as an “atmospheric light component” for the image. The estimated atmospheric light component for the image will be denoted herein as A. In various embodiments, the dehazing operation may estimate the atmospheric light component from the most haze-opaque pixel in the image. In various embodiments, the atmospheric light component A can be determined by finding the lowest color component intensity value for each pixel in the image 600, such as min(I_R(x), I_G(x), I_B(x)) for every pixel x in the image 600, and then finding the maximum among these lowest color component intensity values.
In various embodiments, the dehazing operation determines what is referred to herein as a transmission map T. The transmission map includes a transmission component T(x) for each pixel x. The transmission map value T(x) for a pixel x is determined based on the dark channel of pixel x and the atmospheric light component A as follows:
T(x) = 1 − ω · I_DARK(x) / A
where ω is a parameter having a value between 0 and 1, such as 0.85. In practice, even in clear images, some particles are present, so some haze exists when distant objects are observed. The presence of haze is a cue to human perception of depth, and if all haze is removed, the perception of depth may be lost. Therefore, to retain some haze, the parameter ω (0 < ω ≤ 1) is introduced. In various embodiments, the value of ω can vary based on the particular application. Thus, the transmission map value for a pixel is equal to 1 minus ω times the dark channel of the pixel I_DARK(x) divided by the atmospheric light component value A for the image 600. The transmission map is used in the dehazing process described in Kaiming He et al., “Single Image Haze Removal Using Dark Channel Prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 12, December 2011, the entire contents of which are incorporated by reference herein. The dehazing operation described above is exemplary; other dehazing operations are contemplated to be within the scope of the present disclosure.
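Continuing the sketch above, for illustration only: the atmospheric light A is estimated with the maximum-of-minima rule described above, the transmission map follows the formula above, and the scene radiance is recovered per the He et al. formulation J(x) = (I(x) − A) / T(x) + A. The lower bound t_min on the transmission is an added assumption to avoid dividing by values near zero; it is not specified by the present disclosure.

def dehaze(image: np.ndarray, omega: float = 0.85,
           patch_size: int = 3, t_min: float = 0.1) -> np.ndarray:
    """Dehaze a float RGB image scaled to [0, 1] using the dark channel prior."""
    dark = dark_channel(image, patch_size)
    # Atmospheric light A: maximum of the per-pixel minimum color components.
    atmospheric = image.min(axis=2).max()
    # Transmission map: T(x) = 1 - omega * I_DARK(x) / A.
    transmission = 1.0 - omega * dark / atmospheric
    transmission = np.clip(transmission, t_min, 1.0)
    # Recover scene radiance: J(x) = (I(x) - A) / T(x) + A.
    dehazed = (image - atmospheric) / transmission[..., np.newaxis] + atmospheric
    return np.clip(dehazed, 0.0, 1.0)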
An exemplary operation for restoring color in the dehazed image will now be described.
Initially, at step 702, the video system 30 converts the RGB hazy image 600 to an HSV hazy image denoted as I_HSV. Next, at step 704, the video system 30 converts the image dehazed in step 504 to an HSV dehazed image denoted as J_HSV.
Next, at step 706, the video system 30 subtracts the HSV hazy image I_HSV from the HSV dehazed image J_HSV to provide an HSV difference image D_HSV as follows:
D_HSV=J_HSV−I_HSV
The HSV dehazed image J_HSV is generally darker than the original HSV hazy image I_HSV because haze generally appears lighter. In HSV color space, darker corresponds to a higher saturation value, and brighter corresponds to a lower saturation value. Accordingly, the saturation values in the HSV difference image D_HSV will generally be positive values. However, for other types of image processing, the saturation values in the difference image may be negative. Additionally, the hue values of the difference image may be positive or negative depending on the direction of color change and/or the type of image processing. In general, the HSV difference image D_HSV reflects changes in hue and saturation caused by the image processing, which in the above example is the dehazing operation.
Next, at step 708, the video system 30 inputs the HSV difference image D_HSV to a trained neural network, which outputs an HSV adjustment image F_HSV. Aspects of the neural network are described below.
Next, at step 710, the video system 30 adds the hue and saturation adjustment values of the adjustment image F_HSV to the HSV dehazed image J_HSV and outputs an HSV restored image R_HSV, as follows:
Hue of R_HSV=Hue of J_HSV+Hue of F_HSV
Saturation of R_HSV=Saturation of J_HSV+Saturation of F_HSV
Value of R_HSV=Value of J_HSV
Next, at step 712, the video system 30 converts the HSV restored image R_HSV to an RGB restored image R_RGB.
Finally, at step 714, the video system 30 may display the RGB restored image R_RGB on a display. In various embodiments, the video system 30 may display the resultant RGB dehazed and color-restored image on the display device 40 and/or save it to a memory or external storage device for later recall or further processing. Although the operation described above is presented in connection with dehazing, it can be applied to images resulting from other types of image processing as well.
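A minimal sketch of steps 702 through 712 follows, assuming OpenCV for the color space conversions, float32 RGB images scaled to [0, 1], and a trained network wrapped as a callable model. The function name restore_color and these representation choices are assumptions of the sketch, not part of the present disclosure.

import cv2
import numpy as np

def restore_color(rgb_hazy: np.ndarray, rgb_dehazed: np.ndarray, model) -> np.ndarray:
    """Steps 702-712: compute D_HSV, obtain F_HSV from the network, restore color."""
    # Steps 702 and 704: convert the hazy and dehazed RGB images to HSV,
    # providing I_HSV and J_HSV.
    i_hsv = cv2.cvtColor(rgb_hazy, cv2.COLOR_RGB2HSV)
    j_hsv = cv2.cvtColor(rgb_dehazed, cv2.COLOR_RGB2HSV)
    # Step 706: HSV difference image D_HSV = J_HSV - I_HSV.
    d_hsv = j_hsv - i_hsv
    # Step 708: the trained network maps D_HSV to the adjustment image F_HSV.
    f_hsv = model(d_hsv)
    # Step 710: add hue and saturation adjustments; the value channel is kept.
    r_hsv = j_hsv.copy()
    r_hsv[..., 0] += f_hsv[..., 0]  # hue of R_HSV
    r_hsv[..., 1] += f_hsv[..., 1]  # saturation of R_HSV
    # Step 712: convert the HSV restored image R_HSV back to RGB.
    # (Hue wraparound and range clipping are omitted for brevity.)
    return cv2.cvtColor(r_hsv, cv2.COLOR_HSV2RGB)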
An exemplary operation for training the neural network will now be described.
Initially, at step 802, the training operation acquires an RGB haze-free image dataset C_S, which includes haze-free images. In various embodiments, the image dataset may include at least thousands of clean, haze-free images taken with a laparoscope. Next, at step 804, the training operation acquires an RGB haze dataset N_S, which includes images of haze on a dark background. In RGB space, a black background has zero values for the R, G, and B color components.
Next, at step 806, the training operation may combine the haze of the RGB haze dataset N_S with the images of the RGB haze-free image dataset C_S to provide an RGB hazy image dataset I_S of hazy images. In various embodiments, the images can be combined in various ways. For example, the combined image may be a weighted sum of the individual images, such as:
image in I_S = (image in N_S) * coeff + (image in C_S) * (1 − coeff),
where coeff is a value between 0 and 1.
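For illustration, a minimal sketch of this weighted combination follows. Drawing coeff at random for each training pair is an assumption of the sketch; the present disclosure only specifies that coeff is a value between 0 and 1.

import numpy as np

def synthesize_hazy(clean: np.ndarray, haze: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Blend a haze image (on a dark background) onto a haze-free image."""
    coeff = rng.uniform(0.0, 1.0)
    # image in I_S = (image in N_S) * coeff + (image in C_S) * (1 - coeff)
    return haze * coeff + clean * (1.0 - coeff)

For example, synthesize_hazy(clean, haze, np.random.default_rng(0)) produces one hazy training image from a clean/haze image pair.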
Next, at step 808, the training operation dehazes the hazy images of the RGB hazy image dataset I_S to provide dehazed images in an RGB dehazed image dataset J_S. It is contemplated that various dehazing algorithms may be used, including the dehazing operation described above. Next, at step 810, the training operation converts the RGB dehazed image dataset J_S, the RGB hazy image dataset I_S, and the RGB haze-free image dataset C_S from RGB images to HSV images, to provide an HSV dehazed image dataset J_S_HSV, an HSV hazy image dataset I_S_HSV, and an HSV haze-free image dataset C_S_HSV, respectively.
Next, at step 812, the training operation determines a difference between the dehazed images of the HSV dehazed image dataset J_S_HSV and the corresponding hazy images of the HSV hazy image dataset I_S_HSV to provide difference images of an HSV difference image dataset D_S_HSV. Finally, at step 814, the training operation provides the difference images of the HSV difference image dataset D_S_HSV as training input data to the neural network. As described below, the outputs of the neural network are hue and saturation adjustment values that should be added to the HSV dehazed images to restore the colors.
In various embodiments, the training operation provides a ground truth for the training as a difference between the HSV haze-free image dataset C_S_HSV and the HSV hazy image dataset I_S_HSV. In various embodiments, the loss function may include a mean square error, and the error of the neural network's prediction for the hue and saturation adjustment values can be expressed at a high level as:
(image of J_S_HSV−image of I_S_HSV+neural network output)−(image of C_S_HSV−image of I_S_HSV).
Persons skilled in the art will recognize techniques for minimizing a loss function to improve the accuracy of a neural network's predictions. In various embodiments, the error of the neural network's prediction for the hue and saturation adjustment values can be expressed as:
neural network output−(image of C_S_HSV−image of J_S_HSV),
such that the ground truth for the training can be based on a difference between the HSV haze-free image dataset C_S_HSV and the HSV dehazed image dataset J_S_HSV. A particular neural network structure is described below.
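As an illustrative sketch only, the second formulation above can be written as a mean square error between the network output and the ground-truth adjustment C_S_HSV − J_S_HSV. PyTorch is assumed here; the present disclosure does not name a framework or a particular loss implementation.

import torch

def adjustment_loss(output: torch.Tensor, c_hsv: torch.Tensor,
                    j_hsv: torch.Tensor) -> torch.Tensor:
    """MSE between predicted adjustments and the ground truth C_S_HSV - J_S_HSV."""
    target = c_hsv - j_hsv
    return torch.mean((output - target) ** 2)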
The operation may start with the access of an original hazy image 902.
The operation converts the original hazy image 902 from an RGB image to an HSV image, as in step 702 described above, and likewise provides an HSV dehazed image, an HSV difference image, and an HSV adjustment image output by the trained neural network, as in steps 704 through 708.
Next, as in step 710, the operation adds the hue and saturation values of the HSV adjustment image to the HSV dehazed image to provide an HSV restored image 906. The operation then converts the restored image 906 from HSV into RGB, as in step 712.
An exemplary neural network structure 1100 will now be described.
In various embodiments, the HSV difference image D_HSV 1102 is input to the first convolution layer 1104 of the neural network structure 1100. For example, the HSV difference image D_HSV 1102 may have a size of 1920×1080 pixels, with each pixel having three parameters (hue, saturation, and value). Accordingly, the three inputs to the first convolution layer correspond to the hue, saturation, and value parameters, and each input is a 1920×1080 set of such values. Persons skilled in the art will recognize techniques for providing such an input to a convolutional neural network.
In various embodiments, the output of the first convolution layer 1104 includes 16 outputs, which are input into rectified linear unit (ReLU) 1106 activation functions, which persons skilled in the art will understand. In summary, each ReLU converts negative values to zero but leaves non-negative values unchanged. In various embodiments, the outputs of the ReLU 1106 are input to a middle convolution layer 1108, which can receive 16 inputs and provide 16 outputs. Each input is a feature map resulting from the first convolution layer. In the illustrated embodiment, the middle convolution layer 1108 may perform iterative convolutions 1109 and ReLU 1110 activations for a number of iterations, such as twenty iterations.
In various embodiments, the output of the middle convolution layer 1108 may be input into a last convolution layer 1112. For example, the last convolution layer may include 16 input channels and 3 output channels corresponding to the hue, saturation, and value parameters. In various embodiments, the output of the last convolution layer may be input into a ReLU 1114, resulting in a saturation and hue adjustment image F_HSV 1116.
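As one possible realization of the neural network structure 1100, a sketch is provided below. PyTorch is assumed; the 3×3 kernel size and padding are assumptions, since the layer configuration details are not reproduced here; and the twenty middle iterations are assumed to share one set of weights, which the description above leaves open.

import torch
import torch.nn as nn

class ColorAdjustmentCNN(nn.Module):
    """First conv + ReLU, an iterated middle conv + ReLU, and a last conv + ReLU."""
    def __init__(self, channels: int = 16, iterations: int = 20):
        super().__init__()
        self.iterations = iterations
        # First convolution layer 1104: 3 HSV input channels to 16 feature maps.
        self.first = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # Middle convolution layer 1108: 16 to 16, reused for each iteration.
        self.middle = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Last convolution layer 1112: 16 to 3 output channels (H, S, V).
        self.last = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, d_hsv: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.first(d_hsv))  # ReLU 1106
        for _ in range(self.iterations):  # iterative convolutions 1109 and ReLU 1110
            x = self.relu(self.middle(x))
        return self.relu(self.last(x))  # ReLU 1114, yielding F_HSV

In this sketch, an HSV difference image of 1920×1080 pixels would be presented as a tensor of shape (1, 3, 1080, 1920).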
Accordingly, described herein are systems and methods for training and applying a neural network in connection with color restoration. Although dehazing is used as an example herein, color change can result from other types of image processing, and the color restoration aspects described herein can be applied to other types of image processing as well. Additionally, even though the color restoration described herein utilizes HSV color space to determine hue and saturation adjustments, other color spaces can be used and other types of parameters can be used for color adjustment. Additionally, the convolutional neural network disclosed herein is exemplary and does not limit the scope of the present disclosure. Other configurations and other types of neural networks are contemplated to be within the scope of the present disclosure.
The embodiments disclosed herein are examples of the present disclosure and may be embodied in various forms. For instance, although certain embodiments herein are described as separate embodiments, each of the embodiments herein may be combined with one or more of the other embodiments herein. Specific structural and functional details disclosed herein are not to be interpreted as limiting, but as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure. Like reference numerals may refer to similar or identical elements throughout the description of the figures.
The phrases “in an embodiment,” “in embodiments,” “in some embodiments,” or “in other embodiments” may each refer to one or more of the same or different embodiments in accordance with the present disclosure. A phrase in the form “A or B” means “(A), (B), or (A and B).” A phrase in the form “at least one of A, B, or C” means “(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).” The term “clinician” may refer to a clinician or any medical professional, such as a doctor, nurse, technician, medical assistant, or the like, performing a medical procedure.
The systems described herein may also utilize one or more controllers to receive various information and transform the received information to generate an output. The controller may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory. The controller may include multiple processors and/or multicore central processing units (CPUs) and may include any type of processor, such as a microprocessor, digital signal processor, microcontroller, programmable logic device (PLD), field programmable gate array (FPGA), or the like. The controller may also include a memory to store data and/or instructions that, when executed by the one or more processors, causes the one or more processors to perform one or more methods and/or algorithms.
Any of the herein described methods, programs, algorithms, or codes may be converted to, or expressed in, a programming language or computer program. The terms “programming language” and “computer program,” as used herein, each include any language used to specify instructions to a computer, including (but not limited to) the following languages and their derivatives: Assembler, Basic, Batch files, BCPL, C, C++, Delphi, Fortran, Java, JavaScript, machine code, operating system command languages, Pascal, Perl, PL1, scripting languages, Visual Basic, metalanguages which themselves specify programs, and all first, second, third, fourth, fifth, or further generation computer languages. Also included are database and other data schemas, and any other metalanguages. No distinction is made between languages which are interpreted, compiled, or use both compiled and interpreted approaches. No distinction is made between compiled and source versions of a program. Thus, reference to a program, where the programming language could exist in more than one state (such as source, compiled, object, or linked), is a reference to any and all such states. Reference to a program may encompass the actual instructions and/or the intent of those instructions.
Any of the herein described methods, programs, algorithms, or codes may be contained on one or more machine-readable media or memory. The term “memory” may include a mechanism that provides (for example, stores and/or transmits) information in a form readable by a machine such as a processor, computer, or digital processing device. For example, a memory may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or any other volatile or non-volatile memory storage device. Code or instructions contained thereon can be represented by carrier wave signals, infrared signals, digital signals, and other like signals.
It should be understood that the foregoing description is only illustrative of the present disclosure. Various alternatives and modifications can be devised by those skilled in the art without departing from the present disclosure. Accordingly, the present disclosure is intended to embrace all such alternatives, modifications and variances. The embodiments described with reference to the attached drawing figures are presented only to demonstrate certain examples of the present disclosure. Other elements, steps, methods, and techniques that are insubstantially different from those described above and/or in the appended claims are also intended to be within the scope of the present disclosure.