The present disclosure relates to image processing techniques. More specifically, the disclosure exemplifies techniques for improving image quality by identifying and filtering speckle noise from an image.
A myriad of imaging devices are able to capture images in many different environments for many different purposes. In doing so, the captured image may not be of optimal quality when considering the purpose for which it was captured. One field where image capture and correction is of paramount importance is the medical diagnostics field where a medical imaging device is used by one or more medical professionals to capture images from a patient with the intent to detect the presence of any abnormalities that are indicative of a medical condition for which later treatment is to be prescribed. To ensure captured images are of sufficient quality, it is known to apply some form of image processing via execution of one or more image processing algorithms. Examples of these image processing algorithms include, but are not limited to, noise reduction, color correction, brightening, etc.
Medical probes have the ability to provide images from inside the patient's body. One useful medical probe employs spectrally encoded endoscopy (“SEE”) technology, which is a miniature endoscopy technology that can conduct high-definition imaging through a sub-mm diameter probe. SEE uses wavelength to encode spatial information of a sample, thereby allowing high-resolution imaging to be conducted through small diameter endoscopic probes. SEE can be accomplished using broad bandwidth light input into one or more optical fibers. At the distal end of the fiber, a diffractive or dispersive optical component disperses the light across the sample, which returns back through the optic and then through optical fibers. Light is detected by a wavelength detecting apparatus, such as a spectrometer where each resolvable wavelength corresponds to reflectance from a different point on the sample.
Once images are captured, they are processed and stored as one or more types of image files including still images and moving images (e.g. video). These images may be selectively used by medical personnel for diagnostic or other patient-centric functions. Thus, it is important that the images that are captured and subsequently output for use by others be of sufficient quality to enable the medical personnel to understand and identify characteristics of the tissue represented in the image. Oftentimes, the images that are captured as original images are scanned or otherwise digitized or re-digitized in order to be placed in medical charts and/or used by medical personnel in diagnosing a patient.
A drawback associated with this process is the existence of artifact noise, also known as speckle or pepper noise, when the original image is scanned. Speckle or pepper noise is represented in an image by one or more pixels having high intensity values (e.g. white pixels) or low intensity values (e.g. black pixels). These speckles distort the image and can cause a person viewing the image to misinterpret or misunderstand what is actually being depicted. This is particularly problematic in the medical context because it inhibits an accurate representation of the original image, which may result in misdiagnosis of a patient. What is needed is a technique for de-speckle processing that can be applied to images having different pixel distributions, that maintains contrast and edges, and that preserves or enhances the signal-to-noise ratio of the image.
Accordingly, it can be beneficial to address and/or overcome at least some of the deficiencies indicated herein above, and thus to provide an image processing algorithm that reduces speckle noise in an image.
According to at least one embodiment of the invention, an image processing device or image processing apparatus is provided and includes one or more processors and a memory storing instructions for performing image processing to correct image data by replacing one or more pixels within the image data identified as being speckle data.
In one embodiment, an image processing device that processes image data includes one or more processors and one or more memory devices storing instructions that, when executed by the one or more processors, configure the one or more processors to generate a window having a predetermined size and including a geometric center point to analyze the image data, identify, as speckle data, one or more pixels of the image data positioned within the generated window, and generate corrected image data by replacing the one or more pixels identified as speckle data with a replacement pixel value derived from pixels surrounding the one or more pixels identified as speckle data in a case where the one or more pixels identified as speckle data are equal to or greater than a confidence threshold.
In another embodiment, the image processing device is further configured to identify one or more pixels within the generated window as boundary pixels and exclude the boundary pixels from being used in deriving replacement pixel values to be used in replacing the one or more pixels identified as speckle data.
In another embodiment, the image processing device is further configured to maintain pixel values of the one or more pixels identified as speckle data in a case where the one or more pixel values are less than the confidence threshold.
In other embodiments, the image processing device is further configured to move the generated window over the image data and, at each position on the image data that the generated window is moved, determine if any additional pixels within the generated window should be identified as speckle data, and replace any additional pixels determined to be speckle data with the replacement pixel values derived from pixels surrounding each of the additional pixels determined to be speckle data.
In further embodiments, the image processing device is configured to identify the one or more pixels within the generated window as speckle data by generating a histogram of the image data indicating the intensity values of the pixels that form the image data and the frequency at which pixels of specific intensities occur, and by selecting a pixel intensity value that exceeds a predetermined intensity value as a global speckle threshold which, when exceeded by one or more pixels within the generated window, indicates that the one or more pixels are speckle data.
In a further embodiment, the image processing device is further configured to determine, using all pixel values from within the generated window, a distribution data set from which the replacement pixel value is derived. In certain embodiments, the distribution data set is projected onto a distribution curve and, when the one or more pixels identified as speckle data are equal to or greater than the confidence value, the replacement pixel value is derived by using a random pixel value from the distribution curve. In other embodiments, the replacement pixel value is derived by generating a mean pixel value from the distribution curve. In other embodiments, the replacement pixel value is derived by using a median pixel value from the distribution curve. In other embodiments, the distribution data set is one of (a) a normalized distribution curve, (b) a multimodal distribution curve, or (c) a skewed distribution curve.
In another embodiment, the image data is color image data and the one or more processors, for each color channel of the color image data, identify speckle data and generate corrected image data, and combine the generated corrected image data of each color into a color image to be displayed on a display device.
These and other objects, features, and advantages of the present disclosure will become apparent upon reading the following detailed description of exemplary embodiments of the present disclosure, when taken in conjunction with the appended drawings, and provided claims.
Further objects, features and advantages of the present disclosure will become apparent from the following detailed description when taken in conjunction with the accompanying figures showing illustrative embodiments of the present disclosure.
Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the subject disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative exemplary embodiments. It is intended that changes and modifications can be made to the described exemplary embodiments without departing from the true scope and spirit of the subject disclosure as defined by the appended claims.
According to the present disclosure, an image processing system and method are provided. The image processing system and method advantageously improve the quality of a captured image by reducing speckle noise in images that have different pixel distributions across the image. This improved image processing system executes an image processing algorithm that can operate to improve the contrast of an image having different levels of half-toning and gradients with varying pixel intensity distributions. Further improvements are realized by the described image processing algorithm, which minimizes edge blur in order to maintain edges and boundaries of objects present in the image while correcting speckle noise present therein. The image processing system effects the above improvements by executing one or more de-speckle algorithms that employ a structuring element that traverses all pixels of an image to identify and correct pixels determined to be speckle noise. Within the structuring element, the algorithm fits values around one or more pixels determined to be speckle noise using a normalized distribution and replaces those pixel values with one or more replacement pixel values derived from the pixel values that surround the identified speckle.
The image processing device 100 is an example of a computing system. The term computing system as used herein includes but is not limited to one or more software modules, one or more hardware modules, one or more firmware modules, or combinations thereof, that work together to perform operations on electronic data. The physical layout of the modules may vary. A computing system may include multiple computing devices coupled via a network. A computing system may include a single computing device where internal modules (such as a memory and processor) work together to perform operations on electronic data. Also, the term resource as used herein includes but is not limited to an object that can be processed at a computing system. A resource can be a portion of executable instructions or data.
The processing unit 101 may comprise a single central-processing unit (CPU) or a plurality of processing units. The processing unit 101 executes various processes and controls the image processing apparatus 100 in accordance with various programs stored in memory. The processing unit 101 controls reading data and control signals into or out of memory. The processing unit 101 uses the RAM 102 as a work area and executes programs stored in the ROM 103 and the Storage Device 104. In some embodiments, the processor(s) 101 include one or more processors in addition to the CPU. By way of example, the processor(s) 101 may include one or more general-purpose microprocessor(s), application-specific microprocessor(s), and/or special purpose microprocessor(s). Additionally, in some embodiments the processor(s) 101 may include one or more internal caches for data or instructions.
The processor(s) 101 provide the processing capability required to execute an operating system, application programs, and various other functions provided on the image processing device 100. The processor(s) 101 perform or cause components of the image processing device 100 to perform various operations and processes described herein, in accordance with instructions stored in one or more memory devices 103 and 104 while using the capability of the work area memory RAM 102.
The RAM 102 is used as a work area during execution of various processes, including when various programs stored in the ROM 103 and/or the Storage Device 104 are executed. The RAM 102 is used as a temporary storage area for various data. In some embodiments, the RAM 102 is used as a cache memory.
The ROM 103 stores data and programs having computer-executable instructions for execution by the processing unit 101. The ROM 103 stores programs configured to cause the image processing device 100 to execute various operations and processes. In one embodiment, the ROM 103 has stored therein an operating system that includes one or more programs and data for managing hardware and software components of the image processing apparatus 100. The ROM 103 and storage device 104 may further store one or more applications that utilize or otherwise work in conjunction with the operating system 107 in executing various operations.
The Storage Device 104 stores application data, program modules and other information. Some programs and/or program modules stored in the Storage Device 104 are configured to cause various operations and processes described herein to be executed. The Storage Device 104 may be, for example, a hard disk or other non-transitory computer-readable storage medium. The Storage Device 104 may store, for example, an operating system. As shown herein, the Storage Device 104 stores an image processing application 105 that can be selectively executed to perform image processing algorithms that are able to identify and correct artifact noise present within one or more images. The image processing application 105 will be further described in detail hereinafter with respect to the remaining figures. It should be noted that the term application may include one or more programs comprising a set of one or more instructions and/or algorithms to be executed by one or more processing units to achieve a desired processing result.
A communication interface 106 may include hardware and software for establishing and facilitating unidirectional and/or bidirectional communication between the image processing device 100 and one or more external apparatus(s) and servers 20. The communication interface 106 may include a network interface including hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between the image processing device 100 and one or more external apparatuses and/or servers 20 on the network 50. As an example and not by way of limitation, a network interface may include a network interface card (NIC) or a network controller for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network 50 and any suitable network interface for it. As an example and not by way of limitation, the image processing device 100 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks 50 may be wired or wireless.
The communication interface 106 may also include one or more mechanisms for establishing direct connection between an external apparatus and the image processing device 100 using one or more short distance communication protocols. One exemplary type of short distance communication protocol may include Near Field Communication (NFC) that enables bidirectional communication with a mobile computing device having NFC functionality. This may be provided by an NFC unit which includes circuitry and software that enables transmission (writes) and reception (reads) of commands and data with a non-contact type device using a short distance wireless communication technique such as NFC (Near Field Communication; ISO/IEC IS 18092). In other embodiments, the communication interface may also communicate according to the BLUETOOTH communication standard by including a transceiver capable of transmitting and receiving data via short wavelength radio waves ranging in frequency between 2.4 GHz and 2.485 GHz. In other instances, the communication interface 106 may also include an infrared (IR) unit that can emit and sense electromagnetic wavelengths of a predetermined frequency having data encoded therein. Furthermore, while not specifically shown, the short distance communication interface may also include a smart card reader, radio-frequency identification (RFID) reader, device for detecting biometric information, a keyboard, keypad, sensor(s), a combination of two or more of these, or other suitable devices.
The image processing device 100 includes an input/output (I/O) interface 107 that includes one or more ports for connecting external devices used for entering information and/or instructions as inputs for controlling one or more operations of the image processing device 100. The I/O interface 107 may, for example, include one or more input/output (I/O) port(s) including, but not limited to, a universal serial bus (USB) port, FireWire port (IEEE-1394), serial port, parallel port, HDMI port, thunderbolt port, display port and/or AC/DC power connection port. When connected to a respective port of the I/O interface 107, one or more external device(s) 108 may communicate with the image processing device 100 to provide input to or receive output from the image processing device 100. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. In some embodiments, the I/O interface 107 includes one or more device or software drivers enabling the processor(s) 101 to drive one or more of these I/O devices.
The image processing device 100 may also include a display 108 that is configured to output one or more display screens generated by one or more applications executing on the image processing device 100. The display 108 may be any type of display device including but not limited to a liquid crystal display (LCD), light emitting diode (LED) display, organic light emitting diode (OLED) display and the like. Further, while the display 108 is shown as part of the image processing device 100, it should be understood that this is not required and instead, the display 108 may be selectively connected to the image processing device 100 via the I/O interface 107 such that the display 108 is external from the image processing device 100. It should also be understood that the display 108 on which output generated by the image processing device 100 is to be displayed may be present in one or more external apparatus(s)/servers connected to the image processing device 100 either via the network 50 or direct wireless communication such as WIFI direct or the like.
The system bus 110 interconnects various components of the image processing apparatus 100 thereby enabling the transmission of data and execution of various processes. The system bus 110 may include one or more types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
Additionally, the computing system may include other storage media, such as non-volatile flash memory, removable memory, such as a compact disk (CD), digital versatile disk (DVD), a CD-ROM, memory card, magneto-optical disk or any combination thereof. All or a portion of a computer-readable storage medium of the computing system may be in the form of one or more removable blocks, modules, or chips. The computer-readable storage medium need not be one physical memory device, but can include one or more separate memory devices.
In exemplary operation, the image processing application 105 includes at least one or more of the instructions illustrated in
An exemplary manner for identifying and setting the global speckle threshold value is shown with respect to
In another embodiment, the global speckle threshold set in step S202 may be dynamically set based on previously set global speckle threshold values. In this manner, the image processing application 105 analyzes the entirety of the input image 300 and compares image characteristics of the input image 300 with a set of image characteristics determined from previously analyzed input images to determine and set the global speckle threshold value automatically. In another embodiment, the image processing application 105 generates the histogram 400 to present to a user in the UI with the selector 402 positioned along the x-axis thereof at a position that has been determined dynamically based on prior image processing operations thereby enabling the user to selectively refine the selection of the global speckle threshold value based on user experience.
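By way of a non-limiting illustration, the histogram-based selection of the global speckle threshold may be sketched in Python/NumPy as follows. The function and parameter names (global_speckle_threshold, tail_fraction) are hypothetical and not taken from the disclosure; the sketch assumes that the bright tail of the intensity histogram, above a chosen cumulative frequency, is treated as speckle, which mirrors positioning the selector 402 along the x-axis of the histogram 400.

```python
import numpy as np

def global_speckle_threshold(image, tail_fraction=0.005):
    """Pick a global speckle threshold from the image histogram (sketch of S202).

    Hypothetical illustration: the threshold is the intensity above which only
    `tail_fraction` of all pixels lie, i.e. the bright histogram tail that is
    assumed to contain speckle.
    """
    counts, bin_edges = np.histogram(image.ravel(), bins=256, range=(0, 255))
    cumulative = np.cumsum(counts) / counts.sum()
    # first bin whose cumulative frequency reaches (1 - tail_fraction)
    idx = np.searchsorted(cumulative, 1.0 - tail_fraction)
    return bin_edges[idx]

# Example: a synthetic 8-bit image with a few bright speckles injected
rng = np.random.default_rng(0)
img = rng.normal(120, 10, size=(256, 256)).clip(0, 255)
img[rng.integers(0, 256, 50), rng.integers(0, 256, 50)] = 255
print(global_speckle_threshold(img))
```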
In step S204, a geometric center for a structuring element is set and, in step S206, the image processing application determines and sets a size of the structuring element that will be used in analyzing the input image 300 to identify and correct one or more pixel values from within the structuring element determined to be speckle based on the global speckle threshold value set in step S202. The size of the structuring element is set based on the geometric center point determined in step S204. The structuring element has a height and width, in pixels, sufficient to cover an area of the input image that includes one or more pixels indicated, based on the global speckle threshold value, to be speckle data, as well as pixels that are not speckle and from which distribution data may be obtained and used, as described below, to correct the value of the one or more pixels identified as speckle data. It should be noted that the area within a particular structuring window can include more than one instance of speckle data and that each such instance can be identified and corrected using the pixel values surrounding the speckle data. Upon setting of the geometric center point, the image processing application 105 generates, in step S206, a structuring window for movement over the input image 300.
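A minimal sketch of steps S204 and S206, assuming a square structuring element that tiles the input image, is given below; the names iter_structuring_windows, half_size, and step are illustrative only and not taken from the disclosure.

```python
import numpy as np

def iter_structuring_windows(image, half_size=7, step=None):
    """Yield (center_row, center_col, window) for a sliding structuring window.

    Sketch of steps S204/S206: a square structuring element with sides of
    (2 * half_size + 1) pixels is centered on successive geometric center
    points and moved across the whole image. `step` defaults to the window
    size so that successive windows tile the image.
    """
    if step is None:
        step = 2 * half_size + 1
    rows, cols = image.shape
    for r in range(half_size, rows - half_size, step):
        for c in range(half_size, cols - half_size, step):
            yield r, c, image[r - half_size:r + half_size + 1,
                              c - half_size:c + half_size + 1]
```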
An exemplary structuring window 502, generated in step S206 and defined by the parameters set in S202 and S204, is shown in
The window generation and movement described above with respect to
Turning back to
Upon identifying that one or more pixels in the structuring window are speckle due to an intensity of the one or more pixels exceeding the intensity value of the global speckle threshold value, the image processing application 105 finds a best distribution fit for the remaining pixel values within the particular structuring window as shown in step S212. In step S212, the application 105 uses at least one distribution determination algorithm in determining a best fit pixel value that may be used to replace the one or more pixel values identified as speckle data. The at least one distribution determination algorithm includes one or more of (a) a normal distribution; (b) a multimodal distribution and (c) a skewed distribution.
The application 105 calculates the normal distribution of pixel values within the structuring window, which presumes that the set of pixel values within the particular structuring window tends to lie around a central data value without any positive or negative preference (e.g. a bell curve). A standard deviation is calculated to measure how spread out the pixel value data is away from the center and generally follows a consistent pattern, where about 68% of values lie within 1 standard deviation of the mean, about 95% of values lie within 2 standard deviations of the mean, and about 99.7% of values lie within 3 standard deviations of the mean. This calculation is performed by the application determining a mean pixel value (m) as Σ X/N, where X is the data value (e.g. pixel value) and N is the number of data points (e.g. the number of pixels within the structuring window). Thereafter, a standard deviation is calculated as s = √(Σ(Xi − m)² / N).
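The normal-distribution fit of step S212 may be sketched as follows; fit_normal and speckle_mask are hypothetical names, and the sketch assumes the mean and standard deviation are computed over the pixels of the window not flagged as speckle, so that the fitted curve describes the surrounding background.

```python
import numpy as np

def fit_normal(window, speckle_mask):
    """Fit a normal distribution to the non-speckle pixels of a window.

    Sketch of step S212 (normal case): m = sum(X) / N and
    s = sqrt(sum((X - m)**2) / N), computed over the pixels that were not
    identified as speckle by the global speckle threshold.
    """
    values = window[~speckle_mask].astype(float)
    m = values.sum() / values.size                         # mean pixel value
    s = np.sqrt(((values - m) ** 2).sum() / values.size)   # standard deviation
    return m, s
```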
The application 105 may calculate a multimodal distribution when the pixel values within the structuring window indicate two or more different peaks centered around two or more central values, b. The multimodal distribution may be computed according to the following equation:
f(x) = Σ (i = 1 to n) ai · exp(−(x − bi)² / (2ci²))
where x is the data value, a is the amplitude, b is the center value of the curve, c is the peak width, and n is the number of peaks. The standard deviation for each peak is calculated with respect to each separate center value.
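A short sketch of the multimodal model, evaluating the sum of n Gaussian peaks defined above, is shown below; the function name multimodal and the example peak parameters are illustrative only.

```python
import numpy as np

def multimodal(x, amplitudes, centers, widths):
    """Evaluate a multimodal (sum-of-Gaussians) distribution.

    Each peak i contributes a_i * exp(-(x - b_i)**2 / (2 * c_i**2)), and the
    n peaks are summed, matching the equation above.
    """
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for a, b, c in zip(amplitudes, centers, widths):
        total += a * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))
    return total

# Example: two peaks of pixel values centered at 80 and 170
print(multimodal([80, 125, 170], amplitudes=[1.0, 0.6],
                 centers=[80, 170], widths=[10, 15]))
```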
The image processing application 105 calculates a skewed distribution, which indicates a bias around a particular central value m. The Fisher-Pearson coefficient quantifies the degree of skewness with respect to the observed data values and the mean and standard deviation of the data set. For a skewed, non-normal distribution, the Fisher-Pearson coefficient is calculated according to the following equation:
g1 = (Σ(Xi − m)³ / N) / s³
where Xi is the data value, N is the number of data points, m is the mean, and s is the standard deviation. For this case, the standard deviation is the square root of the variance: s = √(Σ(Xi − m)² / N).
The normal distribution has a skewness of zero. Negative values indicate a left-skewed distribution and positive values indicate a right-skewed distribution.
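The Fisher-Pearson coefficient described above may be sketched as follows; the function name fisher_pearson_skewness is illustrative, and the population standard deviation is used, consistent with the equations above.

```python
import numpy as np

def fisher_pearson_skewness(values):
    """Fisher-Pearson coefficient of skewness of a set of pixel values.

    g1 = (sum((X - m)**3) / N) / s**3, with s the square root of the
    (population) variance. Zero indicates no skew, negative values a
    left-skewed distribution, and positive values a right-skewed one.
    """
    x = np.asarray(values, dtype=float)
    m = x.mean()
    s = np.sqrt(((x - m) ** 2).mean())
    return ((x - m) ** 3).mean() / s ** 3
```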
At the completion of the best fit determination in step S212, the image processing application 105 performs a local thresholding operation in step S214 to inform whether or not the one or more pixels identified as speckle data should be replaced with a value from one of the distribution sets determined in step S212. The local thresholding operation is an efficacy determination with respect to the best fit approximation and assigns a confidence value to the proposed pixel data values within the structuring window that may be used to replace the pixel values identified as speckle data. The confidence interval may be calculated in step S214 using the central limit theorem.
Given a population with a known sample mean (m) and standard deviation (s), the confidence interval bounds are set so that a fraction (1−C)/2 of the distribution lies beyond each bound, for a normal distribution and for a skewed distribution with a large sample size. In the above, C is the user-defined confidence interval, which is also known as the local threshold value in step S214. In certain embodiments, the confidence interval is preset by the user and is determined by how well the sample pixel population estimates a normal, non-speckled space. In one embodiment, the confidence value may be predetermined using characteristic information indicative of known imaging characteristics of the surface being imaged in combination with known image-generating characteristics of the image capturing apparatus and its effect on common surfaces of interest, such as bone, tissue, cartilage, etc. In one embodiment, a calibration map that is specific to a particular surface of interest can be generated to produce a filter that may be selectable as the confidence interval for use in step S214.
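One plausible reading of the confidence-interval computation in step S214 is sketched below; this is not necessarily the disclosure's exact formula, and the names local_threshold_bounds and z are assumptions, with z derived from the desired confidence level C (for example, z = 1.96 for C = 95% under the central limit theorem).

```python
import numpy as np

def local_threshold_bounds(window, speckle_mask, z=1.96):
    """Confidence-interval bounds used as the local threshold (sketch of S214).

    Assumed reading of the central-limit-theorem step: with sample mean m and
    standard deviation s of the non-speckle pixels, the interval
    m +/- z * s / sqrt(N) estimates where a non-speckled pixel value is
    expected to lie (z = 1.96 corresponds to a 95% confidence level).
    """
    values = window[~speckle_mask].astype(float)
    m = values.mean()
    s = values.std()
    half_width = z * s / np.sqrt(values.size)
    return m - half_width, m + half_width
```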
In step S216, the application 105 determines whether a current pixel value of the one or more pixels identified as speckle data is less than the local threshold value. If the determination in step S216 indicates that the one or more pixel values are below the threshold (YES in S216), the application determines in step S218 that the one or more pixel values should not be replaced and the pixel values are kept. If the determination in step S216 indicates that the one or more pixel values exceed the threshold, the application 105 determines in step S219 that the one or more pixel values should be replaced with a value derived from the distribution set based on the confidence value discussed above.
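The keep-or-replace decision of steps S216 through S219 may be sketched as follows; correct_speckle_pixel, lower, upper, background, and mode are hypothetical names, and the replacement value is derived as the mean, the median, or a random draw from the surrounding non-speckle pixels, as described in the embodiments above.

```python
import numpy as np

def correct_speckle_pixel(pixel_value, lower, upper, background, mode="mean",
                          rng=None):
    """Decide whether to replace a speckle candidate and derive the new value.

    Sketch of steps S216-S219: if the candidate already lies within the
    local-threshold bounds it is kept (S218); otherwise it is replaced (S219)
    with a value derived from the fitted background distribution - its mean,
    its median, or a random draw from the surrounding non-speckle pixels.
    """
    if lower <= pixel_value <= upper:
        return float(pixel_value)               # keep the original value (S218)
    if mode == "mean":
        return float(np.mean(background))
    if mode == "median":
        return float(np.median(background))
    rng = rng or np.random.default_rng()
    return float(rng.choice(background))        # random value from distribution
```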
After the pixel replacement determination is completed for all positions of Ki,j within a particular structuring window, the application determines, in step S220, whether a subsequent center point exists around which a subsequent window may be centered. If the window can be moved in either the horizontal or vertical direction at least M′ pixels from the current center point, then the determination in S220 is positive and the structuring window is moved to the next center point as shown in step S224. It should also be noted that in step S210, if the application determines that no spikes indicative of speckle are present within the particular window, the application also proceeds to step S224 to move the window to the next center point. If no further center point can be detected or set, the application 105 ends processing and generates an output image that modifies the original input image with pixel values replaced in accordance with the above instructions.
In another embodiment, the image processing application 105, when determining whether or not to replace one or more pixel values identified as speckle data, detects and removes from the best fit distribution calculation in step S214 in
The edge detection processing performed in step S213 in
Next, the intensity gradient within the structuring element is calculated in both the x and y directions. The gradient direction is always perpendicular to edges.
Non-maximum suppression is then applied to check whether each pixel is a local maximum in its neighborhood along the direction of its gradient. If this condition is met, the pixel is classified as an edge candidate. Finally, hysteresis thresholding is applied in order to prevent the breakup of an edge contour caused by the output of the non-maximum suppression fluctuating above and below a pre-determined threshold. If a single threshold T1 is applied with respect to the gradient of an image, and an edge has an average gradient equal to T1, then, due to noise, there will be instances where the edge dips below the threshold. There will be equal instances where the edge extends above the threshold, making the edge look like a dashed line. To avoid this, hysteresis uses two thresholds, a high threshold and a low threshold. Any pixel in the image that has a value greater than T1 is presumed to be an edge pixel and is marked as such immediately. Then, any pixels that are connected to this edge pixel and that have a value greater than T2 are also selected as edge pixels. Once the edge pixels are identified, they are excluded from the best fit distribution calculation. By excluding edge pixels from the best fit calculation, the overall output image quality is improved because any of the one or more pixels identified as speckle, and replaced due to exceeding the local threshold value, receives a replacement value selected from the set of background pixels that surround the speckle data and not from edge pixels that might cause the replacement pixel value to be darker or brighter than it should be, thereby generating a smoother, clearer output image.
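A condensed sketch of the edge-detection processing of step S213 is given below; it uses Sobel-style gradients and two-threshold hysteresis as described above (t_high playing the role of T1 and t_low the role of T2), omits the non-maximum suppression stage for brevity, and the names edge_mask, t_low, and t_high are illustrative only.

```python
import numpy as np
from scipy import ndimage

def edge_mask(window, t_low, t_high):
    """Mark edge pixels inside a structuring window (sketch of step S213).

    Sobel-style gradients in x and y give a gradient magnitude; hysteresis
    with two thresholds then keeps pixels above t_high as edges immediately
    and keeps pixels above t_low only if they connect to such a strong edge.
    The returned boolean mask can be used to exclude edge pixels from the
    best fit distribution calculation.
    """
    win = window.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = ndimage.convolve(win, kx)              # gradient in x
    gy = ndimage.convolve(win, ky)              # gradient in y
    magnitude = np.hypot(gx, gy)

    strong = magnitude >= t_high                # T1: immediate edge pixels
    weak = magnitude >= t_low                   # T2: candidate edge pixels
    labels, n = ndimage.label(weak)             # connected candidate regions
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True      # regions touching a strong pixel
    keep[0] = False                             # background label is never an edge
    return keep[labels]
```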
As can be seen, the original input image 300 in
A comparison of the improved quality of the image in
A further metric used to show the improved image output by the image processing application described herein is determined by the differences in contrast between the two images.
The above described image processing application may be used to process images captured by an image capturing apparatus. An exemplary embodiment of an image capturing apparatus that captures a series of moving images from which individual image frame data may be extracted and processed according to the above image processing applications is shown in
In this embodiment, broadband light from the light source 1310 is coupled into a light guiding component which may be an illumination optical fiber 1312. The broadband light has sufficient bandwidth to allow for spatial resolution along the spectrally dispersed dimension. In some embodiments, the broadband light is a broadband visible light source that includes a blue band of light (including wavelengths λB1 to λBN), a green band of light (λG1 to λGN), and a red band of light (λR1 to λRN). For example, the blue band contains 400-500 nm light, the green band contains 500-600 nm light, and the red band contains 600-800 nm light. In other embodiments, the wavelengths of the broadband light are optimized for identifying specific features such as blood, tissue, etc., and may extend into the near-IR region, for example to 1200 nm. In an embodiment, each wavelength band may have a wavelength range that is greater than 30 nm. An embodiment may include at least three bands, which would allow the SEE probe to produce color images. More bands may be used to acquire additional information.
The broadband light source 1310 may include a plurality of light sources or may be a single light source. The broadband light source 1310 may include one or more of a laser, an OLED, an LED, a halogen lamp, an incandescent lamp, a supercontinuum light source pumped by a laser, and/or a fluorescent lamp. The broadband light source 1310 may be any light source that provides light which can then be split up into at least three bands in which each band is further dispersed to provide light which is then used for spectral encoding of spatial information. The broadband light source 1310 may be fiber coupled or may be free space coupled to another component of the SEE probe system 1300.
A light guiding component may be an illumination fiber 1312 or some other optical waveguide which is connected to an SEE probe 1320. The illumination fiber 1312 may be a single-mode fiber, multi-mode fiber or double clad fiber. Preferably, a single fiber is used as the illumination fiber 1312. The probe 1320 or parts thereof may be rotated or oscillated as indicated by the arrow. For example, the illumination fiber and illumination optics may be rotated via a rotary junction.
After illumination of the diffracted light (e.g., red, green, and blue light) on the sample 1330 (e.g., a tissue or in vivo sample), light is reflected, scattered, or emitted as photoluminescence by the sample 1330. This light is collected by the detection fiber 1340, which may or may not pass through a grating. Detection fiber(s) 1340 used to collect the light may be attached on or near the side surface of the lens of the probe 1320. The detection fiber 1340 may optionally be rotated along with the illumination optics or may be stationary. If rotated, the detection fiber 1340 may be connected, via a rotary junction, to a second non-rotating detection fiber.
As shown in
The probe 1320 of
After the spectrometer and one or more detectors detects the collected light, an image processor 1350 generates three 2D images (1352, 1354, 1356) for red, green, and blue from the data. In other embodiments, two, four, or more 2D images are formed using a probe with appropriate overlapping orders of diffracted light.
The image processor 1350 builds a 2D color image 1358 from the three substantially monochromatic images: a red image 1352, a green image 1354, and a blue image 1356. This color image 1358 may be created so as to simulate a true color image or may be adjusted to highlight differences in, for example, tissue type. In some embodiments, a two or four tone image may be built instead of or in addition to the color image 1358. The image processor 1350 further executes, on the generated color image 1358, one or more of the image processing algorithms discussed throughout the present disclosure.
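The per-channel handling described above may be sketched as follows; despeckle_color and despeckle_channel are hypothetical names, with despeckle_channel standing in for the single-channel de-speckle pipeline (window traversal, distribution fitting, local thresholding, and replacement) applied to one color plane at a time.

```python
import numpy as np

def despeckle_color(image_rgb, despeckle_channel):
    """Apply single-channel de-speckle processing to each color plane.

    The corrected red, green, and blue planes are recombined into one color
    image for display, as described for the color image 1358.
    """
    corrected = [despeckle_channel(image_rgb[..., ch])
                 for ch in range(image_rgb.shape[-1])]
    return np.stack(corrected, axis=-1)
```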
In one embodiment, the image processor 1350 includes one or more computer unit(s) and one or more display unit(s) which may be connected to the image processor 1350 via a high definition multimedia interface (HDMI). The description of an HDMI connection is provided for exemplary purposes only and any other connection interface able to output high definition video image data may be used.
In one embodiment, the image processor 1350 may include hardware components, software components and/or a combination thereof. The image processor may include one or more processor(s) that execute one or more stored control algorithms. The one or more processors that comprise the image processor 1350 may be similar to those discussed above with the processing unit 101 in
The image processor 1350 may execute instructions to perform one or more functions for operating the image capture apparatus (a) automatically in response to trigger events, (b) in response to user input or (c) at a predetermined time. The image processor 1350 may include an I/O interface through which commands are received via one or more of an included or separately attached touch panel screen, keyboard, mouse, joy-stick, ball controller, and/or foot pedal. A user/operator may cause a command to be initiated so as to observe or gather information about a subject, which may be inside a human body, through an exemplary front-view SEE probe using the image processor 1350. Other exemplary devices connectable through the I/O interface include but are not limited to a printing device, a touch screen, a light pen, an optical storage device, a scanner, a microphone, a camera, and a drive. In another embodiment where the image processor 1350 is embodied as part of image processing device 100 in
According to another embodiment, a detector interface is provided which may include a detection system such as the spectrometer 1342, components within the spectrometer, for example a photomultiplier tube (PMT), a photodiode, an avalanche photodiode detector (APD), a charge-coupled device (CCD), multi-pixel photon counters (MPPC), or others, and also components that provide information about the state of the probe such as a rotary encoder, motor drive voltage, thermocouple, etc. Also, the function of the detector may be realized by computer executable instructions (e.g., one or more programs).
In an exemplary operation, the user may place the exemplary SEE probe into a sheath, and then may insert such arrangement/configuration into a body of a subject at a predetermined position thereof. The sheath alone may be inserted into the human body in advance, and it is possible to insert the SEE probe into the sheath after sheath insertion. The exemplary probe may be used to observe inside a human body and works as an endoscope, such as an arthroscope, bronchoscope, sinuscope, vascular endoscope, and so on. The images captured during the exemplary operation may be stored in one or more data formats including but not limited to video data or still image data. The image processing algorithms described herein may be applied to any image data captured by the image capture device such that artifact noise such as speckle noise can be corrected, thereby enhancing the quality of the image by making the speckle noise less pronounced within the image data.
The above described speckle correction algorithm may also be applicable to images captured by an image capture apparatus such as the one described in
The workflow discussed above that generates a circularized image captured by an exemplary SEE probe system described in
In referring to the description, specific details are set forth in order to provide a thorough understanding of the examples disclosed. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily lengthen the present disclosure.
It should be understood that if an element or part is referred herein as being “on”, “against”, “connected to”, or “coupled to” another element or part, then it may be directly on, against, connected or coupled to the other element or part, or intervening elements or parts may be present. In contrast, if an element is referred to as being “directly on”, “directly connected to”, or “directly coupled to” another element or part, then there are no intervening elements or parts present. When used, the term “and/or” includes any and all combinations of one or more of the associated listed items, if so provided.
Spatially relative terms, such as “under” “beneath”, “below”, “lower”, “above”, “upper”, “proximal”, “distal”, and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the various figures. It should be understood, however, that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, a relative spatial term such as “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90° or at other orientations) and the spatially relative descriptors used herein are to be interpreted accordingly. Similarly, the relative spatial terms “proximal” and “distal” may also be interchangeable, where applicable.
The term “about,” as used herein means, for example, within 10%, within 5%, or less. In some embodiments, the term “about” may mean within measurement error.
The terms first, second, third, etc. may be used herein to describe various elements, components, regions, parts and/or sections. It should be understood that these elements, components, regions, parts and/or sections should not be limited by these terms. These terms have been used only to distinguish one element, component, region, part, or section from another region, part, or section. Thus, a first element, component, region, part, or section discussed below could be termed a second element, component, region, part, or section without departing from the teachings herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the”, are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms “includes” and/or “including”, when used in the present specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof not explicitly stated.
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described exemplary embodiments will be apparent to those skilled in the art in view of the teachings herein. Indeed, the arrangements, systems and methods according to the exemplary embodiments of the present disclosure can be used with any SEE system or other imaging systems.
In describing example embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the present disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.