The systems and methods described herein relate to improvements in imaging. More particularly, they relate to systems and methods for increasing dynamic range and mitigating artifacts in imaging systems, such as scanned beam imagers.
Imaging devices are used in a variety of applications; in particular, medical imaging is critical in the identification, diagnosis and treatment of a variety of illnesses. Imaging devices, such as a scanned beam imaging (SBI) device, can be used in endoscopes, laparoscopes and the like to allow medical personnel to view, diagnose and treat patients without performing more invasive surgery. To be effective, such images are required to be accurate and relatively free of artifacts. In addition, the imaging system is required to have sufficient light intensity range resolution to allow different tissues and the like to be distinguished. Such systems should not only detect a wide dynamic range of input light intensity, but should have sufficient range to manipulate or present the received data for further processing or display. Accordingly, the detectors that receive light, as well as the analog-to-digital (A/D) converters and internal data paths, should have sufficient resolution to represent variations in light intensity. Effectiveness of imaging systems may also be limited by the resolution of the display media (e.g., CRT/TV, LCD or plasma monitor). Generally, such display media have limited intensity range resolution, such as 256:1 (8 bits) or 1024:1 (10 bits), while the SBI device may be able to capture larger intensity range resolutions, such as 16384:1 (14 bits) or better.
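The resolution mismatch described above can be illustrated with a minimal sketch, assuming a simple linear rescaling from a 14-bit capture range to an 8-bit display range. The function name and the linear mapping are illustrative assumptions, not part of the specification:

```python
# Hypothetical sketch: compressing a 14-bit captured intensity into the
# 8-bit range of a typical display. Names and the linear mapping are
# illustrative assumptions.

CAPTURE_BITS = 14   # e.g., 16384:1 capture resolution
DISPLAY_BITS = 8    # e.g., 256:1 display resolution

def compress_to_display(sample: int) -> int:
    """Linearly rescale a captured sample to the display range."""
    if not 0 <= sample < (1 << CAPTURE_BITS):
        raise ValueError("sample out of capture range")
    return sample >> (CAPTURE_BITS - DISPLAY_BITS)

# 64 distinct capture levels collapse into each display level, so fine
# intensity distinctions captured by the imager are lost on the display:
assert compress_to_display(0) == 0
assert compress_to_display(63) == 0
assert compress_to_display((1 << 14) - 1) == 255
```

This collapse of 64 captured levels into one displayed level is precisely the loss of detail that the gamma-correction and modulation techniques described later attempt to mitigate.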
In SBI devices, instead of acquiring an entire frame at one time, the area to be imaged is rapidly scanned point-by-point by an incident beam of light. The reflected or returned light is picked up by sensors and translated into a data stream representing the series of scanned points and associated returned light intensity values. Unlike charge coupled device (CCD) imaging, where all or half of the pixels are imaged simultaneously, each scanned point in an SBI image is temporally displaced from the previously scanned point.
Scanned beam imaging endoscopes using bi-sinusoidal and other scanning patterns are known in the art; see, for example, U.S. Patent Application Publication No. US 2005/0020926 A1 to Wiklof et al. An exemplary color SBI endoscope has a scanning element that uses dichroic mirrors to combine red, green and blue laser light into a single beam of white light that is then deflected off a small mirror mounted on a scanning biaxial MEMS (Micro Electro Mechanical System) device. The MEMS device scans a given area with the beam of white light in a predetermined bi-sinusoidal or other comparable pattern, and the reflected light is sampled at a large number of points by red, green and blue sensors. Each sampled data point is then transmitted to an image processing device.
The following summary provides a basic description of the subject matter described herein. It is not an exhaustive overview, and it does not define or limit the scope of the claimed subject matter. Its sole purpose is to provide an introduction and/or basic description of certain aspects.
The systems and methods described herein can be used to enhance imaging by reducing artifacts and providing for dynamic range control. In certain embodiments, a modulator is used in conjunction with a scanned beam imaging system to mitigate artifacts caused by power fluctuations in the system light source. The system can include a detector that receives the scanning beam from the illuminator and an analysis component that determines the difference, if any, between the emitted scanning beam and the desired scanning beam. The analysis component can utilize the modulator to adjust the scanning beam, ensuring consistency in scanning beam output.
In an alternative embodiment, the modulator can be used to accommodate the wide dynamic range of a natural scene and represent the scene in the limited dynamic range of the display media. In scanned beam imaging, a beam reflected from a field of view is received at a detector and used to generate corresponding image data. An image frame obtained using a scanned beam imager can be used to predict whether a particular location or pixel will appear over or under illuminated for display of future image frames. Based upon such predictions, the modulator can adjust the beam emitted by the illuminator on a pixel by pixel basis to compensate for locations predicted to have low or high levels of illumination. In a further embodiment, the light source or sensitivity of the detectors can be adjusted, instead of utilizing a modulator.
In still another embodiment, localized gamma correction can be used to enhance image processing. Frequently, data is lost due to limitations of display medium and the human visual system. In many systems, image data is collected over a larger range of intensities than can be displayed by the particular display means. In such systems, image data is mapped to a display range. This mapping function is often referred to as the “gamma” correction, where a single gamma function is used for an image. Here, a plurality of regions are defined, such that separate gamma functions or values can be assigned to individual regions of the image.
The accompanying figures depict multiple embodiments of the systems and methods described herein. A brief description of each figure is provided below. Elements with the same reference numbers in each figure indicate identical or functionally similar elements. Additionally, as a convenience, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
It should be noted that each embodiment or aspect described herein is not limited in its application or use to the details of construction and arrangement of parts and steps illustrated in the accompanying drawings and description. The illustrative embodiments of the claimed subject matter may be implemented or incorporated in other embodiments, variations and modifications, and may be practiced or carried out in various ways. Furthermore, unless otherwise indicated, the terms and expressions employed herein have been chosen for the purpose of describing the illustrative embodiments for the convenience of the reader and are not for the purpose of limiting the subject matter as claimed herein.
It is further understood that any one or more of the following-described embodiments, examples, etc. can be combined with any one or more of the other following-described embodiments, examples, etc.
The electrical signals drive an image processor (not shown) that builds up a digital image and transmits it for further processing, decoding, archiving, printing, display, or other treatment or use via interface 120. The image can be archived using a printer, analog VCR, DVD recorder or any other recording means as known in the art.
Illuminator 104 may include multiple emitters such as, for instance, light emitting diodes (LEDs), lasers, thermal sources, arc sources, fluorescent sources, gas discharge sources, or other types of illuminators. In some embodiments, illuminator 104 comprises a red laser diode having a wavelength of approximately 635 to 670 nanometers (nm). In another embodiment, illuminator 104 comprises three lasers: a red diode laser, a green diode-pumped solid state (DPSS) laser, and a blue DPSS laser at approximately 635 nm, 532 nm, and 473 nm, respectively. Illuminator 104 may include, in the case of multiple emitters, beam combining optics to combine some or all of the emitters into a single beam. Illuminator 104 may also include beam-shaping optics such as one or more collimating lenses and/or apertures. Additionally, while the wavelengths described in the previous embodiments have been in the optically visible range, other wavelengths may be within the scope of the claimed subject matter. Emitted beam 106, while illustrated as a single beam, may comprise a plurality of beams converging on a single scanner 108 or onto separate scanners 108.
In a resonant scanned beam imager (SBI), the scanning reflector or reflectors 108 oscillate such that their angular deflection in time is approximately a sinusoid. One example of these scanners 108 employs a microelectromechanical system (MEMS) scanner capable of deflection at a frequency near its natural mechanical resonant frequencies. This frequency is determined by the suspension stiffness, the moment of inertia of the MEMS device incorporating the reflector, and other factors such as temperature. This mechanical resonant frequency is referred to as the “fundamental frequency.” Motion can be sustained with little energy, and the devices can be made robust, when they are operated at or near the fundamental frequency. In one example, a MEMS scanner 108 oscillates about two orthogonal scan axes. In another example, one axis is operated near resonance while the other is operated substantially off resonance. Such a case would include, for example, the non-resonant axis being driven to achieve a triangular or sawtooth angular deflection profile, as is commonly utilized in cathode ray tube (CRT)-based video display devices. In such cases, there are additional demands on the driving circuit, as it must apply force throughout the scan excursion to enforce the desired angular deflection profile; in a resonant scan, by contrast, a small amount of force applied for a small part of the cycle may suffice to maintain the sinusoidal angular deflection profile.
In accordance with certain embodiments, scanner 108 is a MEMS scanner. MEMS scanners can be designed and fabricated using any of the techniques known in the art as summarized in the following references: U.S. Pat. No. 6,140,979, U.S. Pat. No. 6,245,590, U.S. Pat. No. 6,285,489, U.S. Pat. No. 6,331,909, U.S. Pat. No. 6,362,912, U.S. Pat. No. 6,384,406, U.S. Pat. No. 6,433,907, U.S. Pat. No. 6,512,622, U.S. Pat. No. 6,515,278, U.S. Pat. No. 6,515,781, and/or U.S. Pat. No. 6,525,310, all hereby incorporated by reference. In one embodiment, the scanner 108 may be a magnetically resonant scanner as described in U.S. Pat. No. 6,151,167 of Melville, or a micromachined scanner as described in U.S. Pat. No. 6,245,590 to Wine et al. In an alternative embodiment, a scanning beam assembly of the type described in U.S. Published Application 2005/0020926A1 is used.
In an embodiment, the assembly is constructed with a detector 116 having adjustable gain, adjustable sensitivity, or both. In one embodiment, the detector 116 may include a detector element (not shown) that is coupled with a means for adjusting the signal from the detector element, such as a variable gain amplifier. In another embodiment, the detector 116 may include a detector element that is coupled to a controllable power source. In still another embodiment, the detector 116 may include a detector element that is coupled both to a controllable power source and to a variable gain or voltage controlled amplifier. Representative examples of detector elements useful in certain embodiments are photomultiplier tubes (PMTs), charge coupled devices (CCDs), photodiodes, etc.
Referring now to the block diagram of an embodiment of an SBI system 200 with beam leveling, depicted in
Turning again to
The system 200 includes a modulation system 202 capable of compensating for power fluctuations in the illuminators 104. A separate modulation system 202 can be utilized to compensate for each illuminator 104 within the imaging system 200. In an embodiment, the modulation system includes a beam splitter 204 that splits the beam 106 emitted from the illuminator 104. In an embodiment, the beam splitter 204 is capable of diverting a portion of the beam of light 206 for analysis by the modulation system 202, while the remainder of the beam 208 is received at the scanner 108. Representative examples of beam splitters include a polarizing beam splitter (e.g., a Wollaston prism using birefringent materials), a half-silvered mirror, and the like. The diverted beam 206 is deflected and travels to one or more modulation detectors 210 that receive the light. Modulation detectors 210 can include detector elements (not shown) that generate an electrical signal corresponding to the received beam. Representative examples of detector elements useful in certain embodiments are photomultiplier tubes (PMTs), charge coupled devices (CCDs), photodiodes, and the like.
The analysis component 212 receives the electrical signals and determines whether modulation of the beam is necessary, as well as the amount of any modulation. As used herein, the term “component” can include hardware, software, firmware or any combination thereof. The analysis component 212 compares the electrical signals that correspond to the beam of illumination 206 received at the modulation detector(s) 210, to a target level that corresponds to the desired output of the illuminator 104.
In an embodiment, the target level is a predetermined constant determined based, at least in part, upon the type or model of the illuminator 104. Alternatively, the target level can be initialized by detecting the beam at an initialization time, where the target level corresponds to the state of the beam at such time. Initialization can occur automatically at or after power on of the illuminator 104. In an embodiment, a user can elect initialization of the modulation system 202 at any point, setting the target level based upon the beam emitted at that particular point in time.
Based upon comparison of the current signal and the target level, the analysis component 212 determines the appropriate modulation to achieve the target level. The analysis component 212 directs the modulator 214 to modulate the beam 106 to produce a modulated beam 216 corresponding to the target level. In an embodiment, the analysis component 212 includes an analog comparator that compares the received signal and the target level, a processor that runs a control algorithm determining the necessary modulation of the beam based upon the comparison, and a modulator driver that controls the modulator(s) 214 based upon the computed modulation. In yet another embodiment, the analysis component 212 controls operation of the modulation detector(s) 210.
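The leveling loop just described can be sketched as follows. This is a minimal illustration, assuming the sampled power reflects the currently modulated beam and that the modulator acts as a pure attenuator; the function and parameter names are illustrative assumptions, not part of the specification:

```python
# Hypothetical sketch of the beam-leveling loop: compare the sampled beam
# power against the target level and command the modulator with a
# corrective attenuation. Names are illustrative assumptions.

def compute_modulation(detected: float, target: float,
                       current_attenuation: float) -> float:
    """Return an updated modulator attenuation (0.0 = full block,
    1.0 = pass-through) that moves detected power toward the target."""
    if detected <= 0.0 or current_attenuation <= 0.0:
        return current_attenuation  # no usable sample; leave modulator alone
    # Source power implied by the sample, given the current attenuation:
    source_power = detected / current_attenuation
    # Attenuation needed so the delivered beam matches the target:
    return min(1.0, target / source_power)

# A source drifting 10% above target is attenuated back to the target:
att = compute_modulation(detected=1.1, target=1.0, current_attenuation=1.0)
assert abs(att - 1.0 / 1.1) < 1e-9
```

Note the clamp at 1.0: an attenuating modulator can compensate for a source running above target but cannot amplify a source running below it, which is why the target level would typically be chosen below the illuminator's nominal output.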
In an embodiment, the modulator 214 is implemented with a silicon-based electro-optic modulator (EOM). An EOM is an optical device which can modulate a beam of illumination in phase, frequency, amplitude or direction. Representative examples of devices for modulation include birefringent crystals (e.g., lithium niobate), an etalon and the like. The modulator 214 can be integrated into a single, monolithic MEMS device, enabling integration of a modulation system 202 with polychromatic laser sources as used in SBI systems. If a polychromatic source including multiple illuminators is used, the output of each illuminator 104 would be adjusted by a separate modulation system 202 or control loop, and the output of all of the modulation systems 202 would be passed on to the scanner. In an embodiment, the modulator 214 has a contrast ratio of greater than twenty to one (20:1) at modulation frequencies over 1 gigahertz and using relatively low voltage control signals, such as less than five volts (5V). In another embodiment, the modulator has a modulation frequency of greater than about one hundred Megahertz (100 MHz).
In certain embodiments, the sampling rate of the modulation system 202 can be significantly higher than the imaging rate of the scanned beam imager. Generally, SBI imagers sample reflected illumination at a rate of about fifty (50) million samples per second (MSPS). The speed of the modulation can be greater than 100 Megahertz (MHz), allowing the output power of the illuminator(s) 104 to be leveled before artifacts appear in images generated by the imaging system 200.
In a further embodiment, the beam of illumination produced by the illuminator 104 passes through an optic fiber (not shown) prior to reaching the scanner 108. For example, an SBI system implemented in an endoscope utilizes fiber optics to allow the beam to be transmitted into a body. An SBI system can be easily modified by positioning the beam splitter 204 between the illuminators 104 and the optic fiber. If beams from multiple illuminators 104 are used to generate polychromatic light, a beam splitter 204 capable of separating the polychromatic light into multiple beams (e.g., a dichroic mirrored prism assembly) can be used and the beams can be individually modulated.
In an endoscope utilizing an SBI system, the illuminator 104 is positioned exterior to the body and the beam passes through an optic fiber until reaching the scanner 108, positioned proximate to the tip of the endoscope inside the body. As the beam is transmitted along the optic fiber, beam intensity may be lost. The magnitude of the loss can be affected by the curvature of the optic fiber. In an embodiment, the beam splitter 204 and modulation detectors 210 are positioned proximate to the scanner 108, such that the modulator 214 compensates for any loss in power due to the current position or curvature of the optic fiber. In another embodiment, a beam splitter 204 is positioned proximate to the scanner 108 and the diverted beam can be transmitted through a second optic fiber to modulation detectors 210 positioned exterior to the body. In a further embodiment, a second beam splitter (not shown), such as a dichroic mirrored prism assembly, can split the beam from the second optic fiber into multiple beams (e.g., red, blue and green), which can be received and processed by separate modulation detectors 210. Any power loss at the scanner 108 can be computed based upon the total loss measured at the modulation detectors 210. This configuration may be particularly useful in an endoscope, where minimization of the components inserted into the body is critical.
Various aspects described herein can be implemented in a computing environment and/or utilizing processing units. For example, the analysis component 212 as well as various other components can be implemented using a microprocessor, microcontroller, or central processor unit (CPU) chip and printed circuit board (PCB). Alternatively, such components can include an application specific integrated circuit (ASIC), programmable logic controller (PLC), programmable logic device (PLD), digital signal processor (DSP), or the like. In addition, the components can include and/or utilize memory, whether static memory such as erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash or bubble memory, hard disk drive, tape drive or any combination of static memory and dynamic memory. The components can utilize software and operating parameters stored in the memory. In some embodiments, such software can be uploaded to the components electronically whereby the control software is refreshed or reprogrammed or specific operating parameters are updated to modify the algorithms and/or parameters used to control the operation of the modulator 214, illuminator 104 or other system components.
Flowcharts are used herein to further illustrate certain exemplary methodologies associated with image enhancement. For simplicity, the flowcharts are depicted as a series of steps or acts. However, the methodologies are not limited by the number or order of steps depicted in the flowchart and described herein. For example, not all steps illustrated may be necessary for the methodology. Furthermore, the steps may be reordered or performed concurrently, rather than sequentially as illustrated.
Turning now to
At reference number 308, a determination is made as to whether the beam of light requires modulation based at least in part upon the comparison of the signal to the target level. If no, the process ends and the modulator 214 is left in its then current state. If yes, at reference number 310, the necessary direction or command is transmitted to the modulator 214 to modify the beam. At reference number 312, the modulator 214 adjusts the beam, such that the beam received at the scanner 108 is modulated to compensate for any changes in the beam emitted by the illuminator 104.
Referring now to
In general, imaging systems have a limited dynamic range, where dynamic range is equal to the ratio of the returned light at the detector at the saturation level to the returned light at a level perceptible above the system noise of the detector circuits. This limited range limits the ability to discern detail in either brightly reflecting or dimly reflecting areas. In particular, in SBI imaging, bright regions are most often the result of specular reflections or highly reflective scene elements close to the tip of the SBI imager. Dark regions are most often the result of optically dark or absorbing field of view elements, such as blood, distant from the tip of the SBI imager. At the extremes, the image appears to be either over or under exposed.
In many imaging systems, such as charge coupled device (CCD) imaging, all or half of the pixels are imaged simultaneously. Consequently, illumination is identical for all or half of the pixels within the image. However, in SBI devices, instead of acquiring an entire frame at one time, the area to be imaged is rapidly scanned point-by-point by an incident beam of light. As used herein, the term “frame” refers to the set of image data for the area to be imaged. Consequently, the intensity of illumination can vary between pixels within the same image. The reflected or returned light is picked up by sensors and translated into a data stream representing the series of scanned points and associated returned light intensity values. To improve imaging at extremes, the beam emitted by the illuminator 104 can be modulated to add illumination intensity in areas where the field of view is dark or under-exposed and to reduce illumination in areas where the field of view is bright or appears over-exposed.
Turning once again to
In certain embodiments, the analysis component 212 records image data associated with the coordinates of the current pixel or location in an image frame in an image data store 402. As used herein, the term “data store” means any collection of data, such as a file, database, cache and the like. The image data includes the intensity information and data regarding any modulation applied to the beam as emitted by the illuminator 104 to obtain the current electrical signal. This image data can be used to determine whether any modulation adjustment is necessary for the pixel or location for the next frame of image data. Typically, data changes slowly over successive image frames. Therefore, image data from the current frame can be used to adjust illumination for the next image frame.
When scanning the next frame of image data, the analysis component 212 can retrieve the electrical signal and modulation information for the current location to be scanned, referred to herein as the scanning location. The analysis component 212 compares the electrical signal to one or more threshold values to determine whether any further modulation is to be applied to the beam, or whether the current level of modulation is sufficient. For example, if the signal indicates that the reflected beam is of low intensity, the emitted beam can be modulated to increase intensity. Conversely, if the signal indicates that the reflected beam is of high intensity, the emitted beam can be modulated to decrease intensity the next time the location (x, y) is scanned. If the signal indicates that the reflected beam is of an acceptable intensity, the previous level of modulation can be applied to the beam. In an alternative embodiment, the electrical signal and modulation value for the location just scanned can be used to set values for the next location.
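The per-location decision above can be sketched as a simple threshold rule. This is an illustrative sketch only: the thresholds, the adjustment step size, and the normalized-intensity representation are assumptions, not values from the specification:

```python
# Hypothetical sketch of the per-pixel decision: use the previous frame's
# intensity and modulation at location (x, y) to choose the modulation
# for the next scan of that location. Thresholds and step size are
# illustrative assumptions.

LOW, HIGH = 0.2, 0.8          # assumed acceptable normalized-intensity band
STEP = 0.1                    # assumed modulation adjustment increment

def next_modulation(prev_intensity: float, prev_modulation: float) -> float:
    """Return the modulation to apply the next time (x, y) is scanned."""
    if prev_intensity < LOW:            # under-exposed: add illumination
        return min(1.0, prev_modulation + STEP)
    if prev_intensity > HIGH:           # over-exposed: reduce illumination
        return max(0.0, prev_modulation - STEP)
    return prev_modulation              # acceptable: keep previous level

assert next_modulation(0.1, 0.5) == 0.6   # dim location gets more light
assert next_modulation(0.9, 0.5) == 0.4   # bright location gets less
assert next_modulation(0.5, 0.5) == 0.5   # acceptable: unchanged
```

Because the decision for each location depends only on that location's stored history, the rule can run at the per-sample rate of the scan without any frame-wide computation.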
The modulation system 400 is capable of performing localized automatic gain control, synchronized with the particular requirements of the field of view. If a set of illuminators are utilized, such as a red, blue and green laser, multiple modulators can be used, each modulating a separate illuminator. In an embodiment, a separate modulator 214 is utilized for each laser component of the illuminators.
Based upon such coordinates, the image data for that location in a previous frame is obtained at reference number 506. Image data includes intensity information and data regarding any modulation applied to achieve such intensity. At reference number 508, the retrieved image data is analyzed. In particular, the intensity information is compared to one or more thresholds to determine whether the location was over or under exposed in the previous frame. In an embodiment, the thresholds are predetermined constants. In another embodiment, thresholds can be determined based upon user input.
At reference number 510, a determination is made as to whether the beam is to be modulated for the current scan location based upon the analysis of the previous information. The determination is based upon comparison of intensity information to the thresholds and the record of prior modulation of the beam. For example, the intensity from the previous image may be within the acceptable range, indicating that the location was sufficiently illuminated without being excessively illuminated. However, the modulation information may indicate that to achieve that intensity, the modulator 214 modified the emitted beam. Accordingly, the same modulation should be utilized in the current scan of the location.
If no modulation is required, the process terminates and no additional direction is provided to the modulator 214. If yes, direction or controls for the modulator 214 are generated at reference number 512, and at reference number 514, the beam emitted from the illuminator is modulated. The methodology 500 is repeated for successive locations in an image frame, automatically performing gain control.
In an embodiment, an analysis component 212 receives the electrical signals from the optical sampler and determines the appropriate modulation of the beam produced by the illuminator 104. In particular, the analysis component 212 compares the electrical signals to a target level that corresponds to the desired output of the illuminator 104. Based upon this comparison, the analysis component 212 determines the appropriate modulation to achieve the target level. The analysis component 212 directs the modulator 214 to achieve this target level.
In this embodiment, the target level is not necessarily constant; instead the target level is computed to perform automatic gain control. As described above with respect to
When scanning a location (x, y) to generate an image frame, the analysis component 212 can retrieve the electrical signal or intensity information and modulation information for that location from an image data store 402. The analysis component 212 can compare the retrieved electrical signal information to one or more threshold values to determine the appropriate target level for the beam. For example, if the signal information indicates that the reflected beam was of low intensity, a target level is selected such that the emitted beam is modulated to increase intensity. Conversely, if the signal indicates that the reflected beam was of high intensity, the target level is selected such that the emitted beam is modulated to decrease intensity when the location (x, y) is scanned. If the signal indicates that the reflected beam was of an acceptable intensity, no further modulation is necessary.
Referring now to
The controller 118 includes an analysis component 212 that evaluates the electrical signal obtained from the detector(s) and determines whether a particular location is over or under illuminated. In an embodiment, analysis is based solely upon the current data received from the detectors 116. In a further embodiment, image data can be maintained in an image data store 402 and used to predict whether a particular location will be over or under illuminated in a future image frame. Image data can include data regarding intensity of reflected beam, regulation of the illuminator 104 by the illuminator component, and adjustment of the detector 116 by the detector component 704.
The detector component 704 is operatively connected to the detector 116 to modify the detector gain through control ports, Sensitivity 706 and Gain 708. In an embodiment, the sensitivity port 706 is operably connected to a controllable power source such as a Voltage Controlled Voltage Source (VCVS) (not shown). In one embodiment the sensitivity control port 706 employs analog signaling. In another embodiment, the sensitivity control port 706 employs digital signaling. The gain port 708 is operably connected to a voltage controlled amplifier (VCA) (not shown). In one embodiment, the gain control port 708 employs analog signaling. In another embodiment, the gain control port 708 employs digital signaling. The detector component 704 apportions detector gain settings to the sensitivity and gain control ports. The detector component 704 can update settings during each detector sample period or during a small number of temporally contiguous sample periods.
In a particular detector, an avalanche photodiode (APD), sensitivity can be controlled by the applied bias voltage (controlled by the VCVS). This type of gain control is relatively slow; in one embodiment, it is best used to adjust the gain or “brightness level” of the overall image, not individual locations within the image. Another method to control the gain is to provide a voltage controlled amplifier (sometimes referred to as a variable gain amplifier) just prior to sending the detector output to the A/D converter. These circuits have extremely rapid response and can be used to change the gain many times during a single oscillation of the scanning mirror.
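The division of labor between the slow and fast gain paths might be sketched as follows. This is a simplified illustration under stated assumptions: the VCA gain ceiling, the use of a frame-average requirement for the slow path, and all names are hypothetical, not drawn from the specification:

```python
# Hypothetical sketch of apportioning a requested total gain between the
# slow sensitivity control (APD bias, whole-image brightness) and the
# fast VCA gain (per-sample correction). Limits and names are
# illustrative assumptions.

VCA_MAX = 4.0   # assumed fast-amplifier gain range (1/VCA_MAX .. VCA_MAX)

def apportion_gain(total_gain: float, frame_mean_gain: float):
    """Split total_gain into (slow bias gain, fast VCA gain).

    The slow path tracks the frame-average requirement; the fast path
    supplies the per-sample residual, clamped to its usable range."""
    slow = frame_mean_gain
    fast = min(VCA_MAX, max(1.0 / VCA_MAX, total_gain / slow))
    return slow, fast

# A sample needing 3x more gain than the frame average is handled
# entirely by the fast path:
slow, fast = apportion_gain(total_gain=6.0, frame_mean_gain=2.0)
assert slow == 2.0 and fast == 3.0      # 2.0 * 3.0 = 6.0 total
```

The point of the split is that the slow bias only needs to change between frames, while the clamped fast path absorbs the rapid pixel-to-pixel variation described above.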
In general, the inability to discern subtle differences in highlights and shadows is impacted most by limitations of display medium and the human visual system. In many systems, image data is collected over a larger range of intensities than can be displayed by the particular display means. In such systems, image data is mapped to a display range. This mapping function is often referred to as the “gamma” correction, which can be represented as follows:
D(x, y) = Gamma(I(x, y))
Here, I(x, y) is the intensity at coordinates (x, y) and D(x, y) is the displayed intensity. The function Gamma may be linear or non-linear. In an embodiment, the Gamma function can be represented as follows:
y = x^γ
Here, x is the image intensity and y is the displayed intensity. Gamma value, γ, can be selected to optimize the displayed image. The graphs depicted in
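The power-law mapping y = x^γ above can be sketched directly, with intensities normalized to [0, 1]. The gamma values used in the example are illustrative choices, not values from the specification:

```python
# Minimal sketch of the gamma mapping y = x^γ, with the image intensity x
# and displayed intensity y normalized to [0, 1]. Gamma values here are
# illustrative.

def gamma_correct(x: float, gamma: float) -> float:
    """Map a normalized image intensity x to a displayed intensity."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("intensity must be normalized to [0, 1]")
    return x ** gamma

# γ < 1 lifts shadows toward mid-tones; γ > 1 compresses them:
assert gamma_correct(0.25, 0.5) == 0.5    # dark input displayed brighter
assert gamma_correct(1.0, 2.2) == 1.0     # endpoints are preserved
```

Because the endpoints 0 and 1 map to themselves for any γ, gamma correction redistributes intermediate tones without altering the black and white points of the display range.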
In addition to adjusting fixed image data, gamma correction can also be applied to video or motion image processes, if the image capture medium (e.g., film, video tape, mpeg and the like) has the same fixed mapping to the display medium (e.g., projection screen, CRT, plasma screen and the like). Motion images can be treated as a series of still images, referred to as frames of a scene. Accordingly, gamma correction can be applied to each frame of a motion image.
Turning now to
The localized gamma correction system 900 receives or obtains image data as an input. In one embodiment, the image data includes a single image frame. In alternative embodiments, the input image data includes multiple frames of a motion image or a data stream, which is updated in real-time, providing for presentation of gamma corrected image data. A region component 902 identifies or defines two or more separate regions within an image frame for gamma correction. As used herein, a region is a portion of an image frame. Regions can be specified by listing pixels or locations contained within the region, by defining the boundaries of the region, by selection of a center point and a radius of a circular region or using any other suitable means. In an embodiment, as few as two regions are defined. In a further embodiment, each location (x, y) or pixel within the image frame is treated as a separate region and can have a separate, associated gamma function or value.
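The per-region assignment described above can be sketched as follows. The representation of regions as a per-location mapping is one of the specification options (listing the pixels contained in each region); the dictionary-based layout and all names are illustrative assumptions:

```python
# Hypothetical sketch of localized gamma correction: each region of the
# frame carries its own gamma value, applied independently. The
# dictionary-based frame and region representations are illustrative
# assumptions.

def localized_gamma(frame, region_of, gamma_for):
    """Apply a per-region gamma to a frame of normalized intensities.

    frame     -- dict mapping (x, y) -> intensity in [0, 1]
    region_of -- dict mapping (x, y) -> region identifier
    gamma_for -- dict mapping region identifier -> gamma value
    """
    return {loc: val ** gamma_for[region_of[loc]]
            for loc, val in frame.items()}

# Two pixels with identical intensity, displayed differently because
# they fall in different regions:
frame = {(0, 0): 0.25, (1, 0): 0.25}
regions = {(0, 0): "shadow", (1, 0): "highlight"}
gammas = {"shadow": 0.5, "highlight": 2.0}
out = localized_gamma(frame, regions, gammas)
assert out[(0, 0)] == 0.5      # shadow region brightened
assert out[(1, 0)] == 0.0625   # highlight region compressed
```

In the limiting case described above, where every pixel is its own region, `region_of` simply maps each location to itself and `gamma_for` holds a gamma value per pixel.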
In an embodiment, the system 900 includes a user interface 904 that allows users to direct gamma correction. In one embodiment, the user interface 904 is a simple on/off control such that users can elect whether to apply gamma correction. In an alternative embodiment, the user interface 904 is implemented as a graphic user interface (GUI) that provides users with a means to adjust certain parameters and control gamma correction. For example, a GUI can include controls to turn gamma correction on and off and/or to specify different levels or magnitudes of gamma correction for each of the individual regions. In certain embodiments, the user interface 904 can be implemented using input devices (e.g., mouse, trackball, keyboard, and microphone), and/or output devices (e.g., monitor, printer, and speakers).
In a further embodiment, the region component 902 utilizes user input to determine regions for gamma correction. Users can enter coordinates using the keyboard, select points or areas on a display screen using a mouse or enter gamma correction information using any means as known in the art. The region component 902 defines regions based at least in part upon the received user input.
In another embodiment, the region component 902 automatically defines one or more regions for gamma correction based upon the input image data and/or previous image frames. In a further embodiment, the region component 902 sub-samples image data using pixel averaging or any other suitable spatial filter to create a low resolution version of the image data. Each data point in the low resolution version represents multiple pixels of image data or a region within the image data. The region component 902 detects one or more candidate regions for gamma correction using the low resolution version of the image data and one or more predetermined thresholds. For example, each data point in the low resolution version can be compared to a threshold to determine if the region represented by that data point received excessive illumination. Using a spatial locality function, the region component 902 condenses candidate regions based upon the thresholds. The identified regions or data points are then used for localized gamma correction. In an alternative embodiment, users define or modify threshold values used to automatically select regions for gamma correction. In yet another embodiment, identification of regions is performed in real time, such that regions are individually identified for each image frame as the frame is processed.
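The sub-sampling and thresholding steps described above can be sketched as follows (illustrative Python; the block size, threshold value, and function names are assumptions):

```python
def subsample(frame, block):
    """Average non-overlapping block-by-block tiles of a 2-D intensity list."""
    rows, cols = len(frame), len(frame[0])
    low_res = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            tile = [frame[i][j]
                    for i in range(r, min(r + block, rows))
                    for j in range(c, min(c + block, cols))]
            row.append(sum(tile) / len(tile))
        low_res.append(row)
    return low_res

def candidate_regions(low_res, threshold):
    """Return (row, col) cells whose average intensity exceeds the threshold."""
    return [(r, c)
            for r, row in enumerate(low_res)
            for c, v in enumerate(row) if v > threshold]

frame = [[10, 10, 200, 220],
         [10, 10, 210, 230],
         [10, 10,  10,  10],
         [10, 10,  10,  10]]
low = subsample(frame, 2)                 # 2x2 block averages
bright = candidate_regions(low, 100)      # cells with excessive illumination
```

Each low-resolution cell stands in for a region of the original frame, so a flagged cell identifies a candidate region for localized correction.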
The system 900 includes a gamma component 906 that determines an appropriate gamma function or value for each region. In an embodiment, the gamma function is equal to y=x^γ, where the gamma value, γ, controls the gamma function mapping and is selected to optimize mapping of the image data to display or corrected data. The gamma component 906 can compute a gamma value for a region based upon image data associated with the region from the current frame. In a further embodiment, the gamma component 906 compares the image data for the region to one or more threshold values. For example, if the region is equal to a single location or pixel, the gamma component 906 compares the pixel value to one or more thresholds to determine if the pixel is low intensity and would therefore benefit from a low gamma value (e.g., 0.5), or if the pixel is high intensity and would therefore benefit from a high gamma value (e.g., 2.0). In yet another embodiment, if a region is composed of multiple pixels, an average, mean value or other combination of the image data for the pixels is evaluated to determine a gamma value for the region. The gamma component 906 can maintain a set of gamma values for use based upon image data.
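Threshold-based gamma selection for a region can be sketched as follows (the 0.5 and 2.0 gamma values follow the text; the intensity cutoffs and function name are illustrative assumptions):

```python
def select_gamma(region_pixels, low_cut=0.25, high_cut=0.75):
    """Pick a gamma for a region by comparing its mean intensity to thresholds."""
    mean = sum(region_pixels) / len(region_pixels)
    if mean < low_cut:
        return 0.5   # low intensity: a low gamma lifts shadow detail
    if mean > high_cut:
        return 2.0   # high intensity: a high gamma recovers highlight detail
    return 1.0       # mid-range: leave unchanged
```

A single-pixel region is simply the degenerate case of a one-element list.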
In an alternative embodiment, the gamma component 906 utilizes image data from neighboring or proximate locations or pixels to determine an appropriate gamma value for a region. In yet another embodiment, the gamma component 906 uses a convolution kernel to determine an appropriate value. In general, convolution involves the multiplication of a group of pixels in an input image with an array of pixels in a convolution kernel. The resulting value is a weighted average of each input pixel and its neighboring pixels. Convolution can be used in high-pass (Laplacian) filtering and/or low-pass filtering.
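A kernel-based weighted average of the kind described can be sketched as follows (the kernel weights and clamped edge handling are assumptions, not from the source):

```python
# Illustrative 3x3 smoothing kernel; weights sum to 16.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def convolve_at(image, r, c, kernel=KERNEL):
    """Weighted average of pixel (r, c) and its neighbors, clamping at edges."""
    rows, cols = len(image), len(image[0])
    half = len(kernel) // 2
    total = weight = 0
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            w = kernel[dr + half][dc + half]
            rr = min(max(r + dr, 0), rows - 1)
            cc = min(max(c + dc, 0), cols - 1)
            total += w * image[rr][cc]
            weight += w
    return total / weight
```

Swapping the kernel weights changes the filter character; a center-heavy kernel like the one above acts as a low-pass filter, while a Laplacian-style kernel yields high-pass behavior.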
In yet another embodiment, the gamma component 906 utilizes information regarding the image data or pixel values over time to compute gamma values. The system 900 includes an image data store 908 that maintains one or more frames of image data. In general, in a motion image or series of images obtained from an SBI or other imaging system, content of the field of view changes gradually during successive frames. Accordingly, the gamma component 906 can use a causal filter to predict future content for each location or pixel in the input image frame, based upon image data associated with the location in the previous image frame. In an embodiment, the prediction is based solely upon the contents of the particular location (x, y) for which a value is to be predicted. In another embodiment, the filter utilizes image data from proximate locations or pixels to predict content for a specific location. The gamma component 906 can utilize a temporal convolution kernel when predicting content. For example, if content changes relatively slowly, a linear predictor, such as a first derivative of the intensity curve, can be utilized. If the content varies more rapidly, second or third order filters can be used for content prediction.
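A first-order (linear) predictor of the kind described can be sketched as follows (integer intensity codes and the function name are illustrative assumptions):

```python
def predict_next(prev, curr):
    """Extrapolate the next intensity at a location using the first derivative:
    next = curr + (curr - prev)."""
    return curr + (curr - prev)

# A pixel brightening from 40 to 45 over two frames is predicted to reach 50.
```

Higher-order predictors would fit the same role by retaining more frames of history and estimating second or third derivatives.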
The gamma component 906 determines gamma value based upon the predicted content values. For example, if it is known that the next value at an image location (x, y) is likely to be low, the gamma component 906 selects a low gamma value (e.g., 0.5) for that location, adding details to a portion of the image previously in shadow. Similarly, if it is predicted that the next value at the image location (x, y) is likely to be high, the gamma component 906 selects a high gamma value (e.g., 2.0) for that location, adding details to a highlighted area of the image.
In a further embodiment, the system 900 includes a gamma data store 910 that maintains a set of gamma values for use in gamma correction of the plurality of regions. In yet another embodiment, the set of gamma values is a matrix equal in dimension to the image data frame, such that each location (x, y) or pixel has an associated gamma value. However, basing gamma correction on small regions, or even individual locations or pixels, could result in an image frame that contains artifacts. Such artifacts can be misleading, reducing the utility of the resulting image frame.
In certain embodiments, the system 900 includes a gamma filter component 912 that filters or smoothes gamma values to mitigate artifacts. The gamma filter component 912 can use convolution to decrease the likelihood of such artifacts. Artifacts may be further reduced if the two-dimensional convolution filter is expanded to three-dimensions, adding a temporal component to filtering. For example, gamma values can be adjusted based upon averaging or weighted averaging of past frames. Alternatively, the gamma filter component 912 can apply a three-dimensional convolution kernel to a temporal series of data regions.
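Spatial smoothing of a per-pixel gamma matrix can be sketched with a simple box-filter convolution (the 3x3 kernel size and replicated-edge handling are assumptions):

```python
def smooth_gamma(gamma, size=3):
    """Replace each gamma value with the mean of its size-by-size neighborhood,
    replicating edge values, to suppress abrupt gamma changes between pixels."""
    rows, cols = len(gamma), len(gamma[0])
    half = size // 2
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            vals = [gamma[min(max(r + dr, 0), rows - 1)]
                         [min(max(c + dc, 0), cols - 1)]
                    for dr in range(-half, half + 1)
                    for dc in range(-half, half + 1)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

A temporal (three-dimensional) variant would additionally average each value with the corresponding values from one or more previous frames.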
A correction component 914 applies the gamma functions to image data to produce a corrected image or frame. Once corrected, the frame can be presented on a display medium, stored or further processed. In an embodiment, the correction component 914 retrieves the appropriate gamma value or function for each individual location (x, y) from the gamma matrix and determines the corrected image data for that location utilizing the gamma function. The corrected image seeks to optimize both the low intensity areas and the high intensity areas, enhancing the quality of the image and of the imaging system as a whole. The localized gamma correction system 900 can operate in real time, updating each frame for display. The localized gamma correction system 900 can be implemented in connection with an imaging system, such as a scanned beam imager, or independently, such as in a general purpose computer.
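The per-pixel correction step can be sketched as follows (normalized [0, 1] intensities and the function name are assumptions):

```python
def apply_gamma(frame, gamma):
    """Apply a per-pixel gamma matrix to a frame of normalized intensities."""
    return [[frame[r][c] ** gamma[r][c]
             for c in range(len(frame[0]))]
            for r in range(len(frame))]

frame = [[0.25, 0.81]]   # one shadow pixel, one highlight pixel
gammas = [[0.5, 2.0]]    # low gamma for the shadow, high gamma for the highlight
corrected = apply_gamma(frame, gammas)
```

Because each location carries its own gamma value, shadow and highlight regions of the same frame are optimized independently.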
Various aspects of the systems and methods described herein can be implemented using a general purpose computer, where a general purpose computer can include a processor (e.g., microprocessor or central processing unit (CPU)) coupled to dynamic and/or static memory. Static or nonvolatile memory includes, but is not limited to, read only memory (ROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash and bubble memory. Dynamic memory includes random access memory (RAM), including, but not limited to, synchronous DRAM (SDRAM), dynamic RAM (DRAM) and the like. The computer can also include various input and output devices, such as those described above with respect to the user interface 904.
Additionally, the computer can operate independently or in a network environment. For example, the computer can be connected to one or more remotely located computers via a local area network (LAN) or wide area network (WAN). Remote computers can include general purpose computers, workstations, servers, or other common network nodes. It is to be appreciated that many additional combinations of components can be utilized to implement a general purpose computer.
Turning now to
At reference number 1004, the gamma component 906 determines a gamma function or value for each region in the image frame. Gamma values can be chosen from a lookup table or calculated based upon the image data. In an embodiment, gamma values are determined based solely upon values of locations or pixels within the region. In another embodiment, gamma values are computed based at least in part upon convolution of a selected pixel and a set of proximate pixels using a convolution kernel. In a further embodiment, gamma values and/or received image data are maintained over time and used to calculate the present gamma value for a location or region. In still another embodiment, users may adjust the amount or magnitude of gamma correction via a user interface 904. The magnitude adjustment can be general and applied to all regions in the image frame, or may be specific to one or more particular regions.
The correction component 914 applies the gamma values to the image frame at reference number 1006. Application of the gamma values expands dynamic range at illumination extremes, allowing users to perceive details that might otherwise have remained hidden. At reference number 1008, a determination is made as to whether there are additional image frames to update. If no, the process terminates, if yes, the process returns to reference number 1002, where one or more regions are identified within the next image frame for localized gamma correction.
In an alternate embodiment, the process returns to reference number 1004, where gamma values are determined anew for the previously identified regions. The regions selected for localized gamma correction remain constant between image frames, but the gamma values are updated based at least in part upon the most recent image data. For example, if a user selects specific regions for localized gamma correction, the imaging system continues to utilize the user-selected regions until the user selects different regions, turns off gamma correction, or opts for automatic region identification.
In still another embodiment, to process the next image frame, the process returns to reference number 1006, where the gamma values computed for the previous frame are applied to a new image frame. If successive image frames are similar, such that the image changes gradually over time, the gamma correction computed using the previous image frame can be used to correct the current image frame.
Turning now to
In the elastic sheet model, elasticity and tension of the elastic sheet are constants that determine the manner in which the sheet reacts to the localized changes in gamma. The locations of the regions and the size and direction of the changes to gamma are real-time inputs to the model. The output of the model is a matrix or set of gamma values, where gamma values vary smoothly over the image frame to optimize local dynamic range. If no local regions for gamma enhancement are specified, the model behaves as traditional gamma correction, where a single gamma value or function is applied equally across an image frame.
In an embodiment, the elastic sheet model is implemented by the gamma filter component 912 of localized gamma correction system 900 illustrated in
Using the elastic sheet model, the gamma filter component 912 passes the initial gamma matrix M through a two-dimensional spatial filter, such as a median filter, to arrive at the output matrix, E. The size of the two-dimensional kernel used for the spatial filter is proportional to the tension constant, T, and defines the extent of the filter effect. For example, in an embodiment, the size of the two-dimensional kernel is 2T+1 by 2T+1. The overall shape of the filter is determined by the elasticity constant, Y. For example, high values for the elasticity constant can represent greater elasticity, such that a change in gamma at one point or pixel will have a relatively strong effect on a relatively small area around the point. Conversely, low values for the elasticity constant can represent lower elasticity, such that a change in gamma will have a relatively weak effect over a larger area. The filter is constructed to reflect the effects of the relative elasticity of the model. If γ>1, then “light” areas are enhanced. If γ<1, then “dark” areas are enhanced. If γ=1, then no enhancement takes place. The further γ differs from 1, the greater the enhancement effect.
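The median-filter step of the elastic sheet model can be sketched as follows (edge clamping and the function name are assumptions; the source specifies only the (2T+1)-by-(2T+1) kernel size):

```python
import statistics

def elastic_filter(M, T):
    """Pass an initial gamma matrix M through a (2T+1) x (2T+1) median filter,
    clamping indices at the edges, to produce the smoothed output matrix E."""
    rows, cols = len(M), len(M[0])
    E = []
    for r in range(rows):
        row = []
        for c in range(cols):
            window = [M[min(max(r + dr, 0), rows - 1)]
                       [min(max(c + dc, 0), cols - 1)]
                      for dr in range(-T, T + 1)
                      for dc in range(-T, T + 1)]
            row.append(statistics.median(window))
        E.append(row)
    return E
```

With T=1 the kernel is 3x3, so an isolated gamma spike at a single pixel is suppressed entirely; larger T values spread the smoothing influence over a wider area, consistent with higher tension.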
At reference number 1204, an initial gamma matrix, M, is generated. The initial gamma matrix is of the same dimension as the image frame and can be defaulted to a predetermined value. In an embodiment, a gamma component 906 determines a gamma function or value for each region or control point in the image frame. Gamma values can be chosen from a lookup table or calculated based upon the image data. In an embodiment, gamma values are determined based solely upon values of locations or pixels within the region. In another embodiment, gamma values are computed based at least in part upon convolution of a selected pixel and a set of proximate pixels using a convolution kernel. In a further embodiment, gamma values and/or received image data are maintained over time and used to calculate the present gamma value for a location or region. In still another embodiment, users may adjust the amount or magnitude of gamma correction via a user interface 904. The magnitude adjustment can be general and applied to all regions in the image frame, or may be specific to one or more particular regions. The initial gamma matrix can be generated based upon the gamma values generated for each of the regions in the image frame.
Based upon the elastic sheet model, a filter is generated at reference number 1206. The filter size and shape are determined based upon the elasticity, Y, and tension, T, of the model. In an embodiment, the two-dimensional kernel or filter has dimensions of 2T+1 by 2T+1. The overall shape of the filter is determined by the elasticity constant, Y.
At reference number 1208, the filter is applied to the initial gamma matrix, M, smoothing the gamma values, and generating an enhanced gamma matrix, E. The enhanced gamma matrix is applied to the image frame at 1210, minimizing the number and/or effect of artifacts in the image frame.
It will be understood that the figures and foregoing description are provided by way of example. It is contemplated that numerous other configurations of the disclosed systems, processes and devices for imaging may be created utilizing the subject matter disclosed herein. Such other modifications and variations may be made by persons skilled in the art without departing from the scope and spirit of the subject matter as defined by the appended claims.