The example embodiments relate generally to image processing, and more specifically to techniques for edge enhancement.
Some image processing systems use edge-enhancement techniques to improve the apparent sharpness of images. For example, conventional edge-enhancement techniques may include unsharp masking or polynomial-based high-pass filtering. Such techniques are often employed in a conventional image signal processing pipeline after an input image has been processed by a noise reduction block. However, conventional edge-enhancement techniques can add unwanted anomalies to the resulting edge-enhanced image. For example, unwanted overshoot, ringing, and haloing may be introduced near detected edges.
SUMMARY
This Summary is provided to introduce in a simplified form a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Aspects of the present disclosure are directed to methods and apparatus for generating an overshoot-corrected set of edges. In one example, a method for image processing is disclosed. The example method includes receiving an input image, and generating a coarse set of edge pixels based on the received input image. The method further includes, for each given pixel in the coarse set of edge pixels, identifying a first pixel of the input image that corresponds to the given pixel in the coarse set of edge pixels, and determining a modified edge pixel for the given pixel based at least in part on differences between the first pixel and one or more pixels of the input image that are located within a vicinity of the first pixel. The method further includes generating an edge-enhanced version of the input image based at least in part on the modified edge pixels.
In another example, an image processing system is disclosed. The image processing system includes one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the image processing system to receive an input image, and generate a coarse set of edge pixels based on the received input image. Execution of the instructions further causes the image processing system to, for each given pixel in the coarse set of edge pixels, identify a first pixel of the input image that corresponds to the given pixel in the coarse set of edge pixels, and determine a modified edge pixel for the given pixel based at least in part on differences between the first pixel and one or more pixels of the input image that are located within a vicinity of the first pixel. Execution of the instructions further causes the image processing system to generate an edge-enhanced version of the input image based at least in part on the modified edge pixels.
In another example, a non-transitory computer-readable storage medium is disclosed, storing instructions that, when executed by one or more processors of an image processor, cause the image processor to perform a number of operations including receiving an input image, and generating a coarse set of edge pixels based on the received input image. Execution of the instructions causes the image processor to perform operations further including, for each given pixel in the coarse set of edge pixels, identifying a first pixel of the input image that corresponds to the given pixel in the coarse set of edge pixels, and determining a modified edge pixel for the given pixel based at least in part on differences between the first pixel and one or more pixels of the input image that are located within a vicinity of the first pixel. Execution of the instructions causes the image processor to perform operations further including generating an edge-enhanced version of the input image based at least in part on the modified edge pixels.
In another example, an image processing system is disclosed. The image processing system includes means for receiving an input image, and means for generating a coarse set of edge pixels based on the received input image. For each given pixel in the coarse set of edge pixels, the image processing system further includes means for identifying a first pixel of the input image that corresponds to the given pixel in the coarse set of edge pixels, and means for determining a modified edge pixel for the given pixel based at least in part on differences between the first pixel and one or more pixels of the input image that are located within a vicinity of the first pixel. The image processing system further includes means for generating an edge-enhanced version of the input image based at least in part on the modified edge pixels.
The example embodiments are illustrated by way of example and are not intended to be limited by the figures of the accompanying drawings, where:
Like reference numerals refer to corresponding parts throughout the drawing figures.
In the following description, numerous specific details are set forth such as examples of specific components, circuits, and processes to provide a thorough understanding of the present disclosure. The term “coupled” as used herein means connected directly to or connected through one or more intervening components or circuits. Also, in the following description and for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the example embodiments. However, it will be apparent to one skilled in the art that these specific details may not be required to practice the example embodiments. In other instances, well-known circuits and devices are shown in block diagram form to avoid obscuring the present disclosure.

Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the relevant art to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present application, discussions utilizing the terms such as “accessing,” “receiving,” “sending,” “using,” “selecting,” “determining,” “normalizing,” “multiplying,” “averaging,” “monitoring,” “comparing,” “applying,” “updating,” “measuring,” “deriving” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the example embodiments. Also, the example image processing devices may include components other than those shown, including well-known components such as one or more processors, memory and the like.
The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium comprising instructions that, when executed, performs one or more of the methods described above. The non-transitory processor-readable data storage medium may form part of a computer program product, which may include packaging materials.
The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or another processor.
The various illustrative logical blocks, modules, circuits and instructions described in connection with the embodiments disclosed herein may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. The term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The example embodiments are not to be construed as limited to specific examples described herein but rather to include within their scopes all embodiments defined by the appended claims.
As mentioned above, conventional edge-enhancement techniques may result in unnatural-looking images, due to the introduction of unwanted overshoot, ringing, and haloing. Overshoot may result in portions of an image adjacent to an edge being unnaturally dark or light. For example, relatively light-colored portions of an image near an edge with a relatively dark region may be unnaturally light. Ringing and haloing may be caused by Gibbs oscillation near an edge, resulting in ghosting or echo-like artifacts.
As discussed above, conventional edge-enhancement techniques may result in haloing and ringing near edges.
In accordance with the example embodiments, an image processing system may reduce overshoot by reprocessing a coarse set of edges, and applying the reprocessed set of edges to an input image to generate an overshoot-reduced edge enhanced image. The example embodiments may reprocess the coarse set of edges by performing a series of operations for each given pixel in the coarse set of edges, including generating and populating a weighting window centered on the given pixel, and determining a modified edge pixel based on the coarse set of edges and the weighting window. A set of modified edge pixels may be generated through these operations, which may comprise an overshoot-cancelled set of edges of the input image. Application of this overshoot-canceled set of edges to the input image may result in an edge-enhanced image with reduced haloing and ringing as compared to conventional techniques.
The example embodiments may reduce overshoot by reprocessing a coarse set of edges of an input image, to maintain contrast while suppressing overshoot and oscillation. For example, a modified edge pixel may be determined for each given pixel in the coarse set of edges. In some embodiments, the given pixels may comprise each pixel in the coarse set of edges, while in some other embodiments, the given pixels may comprise a subset of the coarse set of edges, such as a subset which excludes pixels within a threshold number of rows or columns from a border of the coarse set of edges. The modified edge pixel for each given pixel may be determined based at least in part on the input image and on a weighting window centered on the given pixel. For example, each modified edge pixel may be determined based at least in part on a summation of products of pixels of the weighting window with corresponding pixels of the coarse set of edges. In some embodiments, each modified edge pixel may be normalized based on a summation of the values of the pixels of the weighting window.
Each pixel in the weighting window may be determined and populated based at least in part on an absolute difference in pixel intensity between an input image pixel corresponding to the populated pixel and an input image pixel corresponding to the given pixel. In some implementations, each populated pixel in the weighting window may have a value based on a distance factor and on an intensity factor. The distance factor may be based at least in part on a distance between the populated pixel and the center pixel of the weighting window (i.e., the pixel of the weighting window corresponding to the given pixel). The intensity factor may be based on a comparison of the absolute difference to a threshold value. This threshold value may be proportional to a pixel intensity of an input image pixel corresponding to the center pixel of the weighting window. The intensity factor may have a maximum value if the absolute difference is less than the threshold value, and may have a minimum value if the absolute difference is more than an integer multiple of the threshold value. In some examples the minimum value is zero, and the integer multiple of the threshold value may be twice the threshold value. For some implementations, the intensity factor may be interpolated between the maximum and the minimum values based on a comparison between the absolute difference and the threshold value. For example, if the absolute difference is greater than the threshold value, but less than the integer multiple of the threshold value, the intensity factor may be interpolated between the maximum value and the minimum value.
For some implementations, the weighting window may be a square weighting window having an odd number of pixels on each side. For example, the weighting window may be a square 5×5 window. However, in other implementations, the weighting window may have other dimensions.
In an example implementation, each modified edge pixel may be calculated using the following equation:
edgeout = ( Σ(i,j)∈W edgei,j × weighti,j ) / ( Σ(i,j)∈W weighti,j )

where edgeout is the modified edge pixel, W is the weighting window comprising the weights weighti,j, and edgei,j is the coarse edge pixel corresponding to the location (i,j) in the weighting window W.
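The normalized weighted sum above can be sketched in code as follows. This is a minimal sketch: the function and array names are illustrative, and the zero-total-weight fallback is an added assumption not stated in the description.

```python
import numpy as np

def modified_edge_pixel(coarse_window: np.ndarray, weight_window: np.ndarray) -> float:
    """Normalized weighted sum of coarse edge pixels over a weighting window W.

    coarse_window: patch of the coarse edge map centered on the given pixel
    (edge_{i,j} values).
    weight_window: the weights weight_{i,j}, same shape as coarse_window.
    """
    total_weight = weight_window.sum()
    if total_weight == 0:
        # Assumption: fall back to zero when all weights vanish.
        return 0.0
    return float((coarse_window * weight_window).sum() / total_weight)
```

Because the result is normalized by the sum of the weights, a flat coarse edge patch is returned unchanged regardless of the weight values.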
Each pixel in the weighting window—for example, weighti,j at pixel (i,j) of the window—has a value which is based at least in part on an absolute difference in pixel intensity between an input image pixel corresponding to the pixel (i,j) and an input image pixel corresponding to the center pixel of the window. For example, this absolute difference may be given by:

diffi,j = |leveli,j − level2,2|
where leveli,j refers to the pixel intensity level at pixel (i,j) and level2,2 refers to the pixel intensity level of the pixel corresponding to the center pixel of the weighting window. For example, if pixel (i,j) is the pixel 530B of edge window 500B, then leveli,j may be the pixel intensity of pixel 530C, and level2,2 may be the pixel intensity of the center pixel 510C of pixel intensity level window 500C. Note that the values for the pixels of the weighting window may be based on pixel intensity and distance factors, and not on the coarse set of edges.
The value for each pixel in the weighting window may be further based on a threshold value which may be proportional to a pixel intensity level of the pixel corresponding to the center pixel of the weighting window.
The value for each pixel in the weighting window may further be based on a measure of the distance of each pixel from the center pixel of the window. More particularly, pixels in the weighting window which are nearer the center pixel may have a larger weight as compared to pixels which are farther from the center pixel of the weighting window. In an example implementation employing 5×5 weighting windows, this measure of distance may include the use of a distance weighting factor Di,j.
In accordance with some example implementations, the weighting value for each pixel in the weighting window may include a distance factor, such as Di,j above, and an intensity factor, such as the absolute difference and the threshold described above. For example, each weighting value may be given by the product of the distance factor Di,j and an intensity factor based on a threshold value th and a positive integer value k, where th is proportional to the center pixel intensity, as discussed above. In some embodiments k may be at least 2. In other words, the intensity factor may have a maximum value if the absolute difference is less than the threshold th, may have a minimum value if the absolute difference exceeds the integer multiple k×th of the threshold, and may be interpolated between the maximum value and the minimum value if the absolute difference is between the threshold and the integer multiple of the threshold.
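The weight computation can be sketched as follows. Several elements here are hypothetical stand-ins: the specific distance factors Di,j are not reproduced in this excerpt, so the reciprocal-distance falloff below is an assumption, as are the parameter choices th_factor = 0.1, k = 2, a maximum intensity factor of 1, and a minimum of 0. Only the thresholded interpolation of the intensity factor follows the description above.

```python
import numpy as np

def weight_window(intensity_patch: np.ndarray, th_factor: float = 0.1, k: int = 2) -> np.ndarray:
    """Populate a 5x5 weighting window from an input-image intensity patch.

    intensity_patch: 5x5 patch of input-image pixel intensities centered
    on the pixel being processed (center at index (2, 2)).
    th_factor, k: illustrative parameters; th is proportional to the
    center pixel intensity, and k is a positive integer (at least 2).
    """
    center = intensity_patch[2, 2]
    th = th_factor * center                   # threshold proportional to center intensity
    diff = np.abs(intensity_patch - center)   # diff_{i,j}

    # Intensity factor: 1 below th, 0 above k*th, linear in between.
    if th > 0:
        intensity = np.clip((k * th - diff) / ((k - 1) * th), 0.0, 1.0)
    else:
        intensity = (diff == 0).astype(float)

    # Hypothetical distance factor: weights fall off with Chebyshev
    # distance from the window center (the actual D_{i,j} values are
    # not specified in this excerpt).
    yy, xx = np.mgrid[0:5, 0:5]
    dist = np.maximum(np.abs(yy - 2), np.abs(xx - 2))
    distance = 1.0 / (1.0 + dist)

    return distance * intensity
```

With these choices, a pixel whose intensity differs from the center by more than k×th contributes nothing to the modified edge pixel, which is what suppresses overshoot across strong edges.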
After the weighting window has been populated, for example using the weighti,j calculation described above, the weighting values may be used to determine each edgeout as discussed above, to generate the modified edge pixels for each given pixel in the set of coarse edges. These modified edge pixels may then be applied to the reduced-noise version of the input image to generate an edge-enhanced version of the input image with reduced overshoot.
Similar results can be seen by comparing images that are edge-enhanced using conventional techniques to images that are edge-enhanced using the overshoot-reduced techniques of the present embodiments.
Memory 930 may include a non-transitory computer-readable medium (e.g., one or more nonvolatile memory elements, such as EPROM, EEPROM, Flash memory, a hard drive, and so on) that may store at least the following instructions:
coarse edge generation instructions 931 to process an input image and generate a coarse set of edges based on the input image (e.g., as described for one or more operations of
modified edge determination instructions 932 to determine modified edge pixel values based on differences between a first pixel corresponding to the given pixel in the coarse set of edge pixels, and on one or more pixels of the input image that are within a vicinity of the first pixel (e.g., as described for one or more operations of
edge-enhanced image generation instructions 933 to generate an edge-enhanced version of the input image based at least in part on the modified edge pixels (e.g., as described for one or more operations of
The instructions, when executed by processor 920, cause the device 900 to perform the corresponding functions. The non-transitory computer-readable medium of memory 930 thus includes instructions for performing all or a portion of the operations depicted in
Processor 920 may be one or more suitable processors capable of executing scripts or instructions of one or more software programs stored in device 900 (e.g., within memory 930). For example, processor 920 may execute coarse edge generation instructions 931 to process an input image and generate a coarse set of edges based on the input image. The processor 920 may also execute modified edge determination instructions 932 to determine modified edge pixel values based on differences between a first pixel corresponding to the given pixel in the coarse set of edge pixels, and on one or more pixels of the input image that are within a vicinity of the first pixel. Processor 920 may also execute edge-enhanced image generation instructions 933 to generate an edge-enhanced version of the input image based at least in part on the modified edge pixels.
An input image may be received (1010). For example, the input image may be received by image input/output module 910 of device 900 of
For example, a first pixel of the input image may be identified, where the first pixel corresponds to the given pixel in the coarse set of edge pixels (1031). In some embodiments, the first pixel may be identified by executing modified edge determination instructions 932 of device 900 of
After performing the operations for each given pixel in the coarse set of edge pixels, an edge-enhanced version of the input image may be determined based at least in part on the modified edge pixels (1040). For example, the edge-enhanced version of the input image may be determined by edge addition module 430 of image processing system 400 of
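The overall flow of operations 1010 through 1040 can be sketched end to end as follows. This is a self-contained sketch under stated assumptions, not the disclosed implementation: the box-blur high-pass standing in for the coarse edge generator, the reciprocal-distance factor, the gain, and the parameter values th_factor = 0.1 and k = 2 are all illustrative, and border pixels are simply skipped, as some embodiments above allow.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Simple box blur; used only to build a placeholder coarse-edge high-pass."""
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1].mean()
    return out

def enhance(img: np.ndarray, th_factor: float = 0.1, k: int = 2, gain: float = 1.0) -> np.ndarray:
    """Receive image (1010), coarse edges (1020), reweight per pixel (1030), add back (1040)."""
    img = img.astype(float)
    coarse = img - box_blur(img)  # placeholder coarse set of edge pixels
    h, w = img.shape
    modified = np.zeros_like(coarse)

    # Hypothetical distance factor over a 5x5 window (Chebyshev falloff).
    yy, xx = np.mgrid[0:5, 0:5]
    dist = np.maximum(np.abs(yy - 2), np.abs(xx - 2))
    distance = 1.0 / (1.0 + dist)

    # Skip pixels within 2 rows/columns of the border (5x5 windows).
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            patch = img[y - 2:y + 3, x - 2:x + 3]
            th = th_factor * patch[2, 2]          # threshold proportional to center
            diff = np.abs(patch - patch[2, 2])    # diff_{i,j}
            if th > 0:
                # Intensity factor: 1 below th, 0 above k*th, linear between (k >= 2).
                inten = np.clip((k * th - diff) / ((k - 1) * th), 0.0, 1.0)
            else:
                inten = (diff == 0).astype(float)
            weights = distance * inten
            total = weights.sum()
            if total > 0:
                edge_patch = coarse[y - 2:y + 3, x - 2:x + 3]
                modified[y, x] = (edge_patch * weights).sum() / total

    return img + gain * modified  # apply modified edges to the input image
```

On a flat image the coarse edge set is zero everywhere, so the sketch returns the input unchanged, which is a quick sanity check on the reweighting step.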
Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
The methods, sequences or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
In the foregoing specification, the example embodiments have been described with reference to specific example embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.