An image sensor can be used to capture images of objects, such as printed materials, display device screens, three-dimensional (3D) objects, and so forth. In some situations, captured images can contain noise, which reduces the quality of the images.
Some implementations of the present disclosure are described with respect to the following figures.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
In the present disclosure, use of the term “a,” “an,” or “the” is intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, the terms “includes,” “including,” “comprises,” “comprising,” “have,” and “having,” when used in this disclosure, specify the presence of the stated elements but do not preclude the presence or addition of other elements.
When an image sensor such as a digital camera captures an image displayed by a display device, such as a computer monitor or digital television, that has a lower resolution than the camera, the captured image may show a noise grid pattern due to oversampling. A “noise grid pattern” can refer to a pattern of visible artifacts in an image that appear generally at multiple points of a grid or other regular pattern. The noise grid pattern is an example of noise that is included in the captured image. The noise grid pattern causes distortion of the target content in the captured image.
The noise grid pattern can be visible to a user when viewing the captured image. Moreover, when processing is to be applied to the image that includes the noise grid pattern, the quality of the output produced by the processing can suffer. An example type of processing is optical character recognition (OCR), which can be applied to an image containing text to recognize text characters in the image and convert the text into a machine representation that can be useful for other purposes. For example, the machine representation of the text can be edited using a word processing application or other type of application. If OCR is applied to an image containing a noise grid pattern, then the OCR may fail to recognize certain text characters or may produce incorrect text characters.
Although reference is made to OCR as an example of processing that can be applied to a captured image, it is noted that other types of processing can be applied in other examples, such as filtering an image, and so forth.
Traditional noise removal techniques do not adequately remove noise that is arranged in a grid or other regular pattern. In accordance with some implementations of the present disclosure, as shown in
The noise removal process further includes converting (at 104) at least a portion of the captured image into a frequency domain image. The captured image is in a first domain that is different from the frequency domain. For example, the first domain can be a spatial domain that defines points in space. Pixels of the captured image are arranged at different points in space. The space can be a two-dimensional (2D) space that has coordinates in two dimensions (e.g., X and Y dimensions). In other examples, the captured image can be an image in three-dimensional (3D) space.
In alternative examples, the first domain of the captured image can be a time domain, where pixels of the captured image are arranged along different points in time. In other examples, the first domain of the captured image can have other dimensions, or combinations of different dimensions (such as spatial dimensions and time dimensions).
The converting performed (at 104) can be a conversion of the captured image in the spatial domain or time domain to the frequency domain. In some implementations, the frequency domain can include two frequency dimensions, such that the converted image is a frequency domain image in multiple frequency dimensions.
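As an illustrative, non-limiting sketch of this conversion (the use of numpy's fast Fourier transform and the image size are assumptions for illustration, not drawn from the disclosure), a spatial domain image can be converted into a centered two-dimensional frequency domain image as follows:

```python
import numpy as np

# Convert a grayscale spatial-domain image into a two-dimensional
# frequency domain image. np.fft.fftshift moves the zero-frequency
# (DC) component to the center of the array, matching the centered
# spectrum convention used in this description.
def to_frequency_domain(image):
    spectrum = np.fft.fft2(image)      # complex 2D DFT of the image
    return np.fft.fftshift(spectrum)   # center the DC component

image = np.random.rand(64, 64)         # stand-in for a captured image
spectrum = to_frequency_domain(image)
```

After the shift, the entry at the center of `spectrum` equals the sum of all pixel intensities (the DC term), and higher frequencies lie progressively farther from the center.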
The noise removal process further identifies (at 106) a boundary position in the frequency domain image, where the boundary position indicates a boundary between target content in the frequency domain image and noise in the frequency domain image. As used here, “target content” can refer to the content of an image that is intended to be captured, such as the content of an image displayed by a display device, or the content that represents an object that is the target of the image capture. The content of the frequency domain image thus includes both the target content and noise. A goal of the noise removal process according to some implementations of the present disclosure is to remove the noise from the target content, such that the resulting image includes just the target content or includes the target content with attenuated noise.
Details regarding the determination of the boundary position are provided further below.
Once the boundary position is identified (at 106), the boundary between the target content in the frequency domain image and noise can be derived. The noise removal process removes (at 108) a content portion in the frequency domain image outside the boundary, to produce a noise-attenuated image. A content portion that is outside the boundary is considered noise, and can be removed. A content portion inside the boundary is considered target content, and is kept. Removing a content portion can refer to completely deleting or eliminating the content portion, or attenuating the content portion to reduce its intensity or amplitude values.
A content portion “outside the boundary” can refer to the content portion having a specified positional relationship to the boundary in the frequency domain.
As explained further below, the boundary position P2 is used to define a rectangular boundary 216 that has four corners at the following respective positions: P2, 210, 212, and 214. A content portion of the image 200 inside the rectangular boundary 216 is the target content of the image 200, while a content portion of the image 200 outside the rectangular boundary 216 is considered to be noise.
In some examples, the image 200 can be captured by an image sensor. The captured image 200 is that of an image displayed by a display device. As a result, the image 200 may show a noise grid pattern. In general, a target content of an image does not include repeated objects at different frequencies. In contrast, a noise grid pattern can include repeating noise peaks at multiple frequencies. The non-repeating target content is located at lower frequencies closer to the center of the frequency domain (including the F1 and F2 dimensions), while the repeating noise content has repeating high intensity values at higher frequencies (higher values of F1 and F2).
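This distinction can be checked with a small synthetic example (the stripe pattern, gradient, and sizes below are illustrative assumptions, not from the disclosure): a repeating pattern concentrates spectral energy far from the center of the frequency domain, while slowly varying content peaks near the center.

```python
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n]
grid = np.cos(2 * np.pi * 8 * x / n)   # repeating stripes (period 8)
smooth = x / n                         # slowly varying, target-like content

def peak_offset(img):
    # Distance (in frequency bins) of the strongest non-DC spectral
    # peak from the center of the shifted spectrum.
    s = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    s[n // 2, n // 2] = 0              # ignore the DC term
    ky, kx = np.unravel_index(np.argmax(s), s.shape)
    return max(abs(ky - n // 2), abs(kx - n // 2))
```

In this example, the repeating stripe pattern peaks 8 bins from the center, while the smooth gradient peaks only 1 bin away.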
The following describes an example of how the other positions 210, 212, and 214 are defined once the boundary position P2 is identified. From the positions P2, 210, 212, and 214, the four corners of the rectangular boundary 216 can be determined.
A horizontal line 206 and a vertical line 208 both intersect P2 in the frequency domain. P2 is a first distance D1 on the left of the vertical axis 204 along an axis parallel to the horizontal axis 202, and a second distance D2 above the horizontal axis 202 along an axis parallel to the vertical axis 204.
Once D1 and D2 are determined based on the position P2, the other positions 210, 212, and 214 can be determined based on D1 and D2. The position 210 is located on the horizontal line 206 the distance D1 to the right of the vertical axis 204. The position 212 is located on the vertical line 208 the distance D2 below the horizontal axis 202. The position 214 is located at a point that is the distance D1 on the right of the vertical axis 204 and the distance D2 below the horizontal axis 202.
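In coordinates centered on the origin of the frequency domain, with P2 at (−D1, +D2), the mirroring just described can be sketched as follows (the function and variable names are illustrative):

```python
# Given boundary position P2 at (-D1, +D2) relative to the centered
# frequency axes, mirror it across the axes to obtain the remaining
# corners of the rectangular boundary 216.
def rectangle_corners(p2):
    d1, d2 = abs(p2[0]), abs(p2[1])
    return [(-d1,  d2),   # P2: left of the vertical axis, above the horizontal axis
            ( d1,  d2),   # position 210: mirrored across the vertical axis
            (-d1, -d2),   # position 212: mirrored across the horizontal axis
            ( d1, -d2)]   # position 214: mirrored across both axes

corners = rectangle_corners((-5, 3))   # example with D1 = 5, D2 = 3
```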
Although
The following describes how the position P2 is identified in the context of
The image 200 includes a collection of pixels. Each pixel in the collection of pixels has an intensity value, such as a value that represents a color of the pixel. The scan area 220 includes a subset of the pixels in the collection of pixels. P2 is a position within the scan area 220 that is determined based on the intensity value of the pixel at position P1.
In the example of
In other examples, the scan area 220 can be defined in a different way, with different distances from the horizontal axis 202, the vertical axis 204, the left edge 200-L, and the right edge 200-U. In other examples, the scan area 220 can have a different shape, such as a circular shape, an oval shape, and so forth. In addition, although the scan area 220 is depicted as being in the left, upper quadrant of the image 200, the scan area 220 can alternatively be located in other quadrants of the image 200 in other examples. Moreover, the scan area 220 can include a contiguous region in the image 200, or a collection of discrete and separate regions in the image 200.
Within the scan area 220, the noise removal process finds a peak intensity value of pixels in the scan area 220. Each pixel has an intensity value (a color intensity value). Among the intensity values of pixels in the scan area 220, the noise removal process identifies the peak intensity value as the largest of those intensity values. The position of the pixel with the peak intensity value is identified as P1.
Once the peak intensity value is identified, then a pixel having an intensity value that is a specified percentage (e.g., 10% or other percentage) of the peak intensity value is identified. The identified peak intensity value in the scan area 220 is the peak noise value. The specified percentage is a threshold applied on the peak noise value to find neighboring noise pixels around the pixel having the peak noise value.
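One way to sketch this two-step search is shown below. The rule of taking the pixel whose intensity is closest to the specified percentage of the peak is an assumption for illustration; the disclosure does not fix this exact selection rule.

```python
import numpy as np

# Within a scan area of the magnitude spectrum, locate P1 (the pixel
# with the peak intensity) and then P2 (the pixel whose intensity is
# closest to a specified percentage of that peak -- 10% by default).
def find_p1_p2(scan_area, percentage=0.10):
    p1 = np.unravel_index(np.argmax(scan_area), scan_area.shape)
    peak = scan_area[p1]
    target = percentage * peak
    p2 = np.unravel_index(np.argmin(np.abs(scan_area - target)),
                          scan_area.shape)
    return p1, p2

# Toy 3x3 scan area: the peak is 100.0, and 10.0 is exactly 10% of it.
area = np.array([[1.0, 2.0, 100.0],
                 [4.0, 10.0, 6.0],
                 [7.0, 8.0, 9.0]])
p1, p2 = find_p1_p2(area)
```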
In the example of
The boundary 216 derived from the position P2 allows for removal of as much noise as possible, while keeping the non-noise pixels (including those that are part of the target content of the image 200).
The image processing process performs tasks 304-316 for each image of the multiple images for respective color channels divided from the captured image. For each image of a respective color channel, the image processing process applies (at 304) a multi-dimensional Discrete Fourier Transform (DFT) on the image. The multi-dimensional DFT converts the image that is initially in one domain (spatial domain or time domain) into a multi-dimensional frequency domain. In some examples, the multi-dimensional DFT is a two-dimensional DFT, such that the converted image in the frequency domain has two frequency dimensions.
The following represents a one-dimensional DFT that is applied on an image represented as {x_n}, where n = 0 to N−1 (N ≥ 2 and representing the number of pixels in the image for the respective color channel in the first domain, such as the spatial domain or time domain):

X_k = Σ_{n=0}^{N−1} x_n·e^{−i2πkn/N},   (Eq. 1)

where x_n represents the intensity value of the pixel at index n, and k is the current frequency being considered. X_k is the intensity value (corresponding to amplitude and phase, for example) of the pixel at frequency k, and is a complex number. Each frequency k is part of {0, . . . , N−1}, which represents the set of frequency values. Using Euler's formula to simplify the function of Eq. 1, the following is derived:

X_k = Σ_{n=0}^{N−1} x_n·[cos(2πkn/N) − i·sin(2πkn/N)].   (Eq. 2)
Because an image includes pixel values in a two-dimensional matrix, the DFT is applied in two dimensions. The DFT applied in two dimensions is expressed as:

X_{k1,k2} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} x_{mn}·e^{−i2π(k1·n/N + k2·m/M)},   (Eq. 4)

where M represents the number of pixels of the image in the vertical axis, N represents the number of pixels of the image in the horizontal axis, m and n are indexes of a pixel being considered (m = 0, . . . , M−1 and n = 0, . . . , N−1), and x_{mn} is the intensity value of the pixel at a position defined by m and n. In Eq. 4, k1 is the horizontal frequency dimension, while k2 is the vertical frequency dimension. Accordingly, X_{k1,k2} is the complex intensity value at frequency position (k1, k2) in the frequency domain.
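As a numerical sanity check (numpy is used here as an assumed DFT implementation, not one named by the disclosure), the explicit double sum of the two-dimensional DFT agrees with a library FFT. With k1 as the horizontal and k2 as the vertical frequency, numpy's `fft2` stores this value at row k2, column k1:

```python
import numpy as np

# Explicit two-dimensional DFT: sum over pixel indexes m (vertical)
# and n (horizontal) of x_mn * exp(-i*2*pi*(k1*n/N + k2*m/M)).
def dft2_explicit(x, k1, k2):
    M, N = x.shape
    m = np.arange(M)[:, None]   # vertical pixel index
    n = np.arange(N)[None, :]   # horizontal pixel index
    return np.sum(x * np.exp(-2j * np.pi * (k1 * n / N + k2 * m / M)))

rng = np.random.default_rng(0)
x = rng.random((4, 5))          # small M x N test image
X_ref = np.fft.fft2(x)          # library 2D DFT for comparison
```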
Once the multi-dimensional DFT is applied (at 304) on the image for the respective color channel, an image in the frequency domain is obtained. The image in the frequency domain is also referred to as a spectrum image. In the spectrum image, the image processing process determines (at 306) a scan area, such as the scan area 220 shown in
The image processing process then identifies (at 308) a first position of the peak intensity value in the scan area. This first position is the position P1 in
Once the first position of the peak intensity value is identified, the image processing process identifies (at 310) a second position of a pixel that has an intensity value that is a specified percentage of the peak intensity value. The identified second position is position P2 in
The image processing process determines (at 312) a boundary (e.g., 216 in
The image processing process further applies (at 316) an inverse DFT on the noise-attenuated spectrum image to produce an image in the original domain (spatial domain or time domain). The image in the original domain produced from the noise-attenuated spectrum image is a noise-attenuated image in the original domain.
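The removal of content outside the boundary and the inverse transform (at 316) can be sketched end to end as follows. The band-limited test signal and the assumption that the boundary half-widths D1 and D2 are already known are illustrative choices, not requirements of the disclosure.

```python
import numpy as np

# Zero the spectrum outside the rectangular boundary around the center,
# then apply the inverse 2D DFT to return to the original domain.
def remove_outside_boundary(image, d1, d2):
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros((h, w))
    mask[cy - d2:cy + d2 + 1, cx - d1:cx + d1 + 1] = 1.0  # keep inside
    # Inverse DFT; the imaginary part is numerical residue for real input.
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

n = 64
y, x = np.mgrid[0:n, 0:n]
# Band-limited "target content" plus a high-frequency grid pattern.
target = (0.5 + 0.25 * np.cos(2 * np.pi * 2 * x / n)
              + 0.25 * np.cos(2 * np.pi * 2 * y / n))
noisy = target + 0.5 * np.cos(2 * np.pi * 16 * x / n)
restored = remove_outside_boundary(noisy, 8, 8)
```

Because the grid pattern lies at frequency 16, outside the ±8 boundary, it is removed entirely, while the low-frequency target content inside the boundary is preserved.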
The inverse DFT can be performed according to the following:

x_{mn} = (1/(M·N)) Σ_{k1=0}^{N−1} Σ_{k2=0}^{M−1} X_{k1,k2}·e^{i2π(k1·n/N + k2·m/M)},

where the summations are over the frequency dimensions k1 and k2.
Once each of the images in the different color channels has been processed according to
The electronic device 402 includes a noise removal engine 408, which receives a captured image from the image sensor 404, and applies a noise removal process, such as those depicted in
An “engine” can refer to a hardware processing circuit, which can include any or some combination of the following: a microprocessor, a core of a multi-core microprocessor, a microcontroller, a programmable gate array, a programmable integrated circuit device, or any other hardware processing circuit. In other examples, the noise removal engine 408 can be implemented as a combination of a hardware processing circuit and machine-readable instructions executable on the hardware processing circuit.
The electronic device 402 can also include an image processing engine 410, which is to apply image processing (e.g., OCR, filtering, etc.) to the noise-attenuated image produced by the noise removal engine 408. For example, the electronic device 402 can receive a user request to perform an OCR on the captured image. In response to the request, the image processing engine 410 applies the OCR on the noise-attenuated image from the noise removal engine 408 to produce a user-requested output (e.g., a document containing text in the image captured by the image sensor 404).
The tasks that can be performed by the hardware processor 502 include an image receiving task 504 to receive an image captured by an image sensor. The tasks further include an image converting task 506 to convert at least a portion of the image into a frequency domain image. The tasks further include a boundary position identifying task 508 to identify a position in the frequency domain image, the position indicating a boundary between target content in the frequency domain image and noise in the frequency domain image. The tasks additionally include a noise content removing task 510 to remove content in the frequency domain image outside the boundary, to produce a noise-attenuated image.
The machine-readable instructions include image converting instructions 606 to convert an image produced from image data captured by an image sensor into a frequency domain image. The machine-readable instructions further include peak intensity value determining instructions 608 to determine, in a scan area that is located a distance from a center of the frequency domain image, a peak intensity value. The machine-readable instructions additionally include boundary identifying instructions 610 to identify a boundary based on the peak intensity value. The machine-readable instructions also include noise content removing instructions 612 to remove content in the frequency domain image outside the boundary, to produce a noise-attenuated image.
The process applies (at 706) a multi-dimensional DFT on the respective image portion to produce a frequency domain image comprising a plurality of frequency dimensions, and identifies (at 708) a position in the frequency domain image, the position indicating a boundary between target content in the frequency domain image and noise in the frequency domain image. The process further includes removing (at 710) content in the frequency domain image outside the boundary, to produce a noise-attenuated image.
The storage medium 604 of
In the foregoing description, numerous details are set forth to provide an understanding of the subject disclosed herein. However, implementations may be practiced without some of these details. Other implementations may include modifications and variations from the details discussed above. It is intended that the appended claims cover such modifications and variations.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2017/095258 | 7/31/2017 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2019/023875 | 2/7/2019 | WO | A
Number | Date | Country
---|---|---
20210158485 A1 | May 2021 | US