Method for determining filter coefficients

Information

  • Patent Grant
  • 10762610
  • Patent Number
    10,762,610
  • Date Filed
    Monday, March 25, 2019
  • Date Issued
    Tuesday, September 1, 2020
  • CPC
  • Field of Search
    • CPC
    • G06T5/20
    • G06T5/00
    • G06T3/40
    • G06T3/4007
    • G06T3/4015
    • G06T3/4038
    • G06T7/44
    • G06T5/005
    • G06T2207/20008
    • G06T2207/20024
    • G06T2207/20172
    • G06K9/00
    • H04N9/0455
    • H04N9/3114
    • H04N9/3117
    • H04N9/646
    • H04N19/117
    • H04N21/23418
  • International Classifications
    • G06T5/20
    • Term Extension
      45
Abstract
Disclosed is a method for determining filter coefficients. The method includes: obtaining the coefficients of a target filter and calculating the response of the target filter; computing according to collected data and/or predetermined data in accordance with a first data pattern so as to have the response of a first filter approximate to the response of the target filter and thereby determine the coefficients of the first filter; and computing according to the collected data and/or the predetermined data in accordance with a second data pattern so as to have the response of a second filter approximate to the response of the target filter and thereby determine the coefficients of the second filter. Accordingly, the difference between the responses of the first filter and the second filter is insignificant and results in less negative influence; and the first and the second filters can replace the target filter to reduce cost.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to the design of a filter, especially to a method for determining the coefficients of a filter.


2. Description of Related Art

A filter is usually used for signal processing. In consideration of the cost, the design of a filter aims at decreasing the number of signals to be processed by the filter so as to reduce the complexity of calculation, or a front-end circuit lowers the sampling rate for signals so as to decrease the amount of data inputted into a filter. For instance, a color filter array (CFA) of an image sensor allows each pixel sensing position of a photosensitive component of the image sensor to record the intensity of a certain color so as to reduce the production cost; however, since each pixel sensing position only records the information of one color, the other color information of the same pixel sensing position is lost and needs to be recovered by a filter according to the color information recorded by the neighboring pixel sensing positions; more specifically, when a target pixel sensing position associated with a target pixel only records green color information, the filter executes interpolation according to the red color information recorded by the pixel sensing positions close to the target pixel sensing position to generate the red color information of the target pixel, and executes interpolation according to the blue color information recorded by the pixel sensing positions close to the target pixel sensing position to generate the blue color information of the target pixel.


Please refer to FIGS. 1-3. FIG. 1 shows an exemplary image pixel array of a photosensitive component recording red color information (R), green color information (G), and blue color information (B); FIG. 2 shows the red color information recorded by the image pixel array; and FIG. 3 shows the blue color information recorded by the image pixel array. FIGS. 1-3 merely show partial information in order to prevent these figures from being too complicated. As shown in FIG. 2/FIG. 3, two kinds of filters are used to recover the red/blue color information of the pixel T1/T3 and the pixel T2/T4 respectively; more specifically, one filter executes interpolation according to the red/blue color information of the pixel above the pixel T1/T3 and the pixel below the pixel T1/T3 and thereby generates the red/blue color information of the pixel T1/T3, and the other one executes interpolation according to the red/blue color information of the pixels located at the upper left, the upper right, the bottom left, and the bottom right in relation to the position of the pixel T2/T4 and thereby generates the red/blue color information of the pixel T2/T4. In the current art, the filter responses of the two kinds of filters are usually different, which often leads to image distortion (e.g., the zipper effect).
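
As a concrete illustration (not taken from the patent itself) of the two related-art interpolation patterns just described, the following minimal Python sketch recovers the missing red value of a pixel either from its two vertical neighbors (the situation of the pixel T1 in FIG. 2) or from its four diagonal neighbors (the situation of the pixel T2 in FIG. 2); the array layout and function names are illustrative assumptions.

```python
import numpy as np

def red_from_vertical(red_plane, y, x):
    """Pixel T1 case: red is recorded directly above and below (y-1, y+1),
    so average the two vertical neighbors."""
    return 0.5 * (red_plane[y - 1, x] + red_plane[y + 1, x])

def red_from_diagonals(red_plane, y, x):
    """Pixel T2 case: red is recorded at the four diagonal neighbors,
    so average the four corner values."""
    return 0.25 * (red_plane[y - 1, x - 1] + red_plane[y - 1, x + 1] +
                   red_plane[y + 1, x - 1] + red_plane[y + 1, x + 1])

if __name__ == "__main__":
    red_plane = np.arange(25, dtype=float).reshape(5, 5)   # toy red channel
    print(red_from_vertical(red_plane, 2, 2))               # uses (1, 2) and (3, 2)
    print(red_from_diagonals(red_plane, 2, 2))              # uses the four corners around (2, 2)
```

Because the two estimators have different frequency responses, neighboring pixels reconstructed by different estimators can disagree, which is the source of the distortion (e.g., the zipper effect) mentioned above.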


SUMMARY OF THE INVENTION

An object of the present invention is to provide a method for determining the coefficients of a filter so as to make an improvement over the prior art.


An embodiment of the method of the present invention includes the following steps: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data; computing according to collected data and/or predetermined data in accordance with a first data pattern so as to have a difference between a first filter response and the target filter response be less than a threshold and thereby determine coefficients of a first filter, in which the first filter response relates to a way of the first filter to process first data originated from the original data; and computing according to the collected data and/or the predetermined data in accordance with a second data pattern so as to have a difference between a second filter response and the target filter response be less than the threshold and thereby determine coefficients of a second filter, in which the second filter response relates to a way of the second filter to process second data originated from the original data. In light of the above, both the first filter response and the second filter response approximate to the target filter response so that the negative influence caused by the difference between the first filter response and the second filter response is reduced.


Another embodiment of the method of the present invention includes the following steps: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data; and computing according to collected data and/or predetermined data in accordance with the type (e.g., the pattern for filtering signals) of a front-end filter and thereby having a difference between a designed filter response and the target filter response be less than a threshold so as to determine coefficients of a designed filter, in which the designed filter response relates to a way of the designed filter to process filtered data originated from the front-end filter processing the original data. In light of the above, the designed filter response approximates to the target filter response and the designed filter is simpler than the target filter so that the designed filter can replace the target filter to reduce the production cost.


These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiments that are illustrated in the various figures and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an exemplary image pixel array recording color information.



FIG. 2 shows the red color information recorded by the image pixel array of FIG. 1 and the way to use the red color information for interpolation.



FIG. 3 shows the blue color information recorded by the image pixel array of FIG. 1 and the way to use the blue color information for interpolation.



FIG. 4 shows an embodiment of the method of the present invention for determining the coefficients of a filter.



FIG. 5 shows red color signals captured by an image sensor without being processed by a color filter array.



FIG. 6 shows another embodiment of the method of the present invention for determining the coefficients of a filter.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

With the development of modern data science and machine learning, a notable theory concerns approximating the regularity of data relationships with a trainable computation model. The method of the present invention for determining the coefficients of a filter is based on the research achievements of this theory. The theory can be found in the following literature: Professor Lin, Xuan-Tien, “Machine Learning Foundations”, lecture handout, Department of Computer Science & Information Engineering, National Taiwan University (source of literature: page 21/27, “https://www.csie.ntu.edu.tw/˜htlin/mooc/doc/01_handout.pdf”).



FIG. 4 shows an embodiment of the method of the present invention for determining the coefficients of a filter. This embodiment can be carried out by a general-purpose computer or a dedicated device, and includes the following steps:

  • Step S410: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data. The way of the target filter to process the original data can be understood as the pattern of the target filter for filtering the original data. In an exemplary implementation, the original data relates to signals of a specific color (e.g., red) generated by an image sensor without using a color filter array (CFA) circuit; for instance, the signals are the red information data of nine pixels








$$
\begin{bmatrix}
R_{11} & R_{12} & R_{13}\\
R_{21} & R_{22} & R_{23}\\
R_{31} & R_{32} & R_{33}
\end{bmatrix}
$$






centered at the position of the pixel R22 in FIG. 5; note that the position of the pixel R22 corresponds to the position of the pixel T1 in FIG. 2. To keep the explanation simple, suppose the original data include a one-dimensional image matrix qT=[R0 R1 R2 R3 R4 R5 R6 R7 R8] and the coefficients of the target filter are fT=[f0 f1 f2 f3 f4 f5 f6 f7 f8]; the target filter response can then be expressed as fTq, in which the superscript “T” stands for the transpose of a matrix. It should be noted that the original data may include other data such as signals of other colors (e.g., green and blue) generated by the same image sensor without using the CFA circuit.

  • Step S420: computing according to collected data and/or predetermined data (e.g., given data such as the below-mentioned diagonal matrix and orthonormal basis b0, . . . , bm-1) in accordance with a first data pattern (e.g., the distribution pattern








$$
\begin{bmatrix}
G_{11} & R_{12} & G_{13}\\
B_{21} & G_{22} & B_{23}\\
G_{31} & R_{32} & G_{33}
\end{bmatrix}
$$






of pixels that are taken into consideration by a filter for generating the red information of the pixel T1 of FIG. 2, in which G22 is associated with the position of the pixel T1 of FIG. 2 while the others are associated with the positions of the eight pixels around T1 respectively) so as to have a difference between a first filter response and the target filter response be less than a threshold and thereby determine the coefficients of a first filter, in which the first filter response relates to a way of the first filter to process first data (e.g., the data generated by the aforementioned CFA circuit processing the original data, such as the data of the nine pixels centered at the position of the pixel T1 of FIG. 2) originated from the original data and the threshold can be determined according to the demand for implementation. In an exemplary implementation, if the collected data include a one-dimensional image array pT=[G0 R1 G2 R3 G4 R5 G6 R7 G8] (e.g., the values of the nine pixels of the first row of the array in FIG. 1) and the coefficients of the first filter are gT=[g0 g1 g2 g3 g4 g5 g6 g7 g8], the first filter response gTp should approximate to the target filter response fTq (i.e., gTp≈fTq), and the effect of the coefficients gT of the first filter on the data of colors other than red (i.e., G0, G2, G4, G6, G8) should be zero/negligible, which can be expressed with the following equations:






$$
Cg=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} g_0\\ \vdots\\ g_8 \end{bmatrix}
=
\begin{bmatrix} g_0\\ g_2\\ g_4\\ g_6\\ g_8 \end{bmatrix}
=
\begin{bmatrix} 0\\ 0\\ 0\\ 0\\ 0 \end{bmatrix}
= d
$$









    • (the above equation is suitable for the target filter being any kind of filter)









$$
Cg=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}
\begin{bmatrix} g_0\\ \vdots\\ g_8 \end{bmatrix}
=
\begin{bmatrix} g_0\\ g_2\\ g_4\\ g_6\\ g_8\\ g_0+g_1+\cdots+g_8 \end{bmatrix}
=
\begin{bmatrix} 0\\ 0\\ 0\\ 0\\ 0\\ 1 \end{bmatrix}
= d
$$









    • (the above equation is suitable for the target filter being a low pass filter)









$$
Cg=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}
\begin{bmatrix} g_0\\ \vdots\\ g_8 \end{bmatrix}
=
\begin{bmatrix} g_0\\ g_2\\ g_4\\ g_6\\ g_8\\ g_0+g_1+\cdots+g_8 \end{bmatrix}
=
\begin{bmatrix} 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{bmatrix}
= d
$$









    • (the above equation is suitable for the target filter being a high pass filter)

    • Accordingly, Step S420 is used for calculating the coefficients gT of the first filter when the difference between gTp and fTq is minimum or the difference achieves (e.g., is equal to or less than) the aforementioned threshold. It should be noted that in the above equations, the matrix "C" and the matrix "d" are determined according to a data pattern (i.e., the pattern of the data to be filtered), while the matrix "C" and the matrix "d" in the embodiment of FIG. 6 are determined according to the type of a front-end filter; more specifically, a different data pattern results in a different matrix "C" and a different matrix "d", so that the effect of the filter coefficients on the data of a non-targeted color is zero/negligible (a minimal construction sketch of "C" and "d" is given after this list). It should also be noted that when the collected data are obtained by sampling a wide variety of data, the collected data may adequately reflect the regularity of the complete data and the difference between gTp and fTq may be minimized. In another exemplary implementation, regarding a 5×5 RGB pixel matrix obtained without using a CFA circuit, the whole RGB image data "q" (including twenty-five red pixel data R0˜R24, twenty-five green pixel data G0˜G24, and twenty-five blue pixel data B0˜B24) can be expressed as follows:

      qT=[q0 q1 . . . q74]=[R0 . . . R24 G0 . . . G24 B0 . . . B24]

    • After the whole RGB image data pass through a CFA array, if the central pixel of the RGB pixel matrix is the target pixel and the red information data R12 of the target pixel is what we want, the coefficients "f" of the target filter can be expressed as follows:











$$
f^{T}=[\,f_0\ \ f_1\ \ \cdots\ \ f_{74}\,],\qquad
f_i=\begin{cases}1 & \text{if } i=12\\ 0 & \text{otherwise}\end{cases},\qquad 0\le i\le 74
$$









    • In addition, the coefficients "g" of the first filter should fulfill the following requirements (which can be expressed in the form of Cg=d):

      gk=0 for every k∉ΩR  (1)
      g25+k=0 for every k∉ΩG  (2)
      g50+k=0 for every k∉ΩB  (3)
      Σk∈ΩR gk=1  (4)
      Σk∈ΩG g25+k=0  (5)
      Σk∈ΩB g50+k=0  (6)

    • The definition of ΩC in the above conditions is shown below:

      ΩC={k | 0≤k≤24 ∧ pixel k is a pixel of the C channel}, C∈{R,G,B}

    • ΩC stands for the indexes respectively associated with the R, G, B channels in the 5×5 matrix obtained with the CFA circuit. The aforementioned requirements 1˜3 are basic requirements for filtering the C channel without referring to the pixels of the other channels; the effects of the requirements 4˜6 are equivalent to a low pass filter "LPF" acquiring twenty-five values, a high pass filter "HPF0" acquiring twenty-five values, and a high pass filter "HPF1" acquiring twenty-five values, respectively, where "LPF", "HPF0", and "HPF1" can be expressed as follows:

      LPF=[g0 . . . g24]
      HPF0=[g25 . . . g49]
      HPF1=[g50 . . . g74]

    • Additionally, the following equation should hold:

      gTq=LPF(R)+HPF0(G)+HPF1(B)≈APF(R)

    • In the above equation, “APF” stands for an all pass filter, and “APF(R)” stands for the whole data of the R channel without being filtered by a CFA circuit.



  • Step S430: computing according to the collected data and/or the predetermined data in accordance with a second data pattern (e.g., the distribution pattern









$$
\begin{bmatrix}
R_{14} & G_{15} & R_{16}\\
G_{24} & B_{25} & G_{26}\\
R_{34} & G_{35} & R_{36}
\end{bmatrix}
$$






of pixels that are taken into consideration by a filter for generating the red information of the pixel T2 of FIG. 2, in which B25 is associated with the position of the pixel T2 of FIG. 2 while the others are associated with the positions of the eight pixels around T2 respectively) so as to have a difference between a second filter response and the target filter response be less than the threshold and thereby determine coefficients of a second filter, in which the second filter response relates to a way of the second filter to process second data (e.g., the data generated by the aforementioned CFA circuit processing the original data, such as the data of the nine pixels centered at the position of the pixel T2 of FIG. 2) originated from the original data. Accordingly, both the first filter response and the second filter response approximate to the target filter response, and thus the first filter response and the second filter response are similar, so that the negative influence (e.g., image distortion) caused by different filter responses being exerted on data originated from the same source can be prevented. Since step S430 is similar to step S420 and the difference between the two steps is the difference between the data patterns to which the two steps refer, those of ordinary skill in the art can appreciate the details and modifications of step S430 from the detailed description of step S420.
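
The following minimal Python sketch (an illustration written for this description, not code from the patent) assembles the constraint pair "C" and "d" used in the equations of Step S420 for the one-dimensional pattern pT=[G0 R1 G2 R3 G4 R5 G6 R7 G8]: one row per non-red sample forces the corresponding coefficient to zero, and a final all-ones row fixes the DC gain to 1 for a low-pass target or 0 for a high-pass target. The function name and arguments are illustrative assumptions.

```python
import numpy as np

def build_constraints(pattern, target_color="R", lowpass=True):
    """Build C and d of the linear constraint C g = d for a 1-D sampling pattern."""
    m = len(pattern)
    rows, rhs = [], []
    # One selector row per sample of a non-target color: its coefficient must be 0.
    for i, color in enumerate(pattern):
        if color != target_color:
            row = np.zeros(m)
            row[i] = 1.0
            rows.append(row)
            rhs.append(0.0)
    # Extra all-ones row fixing the DC gain (1 for a low-pass target, 0 for a high-pass target).
    rows.append(np.ones(m))
    rhs.append(1.0 if lowpass else 0.0)
    return np.vstack(rows), np.array(rhs)

if __name__ == "__main__":
    C, d = build_constraints(["G", "R", "G", "R", "G", "R", "G", "R", "G"])
    print(C.shape)   # (6, 9): five selector rows plus the DC-gain row
    print(d)         # [0. 0. 0. 0. 0. 1.]
```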


An exemplary implementation of calculating the difference between gTp and fTq is described below. First, the difference between gTp and fTq can be quantified in the form of a square error and then minimized as shown below:

Minimize(gTq−fTq)2

In the above equation, “Minimize” stands for finding the value of “g” making the result of the equation (i.e., the value of (gTq−fTq)2) minimum. The aforementioned collected data can be expressed as follows:

Q={qi|0≤i<n}

In the above equation, “Q” stands for the collected data, “n” stands for the number (e.g., positive integer) of records of data, and each record of data could be a data group such as the aforementioned pixel array qT=[R0 R1 R2 R3 R4 R5 R6 R7 R8]. Therefore, the way to find out the coefficients gT of the first filter according to the collected data “Q” can be understood as finding the result of the following equation:







$$
\text{Minimize}\quad
\frac{1}{n}\cdot\left\lVert
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} g
-
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} f
\right\rVert^2
\qquad\text{subject to}\quad Cg=d
$$





In the above equation, “subject to Cg=d” stands for the execution of the Minimize operation under the condition of Cg=d; the description of the matrix “C” and the matrix “d” is found in the preceding paragraph. The way to derive the result of the above equation is found in the following literature: “L. Vandenberghe, “Constrained least squares”, page numbers 11-14˜11-15, ECE133A (Winter 2018)” (source of literature: http://www.seas.ucla.edu/˜vandenbe/133A/lectures/cls.pdf). The aforementioned error term







$$
\frac{1}{n}\cdot\left\lVert
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} g
-
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} f
\right\rVert^2
$$






can be rewritten as follows:









$$
\frac{1}{n}\cdot\left(
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} g
-
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} f
\right)^{T}
\left(
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} g
-
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} f
\right)
=\left(g^T-f^T\right) M \left(g-f\right)
$$








The matrix M of the above equation is shown below:






$$
M=\frac{1}{n}\cdot
\begin{bmatrix} q_0 & \cdots & q_{n-1} \end{bmatrix}
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix}
=\frac{1}{n}\cdot\sum_{k=0}^{n-1} q_k\cdot q_k^T
=E[\,qq^T\,]
$$









In the above equation, "E[qqT]" stands for the expected value of "qqT" and depends on the statistical characteristics of the collected data "Q", while the size of the matrix "M" is related only to the dimension "m" of the data "q" (e.g., m=9 if qT=[R0 R1 R2 R3 R4 R5 R6 R7 R8]) and is unrelated to the aforementioned number "n" of records of data; accordingly, the minimization process for determining the filter coefficients does not consume a lot of storage space. In addition, the matrix "M" is a symmetric positive semidefinite matrix, and thus the mathematical technique of eigendecomposition ("Eigen Decomposition") can be used to process the matrix "M" as shown below:






$$
M=
\begin{bmatrix} e_0 & \cdots & e_{m-1} \end{bmatrix}
\begin{bmatrix}
\lambda_0 & 0 & \cdots & 0\\
0 & \lambda_1 & & \vdots\\
\vdots & & \ddots & 0\\
0 & \cdots & 0 & \lambda_{m-1}
\end{bmatrix}
\begin{bmatrix} e_0^T\\ \vdots\\ e_{m-1}^T \end{bmatrix}
$$







In the above equation, e0, . . . , em-1∈ℝm (i.e., the m-dimensional Euclidean space, that is to say the m-dimensional real vector space) are eigenvectors and constitute an orthonormal basis, and the corresponding eigenvalues are λ0≥ . . . ≥λm-1≥0. The theoretical foundation of applying eigendecomposition to a positive semidefinite matrix is found in the following literature: “Lecture 3 Positive Semidefinite Matrices”, Theorem 2 (source of literature: http://www.math.ucsd.edu/˜njw/Teaching/Math271C/Lecture_03.pdf); and Zico Kolter, “Linear Algebra Review and Reference”, section 3.13, Sep. 30, 2015 (source of literature: http://cs229.stanford.edu/section/cs229-linalg.pdf). In light of the above, a matrix “S” can be obtained as follows:






$$
S=
\begin{bmatrix}
\sqrt{\lambda_0} & 0 & \cdots & 0\\
0 & \sqrt{\lambda_1} & & \vdots\\
\vdots & & \ddots & 0\\
0 & \cdots & 0 & \sqrt{\lambda_{m-1}}
\end{bmatrix}
\begin{bmatrix} e_0^T\\ \vdots\\ e_{m-1}^T \end{bmatrix}
$$







The matrix "S" satisfies the following equation:

M=STS

Consequently, the aforementioned error term can be expressed as follows:








$$
\frac{1}{n}\cdot\left\lVert
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} g
-
\begin{bmatrix} q_0^T\\ \vdots\\ q_{n-1}^T \end{bmatrix} f
\right\rVert^2
=\left(g^T-f^T\right) M \left(g-f\right)
=\left(Sg-Sf\right)^{T}\left(Sg-Sf\right)
=\lVert Sg-Sf\rVert^2
$$








In light of the above, the equation of minimizing the difference between gTp and fTq (i.e., Minimize (gTq−fTq)2) can be rewritten as follows:

Minimize∥Sg−Sf∥2 subject to Cg=d

In the above equation, the matrix "S" originates from the known collected data "Q" and "f" contains the given coefficients of the target filter; thereby the coefficients gT of the first filter can be derived. It should be noted that the way to calculate the coefficients of the aforementioned second filter is the same as the way to calculate the coefficients gT of the first filter. It should also be noted that other mathematical techniques suitable for calculating the difference between gTp and fTq are applicable to the present invention.
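
The following Python sketch summarizes the computation described above under stated assumptions: it estimates M=E[qqT] from the collected records, factors it into S with an eigendecomposition, and then solves the equality-constrained least-squares problem through its standard KKT linear system (the cited lecture notes describe equivalent solution methods); all function names are illustrative.

```python
import numpy as np

def statistics_matrix(records):
    """M = E[q q^T] estimated from collected data; each row of `records` is one q^T."""
    Q = np.asarray(records, dtype=float)
    return Q.T @ Q / Q.shape[0]

def sqrt_factor(M):
    """Return S with S^T S = M, using the eigendecomposition of the
    symmetric positive semidefinite matrix M."""
    lam, E = np.linalg.eigh(M)
    lam = np.clip(lam, 0.0, None)          # guard against tiny negative eigenvalues
    return np.diag(np.sqrt(lam)) @ E.T

def solve_filter_coefficients(S, f, C, d):
    """Minimize ||S g - S f||^2 subject to C g = d via the KKT system
    [2 S^T S, C^T; C, 0] [g; nu] = [2 S^T S f; d]."""
    m, p = S.shape[1], C.shape[0]
    A = S.T @ S
    kkt = np.block([[2.0 * A, C.T],
                    [C, np.zeros((p, p))]])
    rhs = np.concatenate([2.0 * A @ f, d])
    sol = np.linalg.lstsq(kkt, rhs, rcond=None)[0]   # tolerates a rank-deficient KKT matrix
    return sol[:m]                                    # g; the remaining entries are Lagrange multipliers
```

For instance, with the constraint pair C, d built as sketched after Step S430 and with f set to the target-filter coefficients, g = solve_filter_coefficients(sqrt_factor(statistics_matrix(records)), f, C, d) yields first-filter coefficients that satisfy Cg=d while approximating the target filter response in the least-squares sense.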


On the basis of the above, when the collected data "Q" are insufficient for obtaining an effective statistical result and a reliable matrix "S" cannot be obtained through machine learning, an implementor carrying out the present invention can refer to existing image statistics (e.g., statistical data related to the color change of a natural image) to determine a diagonal matrix, which is treated as the aforementioned predetermined data, as shown below:








$$
\begin{bmatrix}
w_0 & 0 & \cdots & 0\\
0 & w_1 & & \vdots\\
\vdots & & \ddots & 0\\
0 & \cdots & 0 & w_{m-1}
\end{bmatrix}
$$






Then the implementor can determine an orthonormal basis b0, . . . , bm-1 of the vector space ℝm, and thereby use the diagonal matrix and the orthonormal basis to simulate the matrix "S" and then determine the coefficients of a filter. In the above equation, a non-restrictive example of wi is







$$
e^{-\frac{i^2}{2\sigma^2}},
$$





in which 0≤i≤(m−1), m is the dimension of the aforementioned data “q”, and σ is the standard deviation of data distribution. More specifically, taking one-dimensional image signals for example, a discrete cosine transform (DCT) basis bkT can be obtained according to the signals and expressed as follows:







$$
b_k^T \overset{\mathrm{def}}{=}
\begin{cases}
\sqrt{\dfrac{1}{m}}\left[\cos\!\left(\dfrac{\pi}{m}\cdot\left(0+\dfrac{1}{2}\right)\cdot k\right)\ \cdots\ \cos\!\left(\dfrac{\pi}{m}\cdot\left(m-1+\dfrac{1}{2}\right)\cdot k\right)\right] & \text{if } k=0\\[2ex]
\sqrt{\dfrac{2}{m}}\left[\cos\!\left(\dfrac{\pi}{m}\cdot\left(0+\dfrac{1}{2}\right)\cdot k\right)\ \cdots\ \cos\!\left(\dfrac{\pi}{m}\cdot\left(m-1+\dfrac{1}{2}\right)\cdot k\right)\right] & \text{if } 0<k<m
\end{cases}
$$









(Reference: https://en.wikipedia.org/wiki/Discrete_cosine_transform#DCT-II)


In addition, based on the inherent smoothness of natural images, which implies that the weight of high-frequency components is relatively low, wi can be set as







$$
w_i = e^{-\frac{i^2}{2\sigma^2}}.
$$
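
When the fallback just described is used, the matrix "S" can be simulated from the predetermined data alone. The following Python sketch (an illustration, not code from the patent) stacks the orthonormal DCT-II basis vectors as rows and scales them with the Gaussian weights wi; the function and variable names are assumptions.

```python
import numpy as np

def dct_basis(m):
    """Rows are the orthonormal DCT-II basis vectors b_0^T, ..., b_{m-1}^T."""
    j = np.arange(m)
    B = np.array([np.cos(np.pi / m * (j + 0.5) * k) for k in range(m)])
    B[0] *= np.sqrt(1.0 / m)
    B[1:] *= np.sqrt(2.0 / m)
    return B

def simulated_S(m, sigma):
    """Stand-in for S built from predetermined data: Gaussian weights
    (favoring low frequencies) applied to the DCT basis."""
    i = np.arange(m)
    w = np.exp(-(i ** 2) / (2.0 * sigma ** 2))
    return np.diag(w) @ dct_basis(m)
```

The matrix returned by simulated_S(m, sigma) can then take the place of the data-derived "S" in the constrained minimization described earlier.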





In an exemplary implementation, each of the aforementioned first data and second data includes pixel data of a specific color. For instance, the pixel data of the specific color are red pixel data or blue pixel data. In an exemplary implementation, the data amount of the first pixel data of the specific color is less than the data amount of all pixel data of the specific color of the original data, and the data amount of the second pixel data of the specific color is less than the data amount of all pixel data of the specific color of the original data; in other words, the first/second pixel data of the specific color are (or originate from) a part of all pixel data of the specific color of the original data. In an exemplary implementation, the first filter and the second filter are known filters except for their filter coefficients. In an exemplary implementation, the first data do not include pixel data of the specific color of a target pixel, and therefore the first filter is configured to generate the pixel data of the specific color of the target pixel according to the first data by interpolation; the second data do not include pixel data of the specific color of another target pixel, and the second filter is configured to generate the pixel data of the specific color of that other target pixel according to the second data by interpolation.



FIG. 6 shows another embodiment of the method of the present invention for determining the coefficients of a filter. This embodiment can be carried out by a general-purpose computer or a dedicated device, and includes the following steps:

  • Step S610: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data.
  • Step S620: computing according to collected data and/or predetermined data in accordance with a type of a front-end filter (e.g., a CFA circuit or a self-developed mask) and thereby having a difference between a designed filter response and the target filter response be less than a threshold so as to determine coefficients of a designed filter, in which the designed filter response relates to a way of the designed filter to process filtered data originated from the front-end filter processing the original data.


In light of the above, the embodiment of FIG. 6 makes the filter response of one filter similar to the filter response of a target filter, while the embodiment of FIG. 4 makes the filter responses of two filters similar to the filter response of a target filter; accordingly, the embodiment of FIG. 6 can be applied more flexibly. In an exemplary implementation, the filtered data include the pixel data of a specific color, but the filtered data do not include the pixel data of the specific color of a target pixel; therefore, the designed filter is configured to generate the pixel data of the specific color of the target pixel according to the filtered data. In an exemplary implementation, the data amount of the pixel data of the specific color of the filtered data is less than the data amount of all pixel data of the specific color of the original data.


Since those of ordinary skill in the art can refer to the embodiment of FIG. 4 to appreciate the detail and the modification of the embodiment of FIG. 6, which implies that the techniques of the embodiment of FIG. 4 can be applied to the embodiment of FIG. 6 in a reasonable way, repeated and redundant description is omitted.


It should be noted that people of ordinary skill in the art can selectively use some or all of the features of any embodiment in this specification or selectively use some or all of the features of multiple embodiments in this specification to implement the present invention as long as such implementation is practicable, which implies that the present invention can be carried out flexibly.


To sum up, the present invention can adequately determine the coefficients of one or more filters and have the filter response(s) of the filter(s) be similar to the filter response of a target filter; accordingly, the present invention can not only reduce the negative influence caused by different filter responses being exerted on the same image data, but also replace the target filter with the filter(s) to reduce cost, provided there is not much data to be processed by the filter(s).


The aforementioned descriptions represent merely the preferred embodiments of the present invention, without any intention to limit the scope of the present invention thereto. Various equivalent changes, alterations, or modifications based on the claims of present invention are all consequently viewed as being embraced by the scope of the present invention.

Claims
  • 1. A method for determining filter coefficients, comprising: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data; computing according to collected data and/or predetermined data in accordance with a first data pattern so as to have a difference between a first filter response and the target filter response be less than a threshold and thereby determine coefficients of a first filter, in which the first filter response relates to a way of the first filter to process first data originated from the original data; and computing according to the collected data and/or the predetermined data in accordance with a second data pattern so as to have a difference between a second filter response and the target filter response be less than the threshold and thereby determine coefficients of a second filter, in which the second filter response relates to a way of the second filter to process second data originated from the original data.
  • 2. The method of claim 1, wherein the original data are image data, both the first data and the second data are data generated by a color filter array (CFA) circuit processing the original data.
  • 3. The method of claim 2, wherein the first data include first pixel data of a specific color and the second data include second pixel data of the specific color.
  • 4. The method of claim 3, wherein a data amount of the first pixel data of the specific color is less than a data amount of all pixel data of the specific color of the original data, and a data amount of the second pixel data of the specific color is less than the data amount of all the pixel data of the specific color of the original data.
  • 5. The method of claim 3, wherein the first data do not include pixel data of the specific color of a first target pixel, and the first filter is configured to generate the pixel data of the specific color of the first target pixel according to the first data; the second data do not include pixel data of the specific color of a second target pixel, and the second filter is configured to generate the pixel data of the specific color of the second target pixel according to the second data.
  • 6. The method of claim 3, wherein the specific color is red or blue.
  • 7. A method for determining filter coefficients, comprising: obtaining coefficients of a target filter and thereby calculating a target filter response, in which the target filter response relates to a way of the target filter to process original data; and computing according to collected data and/or predetermined data in accordance with a type of a front-end filter and thereby having a difference between a designed filter response and the target filter response be less than a threshold so as to determine coefficients of a designed filter, in which the designed filter response relates to a way of the designed filter to process filtered data originated from the front-end filter processing the original data.
  • 8. The method of claim 7, wherein the original data are image data.
  • 9. The method of claim 8, wherein the filtered data include pixel data of a specific color, the filtered data do not include pixel data of the specific color of a target pixel, and the designed filter is configured to generate the pixel data of the specific color of the target pixel according to the filtered data.
  • 10. The method of claim 9, wherein the front-end filter is a color filter array (CFA) circuit.
  • 11. The method of claim 9, wherein the specific color is red, green, or blue.
  • 12. The method of claim 9, wherein a data amount of the pixel data of the specific color of the filtered data is less than a data amount of all pixel data of the specific color of the original data.
Priority Claims (1)
Number Date Country Kind
107126784 A Aug 2018 TW national
US Referenced Citations (18)
Number Name Date Kind
6188803 Iwase Feb 2001 B1
6256068 Takada Jul 2001 B1
6631206 Cheng et al. Oct 2003 B1
6970597 Olding Nov 2005 B1
7050649 Slavin May 2006 B2
7305141 Jaspers Dec 2007 B2
7319496 Uchida Jan 2008 B2
7373020 Tsukioka May 2008 B2
8525895 Cote Sep 2013 B2
8630507 De Haan Jan 2014 B2
8817120 Silverstein Aug 2014 B2
9118933 Alshin Aug 2015 B1
9177370 Chen Nov 2015 B2
9466282 Park Oct 2016 B2
10034008 Alshina Jul 2018 B2
10083512 Tateno Sep 2018 B2
20090087119 Dorrell Apr 2009 A1
20100155587 Nikittin Jun 2010 A1
Non-Patent Literature Citations (11)
Entry
Lin, Xuan-Tien, “Machine Learning Foundations”, handout for lecture, Department of Computer Science & Information Engineering, National Taiwan University (source: p. No. 21/27, “https://www.csie.ntu.edu.tw/˜htlin/mooc/doc/01_handout.pdf”).
L. Vandenberghe, “Constrained least squares”, p. Nos. 11-14-11-15, ECE133A (Winter 2018)(source: http://www.seas.ucla.edu/˜vandenbe/133A/lectures/cls.pdf).
“Lecture 3 Positive Semidefinite Matrices”, Theorem 2 (source: http://www.math.ucsd.edu/˜njw/Teaching/Math271C/Lecture_03.pdf).
Zico Kolter, “Linear Algebra Review and Reference”, section 3.13, Sep. 30, 2015 (source: http://cs229.stanford.edu/section/cs229-linalg.pdf).
Wojciech Jarosz, “Computational Aspects of Digital Photography—Sensors & Demosaicing”, CS 89.15/189.5, Fall 2015.
Wikipedia “Discrete cosine transform”, Wikipedia (source: https://en.wikipedia.org/wiki/Discrete_cosine_transform#Applications).
“Karhunen-Loeve Transform” (source: http://fourier.eng.hmc.edu/e161/lectures/klt/node3.html).
“Mahalanobis distance” (source: http://slideplayer.com/slide/5139522/16/images/8/Mahalanobis+Distance+%EF%81%93+is+the+covariance+matrix+of+the+input+data+X.jpg).
Wikipedia “Positive-definite matrix”, Wikipedia (source: https://en.wikipedia.org/wiki/Positive-definite_matrix).
I-Chen Lin, “Computer Vision: 8. Camera Models (b)”, Dept. of CS, National Chiao Tung University.
OA letter of the counterpart TW application (appl. No. 107126784) mailed on Mar. 29, 2019. Summary of the TW OA letter: Claims 1, 7-9 are rejected as being anticipated by the cited reference 1 (U.S. Pat. No. 6,631,206 B1).
Related Publications (1)
Number Date Country
20200043152 A1 Feb 2020 US