Method and system for characterizing color variation of a surface

Information

  • Patent Grant
  • Patent Number
    9,200,999
  • Date Filed
    Wednesday, September 11, 2013
  • Date Issued
    Tuesday, December 1, 2015
Abstract
A system and method of characterizing a color variation of a surface includes a device having a light source and a plurality of sensors positioned at respective viewing angles. An algorithm is stored on and executable by a controller to cause the controller to direct a beam of light at the measurement location with the light source and measure the light leaving the measurement location with the sensors at a plurality of azimuth angles to obtain respective measured color values. The controller is configured to define a color vector function F(θ, φ) to represent the color variation of the surface. The controller is configured to determine the color vector function F(θ, φ) based at least partially on the respective measured color values. The system allows for a representation of color space of a surface at any azimuth and viewing angle.
Description
TECHNICAL FIELD

The disclosure relates generally to a method and system for characterizing the color variation of a surface.


BACKGROUND

The coating of automobile bodies may include metallic and other types of special-effect paints containing various pigments. Such paints may contain a coating of lustrous particles such as aluminum flakes, mica flakes and xirallic that act as tiny mirrors. When the paint is illuminated from a single direction, light is reflected from the surfaces of these particles. The directional reflectance of metallic and other types of special-effect paints results in a variation in color of the surface based on the angle of observation of the viewer and other factors.


SUMMARY

A system and method of characterizing a color variation of a surface includes a device having a light source configured to direct a beam of light on a measurement location on the surface. The system may be used to represent or model color space at any azimuth and viewing angle, for example, on the surface of a vehicle having angular variations due to non-uniform distribution of flake orientations in the paint film. In one example, the system is used on a prototype coated surface to determine the esthetic appeal of the prototype coated surface to a viewer, based on variations in color of the surface at different angles of observation. An accurate representation of the color variation of a prototype coated surface enhances decision making between different metallic/special-effect paint selections. In addition, this mathematical representation of colors enables digitization of colors so that computer graphics and rendering software can represent and reproduce colors more accurately in displays or prints.


The device is rotatable to a plurality of azimuth angles (θ1, θ2 . . . θe), relative to the measurement location. The total number of the plurality of azimuth angles is at least a first number (e). The device includes a plurality of sensors positioned at respective viewing angles (φ1, φ2 . . . φg). The total number of the respective viewing angles is at least a second number (g).


A controller is operatively connected to the device. An algorithm is stored on and executable by the controller to cause the controller to direct a beam of light at the measurement location with the light source being at an illumination angle α relative to the measurement location. The controller is further configured to measure the light leaving the measurement location with the plurality of sensors at each of the plurality of azimuth angles (θ1, θ2 . . . θe) to obtain respective measured color values.


The controller is configured to define a color vector function F(θ, φ) as a product of a first vector P(θ) and a second vector Q(φ) such that F(θ, φ)=P(θ).Q(φ). The color vector function F(θ, φ) represents the color variation of the surface. The controller is configured to determine the color vector function F(θ, φ) based at least partially on the respective measured color values.


The color vector function F(θ, φ) and first and second vectors P(θ), Q(φ) each include respective first, second and third components, such that F(θ, φ)=[F1 (θ, φ), F2(θ, φ), F3(θ, φ)], P(θ)=[P1(θ), P2(θ), P3(θ)] and Q(φ)=[Q1(φ), Q2(φ), Q3(φ)]. The respective first component of the color vector function F(θ, φ) may represent one of a lightness component (L*), a redness/greenness component (a*), and a yellowness/blueness component (b*).


Determining the color vector function F(θ, φ) based at least partially on the respective measured color values includes expressing the first components P1(θ), Q1(φ) as a product of respective plurality of matrices and obtaining respective first and second set of coefficients by inputting the measured color values into the respective plurality of matrices. The first component F1(θ, φ) of the color vector function F (θ, φ) is obtained based on the first and second set of coefficients.


Defining the color vector function F(θ, φ) includes expressing the respective first component P1(θ) as: P1(θ)=a0/2+Σj=1 to f[aj cos(jθ)+bj sin(jθ)]; wherein a0, aj, bj are a first set of coefficients and bj=f is zero if the first number (e) is even. The integer parameter f is defined such that f=e/2 and f=(e−1)/2 if the first number (e) is even and odd, respectively.


Defining the color vector function F(θ, φ) includes expressing the respective first component Q1(φ) as: Q1(φ)=c0/2+Σk=1 to h[ck cos(kφ)+dk sin(kφ)]; wherein c0, ck, dk are a second set of coefficients and dk=h is zero if the second number (g) is even. The integer parameter h is defined such that h=g/2 and h=(g−1)/2 if the second number (g) is even and odd, respectively.


The plurality of azimuth angles (θ1, θ2 . . . θe) may be evenly spaced. In one example, the illumination angle is approximately 45 degrees.


The above features and advantages and other features and advantages of the present invention are readily apparent from the following detailed description of the best modes for carrying out the invention when taken in connection with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a system for characterizing a color variation of a surface;



FIG. 2 is a three-dimensional schematic diagram of the system of FIG. 1;



FIG. 3 is an example flow chart for a method or algorithm for characterizing the color variation of the surface shown in FIG. 1; and



FIG. 4 is a flow chart for a portion of the method or algorithm of FIG. 3.





DETAILED DESCRIPTION

Referring to the Figures, wherein like reference numbers refer to the same or similar components throughout the several views, FIG. 1 is a schematic diagram of a system 10 for characterizing a color variation of a surface 12. The system 10 includes a device 14 (lightly shaded) having a light source 16 configured to direct a beam of light 18 on a measurement location 20 on the surface 12. The device 14 may include a window 19 that allows light to pass through. The light source 16 may emit electromagnetic radiation covering the visible spectrum, also known as white light. Alternatively, the light source 16 may emit broadband radiation, covering the visible and non-visible spectrum.


Referring to FIG. 1, the surface 12 is a painted surface having a plurality of metallic particles 22 embedded in the paint film 24. The surface 12 may be an exterior surface of an automobile that is coated with a metallic or other special-effect paint. The metallic particles 22 include, but are not limited to, aluminum flakes, mica flakes and xirallic that act as tiny mirrors. The metallic particles 22 may be non-uniformly distributed under the surface 12. The system 10 described herein may be used to represent or model color variation on an exterior surface of an automobile or other painted surfaces due to non-uniform distribution of the metallic particles 22.


The measurement plane of FIG. 1 (i.e., the plane of the page) is indicated at numeral 30 in FIG. 1. FIG. 2 is a three-dimensional schematic diagram of the system 10. For clarity, some components of FIG. 1 are not shown in FIG. 2. Measurement plane 30 of FIG. 1 is indicated in FIG. 2 and lightly shaded. Referring to FIGS. 1-2, the light source 16 is positioned at an illumination angle 26 relative to the measurement location 20. Any value for the illumination angle 26 may be chosen. In one example, the illumination angle 26 is approximately 45 degrees. Referring to FIG. 1, when the beam of light 18 illuminates the surface 12 from a single direction, light is reflected from the surfaces of the metallic particles 22. Referring to FIGS. 1-2, the reflected light also includes a specular component indicated at specular line 28, which is a mirror-like reflection such that the angle of incidence equals the angle of reflection, as well as a diffuse light component (not shown).


Referring to FIG. 2, the device 14 may be rotated to an azimuth angle 32 (θ). The azimuth angle 32 (θ) is formed between a reference direction 34 (the x-axis in FIG. 2) and a line 36 extending from the measurement location 20 to a point of interest 38 projected from the measurement plane 30 to the same plane as the reference direction 34. The device 14 is configured to be rotatable to a plurality of azimuth angles 32 (θ1, θ2 . . . θe), relative to the measurement location 20. The total number of the azimuth angles 32 is at least a first number (e). In one example, the azimuth angles 32 are 0°, 90°, 180°, and 270° (such that e=4). The azimuth angles 32 (θ1, θ2 . . . θe) may be evenly spaced.


Referring to FIG. 1, the device 14 includes a plurality of sensors 40A-F positioned at respective viewing angles 42 (φ1, φ2 . . . φg). Referring to FIGS. 1-2, the viewing angles 42 (φ) are measured from the specular line 28. For clarity, only sensor 40B is indicated in FIG. 2. It is to be appreciated that any number of sensors positioned at any angle may be employed. As shown in FIG. 1, the sensors 40A-F and light source 16 are positioned in the same measurement plane 30. The total number of the respective viewing angles is at least a second number (g). In the non-limiting example shown in FIG. 1, the respective viewing angles 42 for sensors 40A-F are −15°, 15°, 25°, 45°, 75°, and 110°, respectively (such that g=6).


Referring to FIG. 1, a controller 50 is operatively connected to the device 14. Controller 50 executes a method or algorithm 100 which is stored on or is otherwise readily executable by the controller 50. FIG. 3 is a flow chart showing the method 100. The controller 50 is adapted to characterize a color variation of the surface 12. The controller 50 may be a general-purpose digital computer, a microprocessor, central processing unit, or a computer-readable storage medium.


The device 14 may be a spectrophotometer that physically measures color values which are outputted to the controller 50. A spectrophotometer such as the Byk-mac (manufactured by BYK-Gardner USA) may be employed for the device 14. The device 14 may be configured to measure color values in L*a*b* (CIELAB 1978) color space, specified by the International Commission on Illumination. However, any other color space system known to those skilled in the art may be employed.


Each of the respective measured color values (measured by the device 14 of FIG. 1) may include a lightness component (L*), a redness/greenness component (a*) and yellowness/blueness component (b*). The lightness component (L*) represents the lightness of the measurement, for example, L*=0 may indicate black and L*=100 may indicate diffuse white. The redness/greenness component (a*) indicates position between red/magenta and green, for example, negative values of a* may indicate green while positive values indicate red/magenta. The yellowness/blueness component (b*) indicates position between yellow and blue, for example, negative values of b* may indicate blue while positive values may indicate yellow.


Referring to FIG. 3, the algorithm 100 may begin at step 102 in which the controller 50 is configured to (i.e., the algorithm 100 causes the controller 50 to) direct a beam of light 18 through the light source 16 at the measurement location 20 (see FIG. 1). The steps may be carried out in an order other than that indicated below and one or more steps may be omitted.


In step 104 of FIG. 3, the controller 50 is configured to measure the light leaving the measurement location 20 with the plurality of sensors 40A-F at each of the plurality of azimuth angles 32 (θ1, θ2 . . . θe) to obtain respective measured color values. As previously noted, any number of sensors and azimuth angles 32 may be employed.


In step 106 of FIG. 3, the controller 50 is configured to define a color vector function F(θ, φ) as a product of a first vector P(θ) dependent on the azimuth angles 32 (θ) and a second vector Q(φ) dependent on the respective viewing angles 42 (φ) such that F(θ, φ)=P(θ).Q(φ). The color vector function F(θ, φ) represents the color variation of the surface 12.


The color vector function F(θ, φ) and first and second vectors P(θ), Q(φ) each include respective first, second and third components, such that F(θ, φ)=[F1 (θ, φ), F2(θ, φ), F3(θ, φ)], P(θ)=[P1(θ), P2(θ), P3(θ)] and Q(φ)=[Q1(φ), Q2(φ), Q3(φ)]. The respective first components F1(θ, φ), P1(θ) and Q1(φ) may represent the lightness value (L*) or redness/greenness value (a*) or yellowness/blueness value (b*). All the respective first components F1(θ, φ), P1(θ) and Q1(φ) represent the same color space value. Steps 106 and 108 (indicated below) may be applied to each of the first, second and third components of the color vector function F(θ, φ) in turn.


Defining the color vector function F(θ, φ) in step 106 includes representing the first component P1(θ) of the first vector P(θ) as a first function dependent on the azimuth angle 32 (θ) (see FIG. 1) and a first set of coefficients. In one example, the first function is represented as:

P1(θ)=a0/2+Σj=1 to f[aj cos(jθ)+bj sin(jθ)]  eq. (1)

If the first number (e) is even, an integer parameter f is defined such that f=e/2. If e is odd, f=(e−1)/2. The first set of coefficients in this case includes a0, aj, bj, where j=1 to f. The coefficient bj=f is set as zero if the first number (e) is even.


Defining the color vector function F(θ, φ) in step 106 includes representing the first component Q1(φ) of the second vector as a second function dependent on the viewing angle 42 (φ) (see FIGS. 1-2) and a second set of coefficients. In one example, the second function is represented as:

Q1(φ)=c0/2+Σk=1 to h[ck cos(kφ)+dk sin(kφ)]  eq. (2)

If the second number (g) is even, an integer parameter h is defined such that h=g/2. If g is odd, h=(g−1)/2. The second set of coefficients in this case includes c0, ck, dk, where k=1 to h. The coefficient dk=h is set as zero if the second number (g) is even.
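
For illustration only, the following non-limiting sketch (in Python, assuming the NumPy library) evaluates a truncated series of the form of equations (1) and (2) for a given set of coefficients, applying the parity rule for the series order and zeroing the last sine coefficient when the total number of measurement angles is even. The function names and sample coefficient values are illustrative assumptions and are not taken from the disclosure.

    import numpy as np

    def series_order(n_angles):
        # f = e/2 if e is even, (e - 1)/2 if e is odd; integer division covers both cases
        return n_angles // 2

    def eval_truncated_series(angle, a0, a, b, n_angles):
        """Evaluate a0/2 + sum over j=1..f of [a_j cos(j*angle) + b_j sin(j*angle)]."""
        f = series_order(n_angles)
        a = np.asarray(a, dtype=float)[:f]
        b = np.asarray(b, dtype=float)[:f].copy()
        if n_angles % 2 == 0:
            b[-1] = 0.0              # last sine coefficient is zero for an even angle count
        j = np.arange(1, f + 1)
        return a0 / 2 + np.sum(a * np.cos(j * angle) + b * np.sin(j * angle))

    # Hypothetical coefficients for e = 4 azimuth measurements (f = 2)
    print(eval_truncated_series(np.pi / 2, a0=136.8, a=[0.2, 0.6], b=[0.3, 0.0], n_angles=4))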


In step 108 of FIG. 3, the controller 50 is configured to determine the color vector function F(θ, φ) based at least partially on the respective measured color values. Step 108 may be completed through a plurality of steps 110, 112 and 114, shown in FIG. 4.


At step 110 of FIG. 4, if the respective viewing angles 42 (φ) (see FIG. 1) are not equally spaced, the controller 50 may be configured to convert the measured color values into corresponding respective equally-spaced color values. For t non-equally spaced measuring data points, (φ0, Q0), (φ1, Q1), . . . , (φk, Qk), . . . , (φt-1, Qt-1), the corresponding equally-spaced measuring data points are (φ′0, Q′0), (φ′1, Q′1), . . . , (φ′k, Q′k), . . . , (φ′t-1, Q′t-1) with the following relationship:


φ′0=φ0, Q′0=Q0, φ′t−1=φt−1, and Q′t−1=Qt−1. For k=1 to t−1, Q′k is determined by:

Q′k=[(φ′k−φk−1)(Qk−Qk−1)/(φk−φk−1)]+Qk−1  eq. (3)
where φ′k=φ′0+k(φ′t−1−φ′0)/(t−1)  eq. (4)
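
For illustration only, a non-limiting sketch of this resampling step is given below (in Python, assuming the NumPy library). As in equation (3), the k-th target angle is paired with the segment between the (k−1)-th and k-th measured points, so the mapping may extrapolate slightly rather than strictly interpolate; the function name and sample data are illustrative.

    import numpy as np

    def to_equally_spaced(phi, q):
        """Resample values q measured at non-equally-spaced angles phi onto an
        equally spaced grid with the same endpoints (step 110, eqs. (3)-(4))."""
        phi = np.asarray(phi, dtype=float)
        q = np.asarray(q, dtype=float)
        t = len(phi)
        phi_eq = np.linspace(phi[0], phi[-1], t)     # equally spaced target angles
        q_eq = np.empty(t)
        q_eq[0], q_eq[-1] = q[0], q[-1]              # endpoints are kept as measured
        for k in range(1, t - 1):
            slope = (q[k] - q[k - 1]) / (phi[k] - phi[k - 1])
            q_eq[k] = q[k - 1] + (phi_eq[k] - phi[k - 1]) * slope
        return phi_eq, q_eq

    # Example: L* values at one azimuth, aspecular viewing angles in radians
    phi = np.deg2rad([-15, 15, 25, 45, 75, 110])
    L = [125.79, 116.03, 89.98, 56.63, 40.05, 34.63]
    print(to_equally_spaced(phi, L))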


At step 112 of FIG. 4, controller 50 is configured to express the respective first components P1(θ), Q1(φ) of the first and second vectors P(θ), Q(φ) as a product of respective plurality of matrices. Setting P1(θ)=A p and Q1(φ)=qT BT leads to:

F1(θ, φ)=P1(θ)Q1(φ)=A p qT BT  eq. (5)

As known to those skilled in the art, the transpose of a matrix B is another matrix BT where the rows of B are the columns of BT and the columns of B are the rows of BT. The unknowns in Eq. (5) are pqT, a matrix comprising the first and second set of coefficients (such as in Equations 1-2). Solving this matrix equation by double matrix inversion leads to:

A−1 A p qT BT B−T=A−1 F1 B−T  eq. (6)
p qT=A−1 F1 B−T  eq. (7)

Here F1 is a matrix with elements comprising the respective measured color values.
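
For illustration only, the double matrix inversion of equations (5) through (7) may be carried out as in the non-limiting sketch below (in Python, assuming the NumPy library); A, B and F1 are placeholders for the basis and measurement matrices described above, and linear solves are used in place of explicit inverses purely as an implementation choice.

    import numpy as np

    def fit_coefficients(A, B, F1):
        """Solve F1 = A (p q^T) B^T for the coefficient matrix p q^T (eq. (7)).

        A  : e x e matrix; row i evaluates the basis of P1 at azimuth angle theta_i
        B  : g x g matrix; row j evaluates the basis of Q1 at viewing angle phi_j
        F1 : e x g matrix of measured values for one color component (e.g., L*)
        """
        left = np.linalg.solve(A, F1)        # A^-1 F1
        pq = np.linalg.solve(B, left.T).T    # (B^-1 (A^-1 F1)^T)^T = A^-1 F1 B^-T
        return pq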


At step 114 of FIG. 4, controller 50 is configured to input the measured color values (matrix F1) into the respective plurality of matrices to obtain the first and second set of coefficients. In the example outlined above, the first and second set of coefficients are a0, aj, bj and c0, ck, dk, where j=1 . . . f and k=1 . . . h. The first component F1(θ, φ) of the color vector function F(θ, φ) is obtained based on the first and second set of coefficients determined in step 114. The results obtained may be converted back to physical values from the linearized color space using the reverse of equations (3) and (4).


A numerical example is described below. This example includes four measurements (e=4) at azimuth angles 32 (θ) of 0°, 90°, 180°, and 270°. Thus f=e/2=2 in equation (1) above. The illumination angle 26 is fixed at 45°. There are six (g=6) aspecular viewing angles φ: −15°, 15°, 25°, 45°, 75°, and 110°. Thus h=3 in equation (2) above. Per step 104 of FIG. 3, the controller 50 is configured to measure the light leaving the measurement location 20 with the plurality of sensors 40A-F at each of the plurality of azimuth angles 32 (θ1, θ2 . . . θe) to obtain respective measured color values. Referring to Table 1, a total of 24 sets (4 times 6) of measured L*, a*, b* values are shown.













TABLE 1

  Azimuthal    Aspecular
  Angle        Viewing Angle      L*       a*       b*
  -----------------------------------------------------
    0°           −15°           125.79    5.35    33.51
    0°            15°           116.03    5.05    30.86
    0°            25°            89.98    5.2     28.2
    0°            45°            56.63    6.31    25.9
    0°            75°            40.05    8.73    27.82
    0°           110°            34.63   10.91    30.38
   90°           −15°           127.66    5.56    34.98
   90°            15°           116.14    6.26    32.31
   90°            25°            90.76    5.87    28.87
   90°            45°            57.57    7.57    26.54
   90°            75°            40.71    8.76    28.29
   90°           110°            36.09   12.28    32.06
  180°           −15°           125.99    5.32    34.63
  180°            15°           115.34    5.69    32.20
  180°            25°            89.87    5.13    27.56
  180°            45°            55.60    7.57    25.66
  180°            75°            40.03    7.31    27.78
  180°           110°            34.86   12.21    31.97
  270°           −15°           126.89    6.42    36.59
  270°            15°           116.55    7.13    32.87
  270°            25°            90.97    6.03    28.33
  270°            45°            56.06    8.94    27.30
  270°            75°            40.50    8.79    28.30
  270°           110°            35.50   12.97    33.48









Per step 106 of FIG. 3, the controller 50 is configured to represent the color variation of the surface by a color vector function F(θ, φ) as P(θ).Q(φ). The first component is set to be the lightness L* component. The steps are repeated for the second and third components (a*, b*).


The first function P1(θ) for odd values of e may be represented by eq. (8):

P1(θ)=a0/2+a1 cos(θ)+b1 sin(θ)+ . . . +af cos(fθ)+bf sin(fθ)  eq. (8)

The first function P1(θ) for even values of e may be represented by eq. (9):

P1(θ)=a0/2+a1 cos(θ)+b1 sin(θ)+ . . . +(af/2)cos(fθ)+bf sin(fθ)  eq. (9)

In both cases, coefficients ax and bx may be estimated from the measured color values using the following equations:











ax=(2/e)Σk=0 to e−1[Lk cos(2kπx/e)] and bx=(2/e)Σk=0 to e−1[Lk sin(2kπx/e)]  eq. (10)







Note that the term for af becomes af/2 in eq. (9) in order to employ the same eq. (10) for both even and odd values of the total number of azimuth angles (e). The bf term may be set to zero if the first number (e) is even since:







sin(f·2kπ/e)=sin((e/2)(2kπ/e))=sin(kπ)=0 for k=0, 1, . . . , e−1





In this example, the first function P1(θ) is represented as: P1(θ)=a0/2+a1 cos(θ)+b1 sin(θ)+(a2/2)cos(2θ). For θ=0, P1=(a0/2+a1+a2/2). For θ=π/2, P1=(a0/2+b1−a2/2). For θ=π, P1=(a0/2−a1+a2/2). For θ=3π/2, P1=(a0/2−b1−a2/2). Similarly, the second function is represented as: Q1(φ)=c0/2+c1 cos(φ)+d1 sin(φ)+c2 cos(2φ)+d2 sin(2φ)+(c3/2)cos(3φ).
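
For illustration only, the sums of equation (10) may be evaluated as in the non-limiting sketch below (in Python, assuming the NumPy library), where L holds the e values of one color component, taken at one fixed viewing angle, for evenly spaced azimuths 2πk/e; the function name is illustrative.

    import numpy as np

    def azimuth_coefficients(L):
        """Fourier coefficients a_x, b_x of eq. (10) from e evenly spaced azimuth samples."""
        L = np.asarray(L, dtype=float)
        e = len(L)
        f = e // 2                           # highest harmonic retained
        k = np.arange(e)
        a = [(2.0 / e) * np.sum(L * np.cos(2 * np.pi * k * x / e)) for x in range(f + 1)]
        b = [(2.0 / e) * np.sum(L * np.sin(2 * np.pi * k * x / e)) for x in range(f + 1)]
        return a, b                          # b[f] is numerically ~0 when e is even, per the identity above

    # Example: the four L* values measured at the -15 degree aspecular angle
    print(azimuth_coefficients([125.79, 127.66, 125.99, 126.89]))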


Per step 108 of FIG. 3, the controller 50 is configured to determine the color vector function F(θ, φ) based at least partially on the respective measured color values. Per step 110 of FIG. 4, the non-equally-spaced aspecular viewing angles φ are converted to equally spaced data points by piecewise linearization using equations (3) and (4). The converted data, with angles in radians, are shown in Table 2.













TABLE 2

  Azimuthal     Aspecular
  Angle (rad)   Viewing Angle (rad)     L*       a*       b*
  ------------------------------------------------------------
  0.00          −0.26                 125.79    5.35    33.51
  0.00           0.17                 117.66    5.10    31.30
  0.00           0.61                  63.96    5.35    25.54
  0.00           1.05                  31.64    7.14    24.18
  0.00           1.48                  34.53    9.54    28.46
  0.00           1.92                  34.63   10.91    30.38
  1.57          −0.26                 127.66    5.56    34.98
  1.57           0.17                 118.07    6.14    32.76
  1.57           0.61                  65.40    5.48    25.44
  1.57           1.05                  32.70    8.84    24.79
  1.57           1.48                  35.10    9.15    28.87
  1.57           1.92                  36.09   12.28    32.06
  3.14          −0.26                 125.99    5.32    34.63
  3.14           0.17                 117.12    5.63    32.61
  3.14           0.61                  64.42    4.57    22.93
  3.14           1.05                  29.92    9.39    24.23
  3.14           1.48                  34.84    7.23    28.48
  3.14           1.92                  34.86   12.21    31.97
  4.71          −0.26                 126.89    6.42    36.59
  4.71           0.17                 118.27    7.01    33.49
  4.71           0.61                  65.42    4.92    23.81
  4.71           1.05                  29.90   11.13    26.52
  4.71           1.48                  35.33    8.74    28.63
  4.71           1.92                  35.50   12.97    33.48









Per step 112 of FIG. 4, the respective first components P1(θ), Q1(φ) of the first and second vectors P(θ), Q(φ) are expressed as a product of respective plurality of matrices: P1(θ)=A p and Q1(φ)=qT BT. Here p=[a0/2, a1, b1, a2/2], a 4×1 vector, and A is the 4×4 matrix:








[ 1    1    0    1 ]
[ 1    0    1   −1 ]
[ 1   −1    0    1 ]
[ 1    0   −1   −1 ]





The matrix q=[c0/2, c1, d1, c2, d2, c3/2]T is a 6×1 vector, and B is the 6×6 matrix:








[ 1    1      0      1      0      1 ]
[ 1    .5     .866   −.5    .866   −1 ]
[ 1   −.5     .866   −.5   −.866    1 ]
[ 1   −1      0      1      0     −1 ]
[ 1   −.5    −.866   −.5    .866    1 ]
[ 1    .5    −.866   −.5   −.866   −1 ]
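
For illustration only, the A and B matrices above may be generated by evaluating the basis functions on the measurement grids, as in the non-limiting sketch below (in Python, assuming the NumPy library); here the four azimuths and the six equally spaced viewing positions are each treated as spanning one full period, which reproduces the numeric entries shown above. With F1 filled from the L* values of Table 2, the fit_coefficients sketch given earlier then yields approximately the p qT matrix shown below.

    import numpy as np

    def azimuth_basis(theta):
        # Row per azimuth: [1, cos(theta), sin(theta), cos(2*theta)] for p = [a0/2, a1, b1, a2/2]
        theta = np.asarray(theta, dtype=float)
        return np.column_stack([np.ones_like(theta), np.cos(theta),
                                np.sin(theta), np.cos(2 * theta)])

    def viewing_basis(psi):
        # Row per viewing position: [1, cos, sin, cos 2x, sin 2x, cos 3x] for q = [c0/2, c1, d1, c2, d2, c3/2]
        psi = np.asarray(psi, dtype=float)
        return np.column_stack([np.ones_like(psi), np.cos(psi), np.sin(psi),
                                np.cos(2 * psi), np.sin(2 * psi), np.cos(3 * psi)])

    theta = 2 * np.pi * np.arange(4) / 4     # azimuths 0, 90, 180, 270 degrees
    psi = 2 * np.pi * np.arange(6) / 6       # six equally spaced viewing positions
    A = azimuth_basis(theta)                 # the 4 x 4 matrix above
    B = viewing_basis(psi)                   # the 6 x 6 matrix above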





Per step 114 of FIG. 4, controller 50 is configured to input the measured color values (matrix F1) into the respective plurality of matrices to obtain the first and second set of coefficients. Here F1 is the 4×6 matrix with its 24 elements taken from the L* values shown in Table 2. Matrix F1 is inputted into equation (7), p qT=A−1F1 B−T. In this case, p qT becomes:








[ 68.4     40.7     32.4     10.4     15.2      7.0  ]
[   .088   −.230     .089     .292     .133    −.250 ]
[   .309   −.285    −.084    −.144    −.144    −.223 ]
[  −.457   −.151    −.061    −.019     .139    −.065 ]





The first component F1 (θ, φ) of the color vector function F (θ, φ) may now be expressed as:







F1(θ, φ)=[1  cos θ  sin θ  cos 2θ] ×

[ 68.4     40.7     32.4     10.4     15.2      7.0  ]
[   .088   −.230     .089     .292     .133    −.250 ]
[   .309   −.285    −.084    −.144    −.144    −.223 ]
[  −.457   −.151    −.061    −.019     .139    −.065 ]

× [1  cos φ  sin φ  cos 2φ  sin 2φ  cos 3φ]T







The first component F1(θ, φ) allows the calculation of the first component (L* in this case, repeated in a similar fashion for a* or b*) values for any azimuth or viewing angle (θ, φ).
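
For illustration only, the fitted first component may be evaluated at an arbitrary pair of angles as in the non-limiting sketch below (in Python, assuming the NumPy library), using the p qT matrix obtained above; the angles are expressed on the same normalized grids used for the fit.

    import numpy as np

    # Fitted coefficient matrix p q^T from the example above (L* component)
    PQ = np.array([
        [68.4,   40.7,   32.4,   10.4,   15.2,    7.0],
        [0.088, -0.230,  0.089,  0.292,  0.133, -0.250],
        [0.309, -0.285, -0.084, -0.144, -0.144, -0.223],
        [-0.457, -0.151, -0.061, -0.019, 0.139, -0.065],
    ])

    def F1(theta, phi, PQ=PQ):
        """Evaluate the fitted first component (here L*) at azimuth theta and
        viewing position phi, both in radians."""
        row = np.array([1.0, np.cos(theta), np.sin(theta), np.cos(2 * theta)])
        col = np.array([1.0, np.cos(phi), np.sin(phi), np.cos(2 * phi),
                        np.sin(2 * phi), np.cos(3 * phi)])
        return row @ PQ @ col

    print(F1(np.pi / 4, np.pi / 3))          # L* at an intermediate azimuth and viewing position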


The controller 50 of FIG. 1 may be configured to employ any of a number of computer operating systems and generally include computer-executable instructions, where the instructions may be executable by one or more computers. Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of well-known programming languages and/or technologies, including, without limitation, and either alone or in combination, Java™, C, C++, Visual Basic, Java Script, Perl, etc. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of known computer-readable media.


A computer-readable medium (also referred to as a processor-readable medium) includes any non-transitory (e.g., tangible) medium that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer). Such a medium may take many forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical or magnetic disks and other persistent memory. Volatile media may include, for example, dynamic random access memory (DRAM), which typically constitutes a main memory. Such instructions may be transmitted by one or more transmission media, including coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to a processor of a computer. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.


The detailed description and the drawings or figures are supportive and descriptive of the invention, but the scope of the invention is defined solely by the claims. While some of the best modes and other embodiments for carrying out the claimed invention have been described in detail, various alternative designs and embodiments exist for practicing the invention defined in the appended claims. Furthermore, the embodiments shown in the drawings or the characteristics of various embodiments mentioned in the present description are not necessarily to be understood as embodiments independent of each other. Rather, it is possible that each of the characteristics described in one of the examples of an embodiment can be combined with one or a plurality of other desired characteristics from other embodiments, resulting in other embodiments not described in words or by reference to the drawings. Accordingly, such other embodiments fall within the framework of the scope of the appended claims.

Claims
  • 1. A system of characterizing a color variation of a surface, the system comprising: a device having a light source configured to direct a beam of light on a measurement location on the surface, the light source being at an illumination angle relative to the measurement location;wherein the device is rotatable to a plurality of azimuth angles (θ1, θ2. . .θe), relative to the measurement location, a total number of the plurality of azimuth angles being at least a first number (e);wherein the device includes a plurality of sensors positioned at respective viewing angles (φ1, φ2. . .φg), a total number of the respective viewing angles being at least a second number (g);a controller operatively connected to the device; andan algorithm stored on and executable by the controller to cause the controller to: direct a beam of light at the measurement location through the light source;measure the light leaving the measurement location with the plurality of sensors at each of the plurality of azimuth angles (θ1, θ2. . .θe) to obtain respective measured color values;define a color vector function F(θ, φ) as a product of a first vector P(θ) dependent on the plurality of azimuth angles and a second vector Q(φ) dependent on the respective viewing angles such that F(θ, φ)=P(θ).Q(φ), the color vector function F(θ, φ) representing the color variation of the surface; anddetermine the color vector function F(θ, φ) based at least partially on the respective measured color values.
  • 2. The system of claim 1, wherein the device is a spectrophotometer.
  • 3. The system of claim 1, wherein the plurality of azimuth angles (θ1, θ2. . .θe) are evenly spaced.
  • 4. The system of claim 1, wherein the illumination angle is approximately 45 degrees.
  • 5. The system of claim 1, wherein: the color vector function F(θ, φ) and first and second vectors P(θ), Q(φ)each include respective first, second and third components, such that F(θ, φ)=[F1 (θ, φ), F2(θ, φ), F3(θ, φ)], P(θ)=[P1(θ), P2(θ), P3(θ)] and Q(φ)=[Q1(φ), Q2(φ), Q3(φ)].
  • 6. The system of claim 5, wherein the respective first component of the color vector function F(θ, φ) represents one of: a lightness component (L*), a redness/greenness component (a*) and a yellowness/blueness component (b*).
  • 7. The system of claim 5, wherein execution of the algorithm to determine the color vector function F(θ, φ) based at least partially on the measured color values causes the controller to: express the respective first components P1(θ), Q1(φ) of the first and second vectors P(θ), Q(φ) as a product of respective plurality of matrices;obtain a first and second set of coefficients by inputting the measured color values into the respective plurality of matrices; andobtain the respective first component F1 (θ, φ) of the color vector function F (θ, φ) based on the first and second set of coefficients.
  • 8. The system of claim 5, wherein execution of the algorithm to define the color vector function F(θ, φ) causes the controller to: define an integer parameter f such that f=e/2 and f=(e−1)/2 if the first number (e) is even and odd, respectively;represent the respective first component P1(θ) of the first vector P(θ) as: P1(θ)=a0/2+Σj=1 to f[aj cos(jθ)+bj sin(jθ)];wherein a first set of coefficients include a0, aj, bj (j=1 . . . f); andbj=f is zero if the first number (e) is even.
  • 9. The system of claim 5, wherein execution of the algorithm to define the color vector function F(θ, φ) causes the controller to: define an integer parameter h such that h=g/2 and h=(g−1)/2 if the second number (g) is even and odd, respectively;represent the respective first component Q1(φ) of the second vector Q(φ) as: Q1(φ)=c0/2+Σk=1 to h[ck cos(kφ)+dk sin(kφ)];wherein a second set of coefficients include c0, ck, dk (k=1. . .h); anddk=h is zero if the second number (g) is even.
  • 10. A method of characterizing a color variation of a surface, the method comprising: positioning a device having a light source placed at an illumination angle relative to a measurement location on the surface;wherein the device is rotatable to a plurality of azimuth angles (θ1, θ2. . .θe), relative to the measurement location, a total number of the plurality of azimuth angles being at least a first number (e);positioning a plurality of sensors in the device at respective viewing angles (φ1, φ2. . .φg), a total number of the respective viewing angles being at least a second number (g);directing a beam of light at the measurement location with the light source;measuring the light leaving the measurement location with the plurality of sensors at each of the plurality of azimuth angles (θ1, θ2. . .θe) to obtain respective measured color values;defining a color vector function F(θ, φ) as a product of a first vector P(θ) dependent on the plurality of azimuth angles and a second vector Q(φ) dependent on the respective viewing angles such that F(θ, φ)=P(θ).Q(φ), the color vector function F(θ, φ) representing the color variation of the surface; anddetermining the color vector function F(θ, φ) based at least partially on the respective measured color values.
  • 11. The method of claim 10, wherein each of the respective measured color values include a lightness component (L*), a redness/greenness component (a*) and yellowness/blueness component (b*).
  • 12. The method of claim 10, wherein: the color vector function F(θ, φ) and first and second vectors P(θ), Q(φ) each include respective first, second and third components, such that F(θ, φ)=[F1(θ, φ), F2(θ, φ), F3(θ, φ)], P(θ)=[P1(θ), P2(θ), P3(θ)] and Q(φ)=[Q1(φ), Q2(φ), Q3(φ)].
  • 13. The method of claim 12, wherein the respective first component of the color vector function F(θ, φ) represents one of: a lightness component (L*), a redness/greenness component (a*) and a yellowness/blueness component (b*).
  • 14. The method of claim 12, wherein the determination of the color vector function F(θ, φ) based at least partially on the measured color values includes: expressing the respective first components of the first and second vectors P(θ), Q(φ) as a product of respective plurality of matrices;obtaining a first and a second set of coefficients by inputting the measured color values into the respective plurality of matrices; andobtaining the respective first component F1 (θ, φ) of the color vector function F (θ, φ) based on the first and second set of coefficients.
  • 15. The method of claim 12, wherein defining the color vector function F(θ, φ) includes: representing the respective first component P1(θ) of the first vector P(θ) as a first function dependent on a first set of coefficients; andrepresenting the respective first component Q1(φ) of the second vector as a second function dependent on a second set of coefficients.
  • 16. The method of claim 12, wherein: the first function is P1(θ)=a0/2+Σj=1 to f[aj cos(jθ)+bj sin(jθ)];f is an integer parameter such that f=e/2 and f=(e-1)/2 if the first number (e) is even and odd, respectively;the first set of coefficients include a0, aj, bj (j=1 . . . f); andbj=f is zero if the first number (e) is even.
  • 17. The method of claim 12, wherein: the second function is Q1(φ)=c0/2+Σk=1 to h[ck cos(kφ)+dk sin(kφ)];h is an integer parameter such that h=g/2 and h=(g-1)/2 if the second number (g) is even and odd, respectively;the second set of coefficients include c0, ck, dk (k=1 . . . h); anddk=h is zero if the second number (g) is even.
US Referenced Citations (7)
Number Name Date Kind
3690771 Armstrong et al. Sep 1972 A
5590251 Takagi Dec 1996 A
5929998 Kettler et al. Jul 1999 A
20040190768 Nonogaki et al. Sep 2004 A1
20070104663 Henglein et al. May 2007 A1
20090213120 Nisper et al. Aug 2009 A1
20140242271 Prakash et al. Aug 2014 A1
Related Publications (1)
Number Date Country
20150070694 A1 Mar 2015 US