Information
- Patent Grant
- Patent Number: 6,462,777
- Date Filed: Monday, July 13, 1998
- Date Issued: Tuesday, October 8, 2002
- Examiners
  - Miller; John
  - Yenke; Brian
- US Classifications / Field of Search (US): 348/180, 348/184, 348/189, 348/191, 348/185, 348/186, 348/655, 348/656, 348/657, 348/658, 348/807, 348/745, 348/746, 348/747, 348/177, 348/179, 348/190, 348/188, 348/47, 348/139, 348/159, 348/187, 348/806
Abstract
A color display characteristic measurement apparatus is provided with a first image pickup device for picking up an image of a first color component, a second image pickup device for picking up an image of a second color component, a measurement image generator for causing a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible, and a calculator for calculating, based on a first picked-up image of the first image pickup device and a second picked-up image of the second image pickup device, calibration data for calibrating a relative displacement between the first and second image pickup devices.
Description
This application is based on patent application No. 9-190194 filed in Japan, the contents of which are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
This invention relates to a display characteristic measurement apparatus which picks up an image of a measurement pattern displayed on a color display apparatus such as a color CRT (Cathode Ray Tube) and measures a display characteristic, such as convergence, based on the picked-up image signal.
Conventionally, there has been known a display characteristic measurement apparatus for measuring a display characteristic such as a convergence of a display apparatus such as a color CRT, a color LCD (Liquid Crystal Display) or a color PDP (Plasma Display Panel). This display characteristic measurement apparatus includes an image pickup unit for picking up an image of a specified measurement pattern color-displayed on a display apparatus to be measured while separating this image into images of the respective color components of R (red), G (green) and B (blue), an image processor for conducting a predetermined measurement after processing the images of the respective color components, and a display device for displaying the measurement result.
For example, as disclosed in Japanese Unexamined Patent Publication No. 8-307908, a convergence measurement apparatus picks up, with a camera provided with a color area sensor such as a CCD, an image of a specified white measurement pattern displayed on a color CRT to be measured, calculates a luminance center for each picked-up image of the color components R, G and B during the image processing, and displays a relative displacement of these luminance centers as a misconvergence amount.
The convergence measurement apparatus calculates a luminous position (luminous center position) of the measurement pattern of each color component on a display surface of the color CRT to be measured based on a focusing position (luminance center position) of the measurement pattern of each color component on a sensing surface of the color camera, and calculates a relative displacement of the luminous positions of the respective color components. Accordingly, the accuracy of the convergence measurement is largely affected by the calculation accuracy of the focusing position (luminance center position) of the measurement pattern of each color component on the sensing surface.
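The luminance center referred to above is simply the intensity-weighted centroid of one color plane within a measurement area. The following is a minimal sketch of that calculation, not code from the patent; the array layout and window convention are assumptions made for illustration.

```python
import numpy as np

def luminance_center(plane, window):
    """Intensity-weighted centroid of one color plane inside a window.

    plane  : 2-D array of pixel intensities for a single color component
    window : (row0, row1, col0, col1) bounds of the measurement area
    Returns (row, col) of the luminance center in full-image coordinates.
    """
    r0, r1, c0, c1 = window
    roi = plane[r0:r1, c0:c1].astype(float)
    total = roi.sum()
    if total == 0.0:
        raise ValueError("measurement window contains no signal")
    rows = np.arange(r0, r1)[:, None]
    cols = np.arange(c0, c1)[None, :]
    return float((roi * rows).sum() / total), float((roi * cols).sum() / total)
```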
Particularly in the case that a color camera of three-plate type is used, a light image is separated into images of three primary color components, and three area sensors are independently provided so as to conform to the respective color components of R, G, B at the respective emergence ends of a dichroic prism for emitting the images of the respective color components in different directions. Accordingly, the measurement accuracy easily varies due to displacements of the respective area sensors caused by temperature and humidity variations.
Accordingly, the conventional convergence measurement apparatus is calibrated using a special calibration chart before a measurement, as shown in FIG. 9. In the calibration method shown in FIG. 9, an image of a calibration chart 103 (a chart in which a crosshatched pattern 105 is drawn on an opaque white plate) illuminated by a fluorescent lamp 104 is picked up by an image pickup unit 101 of the convergence measurement apparatus 100, and calibration data representing a relative positional relationship of the respective area sensors are calculated using the picked-up image. The calculated calibration data are stored in a memory in an apparatus main body 102 and used as data for calibrating a displacement of the luminance center position of the measurement pattern of each color component during the convergence measurement.
According to the conventional method for calibrating the relative displacement of the area sensors, the positions (absolute positions) of the respective area sensors in reference coordinate systems of a convergence measuring system are calculated using the image data of the respective color components obtained by picking up the image of the special calibration chart, and the relative displacement of the area sensors is calculated based on this calculation result. This method therefore has the disadvantage of requiring many operation parameters, which necessitates a longer operation time.
Further, since the special calibration chart is used instead of the measurement pattern displayed on the CRT to be measured, it is inconvenient and difficult to calibrate convergence measuring systems on a production line.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a display characteristic measurement apparatus which has overcome the problems residing in the prior art.
It is another object of the present invention to provide a method for calibrating a display characteristic measurement apparatus which has overcome the problems residing in the prior art.
According to an aspect of the present invention, a display characteristic measurement apparatus for a color display apparatus, comprises: a first image pickup device which picks up an image of a first color component; a second image pickup device which picks up an image of a second color component; a measurement image generator which causes a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; and a calculator which calculates, based on a first picked-up image picked up by the first image pickup device and a second picked-up image picked up by the second image pickup device, calibration data for calibrating a relative displacement between the first image pickup device and the second image pickup device.
According to another aspect of the present invention, a display characteristic measurement apparatus for a color display apparatus, comprises: a first image pickup device which picks up an image of a first color component; a second image pickup device which picks up an image of a second color component; a measurement image generator which causes a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; a first position calculator which calculates, based on a first picked-up image picked up by the first image pickup device, a position on the first image pickup device that corresponds to a specified position of the measurement image on the color display apparatus; a second position calculator which calculates, based on a second picked-up image picked up by the second image pickup device, a position on the second image pickup device that corresponds to the specified position of the measurement image on the color display apparatus; and a displacement calculator which calculates, based on first and second positions calculated by the first and second position calculators, a relative displacement between the first image pickup device and the second image pickup device.
According to another aspect of the present invention, a method for calibrating a relative displacement between a first image pickup device for picking up an image of a first color component and a second image pickup device for picking up an image of a second color component provided in a display characteristic measurement apparatus for a color display apparatus, comprises the steps of: causing a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; picking up images of the measurement image displayed on the color display apparatus by the first and second image pickup devices, respectively; and calculating, based on the picked-up images of the first and second image pickup devices, calibration data for calibrating a relative displacement between the first image pickup device and the second image pickup device.
These and other objects, features and advantages of the present invention will become more apparent upon a reading of the following detailed description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram showing a construction of a convergence measurement apparatus for a color CRT embodying the invention;
FIG. 2 is a diagram showing a crosshatched pattern displayed on a color CRT;
FIG. 3 is a diagram showing a vertical line included in a measurement area separated into lines of the respective color components of R, G, B;
FIG. 4 is a diagram showing a horizontal line included in a measurement area separated into lines of the respective color components of R, G, B;
FIG. 5 is a perspective view showing an arrangement of a color display apparatus and an image pickup unit in a convergence measurement by a stereoscopic method;
FIG. 6 is a perspective view showing hv coordinate systems defined on a sensing surface of the CCD area sensor;
FIG. 7 is a diagram showing coordinates of an optical axis and a focusing point on the sensing surface of the CCD area sensor;
FIG. 8 is a flowchart showing an operation sequence of the convergence measurement apparatus; and
FIG. 9 is a perspective view showing a conventional method for calibrating a convergence measurement apparatus using a calibration chart.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION
A convergence measurement apparatus for a color CRT embodying the invention will be described.
FIG. 1 shows a schematic construction of a convergence measurement apparatus 1 for a color CRT. The convergence measurement apparatus 1 includes an image pickup unit 2 and a measuring unit 3.
The image pickup unit 2 picks up an image of a specified measurement pattern (e.g., crosshatched pattern, dot pattern, etc.) displayed on a display surface of a color display 4 to be measured and is provided with a pair of image pickup cameras 21, 22 so as to enable the sensing of images by a stereoscopic method.
The measuring unit 3 calculates a misconvergence amount of the color display 4 using the image data of the measurement pattern obtained by the image pickup unit 2 and displays the calculation result on a display device 36.
The image pickup camera 21 in the image pickup unit 2 is a color image pickup apparatus of three-plate type which is constructed such that a dichroic prism 212 for separating a light into rays of three colors is provided behind a taking lens 211, and solid-state image sensing devices 213R, 213G, 213B including CCD area sensors are arranged in positions opposite to the emergent surfaces of the dichroic prism 212 from which the rays of the respective colors of R, G, B emerge. The image pickup camera 22 is also a color image pickup apparatus of three-plate type similar to the image pickup camera 21.
In the image pickup camera 21 are also provided an image pickup controller 214 for controlling the operations of the respective solid-state image sensing devices (hereinafter, "CCDs") 213R, 213G, 213B, a focusing circuit 215 for effecting an automatic focusing by driving the taking lens 211, and a signal processing circuit 216 for applying specified image processings to image signals sent from the CCDs 213R, 213G, 213B and outputting them to the measuring unit 3. Likewise, an image pickup controller 224, a focusing circuit 225 and a signal processing circuit 226 are provided in the image pickup camera 22.
The sensing controller 214 is controlled by a sensing control signal sent from the measurement apparatus 3 and controls the image pickup operations (electric charge storing operations) of the CCDs 213R, 213G, 213B in accordance with this sensing control signal. Likewise, the sensing controller 224 is controlled by a sensing control signal sent from the measurement apparatus 3 and controls the image pickup operations of the CCDs 223R, 223G, 223B in accordance with this sensing control signal.
The focusing control circuit 215 is controlled by a focusing control signal sent from the measurement apparatus 3 and drives a front group 211A of the taking lens 211 in accordance with this focusing control signal to focus a light image of the measurement pattern displayed on the display surface of the color display 4 on the sensing surfaces of the CCDs 213R, 213G, 213B. Likewise, the focusing control circuit 225 is controlled by a focusing control signal sent from the measurement apparatus 3 and drives a front group 221A of the taking lens 221 in accordance with this focusing control signal to focus the light image of the measurement pattern displayed on the display surface of the color display 4 on the sensing surfaces of the CCDs 223R, 223G, 223B.
The focusing control is performed, for example, according to a hill-climbing method in accordance with the focusing control signal from a controller 33. Specifically, in the case of, e.g., the image pickup camera 21, the controller 33 extracts, for example, high frequency components (the edge portion of the measurement pattern) of the green image picked up by the CCD 213G and outputs such a focusing control signal to the focusing control circuit 215 as to maximize the high frequency components (make the edge of the measurement pattern more distinctive). The focusing control circuit 215 moves the front group 211A of the taking lens 211 forward and backward with respect to an in-focus position in accordance with the focusing control signal, thereby gradually decreasing a moving distance to finally set the taking lens 211 in the in-focus position.
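A hill-climbing focus search of the kind described above can be sketched as follows. This is only an illustration under assumptions: the focus measure (sum of squared horizontal gradients of the green plane) and the step-halving schedule are choices made here, and capture_green and move_lens are hypothetical callables standing in for the camera and the focusing control circuit.

```python
import numpy as np

def focus_measure(green_plane):
    # High-frequency content of the green image: sum of squared horizontal gradients.
    g = green_plane.astype(float)
    return float(np.sum(np.diff(g, axis=1) ** 2))

def hill_climb_focus(capture_green, move_lens, step=1.0, min_step=0.05):
    """Move the lens front group back and forth, halving the step whenever the
    focus measure stops improving, until the step becomes negligible."""
    best = focus_measure(capture_green())
    direction = +1
    while step >= min_step:
        move_lens(direction * step)
        score = focus_measure(capture_green())
        if score > best:
            best = score                   # improvement: keep moving the same way
        else:
            move_lens(-direction * step)   # undo the unhelpful move
            direction = -direction         # reverse and refine
            step /= 2.0
    return best
```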
The focusing control is performed using the picked-up image in this embodiment. However, the image pickup cameras 21, 22 may, for example, be provided with distance sensors, and the taking lenses 211, 221 may be driven using distance data between the cameras 21, 22 and the display surface of the color display 4 which are detected by the distance sensors.
The measurement apparatus 3 includes analog-to-digital (A/D) converters 31A, 31B, image memories 32A, 32B, the controller 33, a data input device 34, a data output device 35 and the display device 36.
The A/D converters 31A, 31B convert the image signals (analog signals) inputted from the image pickup cameras 21, 22 into image data in the form of digital signals. The image memories 32A, 32B store the image data outputted from the A/D converters 31A, 31B, respectively.
Each of the A/D converters 31A, 31B is provided with three A/D converting circuits in conformity with the image signals of the respective color components R, G, B. Each of the image memories 32A, 32B includes a memory of three frames in conformity with the image data of the respective color components R, G, B.
The controller 33 is an operation control circuit including a microcomputer and is provided with a memory 331 including a ROM (read only memory) and a memory 332 including a RAM (random access memory).
In the memory 331 are stored a program for the convergence measurement processing (a series of operations including the driving of the optical system, the image pickup operation and the calculation on the image data) and data (correction values, a data conversion table, etc.) necessary for the calculation. Further, the memory 332 provides a data area and a work area for performing a variety of operations for the convergence measurement.
The misconvergence amount (measurement result) calculated by the controller 33 is stored in the memory 332 and outputted to the display device 36 to be displayed in a predetermined display format. The misconvergence amount can also be outputted to equipment (a printer or an external storage device) externally connected via the data output device 35.
The data input device 34 is operated to input a variety of data for the convergence measurement and includes, e.g., a keyboard. Via the data input device 34, data such as the arrangement interval of the pixels of the CCDs 213, 223 and the position of the measurement point on the display surface of the color display 4 are inputted.
The color display 4 to be measured includes a color CRT 41 for displaying a video image and a drive control circuit 42 for controlling the drive of the color CRT 41. A video signal of a measurement pattern generated by a pattern generator 5 is inputted to the drive control circuit 42 of the color display 4, which in turn drives a deflection circuit of the color CRT 41 in accordance with this video signal, thereby causing, for example, a crosshatched measurement pattern as shown in FIG. 2 to be displayed on the display surface.
In this convergence measurement apparatus 1, images of the measurement pattern displayed on the color display 4 are stereoscopically picked up by the image pickup cameras 21, 22 of the image pickup unit 2, and a misconvergence amount is measured using the image data obtained by the image pickup cameras 21, 22.
Next, a method for measuring the misconvergence amount is described, taking as an example a case where a crosshatched pattern is used as a measurement pattern.
FIG. 2 is a diagram showing a crosshatched pattern 6 displayed on the color CRT 41. The crosshatched pattern 6 is formed by intersecting a plurality of vertical lines and a plurality of horizontal lines, and is displayed in a suitable size such that a plurality of intersections are included within a display surface 41a of the color CRT 41. Misconvergence amount measurement areas A(1) to A(n) are so set in desired positions within the display surface 41a as to include at least one intersection.
In each measurement area A(r) (r=1, 2, . . . , n), a horizontal (X-direction in the XY coordinate systems) misconvergence amount ΔD_X is calculated based on a picked-up image of the vertical line included in this measurement area A(r), and a vertical (Y-direction in the XY coordinate systems) misconvergence amount ΔD_Y is calculated based on a picked-up image of the horizontal line.
FIG. 3 is a diagram showing the vertical line included in the measurement area A(r) and separated into lines of the respective color components, and FIG. 4 is a diagram showing the horizontal line included in the measurement area A(r) and separated into lines of the respective color components.
Assuming that X_R, X_G, X_B denote the luminous positions (luminance center positions) of the vertical lines of R, G, B along the X-direction on the display surface 41a of the color CRT 41, respectively, the horizontal misconvergence amount ΔD_X is expressed as displacements ΔD_RGX (=X_R−X_G), ΔD_BGX (=X_B−X_G) of the luminous positions with respect to any one of the luminous positions X_R, X_G, X_B, e.g., the luminous position X_G of the color component G.
Further, assuming that Y_R, Y_G, Y_B denote the luminous positions (luminance center positions) of the horizontal lines of R, G, B along the Y-direction on the display surface 41a of the color CRT 41, respectively, the vertical misconvergence amount ΔD_Y is expressed as displacements ΔD_RGY (=Y_R−Y_G), ΔD_BGY (=Y_B−Y_G) of the luminous positions with respect to any one of the luminous positions Y_R, Y_G, Y_B, e.g., the luminous position Y_G of the color component G.
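Stated numerically, the misconvergence amounts are simply differences of luminance center positions taken relative to the green component. A minimal sketch of these definitions follows; the function name and the example values are invented for illustration and are not from the patent.

```python
def misconvergence(x_r, x_g, x_b, y_r, y_g, y_b):
    """Misconvergence amounts relative to green, following
    dD_RGX = X_R - X_G, dD_BGX = X_B - X_G and the Y-direction analogues."""
    return {
        "dD_RGX": x_r - x_g, "dD_BGX": x_b - x_g,
        "dD_RGY": y_r - y_g, "dD_BGY": y_b - y_g,
    }

# Example: luminance centers (in millimetres) of one measurement area
print(misconvergence(120.07, 120.00, 119.95, 80.02, 80.00, 80.05))
```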
Next, a specific method for calculating the misconvergence amount is described, taking as an example a case where the image of the measurement pattern is picked up according to a stereoscopic method.
In order to facilitate the description, the color components at the light emitting side of the color display 4 are written in capital letters R (red), G (green), B (blue), and those at the light receiving side of the image pickup unit 2 are written in small letters r (red), g (green), b (blue).
First, the coordinate systems of the convergence measuring system are described. It is assumed that the image pickup unit 2 is arranged in front of the color display 4 as shown in FIG. 5 in order to pick up the image of the measurement pattern displayed on the display surface 41a of the color display 4.
In FIG. 5, the XYZ coordinate systems of the convergence measuring system, which have an origin O in an arbitrary position on a normal line passing through a center M of the display surface 41a of the color display 4, are set such that the Z-axis is parallel to this normal line, the Y-axis is parallel to the vertical direction of the display surface 41a, and the X-axis is parallel to the horizontal direction of the display surface 41a. It should be noted that the (+)-directions of the Z-axis, Y-axis and X-axis are, respectively, the direction from the origin O toward the center M, the upward direction from the origin O, and the leftward direction from the origin O when the color display 4 is viewed from the origin O.
Q_J (X_J, Y_J, Z_J) denotes the coordinates of the luminous center (luminance center position) of a phosphor of a color component J (J=R, G, B) at a measurement point Q (e.g., a cross point in a crosshatched pattern or a dot point in a dotted pattern) on the display surface 41a of the color display 4, and P_1 (X_P1, Y_P1, Z_P1), P_2 (X_P2, Y_P2, Z_P2) denote the positions or coordinates of the principal points P_1, P_2 of the taking lenses 211, 221 of the image pickup unit 2.
Further, hv coordinate systems having an origin o in the center of the sensing surface of each of the CCDs 213R, 213G, 213B, 223R, 223G, 223B are set as shown in FIG. 6 such that the h-axis is parallel to the horizontal direction of the CCD area sensor and the v-axis is parallel to the vertical direction of the CCD area sensor. It should be noted that the (+)-directions of the v-axis and the h-axis are the upward direction and the rightward direction from the origin o, respectively, when facing the sensing surface.
It is assumed that each sensing surface is in such a position that the optical axis L_J (J=R, G, B) for each color component J is incident in a position displaced from the origin o of the hv coordinate systems on its sensing surface. In this case, the coordinates of the incidence point o_j1′ of the optical axis L_J1 on the sensing surface of the CCD of the color component j (j=r, g, b) of the image pickup camera 21 and the coordinates of the incidence point o_j2′ of the optical axis L_J2 on the sensing surface of the CCD of the color component j of the image pickup camera 22 are expressed as (h_j1O, v_j1O), (h_j2O, v_j2O), respectively, as shown in FIG. 7. Further, the coordinates of the focusing point I_j1J of a light image at the measurement point Q_J of the display surface 41a on the sensing surface of the CCD of the color component j of the image pickup camera 21 and the coordinates of the focusing point I_j2J thereof on the sensing surface of the CCD of the color component j of the image pickup camera 22 are expressed as (h_j1J, v_j1J), (h_j2J, v_j2J), respectively.
Assuming that a, b, f denote the distance from a principal point of a lens to an object, the distance therefrom to an image, and the focal length of the lens, respectively, there is, in general, the relationship 1/a + 1/b = 1/f between them. If y, y′ are the sizes of the object and the image, respectively, there is the relationship y′/y = b/a between them. The following relationship (1) is obtained from the above relationships:

y′ = y·f/(a−f)  (1)
By applying Equation (1) to the aforementioned positional relationship of the measurement point Q_J and the focusing points I_j1J, I_j2J, Equations (2) to (5) can be obtained.
h_j1J′ = h_j1J − h_j1O = (X_J − X_P1)·f_j1J/(Z_J − f_j1J)  (2)

v_j1J′ = v_j1J − v_j1O = (Y_J − Y_P1)·f_j1J/(Z_J − f_j1J)  (3)

where f_j1J denotes the focal length of the optical system for the color component j of the image pickup camera 21 with respect to the emission of phosphors of the color J on the display surface 41a.

h_j2J′ = h_j2J − h_j2O = (X_J − X_P2)·f_j2J/(Z_J − f_j2J)  (4)

v_j2J′ = v_j2J − v_j2O = (Y_J − Y_P2)·f_j2J/(Z_J − f_j2J)  (5)

where f_j2J denotes the focal length of the optical system for the color component j of the image pickup camera 22 with respect to the emission of phosphors of the color J on the display surface 41a.
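Equations (2) to (5) can be read as a per-channel pinhole projection of a display point onto a sensor, offset by the point where that channel's optical axis meets the sensing surface. The following sketch only restates that relationship; the symbol names mirror the equations and are not part of any real API.

```python
def project_point(X, Y, Z, Xp, Yp, f, h0, v0):
    """Equations (2)-(5): project display point (X, Y, Z) through a lens whose
    principal point is at (Xp, Yp) with focal length f, for a channel whose
    optical axis meets the sensor at (h0, v0).  Returns sensor coordinates (h, v)."""
    scale = f / (Z - f)
    h = h0 + (X - Xp) * scale
    v = v0 + (Y - Yp) * scale
    return h, v
```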
Next, a method for calculating a misconvergence amount according to the stereoscopic method is described.
In order to facilitate the description, a case where the image pickup unit 2 is arranged such that the optical axes L_1, L_2 of the image pickup cameras 21, 22 are located in the XZ-plane is described.
If the principal points P_1, P_2 are located in the XZ-plane, their coordinates are P_1 (X_P1, 0, Z_P1), P_2 (X_P2, 0, Z_P2) since the Y-coordinate is "0".
Equations (6), (7) correspond to Equations (2), (3) with respect to the light image at the measurement point Q_J of the color display 4 and the focusing point I_j1J on the image pickup camera 21.
h_j1J′ = h_j1J − h_j1O = (X_J − X_P1)·f_j1J/(Z_J − f_j1J)  (6)

v_j1J′ = v_j1J − v_j1O = Y_J·f_j1J/(Z_J − f_j1J)  (7)
When the hv coordinates of the focusing points I_r1J′, I_g1J′, I_b1J′ on the respective sensing surfaces of the CCDs 213R, 213G, 213B are calculated by replacing j in Equations (6), (7) by the color components r, g, b, Equations (8) to (13) can be obtained.
h_r1J′ = (X_J − X_P1)·f_r1J/(Z_J − f_r1J)  (8)
v_r1J′ = Y_J·f_r1J/(Z_J − f_r1J)  (9)
h_g1J′ = (X_J − X_P1)·f_g1J/(Z_J − f_g1J)  (10)
v_g1J′ = Y_J·f_g1J/(Z_J − f_g1J)  (11)
h_b1J′ = (X_J − X_P1)·f_b1J/(Z_J − f_b1J)  (12)
v_b1J′ = Y_J·f_b1J/(Z_J − f_b1J)  (13)
Further, Equations (14), (15) correspond to Equations (2), (3) with respect to the light image at the measurement point Q_J of the color display 4 and the focusing point I_j2J on the image pickup camera 22.
h_j2J′ = (X_J − X_P2)·f_j2J/(Z_J − f_j2J)  (14)

v_j2J′ = Y_J·f_j2J/(Z_J − f_j2J)  (15)
When the hv coordinates of the focusing points I_r2J′, I_g2J′, I_b2J′ on the respective sensing surfaces of the CCDs 223R, 223G, 223B are calculated by replacing j in Equations (14), (15) by the color components r, g, b, Equations (16) to (21) can be obtained.
h_r2J′ = (X_J − X_P2)·f_r2J/(Z_J − f_r2J)  (16)
v_r2J′ = Y_J·f_r2J/(Z_J − f_r2J)  (17)
h_g2J′ = (X_J − X_P2)·f_g2J/(Z_J − f_g2J)  (18)
v_g2J′ = Y_J·f_g2J/(Z_J − f_g2J)  (19)
h_b2J′ = (X_J − X_P2)·f_b2J/(Z_J − f_b2J)  (20)
v_b2J′ = Y_J·f_b2J/(Z_J − f_b2J)  (21)
The coordinate X_J is calculated as in Equation (22) by eliminating f_r1J/(Z_J − f_r1J), f_r2J/(Z_J − f_r2J) and Y_J from Equations (8), (9), (16), (17).
X_J = (v_r1J′·h_r2J′·X_P1 − v_r2J′·h_r1J′·X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)  (22)
Further, the coordinate Y_J is calculated as in Equation (23) by eliminating f_r1J/(Z_J − f_r1J) from Equations (8), (9) and replacing X_J by Equation (22).
Y_J = v_r1J′·v_r2J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)  (23)
If Equation (23) is put into Equation (9) or (17), the coordinate Z_J is calculated as in Equation (24) or (25).
Z_J = f_r1J + {f_r1J·v_r2J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)}  (24)

 = f_r2J + {f_r2J·v_r1J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)}  (25)
Equations (22) to (25) are equations for calculating the XYZ coordinates of the measurement point Q_J on the display surface 41a from the red component images picked up by the image pickup cameras 21, 22. Accordingly, if calculations similar to the above are performed using Equations (10), (11), (18), (19), there can be obtained equations for calculating the XYZ coordinates of the measurement point Q_J on the display surface 41a from the green component images picked up by the image pickup cameras 21, 22. Further, if calculations similar to the above are performed using Equations (12), (13), (20), (21), there can be obtained equations for calculating the XYZ coordinates of the measurement point Q_J on the display surface 41a from the blue component images picked up by the image pickup cameras 21, 22.
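A compact way to express Equations (22) to (25) in code is the following triangulation sketch, which recovers X, Y and Z from the axis-relative sensor coordinates of one color channel in the two cameras. The names are illustrative, and the Y = 0 assumption for the principal points follows the text above.

```python
def triangulate(h1p, v1p, h2p, v2p, Xp1, Xp2, f1):
    """Equations (22)-(24): recover the XYZ coordinates of a display point from
    its axis-relative sensor coordinates (h', v') in cameras 1 and 2.
    Xp1, Xp2 are the X coordinates of the lens principal points (Y_P = 0),
    f1 the focal length used in the Z expression of Equation (24)."""
    denom = v1p * h2p - v2p * h1p
    X = (v1p * h2p * Xp1 - v2p * h1p * Xp2) / denom     # Eq. (22)
    Y = v1p * v2p * (Xp1 - Xp2) / denom                 # Eq. (23)
    Z = f1 + f1 * v2p * (Xp1 - Xp2) / denom             # Eq. (24); Eq. (25) is the camera-2 form
    return X, Y, Z
```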
Assuming that the measurement points on the display surface 41a calculated from the picked-up images of r, g, b of the image pickup unit 2 are Q_Jr, Q_Jg, Q_Jb and that their XYZ coordinates are Q_Jr (X_Jr, Y_Jr, Z_Jr), Q_Jg (X_Jg, Y_Jg, Z_Jg), Q_Jb (X_Jb, Y_Jb, Z_Jb), the respective XYZ coordinates of the measurement points Q_Jr, Q_Jg, Q_Jb are calculated in accordance with Equations (26) to (37):
X_Jr = (v_r1J′·h_r2J′·X_P1 − v_r2J′·h_r1J′·X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)  (26)
Y_Jr = v_r1J′·v_r2J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)  (27)
Z_Jr = f_r1J + {f_r1J·v_r2J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)}  (28)
 = f_r2J + {f_r2J·v_r1J′·(X_P1 − X_P2)/(v_r1J′·h_r2J′ − v_r2J′·h_r1J′)}  (29)
X_Jg = (v_g1J′·h_g2J′·X_P1 − v_g2J′·h_g1J′·X_P2)/(v_g1J′·h_g2J′ − v_g2J′·h_g1J′)  (30)
Y_Jg = v_g1J′·v_g2J′·(X_P1 − X_P2)/(v_g1J′·h_g2J′ − v_g2J′·h_g1J′)  (31)
Z_Jg = f_g1J + {f_g1J·v_g2J′·(X_P1 − X_P2)/(v_g1J′·h_g2J′ − v_g2J′·h_g1J′)}  (32)
 = f_g2J + {f_g2J·v_g1J′·(X_P1 − X_P2)/(v_g1J′·h_g2J′ − v_g2J′·h_g1J′)}  (33)
X_Jb = (v_b1J′·h_b2J′·X_P1 − v_b2J′·h_b1J′·X_P2)/(v_b1J′·h_b2J′ − v_b2J′·h_b1J′)  (34)
Y_Jb = v_b1J′·v_b2J′·(X_P1 − X_P2)/(v_b1J′·h_b2J′ − v_b2J′·h_b1J′)  (35)
Z_Jb = f_b1J + {f_b1J·v_b2J′·(X_P1 − X_P2)/(v_b1J′·h_b2J′ − v_b2J′·h_b1J′)}  (36)
 = f_b2J + {f_b2J·v_b1J′·(X_P1 − X_P2)/(v_b1J′·h_b2J′ − v_b2J′·h_b1J′)}  (37)
As described above, with respect to, e.g., the measurement point Q_R of the red phosphor on the display surface 41a, three measurement points Q_Rr, Q_Rg, Q_Rb are calculated, one for each of the images of the respective color components r, g, b obtained by the image pickup unit 2. Accordingly, if the measurement value of the measurement point Q_R is determined by weighted average values of Q_Rr, Q_Rg, Q_Rb calculated for the images of the respective color components r, g, b, the XYZ coordinates of the measurement point Q_R are calculated in accordance with Equations (38) to (41):
X_R = W_rR{(v_r1R′·h_r2R′·X_P1 − v_r2R′·h_r1R′·X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}
 + W_gR{(v_g1R′·h_g2R′·X_P1 − v_g2R′·h_g1R′·X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}
 + W_bR{(v_b1R′·h_b2R′·X_P1 − v_b2R′·h_b1R′·X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}  (38)

Y_R = W_rR{v_r1R′·v_r2R′·(X_P1 − X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}
 + W_gR{v_g1R′·v_g2R′·(X_P1 − X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}
 + W_bR{v_b1R′·v_b2R′·(X_P1 − X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}  (39)

Z_R = W_rR[f_r1R + {f_r1R·v_r2R′·(X_P1 − X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}]
 + W_gR[f_g1R + {f_g1R·v_g2R′·(X_P1 − X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}]
 + W_bR[f_b1R + {f_b1R·v_b2R′·(X_P1 − X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}]  (40)

 = W_rR[f_r2R + {f_r2R·v_r1R′·(X_P1 − X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}]
 + W_gR[f_g2R + {f_g2R·v_g1R′·(X_P1 − X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}]
 + W_bR[f_b2R + {f_b2R·v_b1R′·(X_P1 − X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}]  (41)

where W_rR, W_gR, W_bR are weight factors.
Likewise, if the measurement value of the measurement point Q_G is determined by weighted average values of Q_Gr, Q_Gg, Q_Gb calculated for the images of the respective color components r, g, b, the XYZ coordinates of the measurement point Q_G are calculated in accordance with Equations (42) to (45). Further, if the measurement value of the measurement point Q_B is determined by weighted average values of Q_Br, Q_Bg, Q_Bb calculated for the images of the respective color components r, g, b, the XYZ coordinates of the measurement point Q_B are calculated in accordance with Equations (46) to (49).
X_G = W_rG{(v_r1G′·h_r2G′·X_P1 − v_r2G′·h_r1G′·X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{(v_g1G′·h_g2G′·X_P1 − v_g2G′·h_g1G′·X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{(v_b1G′·h_b2G′·X_P1 − v_b2G′·h_b1G′·X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}  (42)

Y_G = W_rG{v_r1G′·v_r2G′·(X_P1 − X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{v_g1G′·v_g2G′·(X_P1 − X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{v_b1G′·v_b2G′·(X_P1 − X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}  (43)

Z_G = W_rG[f_r1G + {f_r1G·v_r2G′·(X_P1 − X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}]
 + W_gG[f_g1G + {f_g1G·v_g2G′·(X_P1 − X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}]
 + W_bG[f_b1G + {f_b1G·v_b2G′·(X_P1 − X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (44)

 = W_rG[f_r2G + {f_r2G·v_r1G′·(X_P1 − X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}]
 + W_gG[f_g2G + {f_g2G·v_g1G′·(X_P1 − X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}]
 + W_bG[f_b2G + {f_b2G·v_b1G′·(X_P1 − X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (45)

where W_rG, W_gG, W_bG are weight factors.
X_B = W_rB{(v_r1B′·h_r2B′·X_P1 − v_r2B′·h_r1B′·X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}
 + W_gB{(v_g1B′·h_g2B′·X_P1 − v_g2B′·h_g1B′·X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}
 + W_bB{(v_b1B′·h_b2B′·X_P1 − v_b2B′·h_b1B′·X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}  (46)

Y_B = W_rB{v_r1B′·v_r2B′·(X_P1 − X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}
 + W_gB{v_g1B′·v_g2B′·(X_P1 − X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}
 + W_bB{v_b1B′·v_b2B′·(X_P1 − X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}  (47)

Z_B = W_rB[f_r1B + {f_r1B·v_r2B′·(X_P1 − X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}]
 + W_gB[f_g1B + {f_g1B·v_g2B′·(X_P1 − X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}]
 + W_bB[f_b1B + {f_b1B·v_b2B′·(X_P1 − X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}]  (48)

 = W_rB[f_r2B + {f_r2B·v_r1B′·(X_P1 − X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}]
 + W_gB[f_g2B + {f_g2B·v_g1B′·(X_P1 − X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}]
 + W_bB[f_b2B + {f_b2B·v_b1B′·(X_P1 − X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}]  (49)

where W_rB, W_gB, W_bB are weight factors.
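The weighted averaging of Equations (38) to (49) reduces, for each measurement point, to combining the three per-channel XYZ estimates with the weight factors. A minimal sketch follows, assuming the weights for each point sum to 1; the function and dictionary layout are illustrative only.

```python
def weighted_point(points, weights):
    """Equations (38)-(49): combine the per-channel estimates of one measurement
    point into a single weighted-average position.
    points  : {'r': (X, Y, Z), 'g': (X, Y, Z), 'b': (X, Y, Z)} from the r, g, b images
    weights : {'r': W_r, 'g': W_g, 'b': W_b}, assumed here to sum to 1"""
    return tuple(
        sum(weights[c] * points[c][axis] for c in ("r", "g", "b"))
        for axis in range(3)
    )
```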
Accordingly, if Equations (38), (42), (46) are put into the aforementioned misconvergence amounts ΔD_RGX (=X_R−X_G), ΔD_BGX (=X_B−X_G), the equations for calculating the horizontal misconvergence amounts ΔD_RGX, ΔD_BGX are defined as in Equations (50), (51):
ΔD_RGX = X_R − X_G
 = [W_rR{(v_r1R′·h_r2R′·X_P1 − v_r2R′·h_r1R′·X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}
 + W_gR{(v_g1R′·h_g2R′·X_P1 − v_g2R′·h_g1R′·X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}
 + W_bR{(v_b1R′·h_b2R′·X_P1 − v_b2R′·h_b1R′·X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}]
 − [W_rG{(v_r1G′·h_r2G′·X_P1 − v_r2G′·h_r1G′·X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{(v_g1G′·h_g2G′·X_P1 − v_g2G′·h_g1G′·X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{(v_b1G′·h_b2G′·X_P1 − v_b2G′·h_b1G′·X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (50)

ΔD_BGX = X_B − X_G
 = [W_rB{(v_r1B′·h_r2B′·X_P1 − v_r2B′·h_r1B′·X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}
 + W_gB{(v_g1B′·h_g2B′·X_P1 − v_g2B′·h_g1B′·X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}
 + W_bB{(v_b1B′·h_b2B′·X_P1 − v_b2B′·h_b1B′·X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}]
 − [W_rG{(v_r1G′·h_r2G′·X_P1 − v_r2G′·h_r1G′·X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{(v_g1G′·h_g2G′·X_P1 − v_g2G′·h_g1G′·X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{(v_b1G′·h_b2G′·X_P1 − v_b2G′·h_b1G′·X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (51)
Further, if Equations (39), (43), (47) are put into the aforementioned misconvergence amounts ΔD_RGY (=Y_R−Y_G), ΔD_BGY (=Y_B−Y_G), the equations for calculating the vertical misconvergence amounts ΔD_RGY, ΔD_BGY are defined as in Equations (52), (53):
ΔD_RGY = Y_R − Y_G
 = [W_rR{v_r1R′·v_r2R′·(X_P1 − X_P2)/(v_r1R′·h_r2R′ − v_r2R′·h_r1R′)}
 + W_gR{v_g1R′·v_g2R′·(X_P1 − X_P2)/(v_g1R′·h_g2R′ − v_g2R′·h_g1R′)}
 + W_bR{v_b1R′·v_b2R′·(X_P1 − X_P2)/(v_b1R′·h_b2R′ − v_b2R′·h_b1R′)}]
 − [W_rG{v_r1G′·v_r2G′·(X_P1 − X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{v_g1G′·v_g2G′·(X_P1 − X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{v_b1G′·v_b2G′·(X_P1 − X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (52)

ΔD_BGY = Y_B − Y_G
 = [W_rB{v_r1B′·v_r2B′·(X_P1 − X_P2)/(v_r1B′·h_r2B′ − v_r2B′·h_r1B′)}
 + W_gB{v_g1B′·v_g2B′·(X_P1 − X_P2)/(v_g1B′·h_g2B′ − v_g2B′·h_g1B′)}
 + W_bB{v_b1B′·v_b2B′·(X_P1 − X_P2)/(v_b1B′·h_b2B′ − v_b2B′·h_b1B′)}]
 − [W_rG{v_r1G′·v_r2G′·(X_P1 − X_P2)/(v_r1G′·h_r2G′ − v_r2G′·h_r1G′)}
 + W_gG{v_g1G′·v_g2G′·(X_P1 − X_P2)/(v_g1G′·h_g2G′ − v_g2G′·h_g1G′)}
 + W_bG{v_b1G′·v_b2G′·(X_P1 − X_P2)/(v_b1G′·h_b2G′ − v_b2G′·h_b1G′)}]  (53)
In order to calculate the misconvergence amounts ΔD_RGX, ΔD_BGX, ΔD_RGY, ΔD_BGY highly accurately using Equations (50) to (53), it is necessary to precisely calculate the hv coordinates (h_j1O, v_j1O), (h_j2O, v_j2O) of the incidence points o_j1′, o_j2′ of the optical axes L_J1, L_J2 corresponding to the respective CCDs 213R to 213B, 223R to 223B and to align the reference points of the hv coordinate systems on the respective sensing surfaces of the CCDs 213R to 213B, 223R to 223B.
Next, a method for calculating the coordinates of the incidence points o_j1′, o_j2′ of the optical axes L_J1, L_J2 (J=R, G, B) and calibrating the measuring system is described.
In the convergence measurement, the misconvergence amounts are calculated as relative displacements of the luminance center positions with respect to an arbitrary one of the color components R, G, B. Accordingly, the use of the data for calibrating the displacements of the sensing surfaces of the CCDs 213R to 213B, 223R to 223B (the coordinate data of the incidence points o_j1′, o_j2′ of the respective optical axes L_J1, L_J2) as data that are relative between the color components is considered to have little influence on the measurement accuracy.
Thus, according to this embodiment, the same measurement pattern as that used for the measurement is used for the calibration, and the coordinates of the incidence points o_j1′, o_j2′ of the optical axes L_J1, L_J2 on the sensing surfaces of the CCDs 213R to 213B, 223R to 223B are calculated, as described below, based on the focusing points I_j1J, I_j2J (j=r, g, b; J=R, G, B) corresponding to the measurement points Q_J in the images of the respective color components r, g, b obtained by picking up an image of this measurement pattern.
Specifically, the measurement pattern is displayed on the color display 4 by making only the green phosphors luminous. The coordinates of a measurement point Q_C at this time are assumed to be Q_C (X_C, Y_C, Z_C); the coordinates of the focusing points I_r1C, I_g1C, I_b1C of the luminous light image of the measurement point Q_C on the sensing surfaces of the CCDs 213R, 213G, 213B to be I_r1C (h_r1C, v_r1C), I_g1C (h_g1C, v_g1C), I_b1C (h_b1C, v_b1C); and the coordinates of the focusing points I_r2C, I_g2C, I_b2C of the luminous light image of the measurement point Q_C on the sensing surfaces of the CCDs 223R, 223G, 223B to be I_r2C (h_r2C, v_r2C), I_g2C (h_g2C, v_g2C), I_b2C (h_b2C, v_b2C).
This calibration differs from an actual measurement only in that the measurement pattern on the color display 4 is obtained by making only the green phosphors luminous. Accordingly, the equations corresponding to Equations (26), (27), (30), (31), (34), (35) are expressed as follows for the focusing points I_r1C, I_g1C, I_b1C, I_r2C, I_g2C, I_b2C.
X_Cr = (v_r1C′·h_r2C′·X_P1 − v_r2C′·h_r1C′·X_P2)/(v_r1C′·h_r2C′ − v_r2C′·h_r1C′)  (54)
Y_Cr = v_r1C′·v_r2C′·(X_P1 − X_P2)/(v_r1C′·h_r2C′ − v_r2C′·h_r1C′)  (55)

where
v_r1C′ = v_r1C − v_r1O, h_r1C′ = h_r1C − h_r1O, v_r2C′ = v_r2C − v_r2O, h_r2C′ = h_r2C − h_r2O

X_Cg = (v_g1C′·h_g2C′·X_P1 − v_g2C′·h_g1C′·X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (56)
Y_Cg = v_g1C′·v_g2C′·(X_P1 − X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (57)

where
v_g1C′ = v_g1C − v_g1O, h_g1C′ = h_g1C − h_g1O, v_g2C′ = v_g2C − v_g2O, h_g2C′ = h_g2C − h_g2O

X_Cb = (v_b1C′·h_b2C′·X_P1 − v_b2C′·h_b1C′·X_P2)/(v_b1C′·h_b2C′ − v_b2C′·h_b1C′)  (58)
Y_Cb = v_b1C′·v_b2C′·(X_P1 − X_P2)/(v_b1C′·h_b2C′ − v_b2C′·h_b1C′)  (59)

where
v_b1C′ = v_b1C − v_b1O, h_b1C′ = h_b1C − h_b1O, v_b2C′ = v_b2C − v_b2O, h_b2C′ = h_b2C − h_b2O
Since Equations (54), (56), (58) are equal to each other and Equations (55), (57), (59) are equal to each other, Equations (60) to (63) can be obtained from these Equations (54) to (59).
(v_r1C′·h_r2C′·X_P1 − v_r2C′·h_r1C′·X_P2)/(v_r1C′·h_r2C′ − v_r2C′·h_r1C′)
 = (v_g1C′·h_g2C′·X_P1 − v_g2C′·h_g1C′·X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (60)

(v_b1C′·h_b2C′·X_P1 − v_b2C′·h_b1C′·X_P2)/(v_b1C′·h_b2C′ − v_b2C′·h_b1C′)
 = (v_g1C′·h_g2C′·X_P1 − v_g2C′·h_g1C′·X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (61)

v_r1C′·v_r2C′·(X_P1 − X_P2)/(v_r1C′·h_r2C′ − v_r2C′·h_r1C′)
 = v_g1C′·v_g2C′·(X_P1 − X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (62)

v_b1C′·v_b2C′·(X_P1 − X_P2)/(v_b1C′·h_b2C′ − v_b2C′·h_b1C′)
 = v_g1C′·v_g2C′·(X_P1 − X_P2)/(v_g1C′·h_g2C′ − v_g2C′·h_g1C′)  (63)
Since a total of 12 coordinates o_r1′ (h_r1O, v_r1O), o_g1′ (h_g1O, v_g1O), o_b1′ (h_b1O, v_b1O), o_r2′ (h_r2O, v_r2O), o_g2′ (h_g2O, v_g2O), o_b2′ (h_b2O, v_b2O) are to be calculated, at least 12 equations corresponding to the above Equations (60) to (63) are obtained for at least three measurement points Q_C(i) (i=1, 2, 3), and the coordinates o_j1′ (h_j1O, v_j1O), o_j2′ (h_j2O, v_j2O) (j=r, g, b) are calculated by solving these equations. The twelve simultaneous equations are solved according to a known numerical calculation method by the controller 33.
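As one possible realization of the "known numerical calculation method" mentioned above, the sketch below stacks the residuals implied by Equations (60) to (63) for three calibration intersections and solves for the six axis incidence points with a generic least-squares routine. The use of scipy.optimize.least_squares, the parameter layout and the data structure for the picked-up points are assumptions made for illustration, not the solver prescribed by the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(axes, points, Xp1, Xp2):
    """axes   : 12-vector [h_r1O, v_r1O, h_g1O, v_g1O, h_b1O, v_b1O,
                           h_r2O, v_r2O, h_g2O, v_g2O, h_b2O, v_b2O]
    points : list of dicts, one per calibration intersection, with raw sensor
             coordinates points[i][(j, cam)] = (h, v) for j in 'rgb', cam in (1, 2)."""
    off = {('r', 1): axes[0:2], ('g', 1): axes[2:4], ('b', 1): axes[4:6],
           ('r', 2): axes[6:8], ('g', 2): axes[8:10], ('b', 2): axes[10:12]}

    def xy(j, pt):
        # Axis-relative coordinates, then Equations (54)-(59) for X and Y.
        (h1, v1), (h2, v2) = pt[(j, 1)], pt[(j, 2)]
        h1p, v1p = h1 - off[(j, 1)][0], v1 - off[(j, 1)][1]
        h2p, v2p = h2 - off[(j, 2)][0], v2 - off[(j, 2)][1]
        d = v1p * h2p - v2p * h1p
        return ((v1p * h2p * Xp1 - v2p * h1p * Xp2) / d,
                v1p * v2p * (Xp1 - Xp2) / d)

    res = []
    for pt in points:
        xg, yg = xy('g', pt)
        for j in ('r', 'b'):        # Equations (60)-(63): r and b must agree with g
            xj, yj = xy(j, pt)
            res.extend([xj - xg, yj - yg])
    return res

# Usage sketch: sol = least_squares(residuals, np.zeros(12), args=(points, Xp1, Xp2))
```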
Although a simple optical model is described in this embodiment, the coordinates o_j1′ (h_j1O, v_j1O), o_j2′ (h_j2O, v_j2O) (j=r, g, b) can be calculated according to a similar method for a model considering a distortion, a model considering a wavelength dependency of the principal points P_1, P_2 of the taking lenses 211, 221, a model in which the optical axes L_R, L_G, L_B of the respective color components and the principal points P_1, P_2 are not located in the same plane, and other models obtained by combining the above models.
Next, an operation sequence of the convergence measurement is described.
FIG. 8 is a flowchart showing the operation sequence. The operation sequence shown in FIG. 8 includes a calibration procedure (Steps #1 to #7) for calculating the data for calibrating the displacements of the CCDs 213R to 213B, 223R to 223B (the data on the coordinates o_j1′ (h_j1O, v_j1O), o_j2′ (h_j2O, v_j2O) (j=r, g, b) of the incidence points of the optical axes L_J1, L_J2 (J=R, G, B)) and a measurement procedure (Steps #9 to #15) for actually calculating the misconvergence amounts.
The calibration procedure may be performed for each color display 4 to be measured on the production line. However, since the displacements of the CCDs do not vary largely unless the temperature and/or the humidity vary largely, the calibration procedure may instead be performed only when the production line is started or when the measurement apparatus is transferred to a place having a different environment.
In FIG. 8, a specified crosshatched measurement pattern (hereinafter, "calibration pattern") is first displayed on the color display 4 by making only the green phosphors luminous (Step #1). This calibration pattern is displayed in such a size that at least three intersections Q_C(1), Q_C(2), Q_C(3) are included in the sensing frame of the image pickup unit 2.
Subsequently, an image of the calibration pattern is picked up by the image pickup unit 2, and the hv coordinates of the focusing points I_r1C(i) (h_r1C(i), v_r1C(i)), I_g1C(i) (h_g1C(i), v_g1C(i)), I_b1C(i) (h_b1C(i), v_b1C(i)), I_r2C(i) (h_r2C(i), v_r2C(i)), I_g2C(i) (h_g2C(i), v_g2C(i)), I_b2C(i) (h_b2C(i), v_b2C(i)) (i=1, 2, 3) of the CCDs 213R, 213G, 213B, 223R, 223G, 223B corresponding to the intersections Q_C(1), Q_C(2), Q_C(3) are calculated using the picked-up images (Step #3).
Three sets of the simultaneous Equations (60) to (63) (a total of 12 equations) are generated using the calculated hv coordinates of the focusing points I_r1C(i), I_g1C(i), I_b1C(i), I_r2C(i), I_g2C(i), I_b2C(i) (i=1, 2, 3) (Step #5). By solving the 12 simultaneous equations according to the numerical calculation method, the hv coordinates of the incidence points o_r1′ (h_r1O, v_r1O), o_g1′ (h_g1O, v_g1O), o_b1′ (h_b1O, v_b1O), o_r2′ (h_r2O, v_r2O), o_g2′ (h_g2O, v_g2O), o_b2′ (h_b2O, v_b2O) of the optical axes L_R1, L_G1, L_B1, L_R2, L_G2, L_B2 of the respective color components are determined (Step #7). Then, the calibration procedure ends.
Subsequently, a specified white crosshatched measurement pattern is displayed on the color display 4 (Step #9). This measurement pattern is displayed in such a size that at least one intersection Q_J is included in the sensing frame of the image pickup unit 2.
Subsequently, an image of the measurement pattern is picked up by the image pickup unit 2, and the hv coordinates of the focusing points I_r1J (h_r1J, v_r1J), I_g1J (h_g1J, v_g1J), I_b1J (h_b1J, v_b1J), I_r2J (h_r2J, v_r2J), I_g2J (h_g2J, v_g2J), I_b2J (h_b2J, v_b2J) of the CCDs 213R, 213G, 213B, 223R, 223G, 223B corresponding to the intersection Q_J are calculated using the picked-up images (Step #11).
The horizontal (X-direction) misconvergence amounts ΔD_RGX, ΔD_BGX are calculated in accordance with Equations (50), (51) using the hv coordinates of the incidence points o_r1′, o_g1′, o_b1′, o_r2′, o_g2′, o_b2′ of the optical axes L_R1, L_G1, L_B1, L_R2, L_G2, L_B2, the hv coordinates of the focusing points I_r1J, I_g1J, I_b1J, I_r2J, I_g2J, I_b2J, and the coordinates of the principal points P_1, P_2 (known) of the taking lenses 211, 221 (Step #13). The calculation results are displayed on the display device 36 in a specified display format (Step #15). Then, the measurement procedure ends.
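Read end to end, the flowchart of FIG. 8 amounts to a calibrate-once, then-measure sequence. The outline below restates it in code form; every function it calls (show_crosshatch, pick_up, locate_focusing_points, and so on) is a hypothetical placeholder standing in for the operations of Steps #1 to #15, not an API defined by the patent.

```python
def run_convergence_measurement(display, cameras, locate_focusing_points,
                                solve_axis_offsets, compute_misconvergence):
    # --- Calibration procedure (Steps #1 to #7) ---
    display.show_crosshatch(color="green")              # Step #1: green-only calibration pattern
    calib_images = [cam.pick_up() for cam in cameras]   # Step #3: pick up r, g, b planes
    points = locate_focusing_points(calib_images, n_intersections=3)
    axis_offsets = solve_axis_offsets(points)           # Steps #5, #7: solve the 12 equations

    # --- Measurement procedure (Steps #9 to #15) ---
    display.show_crosshatch(color="white")              # Step #9: white measurement pattern
    meas_images = [cam.pick_up() for cam in cameras]    # Step #11: focusing points per channel
    focusing = locate_focusing_points(meas_images, n_intersections=1)
    result = compute_misconvergence(focusing, axis_offsets)  # Step #13: Equations (50)-(53)
    return result                                        # Step #15: caller displays the result
```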
As described above, the specified calibration pattern is displayed by making only the green phosphors luminous, and the displacement amounts of the CCDs 213R to 213B, 223R to 223B of the image pickup unit 2 are calculated as relative displacement amounts using the picked-up images of the calibration pattern. Accordingly, the calibration data for the displacements of the CCDs 213, 223 due to temperature and/or humidity changes can be calculated easily and quickly, facilitating the calibration of the convergence measurement.
Further, since the calibration data can be calculated using the color display 4 to be measured, the need for a conventional calibration chart can be obviated. This results in a simplified construction and an improved operability of the apparatus.
Only the green phosphors are made luminous to obtain the calibration pattern in the foregoing embodiment. This is because the use of the light emitted from the green phosphors, to which the red and blue image pickup devices also have a sensitivity, enables a quicker measurement. As an alternative, an image of a calibration pattern obtained by making only the red phosphors luminous and an image of a calibration pattern obtained by making only the blue phosphors luminous may be picked up, and the relative displacement amounts of the respective CCDs 213R to 213B, 223R to 223B may be calculated using the image data of the respective color components R, G, B obtained from the picked-up images.
In the foregoing embodiment, the convergence measurement apparatus 1 is of the type which picks up an image of the measurement pattern according to the stereoscopic method using the image pickup unit 2 including a pair of image pickup cameras 21, 22. However, according to the invention, the image of the measurement pattern need not necessarily be picked up according to the stereoscopic method. The invention is also applicable to a display characteristic measurement apparatus which adopts an image pickup unit having a single image pickup camera.
Although the convergence measurement apparatus for a color CRT is described in the foregoing embodiment, a display characteristic measurement apparatus according to the invention is also applicable to color display apparatuses such as a projection type color display, a color LCD, and a color plasma display, or to the measurement of a luminance center position of a monochromatic display apparatus or the measurement of a geometric image distortion.
Although the image pickup unit including three CCDs of primary colors is described in the foregoing embodiment, the image pickup unit is not limited to this type. It may include two image pickup devices (which may be pickup tubes), and the color filters may be of primary color type or complementary color type.
As described above, a measurement pattern is displayed in a single color to which both a first color image pickup device for picking up an image of a first color component and a second color image pickup device for picking up an image of a second color component have a sensitivity. The positions of measurement points of the measurement pattern on the respective sensing surfaces are detected using the images of the measurement pattern picked up by the first and second color image pickup devices, and a displacement of the position of the measurement point on one sensing surface with respect to that on the other sensing surface is calculated as calibration data for calibrating a relative displacement of the positions of the images picked by the first and second image pickup devices. Accordingly, the calibration procedure can be easily and quickly performed.
Further, the use of the image to be displayed on the color display apparatus to be measured for the calibration obviates the need for the conventional chart special for the calibration. This leads to a simplified construction and an improved operability of the display characteristic measurement apparatus.
Particularly, in the color image pickup device in which the image is picked up while being separated into images of three primary color components, the measurement pattern is made luminous in green to which both red and blue image pickup devices have a sensitivity. Thus, by picking up the image of the measurement pattern for the calibration only once, the calibration data can be calculated using the picked image, enabling the calibration procedure to be performed at a higher speed.
Although the present invention has been fully described by way of example with reference to the accompanying drawings, it is to be understood that various changes and modifications will be apparent to those skilled in the art. Therefore, unless otherwise such changes and modifications depart from the scope of the present invention, they should be construed as being included therein.
Claims
- 1. A display characteristic measurement apparatus for a color display apparatus, comprising: a first image pickup device which picks up an image of a first color component; a second image pickup device which picks up an image of a second color component; a measurement image generator which causes a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; and a calculator which calculates, based on a first picked-up image picked up by the first image pickup device and a second picked-up image picked up by the second image pickup device, calibration data for calibrating a relative displacement between the first image pickup device and the second image pickup device.
- 2. A display characteristic measurement apparatus according to claim 1, wherein the calculator calculates: a first position data about a specified position of the measurement image on the color display apparatus based on the first picked-up image; a second position data about the specified position of the measurement image on the color display apparatus based on the second picked-up image; and calibration data based on the calculated first and second position data.
- 3. A display characteristic measurement apparatus according to claim 1, wherein: the first color component is one of the red, green, and blue primary color components; the second color component is another of the red, green, and blue primary color components; and the measurement image is displayed in green color.
- 4. A display characteristic measurement apparatus according to claim 3, wherein the measurement image is displayed by making only green phosphors luminous.
- 5. A display characteristic measurement apparatus according to claim 3, further comprising a third image pickup device which picks up an image of a third color component, wherein the calculator calculates, based on the first picked-up image, the second picked-up image, and a third picked-up image picked up by the third image pickup device, calibration data for calibrating a relative displacement between the first, second, and third image pickup devices.
- 6. A display characteristic measurement apparatus according to claim 1 is a convergence measurement apparatus which measures a relative displacement of respective luminance centers of a given number of color components on the color display apparatus as a misconvergence amount.
- 7. A display characteristic measurement apparatus according to claim 6, wherein the measurement image generator further causes the color display apparatus to display a second measurement image thereon.
- 8. A display characteristic measurement apparatus according to claim 7, wherein the second measurement image is a white pattern image.
- 9. A display characteristic measurement apparatus according to claim 7, wherein the second measurement image is a crosshatched pattern.
- 10. A display characteristic measurement apparatus for a color display apparatus, comprising: a first image pickup device which picks up an image of a first color component; a second image pickup device which picks up an image of a second color component; a measurement image generator which causes a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; a first position calculator which calculates, based on a first picked-up image picked up by the first image pickup device, a position on the first image pickup device that corresponds to a specified position of the measurement image on the color display apparatus; a second position calculator which calculates, based on a second picked-up image picked up by the second image pickup device, a position on the second image pickup device that corresponds to the specified position of the measurement image on the color display apparatus; and a displacement calculator which calculates, based on first and second positions calculated by the first and second position calculators, a relative displacement between the first image pickup device and the second image pickup device.
- 11. A display characteristic measurement apparatus according to claim 10, wherein: the first color component is one of the red, green, and blue primary color components; the second color component is another of the red, green, and blue primary color components; and the measurement image is displayed in green color.
- 12. A display characteristic measurement apparatus according to claim 11, wherein the measurement image is displayed by making only green phosphors luminous.
- 13. A display characteristic measurement apparatus according to claim 11, further comprising a third image pickup device which picks up an image of a third color component, wherein the displacement calculator calculates, based on the first picked-up image, the second picked-up image, and a third picked-up image picked up by the third image pickup device, a relative displacement between the first, second, and third image pickup devices.
- 14. A display characteristic measurement apparatus according to claim 10 is a convergence measurement apparatus which measures a relative displacement of respective luminance centers of a given number of color components on the color display apparatus as a misconvergence amount.
- 15. A method for calibrating a relative displacement between a first image pickup device for picking up an image of a first color component and a second image pickup device for picking up an image of a second color component provided in a display characteristic measurement apparatus for a color display apparatus, comprising the steps: causing a color display apparatus to display a measurement image thereon, the measurement image being displayed in a single color to which both the first image pickup device and the second image pickup device are sensible; picking up images of the measurement image displayed on the color display apparatus by the first and second image pickup devices, respectively; calculating, based on picked-up images of the first and second image pickup devices, calibration data for calibrating a relative displacement between the first image pickup device and the second image pickup device.
- 16. A method according to claim 15, wherein: the first color component is one of the red, green, and blue primary color components; the second color component is another of the red, green, and blue primary color components; and the measurement image is displayed in green color.
- 17. A method according to claim 16, wherein the measurement image is displayed by making only green phosphors luminous.
Priority Claims (1)
- Number: 9-190194; Date: Jul 1997; Country: JP
US Referenced Citations (15)
Foreign Referenced Citations (1)
- Number: 11098542; Date: Apr 1999; Country: JP