1. Technical Field
The present invention relates to a similarity determination apparatus, a similarity determination system, and a similarity determination method, in which similarity in color or the like is determined, and the sameness of a target item with respect to a reference item is determined.
2. Background Art
For example, in inspecting products on a manufacturing line, the colors of a product are checked by using an FA (Factory Automation) camera in which an area sensor having image capturing elements such as a CCD or a CMOS is incorporated, and a product whose color difference exceeds a certain threshold is excluded as defective. The color determination with an FA camera uses spectral characteristics of the image capturing elements, and a determination is made according to the similarity in RGB brightness values. However, such a method is dependent on the spectral sensitivity of the image capturing elements, and a slight color difference cannot be detected. Moreover, there has been a problem that accuracy in determination of a certain color is significantly low.
Further, the conventional color cameras, such as the FA cameras used for color determination as described above, are designed such that the spectral sensitivity of the sensor is similar to human visual sensation. However, the color captured by the camera differs depending on the environmental illumination and on the positional relationship between a specimen and the illumination light, which affects the determination result. For this reason, determination eventually relies upon visual inspection in some cases. If determination relies upon visual checking by a person, inconsistency in inspection accuracy is inevitable.
Example embodiments of the present invention include a similarity determination apparatus, a similarity determination system, and a similarity determination method, each of which calculates spectral information of an object from capturing information detected by an image capturing device, the object including 1) a reference item and 2) a target item being a subject for color determination, transforms the spectral information of the object into characteristic quantity, generates a similarity determination criterion from one or a plurality of items of the characteristic quantity of the reference item of the object, and checks the characteristic quantity of the target item being a subject for color determination against the similarity determination criterion to determine similarity of the target item with reference to the reference item.
A more complete appreciation of the disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings.
The accompanying drawings are intended to depict example embodiments of the present invention and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “includes” and/or “including”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In describing example embodiments shown in the drawings, specific terminology is employed for the sake of clarity. However, the present disclosure is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.
In the following description, illustrative embodiments will be described with reference to acts and symbolic representations of operations (e.g., in the form of flowcharts) that may be implemented as program modules or functional processes including routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types and may be implemented using existing hardware at existing network elements or control nodes. Such existing hardware may include one or more Central Processing Units (CPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), computers, or the like. These terms in general may be referred to as processors.
Unless specifically stated otherwise, or as is apparent from the discussion, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Example embodiments of the present invention will be described with reference to the drawings.
An image obtained by an image capturing element of the image capturing device 12 is transferred to the processing circuit 30. The processing circuit 30 performs processing on the image captured by the image capturing device 12, such as computation for color determination or image processing. The PC 32, which is capable of communicating with the processing circuit 30, designates a parameter for image processing, displays the captured image, or displays a result of the processing performed by the processing circuit 30.
Alternatively, a dedicated terminal with a touch panel or the like may be provided instead of the PC 32. Further, the PC 32 may be integrated with at least a part of the processing circuit. In other words, a terminal such as a PC may include at least a part of the functions of the processing circuit. The processing circuit 30 communicates with an external device 36 through an external communication line 34. For example, the external device 36 is capable of giving a capturing instruction (capturing trigger) to the processing circuit 30, monitoring the operating state of the processing circuit 30, and receiving a determination result from the processing circuit 30. Generally, the processing circuit 30 is connected to a PLC (Programmable logic controller), which is one example of the external device 36.
The image capturing device 12 may be implemented by a light field camera (plenoptic camera). Before the configuration of the image capturing device 12 is specifically described, the principle of a plenoptic camera will be described with reference to
Near a focusing position of the single lens 2, a microlens array 3 (hereinafter, this will be referred to as “MLA” 3) that serves as the second optical system is arranged. In an image area 4 of
The lights diffused from a point of an object 1 enter different positions on the single lens 2, and pass through the filters f1 to f3, which have different spectral characteristics depending on the position on the single lens 2. The lights that have passed through the filters form an image near the MLA 3, and the respective lights are then irradiated by the MLA 3 onto different positions of the sensor 5. As lights having different spectral characteristics that originate from a certain point are irradiated onto different positions of the sensor 5, it becomes possible to project a plurality of types of spectral information onto the sensor 5 at once.
The spectral information of a position of the object 1, which is different from the position of the above point, is also irradiated onto different positions of the sensor 5 in a similar manner as described above. This process is performed for a plurality of points of the object, and image processing is applied to arrange the spectral information in order by spectral characteristics. By so doing, two-dimensional images of different spectral characteristics may be obtained at once. If this principle is applied, the two-dimensional spectrum of an object may be obtained in real time (instantly) by arranging a plurality of band-pass filters near the stop of the single lens 2.
Here, the phrase “near the stop” includes a stop position, and indicates a region through which light rays with various angles of view can pass. There are cases in which a plurality of filters having different spectral characteristics provide the function of a filter according to one example of the present invention, and there are cases in which a plurality of filter areas having different spectral characteristics provide the function of a filter according to one example of the present invention. In the former cases, the filter is configured by connecting or combining a plurality of filters with each other. In the latter cases, spectral characteristics are different for the respective areas in a single unit of filter.
The configuration of the image capturing device (hereinafter, this will be referred to simply as “camera”) 12 will be specifically described with reference to
The lens module 18 includes a lens-barrel 22, a main lens 24 provided inside the lens-barrel 22 that serves as the first optical system, a filter 26 arranged near the stop of the main lens 24, and a lens 28.
The camera unit 20 includes an MLA 3 that serves as the second optical system, a monochrome sensor 6 (hereinafter, this will be referred to simply as “sensor” 6) that serves as an image capturing element, and the FPGA 14 therein. A plurality of microlenses are arranged on the MLA 3 in two-dimensional directions perpendicular to an optical axis of the main lens 24. Note that some sensors have a microlens implemented for every pixel; such on-sensor microlenses are different from the MLA 3. General color sensors are provided with RGB color filters for every pixel in a Bayer array.
A general outline of modules that constitute the processing circuit 30 will be described with reference to
A reference sign “304” indicates a positional and rotational transformation unit, and the positional and rotational transformation unit 304 performs positional transfer and rotational transfer on an image captured by the sensor 6. For example, when an image is obtained (image is captured) as illustrated in
In
Indexes are provided in X-direction (horizontal direction: x0, x1, x2, . . . ) and Y-direction (vertical direction: y0, y1, y2, . . . ) on each macro-pixel, which will be referred to as a coordinate system of sub-aperture data. In an example embodiment of the present invention, the macro-pixels correspond to the coordinate values (x, y) of the coordinate system of sub-aperture data on a one-to-one basis. However, the correspondence is not limited to this example, such that a one-to-two relation or the like is also possible. The process of obtaining N-band data that corresponds to each spectral filter of a macro-pixel from the pixel data of the macro-pixel will be referred to as the calculation of band data. The calculation in an example embodiment of the present invention conceptually includes calculating an average value from a plurality of pieces of pixel data to obtain one piece of band data. N-dimensional band data that is calculated from the macro-pixels and is two-dimensionally arranged in X-direction and Y-direction will be referred to as sub-aperture data.
The procedure for generating sub-aperture data will be described in detail.
For example, referring to
When the data of the six bands of the top-left macro-pixel (x0, y0) of
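The calculation of band data and its rearrangement into sub-aperture data can be sketched in code. This is a minimal illustration, not the actual implementation: the function name, the assumption of a regular grid of square macro-pixels, and the per-band boolean masks describing the filter regions inside a macro-pixel are all hypothetical.

```python
import numpy as np

def compute_sub_aperture_data(raw, mp_size, band_masks):
    """Rearrange a raw plenoptic image into N-band sub-aperture data.

    raw        : 2-D sensor image whose macro-pixels tile a regular grid
    mp_size    : edge length of one square macro-pixel, in sensor pixels
    band_masks : list of N boolean (mp_size, mp_size) masks, one per
                 spectral filter region inside a macro-pixel
    Returns an array of shape (Y, X, N): per-band averages per macro-pixel.
    """
    ny, nx = raw.shape[0] // mp_size, raw.shape[1] // mp_size
    out = np.zeros((ny, nx, len(band_masks)))
    for y in range(ny):
        for x in range(nx):
            mp = raw[y*mp_size:(y+1)*mp_size, x*mp_size:(x+1)*mp_size]
            for b, mask in enumerate(band_masks):
                # band data = average of the pixels behind one filter region
                out[y, x, b] = mp[mask].mean()
    return out
```

Slicing `out[:, :, b]` then yields the two-dimensional sub-aperture image of band b.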
In
This method will be described below. Firstly, a spectral reflectance estimation problem is formulated. The spectral reflectance estimation problem is expressed as Equation (1).
g = S^t E r + n  (1)
In Equation (1),
g: column vector of m*1, each element indicating one piece of band data,
r: column vector of l*1 indicating the spectral reflectance of an object,
S: matrix of l*m, where the i-th column indicates the spectral sensitivity characteristics of the i-th band (the top-right superscript “t” indicates the transposition of the matrix),
E: diagonal matrix of l*l, where the diagonal components indicate the spectral energy distribution of illumination, and
n: noise term.
In this example, it is assumed that “m=6”. “l” indicates the number of samples of the wavelength of spectral reflectance to be calculated. For example, when sampling is performed at 10 nm intervals in a wavelength range of 400-700 nm, the number of samples is “31”. Assuming that H = S^t E, Equation (1) may be summarized as the following linear system (Equation (2)).
g=Hr+n (2)
“H” is referred to as a system matrix. The spectral reflectance r of an object is calculated from the band data g, but when m<l as in this example of the present invention, there is an infinite number of solutions that satisfy Equation (2), and the solution cannot uniquely be determined. Such a problem is generally referred to as an ill-posed problem. A least-norm solution is one of the solutions that is often selected in an ill-posed problem. When it is possible to ignore the noise in Equation (2), the least-norm solution is expressed as Equation (3) as follows.
r̂ = H^t (HH^t)^−1 g  (3)
The least-norm solution calculated in Equation (3) is a continuous spectral reflectance, and this least-norm solution is the spectral data obtained by the spectral information calculation unit 306. In addition to the least-norm solution, a method using principal component analysis and a method using Wiener estimation have already been proposed for ill-posed problems, for example, in the MIYAKE reference. These methods may also be used. In view of the spectral reflectance estimation accuracy, it is preferable to use Wiener estimation.
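The least-norm estimate of Equation (3) can be sketched as follows. This is an illustrative sketch under the definitions above: the function name is an assumption, and a real implementation would typically prefer Wiener estimation, as noted.

```python
import numpy as np

def least_norm_reflectance(g, S, E):
    """Least-norm spectral reflectance estimate, Equation (3).

    g : (m,) band data
    S : (l, m) spectral sensitivities, i-th column = i-th band
    E : (l, l) diagonal matrix of the illumination spectral energy
    """
    H = S.T @ E                        # system matrix H = S^t E, shape (m, l)
    # r_hat = H^t (H H^t)^-1 g; solve the small m-by-m system rather than
    # forming the inverse explicitly, for numerical stability
    return H.T @ np.linalg.solve(H @ H.T, g)
```

Among all r satisfying g = Hr, this returns the one with the smallest Euclidean norm.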
In
A procedure for transforming the spectral data (i.e., spectral reflectance) obtained by the spectral information calculation unit 306 into the characteristic quantity of color space coordinates (i.e., brightness L*, and chromaticity a* and b* of L*a*b* colorimetric system in this example embodiment of the present invention) will be described. In addition to spectral reflectance, data such as color matching functions and the spectral intensity of illumination is used. As color matching functions, the following functions that are specified by the CIE are generally used.
As the spectral intensity of illumination, the spectral intensity of illumination in an environment in which an object is observed is to be used, but a standard light source (such as A-light source and D65 light source) that is defined by the CIE may also be used.
Next, tristimulus values X, Y, and Z are calculated by using Equation (4) below. In Equation (4), E(λ) indicates the spectral distribution of a light source, and R(λ) indicates the spectral reflectance of an object.
X = ∫E(λ) x̄(λ) R(λ) dλ
Y = ∫E(λ) ȳ(λ) R(λ) dλ
Z = ∫E(λ) z̄(λ) R(λ) dλ  (4)
where x̄(λ), ȳ(λ), and z̄(λ) are the color matching functions.
Brightness L*, and chromaticity a* and b* are calculated from the tristimulus values by using Equation (5) below.
L*=116f(Y/Yn)−16
a*=500[f(X/Xn)−f(Y/Yn)]
b*=200[f(Y/Yn)−f(Z/Zn)] (5)
where f(t) = t^(1/3) when t > (6/29)^3, and f(t) = (1/3)(29/6)^2 t + 4/29 otherwise.
In Equation (5), Xn, Yn, and Zn indicate the tristimulus values of a perfectly diffuse reflector. Spectral data is transformed into the characteristic quantity (L*a*b*) by following the procedure as above.
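Equations (4) and (5) can be combined into a short numerical sketch, with the integrals replaced by sums over sampled wavelengths. The function names and the uniform sampling step `dlam` are assumptions for the example; the color matching functions `xbar`, `ybar`, `zbar`, the illumination `E`, and the reflectance `R` are passed in as arrays sampled at the same wavelengths.

```python
import numpy as np

def f(t):
    # CIE L*a*b* nonlinearity: cube root above the (6/29)^3 threshold,
    # linear segment below it
    d = 6.0 / 29.0
    return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)

def spectra_to_lab(R, E, xbar, ybar, zbar, dlam):
    """Sampled-wavelength version of Equations (4) and (5)."""
    X = np.sum(E * xbar * R) * dlam
    Y = np.sum(E * ybar * R) * dlam
    Z = np.sum(E * zbar * R) * dlam
    # white point: tristimulus values of a perfectly diffuse reflector (R = 1)
    Xn = np.sum(E * xbar) * dlam
    Yn = np.sum(E * ybar) * dlam
    Zn = np.sum(E * zbar) * dlam
    L = 116 * f(Y / Yn) - 16
    a = 500 * (f(X / Xn) - f(Y / Yn))
    b = 200 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b
```

As a sanity check, a perfectly diffuse reflector (R identically 1) maps to L* = 100, a* = 0, b* = 0 under any illumination.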
In
Cases in which the vertices are placed within the minimum inclusion circle 70 are classified as “OK” (meaning “similar” or “some sameness is present”), and the other cases are classified as “NG” (meaning “not similar” or “no sameness is present”). Generally, there is more than one boundary sample, and the minimum inclusion circle 70 is formed from the data (i.e., characteristic quantity) obtained from these boundary samples. For example, even when there are other data areas 66 and 68 of a boundary sample, any area that is not within the minimum inclusion circle 70 is classified as “NG”. If the probability of determining “NG” to be “OK” is to be further reduced, the calculated “C” may be multiplied by a coefficient that makes the determination criterion stricter, for example, “0.9”. Conversely, when the given sample data is stricter than samples in practical use (i.e., samples having a small degree of dispersion), a slightly broader OK area may be set by multiplying the calculated “C” by a coefficient such as “1.1”.
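The circle-based check, including the tightening or loosening coefficient, can be sketched as follows. The function and parameter names are assumptions; “C” plays the role of the minimum inclusion circle radius described above, and each point is a two-dimensional characteristic quantity (e.g., a* and b*).

```python
import math

def classify(points, center, C, coeff=1.0):
    """Return "OK" if every point lies within the (scaled) inclusion circle.

    coeff < 1 tightens the criterion (fewer false "OK" determinations),
    coeff > 1 loosens it for unusually well-behaved boundary samples.
    """
    r = C * coeff
    for (x, y) in points:
        if math.hypot(x - center[0], y - center[1]) > r:
            return "NG"
    return "OK"
```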
Note that an enclosing shape is not limited to a circle, and an enclosing shape may be a polygon including N vertices (i.e., convex polygon in this example embodiment of the present invention). A method for calculating such a convex polygon includes, for example, a method as follows. Two vertices i and j are selected from N vertices, and a straight line that passes through these two points is calculated by an equation as follows.
y−yj−(yi−yj)/(xi−xj)*(x−xj)=0
Then, calculation is made to determine on which side of the straight line the N vertices are placed. When the N vertices are all placed on the same side, the straight line that connects the vertices (xi, yi) and (xj, yj) is registered.
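A brute-force sketch of this edge registration is given below. The names are assumptions, and the sign of a cross product is used in place of the explicit line equation above, so that vertical edges (where xi = xj) are also handled without division.

```python
def hull_edges(pts):
    """Register convex-polygon edges as described above: the edge through
    pts[i] and pts[j] is kept when every other vertex lies on one side of
    the line through those two points."""
    edges = []
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = pts[i], pts[j]
            # signed area of the triangle (i, j, k) for every other vertex k;
            # its sign tells which side of the line the vertex lies on
            sides = [(xi - xj) * (y - yj) - (yi - yj) * (x - xj)
                     for (x, y) in pts if (x, y) not in (pts[i], pts[j])]
            if all(s >= 0 for s in sides) or all(s <= 0 for s in sides):
                edges.append((pts[i], pts[j]))
    return edges
```

For a triangle with one interior point, only the three outer edges are registered; the interior point appears in no edge.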
In an example embodiment of the present invention, examples have been described in which an enclosing shape, which is a conceptual form that includes the characteristic quantity, is a two-dimensional minimum inclusion circle or polygon. However, the present invention is not limited to these examples. When the characteristic quantity is N-dimensional, an enclosing shape may be an N-dimensional sphere, or an N-dimensional polyhedron (N-dimensional convex polyhedron) such as a columnar body or a spindle-shaped body.
In
In
In this example, any of the above-described units or modules shown in
Further, the modules of the processing circuit 30 may be implemented in various ways. In one example, the modules of the processing circuit 30 shown in
In another example, the modules of the processing circuit 30 shown in
Operations in a teaching state, performed by the control unit 310, will be described with reference to
The control unit 310 sequentially selects N coordinates (x1, y1), (x2, y2), . . . and (xn, yn) stored in the memory 311 (S3), and performs processes as follows.
Next, the control unit 310 determines whether or not the number of the obtained items of data has reached N (S7), and when it has not reached N, the process returns to S3. Here, N indicates the number of items obtained at different points of a boundary sample, but N may also indicate the number of items obtained in the same area (the same applies to the description below). When the number of the obtained items of data has reached N, the control unit 310 repeats the above data obtaining operation until the number of times of the operation reaches a prescribed number (i.e., Q). It is preferred that “Q” be equal to or greater than the number of boundary samples.
The control unit 310 determines whether or not the number of data obtaining operations has reached Q (S8), and when it has reached Q, the control unit 310 instructs the similarity determination criterion generation unit 308 to generate a similarity determination criterion from the L*a*b* information of the N*Q coordinates (S9). Then, the similarity determination criterion is stored in the memory 311 (S10). Finally, M points of coordinates for evaluation are recorded according to an operation made at a PC, such as the PC 32, or the like (S11).
Operations in the operating state, performed by the control unit 310, will be described with reference to
The control unit 310 sequentially selects M coordinates stored in the memory 311 (i.e., coordinates set in S11 of
Next, the control unit 310 instructs the similarity determination result output unit 309 to determine similarity of the L*a*b* information of the respective M coordinates (S8). Then, a determination result of “OK” is output when all the M points meet the criterion, and a determination result of “NG” is output in the other cases, via a PC such as the PC 32, or an external I/F (S9).
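The all-points rule in S8 and S9 can be sketched generically. Here `criterion` stands for whatever stored similarity determination criterion is in use (a circle, polygon, or higher-dimensional enclosing shape), and the names are assumptions for the example.

```python
def determine(points_lab, criterion):
    """Operating-state output: "OK" only when every one of the M evaluation
    points satisfies the stored similarity determination criterion."""
    return "OK" if all(criterion(p) for p in points_lab) else "NG"
```

A circle criterion, for instance, would be a predicate returning True for points inside the minimum inclusion circle.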
Firstly, a user fixes the camera 12 onto a support base 52 as illustrated in (S21). At that time, the camera 12 is set so as to be parallel to an installation ground 54 of the support base 52. As a result, the camera 12 is installed in a vertical direction. Note that it is assumed that the control unit 310 is in a teaching state.
Next, an OK item (i.e., reference item; specimen 50A) among the items for which color determination is to be performed is set directly below the camera (S22). At that time, the specimen 50A is set so as to be included in the field of the camera 12. When the setting is complete, the PC 32 starts a dedicated application such as the color determination application, for example, according to a user instruction input on the PC 32.
A user presses the capturing start button 62a (S23). By so doing, a capturing instruction is input to the control unit 310 through the I/F 302 with the PC. As a result, the processing circuit 30 performs S1 and S2 among the operations illustrated in
Next, a user repeats the following operations as necessary. An OK item (i.e., the specimen 50A) among the items for which color determination is to be performed is set directly below the camera (S22). At that time, the specimen 50A is set so as to be included in the field of the camera 12. When the setting is complete, a user presses the capturing start button 62a (S23). By so doing, a capturing instruction is input to the control unit 310 through the I/F 302 with the PC. As a result, the processing circuit 30 performs S1 and S2 among the operations illustrated in
When the coordinate selecting button 62b is pressed again, the coordinates are recorded in the memory 311 via the I/F 302 with the PC. Then, the operation illustrated in
A user sets a specimen 50B as a subject (for which color determination is to be performed) so as not to change the set position in a teaching state (S27). At that time, the specimen 50B is set so as to be included within the field of the camera 12. Once the setting is complete, a dedicated application such as the color determination application (operating state mode) is operated by the PC 32. An example of the screen of such a dedicated application is illustrated in
The user presses the capturing start button 62a (S28). By so doing, a capturing instruction is input to the control unit 310 via the I/F 302 with the PC, and all the operations illustrated in
Any one of the above-described steps of any one of the operations shown above may be performed in various other ways, for example, in a different order.
According to one aspect of the present invention, there is provided a similarity determination system capable of detecting a slight difference in color or determining a certain color with high precision, which has been difficult to achieve with a conventional FA camera. With this feature, variation in inspection accuracy, which would otherwise be caused if inspection were visually checked by a person, is greatly reduced.
Numerous additional modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the disclosure of the present invention may be practiced otherwise than as specifically described herein. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of this disclosure and appended claims.
Further, as described above, any one of the above-described and other methods of the present invention may be embodied in the form of a computer program stored in any kind of storage medium. Examples of storage media include, but are not limited to, flexible disks, hard disks, optical discs, magneto-optical discs, magnetic tapes, nonvolatile memory cards, ROM (read-only memory), etc. Alternatively, any one of the above-described and other methods of the present invention may be implemented by an ASIC, prepared by interconnecting an appropriate network of conventional component circuits, or by a combination thereof with one or more conventional general-purpose microprocessors and/or signal processors programmed accordingly.
According to one aspect of the present invention, a similarity determination system is provided including: spectral information calculating means for calculating spectral information of an object from capturing information detected by imaging means, the object including 1) a reference item and 2) a target item being a subject for color determination; characteristic quantity transformation means for transforming the spectral information of the object being obtained by the spectral information calculating means into characteristic quantity; similarity determination criterion generation means for generating a similarity determination criterion from one or a plurality of items of the characteristic quantity of the reference item; and similarity determination means for checking the characteristic quantity of the target item that is a subject for color determination obtained by the characteristic quantity transformation means against the similarity determination criterion obtained by the similarity determination criterion generation means to determine similarity of the target item of the object with reference to the reference item.
More specifically, in one example, the similarity determination system may be implemented by an image capturing apparatus including a processor, which operates in cooperation with the imaging means such as an image capturing device. In another example, the similarity determination system may be implemented by an image capturing system, which includes an image capturing apparatus provided with the imaging means, and an information processing apparatus that communicates with the image capturing apparatus.
Further, in one example, when the similarity determination is in a teaching state, the similarity determination system includes: similarity determination spectral information calculating means for calculating spectral information of a reference item from capturing information detected by imaging means; characteristic quantity transformation means for transforming the spectral information of the reference item being obtained by the spectral information calculating means into characteristic quantity; similarity determination criterion generation means for generating a similarity determination criterion from one or a plurality of items of the characteristic quantity of the reference item; and storing means for storing the similarity determination criterion.
In another example, when the similarity determination system is in an operating state, the similarity determination system includes: similarity determination spectral information calculating means for calculating spectral information of a target item from capturing information detected by imaging means; characteristic quantity transformation means for transforming the spectral information of the target item being obtained by the spectral information calculating means into characteristic quantity; means for obtaining the similarity determination criterion from the memory; and similarity determination means for checking the characteristic quantity of the target item being a subject for color determination against the similarity determination criterion obtained by the similarity determination criterion generation means to determine similarity of the target item of the object with reference to the reference item.
According to one aspect of the present invention, a non-transitory recording medium storing a plurality of instructions is provided which, when executed by a processor, cause the processor to perform a method of determining similarity of an object, the method including: calculating spectral information of the object from capturing information detected by imaging means, the object including 1) a reference item and 2) a target item being a subject for color determination; transforming the calculated spectral information of the object into characteristic quantity; generating a similarity determination criterion from one or a plurality of items of the characteristic quantity of the reference item; and checking the characteristic quantity of the target item being a subject for color determination against the similarity determination criterion to determine similarity of the target item with reference to the reference item.
Number | Date | Country | Kind
---|---|---|---
2012-264305 | Dec 2012 | JP | national
2013-204755 | Sep 2013 | JP | national
This patent application is based on and claims priority pursuant to 35 U.S.C. §119 to Japanese Patent Application Nos. 2012-264305, filed on Dec. 3, 2012, and 2013-204755, filed on Sep. 30, 2013, in the Japan Patent Office, the entire disclosures of which are hereby incorporated by reference herein.