This application claims the benefit of Korean Patent Application No. 10-2004-0090917, filed on Nov. 09, 2004, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
Embodiments of the present invention can relate to an image sensor included in commercial-use mobile terminals (e.g., cellular phones), electronic wallets that require user authentication, monitoring equipment for monitoring a figure, stereo vision systems, three-dimensional face recognition apparatuses, iris recognition apparatuses, vehicle sensors for sleepiness prevention, vehicle sensors for informing of distances between vehicles, vehicle sensors for warning of the existence of an obstacle/person in front of a vehicle, etc., and more particularly, to an imaging apparatus, medium, and method using infrared rays which can sense an infrared component as well as visible light components from a spectrum of a light, e.g., for identifying an image based on a result of the sensing.
2. Description of the Related Art
Conventional imaging methods have tried to improve the resolution of an image. Such conventional imaging methods use a color filter array (CFA), an example of which is disclosed in U.S. Pat. No. 3,971,065, entitled “Color Imaging Array”. The main objective of this conventional method is to sense three visible light components, which are a red (R) component, a green (G) component, and a blue (B) component, from a spectrum of a light.
Since the infrared (IR) component of an image degrades the quality of the image, most conventional imaging methods, including the aforementioned method, have tried to obtain a clean and clear color image, comparable to human eyesight, by removing the IR component as much as possible from the image.
Another conventional imaging method is disclosed in U.S. Pat. No. 6,292,212, entitled “Electronic Color Infrared Camera”. In this method, a general camera includes either an IR component removal filter or a yellow (Y) component transmission filter. When the Y component transmission filter is used, three components of an image, which may be R, G, and IR components, are sensed. On the other hand, when the IR component removal filter is used, three components of the image, which may be R, G, and B components, are sensed. However, with either filter, all of the R, G, B, and IR components cannot be sensed simultaneously.
A conventional method of sensing an IR component, in contrast with the above-described conventional methods, is disclosed in U.S. Pat. No. 6,657,663, entitled “Pre-subtracting Architecture for Enabling Multiple Spectrum Image Sensing”. In this method, an IR filter, which transmits an IR component, is produced by overlapping an R filter, transmitting an R component, and a B filter, transmitting a B component. However, the overlapping of the two R and B filters to produce the IR filter increases the number of processes required to photograph an IR component.
In addition, the conventional methods of recognizing a face using visible rays have been discussed by W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld in “Face Recognition—A Literature Survey”, ACM Computing Surveys, Vol. 35, No. 4, pp. 399-458 (December, 2003), who indicate that the performance of face recognition is very sensitive to illumination change.
A conventional method of recognizing the iris of the eye using infrared rays has further been discussed in U.S. Pat. No. 5,291,560, entitled “Biometric Personal Identification System Based on Iris Analysis”. To perform this conventional method, an extra camera is used for recognizing the iris of the eye in addition to the camera used for taking a corresponding photograph. In other words, here, two cameras are required to recognize the iris of the eye and take a photograph according to this conventional method. The use of two cameras leads to the enlargement of any corresponding imaging apparatus. Particularly, when mobile terminals, such as cellular phones including a camera function, use such conventional iris recognition methods, the resulting enlarged size of the terminals becomes a serious problem.
Embodiments of the present invention provide an imaging apparatus, medium, and method for using infrared rays which may sense at least one visible light component and an infrared component included in a spectrum of light.
Embodiments of the present invention also provide an imaging apparatus, medium, and method for using infrared rays which can better identify an object of interest from an image using a sensed infrared component in the image.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include an imaging device for converting an optically sensed measurement into an electrical signal, the imaging device including a patterned array with repeated optically sensing unit cells, wherein the unit cells include at least one color component cell optically sensing a respective color measurement, including at least a respective visible light component, and an infrared component cell optically sensing an infrared measurement, including at least a respective infrared component.
The imaging device may further include a component separator separating a color component from the infrared measurement by performing an arithmetic operation with one of the respective color measurements and the infrared measurement sensed by the patterned array, wherein the at least one color component cell also senses an infrared component, and the infrared component cell also senses a visible light component.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include an imaging apparatus, including an imaging device according to an embodiment of the present invention, and an image processor for recognizing an object component in the electrical signal generated by the imaging device.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include an imaging apparatus using infrared rays, including an image sensor optically sensing both a visible light component and an infrared component included in a light spectrum in an optically sensed measurement and converting the sensed visible light component and infrared component into an electrical signal, and an image processor to recognize an object component in the electrical signal.
The image sensor may include a patterned array including repeated unit cells that collect the optically sensed measurement, wherein the unit cells may include at least one color component cell optically sensing a respective color measurement, including at least a respective visible light, and an infrared component cell optically sensing an infrared measurement, including at least a respective infrared component.
The infrared component cell may also sense a color component. In addition, the image sensor may further include a component separator to separate a color component from the infrared measurement by performing an arithmetic operation with one of the respective color measurements and the infrared measurement, wherein the at least one color component cell also senses an infrared component.
The infrared component cell may sense only an infrared component. Further, the image sensor may further include a component separator to derive a color component from an arithmetic operation with one of the respective color measurements and the infrared measurement, wherein the at least one color component cell also senses an infrared component.
The image processor may include an image control unit to receive the electrical signal, to image-process the electrical signal, and to output a result of the image-processing as an image signal, an object discriminating unit to extract an object component, which is a target of interest in the image signal, from the image signal and to discriminate the extracted object component, and a main control unit to control the image control unit, the image sensor, and/or the object discriminating unit.
The object discrimination unit may execute authentication to determine whether the discriminated object component is an allowed object component.
In addition, the image processor may further include a user manipulation unit to generate a user signal based on a manipulation of a user and to output the user signal to the main control unit, a display unit to display a result of the discrimination by the object discriminating unit to the user, and a light emitting unit to emit at least one of a visible light and an infrared ray to an image area, corresponding to the image, under the control of the main control unit, wherein the main control unit controls the image control unit, the image sensor, the object discriminating unit, and the light emitting unit in response to the user signal.
The image control unit may include a control signal generation unit to output, to the image sensor, a first control signal received from the main control unit, and to output second and third control signals received from the main control unit, a white balancing processing unit to execute white balancing on the visible light component included in the electrical signal in response to the second control signal and to output a result of the white balancing, and a component selection unit to select, in response to the third control signal, one of the infrared component included in the electrical signal and the result of the white balancing received from the white balancing processing unit and to output a result of the selection as the image signal, wherein the image sensor senses the image in response to the first control signal.
The object discrimination unit may include an object component extraction unit to extract the object component from the image signal, a recognition unit to calculate a score of the extracted object component using templates of a pre-allowed object component, and an authentication unit to compare the score with a predetermined critical value and to authenticate whether the extracted object component matches the pre-allowed object component.
The object discrimination unit may further include a database storing the templates of the pre-allowed object components, and a registration unit to register the templates of the pre-allowed object component in the database.
The object component may at least be one of a face and an iris.
In addition, the object component extraction unit may include a storage unit to store the image signal and to output the infrared component included in the stored image signal to the recognition unit, a face extraction unit to extract a face from the stored image signal and to output the extracted face to the recognition unit, and an eye extraction unit to extract an eye from the extracted face and to output the extracted eye to the recognition unit.
Further, the recognition unit may include a face normalization unit to normalize a face image using the extracted face and the infrared component, a face template extraction unit to extract a template of the face from the normalized face image, a face score calculation unit to calculate a score of the extracted template for the face based on a result of a comparison of the extracted face template with the templates of the pre-allowed object component, an iris separation unit to separate an iris image using the extracted eye and the infrared component, an iris normalization unit to normalize the separated iris image, an iris template extraction unit to extract a template of the iris from the normalized iris image, and an iris score calculation unit to calculate a score of the extracted template of the iris based on a result of a comparison of the extracted template of the iris with the templates of the pre-allowed object component.
To achieve the above and/or other aspects and advantages, embodiments of the present invention include an object discriminating method, including determining whether a user is to be authenticated, optically sensing both a visible light component and an infrared component included in a light spectrum of an image and converting the sensed visible light component and infrared component into an electrical signal, based on the determination of whether the user is to be authenticated, determining whether an object component, which is a target of interest in the image, is extracted from the electrical signal, determining whether the extracted object component matches a pre-registered allowed object component based on the determination of whether the object component is extracted from the electrical signal, determining that the extracted object component has an appropriate identity, based on an appropriate identity result for the determination of whether the extracted object component matches the pre-registered allowed object component, determining that the extracted object component does not have the appropriate identity, based on not obtaining an appropriate identity result for the determination of whether the extracted object component matches the pre-registered allowed object component or based on the determination that the object component is not extracted from the electrical signal, and outputting an indication of whether the extracted object component has the appropriate identity.
The determining of whether the extracted object component matches the pre-registered allowed object component may include calculating a score of the extracted object component by comparing a template of the extracted object component with a pre-stored template of the object component, and determining whether the score is greater than a critical value, wherein when the score is determined to be greater than the critical value the object component is considered to match the pre-registered allowed object component.
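The matching step described above can be sketched in code. The method does not specify how the score is computed, so normalized correlation between templates is used here purely as an illustrative similarity measure; the function name and test values are hypothetical:

```python
import numpy as np

def matches_allowed_component(extracted_template, stored_template, critical_value):
    """Compute a similarity score between the extracted template and a
    pre-stored template, then compare it with the critical value.
    Normalized correlation is an assumed, illustrative measure."""
    a = np.asarray(extracted_template, dtype=np.float64).ravel()
    b = np.asarray(stored_template, dtype=np.float64).ravel()
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # The component matches only when the score exceeds the critical value.
    return score, score > critical_value
```

An identical pair of templates yields a score of 1.0 and therefore passes any critical value below 1.0.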
To achieve the above and/or other aspects and advantages, embodiments of the present invention include at least one medium including computer readable code to implement embodiments of the present invention.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the invention.
These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. Embodiments are described below to explain the present invention by referring to the figures.
An imaging apparatus according to an embodiment of the present invention, for converting an optically sensed image into an electrical signal and outputting the electrical signal, will now be described below.
The patterned array 10 optically senses an image and has a pattern in which unit cells are repeated. The unit cells include at least one color component cell and an infrared component cell. A color component cell senses a corresponding visible light component in a spectrum of light. The infrared component cell may sense only an infrared component in the light spectrum. For example, the unit cells may have a plurality of color component cells which respectively sense a red (R) component, a green (G) component, and a blue (B) component which are visible light components. The infrared component cell may be implemented as a single cell through which the infrared component is sensed, in contrast with the aforementioned conventional method disclosed in U.S. Pat. No. 6,657,663, in which an infrared component cell is produced by overlapping two color component cells.
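The repeated unit-cell tiling described above can be illustrated with a short sketch. The particular 2x2 placement of the R, G, B, and IR cells, the function name, and the use of NumPy are illustrative assumptions, not part of any embodiment:

```python
import numpy as np

def tile_rgb_ir_pattern(rows, cols):
    """Build a mosaic mask for a patterned array whose repeated 2x2 unit
    cell holds one R, one G, one B, and one IR component cell."""
    unit = np.array([["R", "G"],
                     ["B", "IR"]])
    # Repeat the unit cell enough times to cover a rows x cols grid,
    # then crop to the exact sensor dimensions.
    reps_r = -(-rows // 2)  # ceiling division
    reps_c = -(-cols // 2)
    return np.tile(unit, (reps_r, reps_c))[:rows, :cols]
```

For a 4x4 array, the mask repeats the unit cell four times, so each 2x2 neighborhood contains every component exactly once.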
In an embodiment of the present invention, the four cells A, B, C, and D may sense an R component, a G component, and a B component, which are visible light components, and an infrared (IR) component included in the spectrum of light. The unit cells may be formed through various tiling arrangements other than the tiling as shown in
When the unit cells A, B, C, and D of
In an embodiment of the present invention, as shown in
In the above-described embodiments, the imaging device of
However, in an embodiment of the present invention, the color component cell included in the patterned array 10 may also sense an IR component, and the IR component cell may also sense at least one visible light component. In this case, the imaging apparatus of
For example, the unit cell A of the patterned array 10 may sense the R component among visible light components and the IR component included in the spectrum of light, the unit cell B thereof may sense the G component among visible light components and the IR component included in the spectrum of light, the unit cell C thereof may sense the B component among visible light components and the IR component included in the spectrum of light, and the unit cell D thereof may sense all of the R, G, B, and IR components. In this case, the component separator 12 may be used to separate the visible light components R, G, and B from the IR component through an arithmetic operation, such as expressed below in Equation 1:
IR=(TA+TB+TC−TD)/2
R=TA−IR
G=TB−IR
B=TC−IR
Here, TA denotes the R and IR components sensed by the unit cell A, TB denotes the G and IR components sensed by the unit cell B, TC denotes the B and IR components sensed by the unit cell C, and TD denotes the R, G, B, and IR components sensed by the unit cell D.
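The arithmetic separation for this cell arrangement can be sketched as follows. Since TA, TB, and TC each include the IR component once while TD includes it once alongside R, G, and B, summing TA, TB, and TC and subtracting TD leaves twice the IR component. The function name and test values are illustrative:

```python
def separate_equation1(TA, TB, TC, TD):
    """Separate R, G, B, and IR when cells A, B, and C each sense one
    color plus IR, and cell D senses all of R, G, B, and IR.
    TA + TB + TC counts IR three times; TD counts it once, so the
    difference is 2*IR."""
    IR = (TA + TB + TC - TD) / 2.0
    R = TA - IR
    G = TB - IR
    B = TC - IR
    return R, G, B, IR
```

For instance, with true components R=10, G=20, B=30, IR=5, the cells measure TA=15, TB=25, TC=35, TD=65, and the separation recovers the original four components.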
Accordingly, the imaging device of
An imaging apparatus using infrared rays for discriminating an object will now be described in greater detail.
The image sensor 40 may optically sense visible light components and an IR component included in the spectrum of a light, convert the optically sensed light into an electrical signal, and output the electrical signal to the image processor 42.
Here, the imaging device of
According to yet another embodiment of the present invention, the color component unit cell of the patterned array 10, included in the image sensor 40, may sense the IR component as well as the visible light components, and the IR component cell may sense only the IR component. In this case, the image sensor 40 may further include the component separator 12 of
For example, the unit cell A may sense the R component among the visible light components and the IR component, the unit cell B may sense the G component among the visible light components and the IR component, the unit cell C may sense the B component among the visible light components and the IR component, and the unit cell D may sense only the IR component. In this case, the component separator 12 may separate the visible light components R, G, and B from the IR component through the following arithmetic operation expressed in Equation 2:
R=TA−TD
G=TB−TD
B=TC−TD
IR=TD
Here, TA denotes the R and IR components sensed by the unit cell A, TB denotes the G and IR components sensed by the unit cell B, TC denotes the B and IR components sensed by the unit cell C, and TD denotes the IR component sensed by the unit cell D.
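The separation of Equation 2 can be sketched directly: because the D cell senses only the IR component, IR is read out as-is and subtracted from each mixed color measurement. The function name and test values are illustrative:

```python
def separate_equation2(TA, TB, TC, TD):
    """Separate R, G, B, and IR per Equation 2, where cells A, B, and C
    each sense one color plus IR and cell D senses only IR."""
    IR = TD          # the D cell measures IR directly
    R = TA - IR
    G = TB - IR
    B = TC - IR
    return R, G, B, IR
```

With true components R=10, G=20, B=30, IR=5, the cells measure TA=15, TB=25, TC=35, TD=5, and the subtraction recovers the originals.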
The image processor 42 of
As described above, the component separator 12 of
The following description will rely on the component separator 12 being included in the image sensor 40 for expediency of explanation. However, the present invention is not limited to this arrangement.
According to an embodiment of the present invention, the image processor 42A may alternatively include only the image control unit 60, the main control unit 62, and the object discrimination unit 64.
The image control unit 60 may receive the electrical signal from the image sensor 40, via the input port IN1, perform image processing on the electrical signal, and output a result of the image processing as an image signal to the main control unit 62.
The control signal generation unit 90 may receive a first control signal C1 from the main control unit 62, via an input port IN2, and output the same to the image sensor 40. Referring to
The white balancing processing unit 92 may receive visible light components, included in the electrical signal from the image sensor 40 via an input port IN3, may perform white balancing on the visible light components in response to the second control signal C2, received from the control signal generation unit 90, and may output a result of the white balancing to the component selection unit 94. At this time, the white balancing processing unit 92 may determine, in response to the second control signal C2, whether white balancing is executed and/or the degree to which white balancing is executed.
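The white balancing step above can be sketched with the gray-world algorithm, one common choice; the embodiment does not prescribe a particular algorithm, so this is an assumption made purely for illustration:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balancing: scale each color channel so that its
    mean matches the overall mean, on the assumption that the average
    scene color is neutral gray."""
    rgb = np.asarray(rgb, dtype=np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    # Apply per-channel gains; broadcasting scales R, G, and B separately.
    return rgb * gains
```

After balancing, the per-channel means of the output image are equal, which is the gray-world criterion.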
The component selection unit 94 may receive an IR component, included in the electrical signal from the image sensor 40 via an input port IN4, and the result of the white balancing from the white balancing processing unit 92. Then, the component selection unit 94 may select either the result of the white balancing or the IR component in response to the third control signal C3, received from the control signal generation unit 90, and may output the result of the selection as an image signal to the main control unit 62 via an output port OUT5.
Referring back to
The object component extraction unit 110 may extract an object component from the image signal received from the image control unit 60, via the main control unit 62 and via the input port IN5, and may output the extracted object component to the recognition unit 114. The object component extraction unit 110 may output a signal indicating extraction or non-extraction of the object component to the registering unit 116 and to the main control unit 62 via an output port OUT7.
The recognition unit 114 may calculate a score of the object component extracted by the object component extraction unit 110, e.g., using templates stored in the database 112, and output the score to the authentication unit 118. The object component extracted by the image processor 42 of
The database 112 may pre-store templates of allowed object components.
To facilitate understanding of the object component extraction unit 110 and the recognition unit 114 of
The storage unit 130 may store the image signal received from the image control unit 60 via the main control unit 62 and an input port IN6. Here, the storage unit 130 may serve as a buffer, for example. The storage unit 130 may output the infrared component of the stored image signal to the recognition unit 114 via an output port OUT8.
The face extraction unit 132 may extract a face from the image signal, e.g., for a current frame stored in the storage unit 130, and output the extracted face to the recognition unit 114 via an output port OUT9. At this time, the face extraction unit 132 may also output a signal indicating whether a face has been extracted from the image signal for the current frame to the registering unit 116, via an output port OUT10, and to the storage unit 130, for example. The signal indicating whether a face has been extracted from the image signal may correspond to the signal indicating extraction or non-extraction of the object component. When recognizing from the signal indicating extraction or non-extraction of the face that the face has not been extracted, the storage unit 130 may then output an image signal for a next frame to the face extraction unit 132.
The eye extraction unit 134 may extract an eye from the face extracted by the face extraction unit 132 and output the extracted eye to the recognition unit 114 via an output port OUT11. At this time, the eye extraction unit 134 may also output a signal indicating whether the eye has been extracted from the face to the registering unit 116, via an output port OUT12 and to the storage unit 130, for example. The signal indicating whether the eye has been extracted from the face may correspond to the signal indicating extraction or non-extraction of the object component. When recognizing from the signal indicating extraction or non-extraction of the eye that the eye has not been extracted, the storage unit 130 may then output the image signal for the next frame to the face extraction unit 132.
The face normalization unit 150 may normalize a face image using the face extracted by the face extraction unit 132 and received via an input port IN7 and the IR component received from the storage unit 130, for example, via an input port IN7 and output the face image to the face template extraction unit 152. For example, the face normalization unit 150 may produce the normalized face image using a process, such as, a histogram equalization of the face using the infrared component. The face template extraction unit 152 may extract a face template from the normalized face image received from the face normalization unit 150, for example, and output the extracted face template to the face score calculation unit 154 and also to the registering unit 116 via an output port OUT13. The face score calculation unit 154 may compare the face template extracted by the face template extraction unit 152 with a template received from the database 112, for example, via an input port IN8, calculate a score of the extracted face template based on a result of the comparison, and output the score to the authentication unit 118 via an output port OUT14.
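The histogram equalization mentioned above as one possible normalization process can be sketched as follows for an 8-bit grayscale face image; the function name is illustrative, and the handling of the infrared component is omitted for brevity:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization of an 8-bit grayscale image: remap
    intensities so that the cumulative distribution becomes
    approximately linear, spreading contrast across the full range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Build a lookup table mapping each input level to its equalized level.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

For example, an image whose four pixels take the levels 0, 64, 128, and 255 is remapped to the evenly spaced levels 0, 85, 170, and 255.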
The iris separation unit 160 may separate an iris image from an eye image using the extracted eye received from the eye extraction unit 134 via an input port IN9 and the infrared component received from the storage unit 130, for example, via the input port IN9 and output the separated iris image to the iris normalization unit 162. The iris normalization unit 162 may normalize the separated iris image received from the iris separation unit 160 and output the normalized iris image to the iris template extraction unit 164. For example, the iris normalization unit 162 may normalize the iris image by enhancing an edge of the iris and equalizing a histogram of the iris. The iris template extraction unit 164 may extract an iris template from the normalized iris image received from the iris normalization unit 162, for example, and output the extracted iris template to the iris score calculation unit 166 and also to the registering unit 116 via an output port OUT15. The iris score calculation unit 166 may compare the iris template extracted by the iris template extraction unit 164 with a template received from the database 112, for example, via an input port IN10, calculate a score of the extracted iris template based on a result of the comparison, and output the calculated score to the authentication unit 118 via an output port OUT16.
Referring back to
According to an embodiment of the present invention, when recognizing, from the signal indicating the extraction or non-extraction of the object component, which may be received from the object component extraction unit 110, that the object component has been extracted, the registering unit 116 may register extracted templates received from the recognition unit 114 in the database 112, for example.
According to another embodiment of the present invention, the registering unit 116 may register only effective templates for object components, among the extracted templates for object components, in the database 112. To achieve this, the authentication unit 118, which may be in an initial state, may compare the score received from the recognition unit 114 with a critical value, authenticate whether the extracted template for the object component is effective in response to a result of the comparison, and output a result of the authentication to the registering unit 116. When recognizing from the result of the authentication received from the authentication unit 118 that the template extracted by the recognition unit 114 is effective, the registering unit 116 may determine the extracted template to be an effective template for the object component.
When the authentication unit 118 is in a normal state, it may compare the score received from the recognition unit 114 with the critical value, authenticate whether the extracted object component is previously allowed, in response to a result of the comparison, and output a result of the authentication to the main control unit 62 and the display unit 66 via an output port OUT6.
As described above, the main control unit 62 of
According to another embodiment of the present invention, the image processor 42A of
Referring back to
According to an embodiment of the present invention, the main control unit 62 may generate the first, second, and third control signals C1, C2, and C3, in response to the user signal received from the user manipulation unit 68.
According to another embodiment of the present invention, the first, second, and third control signals C1, C2, and C3, generated by the main control unit 62, may be predetermined control signals.
Under the control of the main control unit 62, the light emitting unit 70 may emit at least one of an infrared light and visible light, via an output port OUT4. If the object component discriminated from the image, by the image processor 42 of
If an embodiment of the present invention is applied to a case where a camera is connected to a computer, the image sensor 40 of
Hereinafter, an object discriminating method using infrared light, according to an embodiment of the present invention, where an image is sensed and an object component is discriminated using the sensed image, will be described in greater detail.
To facilitate an understanding of the following embodiments of the present invention, it is assumed, only herein, that the object discriminating method of
In operation 180, whether an object is to be authenticated is determined. The imaging apparatus of
To facilitate an understanding of operations 180 and 182 of
After operation 182, the image processor 42 may determine, based on the electrical signal received from the image sensor 40, whether an object component, i.e., a target of interest, is extracted from an image, in operation 184. To do this, the main control unit 62 may receive a signal indicating extraction or non-extraction of an object component from the object discrimination unit 64, for example, from the object component extraction unit 110 of
If an object component is a face and an iris, after operation 182, it may be determined, from the electrical signal, whether a face has been extracted, and if it is determined that the face has been extracted, another determination as to whether an eye has been extracted from the extracted face is made, in operation 184. If it is determined that the object component has been extracted from the image, the image processor 42 may determine whether the extracted object component is a pre-registered allowed object component, in operation 186.
The recognition unit 114 of
After operation 200, the authentication unit 118 may determine, using the score calculated by the recognition unit 114, whether the extracted object component is an allowed object component. In other words, the authentication unit 118 may determine whether the score is greater than the critical value, for example, in operation 202. When the score is greater than the critical value, the extracted object component may be a pre-registered allowed object component.
If the object component is a face and an iris, for example, the authentication unit 118 may simultaneously perform a comparison of the score of the iris with a critical value for the iris, and a comparison of the score of the face with a critical value for the face, in operation 202. Alternatively, the authentication unit 118 may perform the comparison of the score of the iris with the critical value for the iris prior to the comparison of the score of the face with the critical value for the face. Alternatively, the authentication unit 118 may perform the comparison of the score of the iris with the critical value for the iris after the comparison of the score of the face with the critical value for the face, noting that alternative embodiments are equally available.
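The combined decision described above can be sketched in code. Requiring both scores to exceed their respective critical values is one reading of the description; the comparisons may equally run simultaneously or in either order, and the function name is illustrative:

```python
def authenticate_face_and_iris(face_score, iris_score,
                               face_threshold, iris_threshold):
    """Authenticate only when both the face score and the iris score
    exceed their respective critical values. The conjunctive rule is an
    assumption for illustration; the ordering of the two comparisons
    does not affect the result."""
    return face_score > face_threshold and iris_score > iris_threshold
```

With both scores above their thresholds the user is authenticated; if either score falls at or below its threshold, authentication fails.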
Referring back to
If it is determined in operation 180 that an image is only to be sensed, for example, instead of being authenticated, the image may be sensed and stored, in operation 192. To perform operation 192, the image sensor 40 may sense the image, and the image control unit 60 of the image processor 42A may produce an image signal based on a result of the sensing and output the image signal to the main control unit 62. The main control unit 62 may output the image signal to the display unit 66. The display unit 66 may display an image corresponding to the image signal received from the main control unit 62.
As described above, embodiments of an imaging apparatus, medium, and method using infrared rays, such as that of
In contrast with a conventional imaging apparatus, which includes separate cameras for recognizing an iris and for sensing a color image, an imaging apparatus according to an embodiment of the present invention can recognize an object component and obtain a color image using a single camera. Hence, an imaging apparatus according to embodiments of the present invention may be widely applied to mobile terminals (e.g., cellular phones), criminal discriminating apparatuses which compare faces of suspects with personal items of criminals, airline passenger discriminating apparatuses that compare faces of airline passengers with pictures on passports of the passengers, entrance terminals based on biometric authentication, etc., for example. In this case, the imaging apparatus according to embodiments of the present invention may authenticate users by recognizing at least one of their irises and their faces, which are taken as objects to be extracted from images. Furthermore, the imaging apparatus, according to embodiments of the present invention, may also be used to determine, using an infrared component, whether an object extracted from an image is an image of a picture or a live image.
When a person or an animal is recognized from an image using infrared lighting and a sensor, the imaging apparatus using infrared rays, and the object discrimination method thereof, may be used to implement a recognition system that is robust to surrounding illumination.
Further, as described above, according to embodiments of the present invention, an infrared component cell can be far more easily implemented than in the conventional art. In a conventional method of discriminating an object component of an image, for example, a face, without using an infrared component, discrimination is greatly affected by the ambient illumination around the face. However, according to embodiments of the present invention, an object component can be more accurately identified while being less affected by the ambient illumination of an object, such as the face, because an infrared component of an image sensed through the implemented infrared filter is used. Furthermore, in contrast with the conventional art, where an extra iris recognition camera is required to recognize an iris in addition to an image sensing camera, embodiments of the present invention can perform both iris identification and color image acquisition using a single camera by employing the image sensor 40, which senses an infrared component and a visible light component together. In other words, the two operations, i.e., iris identification and color image acquisition, can be incorporated and executed by a single camera. Therefore, the imaging apparatus according to embodiments of the present invention can be made compact.
In addition to the above described embodiments, embodiments (and/or aspects of embodiments) of the present invention can also be implemented through computer readable code/instructions in/on a medium, e.g., a computer readable medium. The medium can correspond to any medium/media permitting the storing and/or transmission of the computer readable code.
The computer readable code can be recorded/transferred on a medium in a variety of ways, with examples of the medium including magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs, or DVDs), and storage/transmission media such as carrier waves, as well as through the Internet, for example. The media may also be a distributed network, so that the computer readable code is stored/transferred and executed in a distributed fashion.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
10-2004-0090917 | Nov 2004 | KR | national |