Claims
- 1. An image processing apparatus characterized by comprising:
image processing means for sensing object images from different directions, extracting feature points from the sensed object images, and computing a feature pattern on the basis of the extracted feature points.
- 2. An apparatus according to claim 1, comprising:
a plurality of image sensing means for sensing object images from different directions; and normalization means for extracting feature points from the object images sensed by said plurality of image sensing means, setting a feature region on the basis of the extracted feature points, segmenting the set feature region into a plurality of regions, computing predetermined information in each segmented region, and computing a feature pattern on the basis of the computed predetermined information.
- 3. An apparatus according to claim 1, comprising:
a plurality of image sensing means for sensing object images from different directions; normalization means for extracting feature points from the object images sensed by said plurality of image sensing means, setting a feature region on the basis of the extracted feature points, segmenting the set feature region into a plurality of regions, computing an average value of brightness levels in each segmented region, and computing a feature pattern on the basis of the computed average value; registration means for registering the feature pattern computed by said normalization means as a feature pattern associated with a predetermined object; and verification means for specifying an object associated with the object image by comparing the feature pattern computed by said normalization means with the feature pattern registered in said registration means.
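A minimal Python sketch of the normalization and verification pipeline recited in claims 2 and 3: bound a feature region by the extracted feature points, segment it into a grid of regions, average the brightness of each region, and compare the resulting pattern against registered ones. The bounding-box region, the 8×8 grid, the Euclidean metric, the threshold, and all names are illustrative assumptions, not taken from the claims.

```python
import numpy as np

def compute_feature_pattern(image, feature_points, grid=(8, 8)):
    """Normalization per claims 2/3: set a feature region from the
    extracted feature points, segment it, and average brightness."""
    xs = [int(x) for x, _ in feature_points]
    ys = [int(y) for _, y in feature_points]
    # Illustrative feature region: bounding box of the feature points.
    region = image[min(ys):max(ys), min(xs):max(xs)]
    pattern = []
    for band in np.array_split(region, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            pattern.append(cell.mean())  # average brightness of one cell
    return np.asarray(pattern)

def verify(pattern, registry, threshold=10.0):
    """Verification per claim 3: specify the registered object whose
    stored pattern is nearest, if near enough (metric/threshold assumed)."""
    best_id, best_dist = None, np.inf
    for obj_id, ref in registry.items():
        dist = np.linalg.norm(pattern - ref)
        if dist < best_dist:
            best_id, best_dist = obj_id, dist
    return best_id if best_dist < threshold else None
```

Registration (claim 3's registration means) then amounts to storing the computed vector, e.g. `registry[name] = compute_feature_pattern(img, pts)`.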
- 4. An apparatus according to claim 3, wherein said plurality of image sensing means line up vertically, and
said normalization means computes the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, and a feature point group including central points of right and left pupils of the object image.
- 5. An apparatus according to claim 3, wherein said plurality of image sensing means line up horizontally, and
said normalization means computes the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, a feature point group including central points of right and left pupils and a central point of a left nasal cavity of the object image, and a feature point group including central points of right and left pupils and a central point of a right nasal cavity of the object image.
- 6. An apparatus according to claim 3, wherein said plurality of image sensing means line up vertically and horizontally, and
said normalization means computes the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, a feature point group including central points of right and left pupils and a central point of a left nasal cavity of the object image, a feature point group including central points of right and left pupils and a central point of a right nasal cavity of the object image, and a feature point group including central points of right and left pupils of the object image.
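Claims 4 to 6 choose among fixed groups of facial feature points (pupil centers and nasal cavity centers) depending on how the cameras are arranged. The sketch below illustrates one way such a selection with fallback could work; the point labels, group ordering, and layout keys are assumptions for illustration only.

```python
# Hypothetical labels for the claimed feature points.
GROUPS = [
    ("pupils+both_nostrils", ("l_pupil", "r_pupil", "l_nostril", "r_nostril")),
    ("pupils+l_nostril",     ("l_pupil", "r_pupil", "l_nostril")),
    ("pupils+r_nostril",     ("l_pupil", "r_pupil", "r_nostril")),
    ("pupils_only",          ("l_pupil", "r_pupil")),
]

ALLOWED = {
    # Claim 4: image sensing means lined up vertically.
    "vertical":   ("pupils+both_nostrils", "pupils_only"),
    # Claim 5: image sensing means lined up horizontally.
    "horizontal": ("pupils+both_nostrils", "pupils+l_nostril", "pupils+r_nostril"),
    # Claim 6: image sensing means lined up vertically and horizontally.
    "grid":       tuple(name for name, _ in GROUPS),
}

def select_feature_points(detected, layout):
    """Return the first allowed group whose points were all detected,
    e.g. falling back to the pupils alone when a nostril is occluded."""
    for name, labels in GROUPS:
        if name in ALLOWED[layout] and all(l in detected for l in labels):
            return [detected[l] for l in labels]
    return None
```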
- 7. An apparatus according to claim 3, wherein said normalization means extracts feature vectors of different dimensions from respective object images sensed by said plurality of image sensing means, and arranges the extracted feature vectors of different dimensions in turn to integrate them as a multi-dimensional feature pattern.
- 8. An apparatus according to claim 3, wherein said normalization means captures object images sensed by said plurality of image sensing means at predetermined time intervals, computes feature patterns of the object images of identical times, and arranges feature patterns of different times in turn to integrate them as a time-series feature pattern.
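Claims 7 and 8 integrate the per-camera feature vectors by ordered concatenation, first across views and then across capture times. A short sketch under the assumption that each pattern is a 1-D numpy array:

```python
import numpy as np

def integrate_views(per_view_patterns):
    """Claim 7: arrange feature vectors of (possibly) different
    dimensions in a fixed camera order and concatenate them into
    one multi-dimensional feature pattern."""
    return np.concatenate([np.ravel(p) for p in per_view_patterns])

def integrate_times(per_time_views):
    """Claim 8: one integrated pattern per capture instant, then the
    instants concatenated in time order into a time-series pattern."""
    return np.concatenate([integrate_views(v) for v in per_time_views])
```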
- 9. An apparatus according to claim 1, comprising:
image input means for sensing an object image from different positions, and inputting a plurality of object images at different image sensing positions; feature extraction means for extracting feature patterns that represent features of an object from the plurality of object images input by said image input means; verification means for verifying the plurality of feature patterns extracted by said feature extraction means with a reference feature pattern which is registered in advance; and discrimination means for, when at least one of the plurality of feature patterns extracted by said feature extraction means matches the reference feature pattern which is registered in advance as a result of verification of said verification means, determining that an object associated with that object image is a person himself or herself.
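Claim 9's decision rule accepts the person as genuine if any one of the patterns extracted from the differently positioned views matches the registered reference. A one-function sketch, with the distance metric and threshold assumed:

```python
import numpy as np

def is_same_person(patterns, reference, threshold=10.0):
    """Accept when at least one per-position feature pattern matches
    the pre-registered reference pattern (claims 9 and 24)."""
    return any(np.linalg.norm(p - reference) < threshold for p in patterns)
```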
- 10. An apparatus according to claim 9, wherein said image input means has a plurality of image sensing means which are set in advance at a plurality of predetermined positions, and sense an object image from a plurality of different positions, and inputs a plurality of object images at different image sensing positions using said plurality of image sensing means.
- 11. An apparatus according to claim 9, wherein said feature extraction means comprises:
feature point detection means for detecting feature points of an object from the input object image; feature region setting means for setting a feature region on the basis of the feature points detected by said feature point detection means; region segmentation means for segmenting the feature region set by said feature region setting means into a plurality of regions; and feature pattern extraction means for computing brightness average values in the regions segmented by said region segmentation means, and extracting a feature pattern which represents a feature of the object on the basis of the brightness average values.
- 12. An apparatus according to claim 1, comprising:
image input means for sensing an object image from different positions, and inputting a plurality of object images at different image sensing positions; input image determination means for determining an image sensing position of an object image to be used from the plurality of object images input by said image input means upon registration of a feature pattern; first feature extraction means for extracting a feature pattern which represents a feature of an object from the object image determined by said input image determination means; registration means for registering the feature pattern extracted by said first feature extraction means as a reference feature pattern associated with the object in correspondence with position information indicating the image sensing position of the corresponding object image; verification image selection means for selecting an object image at an image sensing position, which corresponds to the position information registered together with the feature pattern of the object to be verified registered in said registration means, of the plurality of object images input by said image input means upon verification of a feature pattern; second feature extraction means for extracting a feature pattern which represents a feature of the object from the object image selected by said verification image selection means; and verification means for specifying an object associated with the object image by verifying the feature pattern extracted by said second feature extraction means with the feature pattern of the object to be verified registered in said registration means.
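In claim 12 the registry keys each reference pattern to the image sensing position it was taken from, and verification re-extracts a pattern from an image at that same position. A sketch, with `extract` standing in for the feature extraction of claims 11 and 15 and the threshold assumed:

```python
import numpy as np

def register(registry, obj_id, images_by_position, position, extract):
    """Store the reference pattern together with its sensing position."""
    registry[obj_id] = (position, extract(images_by_position[position]))

def verify_at_registered_position(registry, obj_id, images_by_position,
                                  extract, threshold=10.0):
    """Select the probe image taken at the registered position, then
    compare its pattern with the stored reference pattern."""
    position, reference = registry[obj_id]
    probe = extract(images_by_position[position])
    return np.linalg.norm(probe - reference) < threshold
```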
- 13. An apparatus according to claim 1, wherein an object image is input at different image sensing positions by moving at least one image sensing means to a plurality of predetermined positions and sensing an object image at the respective positions.
- 14. An apparatus according to claim 1, comprising:
a plurality of image sensing means, respectively set in advance at a plurality of predetermined positions, for sensing an object image from a plurality of different positions; determination means for determining a position of the image sensing means to be used of said plurality of image sensing means upon registration of a feature pattern; first feature extraction means for extracting a feature pattern which represents a feature of an object from the object image obtained by the image sensing means determined by said determination means; registration means for registering the feature pattern extracted by said first feature extraction means as a reference feature pattern associated with the object in correspondence with position information indicating the position of the image sensing means determined by said determination means; selection means for selecting the image sensing means at a position, which corresponds to the position information registered together with the feature pattern of the object in said registration means, of said plurality of image sensing means upon verification of a feature pattern; second feature extraction means for extracting a feature pattern which represents a feature of the object from the object image obtained by the image sensing means selected by said selection means; and verification means for specifying an object associated with the object image by verifying the feature pattern extracted by said second feature extraction means with the feature pattern of the object registered in said registration means.
- 15. An apparatus according to claim 14, wherein each of said first and second feature extraction means comprises:
feature point detection means for detecting feature points of an object from the input object image; feature region setting means for setting a feature region on the basis of the feature points detected by said feature point detection means; region segmentation means for segmenting the feature region set by said feature region setting means into a plurality of regions; and feature pattern extraction means for computing brightness average values in the regions segmented by said region segmentation means, and extracting a feature pattern which represents a feature of the object on the basis of the brightness average values.
- 16. An image processing method comprising:
the step of sensing object images from different directions, extracting feature points from the sensed object images, and computing a feature pattern on the basis of the extracted feature points.
- 17. A method according to claim 16, comprising:
the first step of sensing object images from different directions; and the second step of extracting feature points from the object images sensed in the first step, setting a feature region on the basis of the extracted feature points, segmenting the set feature region into a plurality of regions, computing predetermined information in each segmented region, and computing a feature pattern on the basis of the computed predetermined information.
- 18. A method according to claim 16, comprising:
the first step of sensing object images from different directions; the second step of extracting feature points from the object images sensed in the first step, setting a feature region on the basis of the extracted feature points, segmenting the set feature region into a plurality of regions, computing an average value of brightness levels in each segmented region, and computing a feature pattern on the basis of the computed average value; the third step of registering the feature pattern computed in the second step as a feature pattern associated with a predetermined object; and the fourth step of specifying an object associated with the object image by comparing the feature pattern computed in the second step with the feature pattern registered in the third step.
- 19. A method according to claim 18, wherein the first step includes the step of sensing object images from different directions which line up vertically, and
the second step includes the step of computing the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, and a feature point group including central points of right and left pupils of the object image.
- 20. A method according to claim 18, wherein the first step includes the step of sensing object images from different directions which line up horizontally, and
the second step includes the step of computing the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, a feature point group including central points of right and left pupils and a central point of a left nasal cavity of the object image, and a feature point group including central points of right and left pupils and a central point of a right nasal cavity of the object image.
- 21. A method according to claim 18, wherein the first step includes the step of simultaneously sensing object images from different directions which line up vertically and horizontally, and
the second step includes the step of computing the feature pattern using one of a feature point group including central points of right and left pupils and central points of right and left nasal cavities of the object image, a feature point group including central points of right and left pupils and a central point of a left nasal cavity of the object image, a feature point group including central points of right and left pupils and a central point of a right nasal cavity of the object image, and a feature point group including central points of right and left pupils of the object image.
- 22. A method according to claim 18, wherein the second step includes the step of extracting feature vectors of different dimensions from respective object images sensed in the first step, and arranging the extracted feature vectors of different dimensions in turn to integrate them as a multi-dimensional feature pattern.
- 23. A method according to claim 18, wherein the second step includes the step of capturing object images sensed in the first step at predetermined time intervals, computing feature patterns of the object images of identical times, and arranging feature patterns of different times in turn to integrate them as a time-series feature pattern.
- 24. A method according to claim 16, comprising:
the first step of sensing an object image from different positions, and inputting a plurality of object images at different image sensing positions; the second step of extracting feature patterns that represent features of an object from the plurality of object images input in the first step; the third step of verifying the plurality of feature patterns extracted in the second step with a reference feature pattern which is registered in advance; and the fourth step of determining, when at least one of the plurality of feature patterns extracted in the second step matches the reference feature pattern which is registered in advance as a result of verification of the third step, that an object associated with that object image is a person himself or herself.
- 25. A method according to claim 24, wherein the first step includes the step of inputting a plurality of object images at different image sensing positions using a plurality of image sensing means which are set in advance at a plurality of predetermined positions, and sense an object image from a plurality of different positions.
- 26. A method according to claim 24, wherein the second step includes:
the step of detecting feature points of an object from the input object image; the step of setting a feature region on the basis of the detected feature points; the step of segmenting the set feature region into a plurality of regions; and the step of computing brightness average values in the segmented regions, and extracting a feature pattern which represents a feature of the object on the basis of the brightness average values.
- 27. A method according to claim 16, comprising:
the first step of sensing an object image from different positions, and inputting a plurality of object images at different image sensing positions; the second step of determining an image sensing position of an object image to be used from the plurality of object images input in the first step upon registration of a feature pattern; the third step of extracting a feature pattern which represents a feature of an object from the object image determined in the second step; the fourth step of registering the feature pattern extracted in the third step as a reference feature pattern associated with the object in correspondence with position information indicating the image sensing position of the corresponding object image; the fifth step of selecting an object image at an image sensing position, which corresponds to the position information registered together with the feature pattern of the object to be verified registered in the fourth step, of the plurality of object images input in the first step upon verification of a feature pattern; the sixth step of extracting a feature pattern which represents a feature of the object from the object image selected in the fifth step; and the seventh step of specifying an object associated with the object image by verifying the feature pattern extracted in the sixth step with the feature pattern of the object to be verified registered in the fourth step.
- 28. A method according to claim 16, wherein an object image is input at different image sensing positions by moving at least one image sensing means to a plurality of predetermined positions and sensing an object image at the respective positions.
- 29. A method according to claim 16, comprising:
the first step of inputting a plurality of object images at different image sensing positions using a plurality of image sensing means, respectively set in advance at a plurality of predetermined positions, for sensing an object image from a plurality of different positions; the second step of determining a position of the image sensing means to be used of said plurality of image sensing means upon registration of a feature pattern; the third step of extracting a feature pattern which represents a feature of an object from the object image obtained by the image sensing means determined in the second step; the fourth step of registering the feature pattern extracted in the third step as a reference feature pattern associated with the object in correspondence with position information indicating the position of the image sensing means determined in the second step; the fifth step of selecting the image sensing means at a position, which corresponds to the position information registered together with the feature pattern of the object in the fourth step, of said plurality of image sensing means upon verification of a feature pattern; the sixth step of extracting a feature pattern which represents a feature of the object from the object image obtained by the image sensing means selected in the fifth step; and the seventh step of specifying an object associated with the object image by verifying the feature pattern extracted in the sixth step with the feature pattern of the object registered in the fourth step.
- 30. A method according to claim 29, wherein each of the third and sixth steps includes:
the step of detecting feature points of an object from the input object image; the step of setting a feature region on the basis of the detected feature points; the step of segmenting the set feature region into a plurality of regions; and the step of computing brightness average values in the segmented regions, and extracting a feature pattern which represents a feature of the object on the basis of the brightness average values.
Priority Claims (2)

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2000-074489 | Mar 2000 | JP | |
| 2000-347043 | Nov 2000 | JP | |
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is based upon and claims the benefit of priority from the prior Japanese Patent Applications No. 2000-074489, filed Mar. 16, 2000; and No. 2000-347043, filed Nov. 14, 2000, the entire contents of both of which are incorporated herein by reference.