METHOD OF DETERMINING THE SPATIAL RELATIONSHIP OF AN EYE OF A PERSON WITH RESPECT TO A CAMERA DEVICE

Information

  • Patent Application
  • Publication Number
    20070171369
  • Date Filed
    January 10, 2007
  • Date Published
    July 26, 2007
Abstract
A method of determining the spatial relationship of an eye (10) of a person with respect to a camera device (20) which provides images of the eye (10) comprises: a model acquisition phase in which a customized model of the eye (10) is constructed and a reference spatial relationship of the eye model with respect to the camera device (20) is determined using a reference image of the eye (10); and a tracking phase in which position and/or rotation coordinates of the eye (10) are determined by aligning the eye model to a current image of the eye (10).
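As a minimal illustration of the model acquisition phase, assuming a simplified spherical eye-shape model already aligned with the reference image, the three-dimensional model coordinates of a reference image point can be obtained by intersecting its projecting ray with the sphere. The function below is a hypothetical sketch for illustration only, not the patented implementation, which may use a superposition of two ellipsoids.

```python
import numpy as np

def intersect_ray_sphere(origin, direction, center, radius):
    """Intersect a projecting ray with a spherical eye-shape model.

    origin, direction: ray from the camera's optical centre through an
    image point; center, radius: the aligned spherical eye model.
    Returns the nearest intersection point, or None if the ray misses.
    """
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    # Solve |origin + t*d - center|^2 = radius^2 for the ray parameter t.
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None          # the ray misses the eye surface
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0:
        t = (-b + np.sqrt(disc)) / 2.0  # camera inside the sphere
        if t < 0:
            return None
    return origin + t * d    # surface point in camera coordinates
```

With the camera at the origin and the model centred 10 units along the optical axis with radius 5, a ray along the axis intersects the front surface at depth 5.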
Description

BRIEF DESCRIPTION OF THE DRAWINGS

Other objects, characteristics and results of the invention will be clear from the following description given by way of non-limitative example and illustrated by the accompanying figures, in which:



FIG. 1 is a schematic illustration of a human eye and a coordinate system attached thereto;



FIG. 2a is a photograph of a person's eye during a Purkinje analysis, the cornea of the eye being intact;



FIG. 2b is a photograph similar to FIG. 2a, but with a layer of the cornea having been cut off;



FIG. 3 is a schematic view illustrating an experimental setup used for corneal ablation surgery;



FIG. 4 shows a schematic experimental setup for Purkinje analysis;



FIG. 5 is a schematic view illustrating the relationship between the person's eye having a coordinate system attached thereto and the camera device to which a main coordinate system fixed in space is attached;



FIG. 6 is an enlarged image of a person's eye with characteristic blood vessel features;



FIG. 7 is a schematic representation of projecting rays simulated by a computer during the model acquisition phase, according to a first embodiment of the invention;



FIG. 8 shows the spatial distribution of reference image points in the coordinate system attached to the eye, according to the first embodiment of the invention;



FIG. 9 shows a plurality of templates, each of which is associated to a respective reference image point, according to the first embodiment of the invention;



FIG. 10 is a schematic flow chart illustrating the method steps conducted in the first embodiment of the invention;



FIG. 11 shows three gray value maps used in the second embodiment of the invention;



FIG. 12 is a schematic flow chart illustrating essential method steps of the second embodiment of the invention;



FIG. 13 shows an experimental setup used in an alternative embodiment of the invention based on eye model acquisition by triangulation of eye surface points; and



FIG. 14 shows yet another experimental setup used in an alternative embodiment based on eye model acquisition by multiple cameras.


Claims
  • 1. A method of determining the spatial relationship of an eye (10) of a person with respect to a camera device (20) which provides images of the eye (10), characterized in that the method comprises: a model acquisition phase in which a customized model of the eye (10) is constructed and a reference spatial relationship of the eye model with respect to the camera device (20) is determined using a reference image of the eye (10); and a tracking phase in which position and/or rotation coordinates of the eye (10) are determined by aligning the eye model to a current image of the eye (10).
  • 2. The method according to claim 1, characterized in that the eye model comprises: a three-dimensional eye shape model representing the surface of the eye (10) and being aligned with the reference image of the eye (10) during the model acquisition phase; andtexture information associated with a plurality of points located on the surface of the eye (10).
  • 3. The method according to claim 2, characterized in that the three-dimensional eye shape model comprises a superposition of two ellipsoids, one of which represents the globe (12) and the other of which represents the cornea (16).
  • 4. The method according to claim 3, characterized in that at least one of the two ellipsoids is a sphere.
  • 5. The method according to claim 2, characterized in that the three-dimensional eye shape model comprises a wire mesh structure of connected points.
  • 6. The method according to claim 2, characterized in that the three-dimensional eye shape model comprises a plurality of points defined by linear combinations of shape eigenvectors.
  • 7. The method according to claim 2, characterized in that the reference spatial relationship of the eye shape model with respect to the camera device (20) is obtained by applying a Purkinje analysis to the reference image of the eye (10) based on corneal reflexes of illumination sources (32) located at predefined positions relative to the camera device (20).
  • 8. The method according to claim 2, characterized in that the reference spatial relationship of the eye shape model with respect to the camera device (20) is obtained by assuming that the person is fixating at a fixed position in space previously defined relative to the camera device (20).
  • 9. The method according to claim 2, characterized in that the texture information is stored as a feature template map comprising: three-dimensional model coordinates of a plurality of reference image points at which eye features are located; andtemplates containing parts of the reference image extracted around each associated point and showing the corresponding eye feature.
  • 10. The method according to claim 9, characterized in that the features are selected from the group consisting of blood vessels, iris features, limbus, limbus centre, pupil centre, pupil edge and artificial markers.
  • 11. The method according to claim 9, characterized in that the three-dimensional model coordinates of a point are obtained by detecting the corresponding eye feature in the reference image of the eye (10) and intersecting a projecting ray (40) of the eye feature with the aligned eye shape model.
  • 12. The method according to claim 9, characterized in that the tracking phase comprises the following steps: a) a template matching step (S10) in which, for a plurality of feature templates stored in the feature template map, a region in the current eye image bearing the largest resemblance to the respective feature template is searched for; b) a coordinate determination step (S20) in which the coordinates of the regions found in step a) are determined as respective current feature positions; and c) an alignment step (S30) in which an image distance between the current feature positions determined in step b) and the positions of the corresponding three-dimensional model features projected into the current eye image is minimized by fitting position and/or rotation parameters of the eye model.
  • 13. The method according to claim 12, characterized in that for each feature template stored in the feature template map, the search conducted during the template matching step (S10) is limited to a predefined zone of the current eye image around the feature position determined in a previous coordinate determination step (S20).
  • 14. The method according to claim 2, characterized in that the texture information is stored as a gray value map comprising: three-dimensional model coordinates of a plurality of previously defined reference image points; andgray values extracted from the reference image at each of the points.
  • 15. The method according to claim 14, characterized in that the three-dimensional model coordinates of a point are obtained by intersecting a projecting ray (40) of the reference image point with the aligned eye shape model.
  • 16. The method according to claim 14, characterized in that the tracking phase comprises an alignment step (S130) in which the total gray value difference between the current eye image and the gray value map projected into the current eye image is minimized by fitting position and/or rotation parameters of the eye model.
  • 17. The method according to claim 1, characterized in that the tracking phase is continuously repeated in regular time intervals.
  • 18. The method according to claim 1, characterized in that it furthermore comprises a step of determining a spatial relationship of the person's eye with respect to a surgical device (26) based on a previously determined spatial relationship of the camera device (20) with respect to the surgical device (26).
  • 19. The method according to claim 1, characterized in that the tracking phase furthermore comprises determining internal degrees of freedom of the model of the eye (10) selected from the group consisting of relative movements of eye model features, scaling of the whole eye model, scaling of eye model features, deformations of the whole eye model, deformations of eye model features and appearance changes due to illumination influences.
  • 20. A computer program, comprising: computer program code which, when executed on a computer (22) connected to a camera device (20), enables the computer (22) to carry out a method according to claim 1.
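As a rough, hypothetical sketch of the tracking phase of claim 12 (not the patented implementation), the template matching, coordinate determination and alignment steps might look as follows, here simplified to a brute-force sum-of-squared-differences search and a pure 2-D translation fit in place of the full position/rotation fit:

```python
import numpy as np

def match_template(image, template):
    """Template matching step (S10): scan the current eye image for the
    region bearing the largest resemblance to the feature template,
    scored here by the sum of squared differences (smaller is better)."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = None, None
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = np.sum((image[y:y + h, x:x + w] - template) ** 2)
            if best_score is None or score < best_score:
                # Coordinate determination step (S20): keep the centre of
                # the best-matching region as the current feature position.
                best_score, best_pos = score, (x + w // 2, y + h // 2)
    return best_pos

def align_translation(projected_model_points, current_feature_positions):
    """Alignment step (S30), heavily simplified: minimise the image
    distance between the current feature positions and the projected
    model features by fitting a 2-D translation; the least-squares
    solution is simply the mean offset."""
    m = np.asarray(projected_model_points, dtype=float)
    c = np.asarray(current_feature_positions, dtype=float)
    return (c - m).mean(axis=0)
```

In the claimed method the alignment step fits the full position and/or rotation parameters of the three-dimensional eye model rather than a planar translation, and per claim 13 the search of step (S10) can be restricted to a zone around the feature position found in the previous coordinate determination step.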
Priority Claims (1)
  • Number: DE102006002001.4
  • Date: Jan 2006
  • Country: DE
  • Kind: national