In many applications in medicine, technology and art, 3D sensors are used for the three-dimensional acquisition of shape. Exemplary problems are the acquisition of components in the automotive industry and the measurement of statues. Exemplary medical problems in which living humans or parts of humans are acquired are the 3D acquisition of the head and the acquisition of faces, breasts, backs and feet. Exemplary measuring principles for optical 3D sensors are laser triangulation and coded illumination, for example with fringe projection.
As a rule, an optical sensor acquires or measures only from one viewing direction. A frequent requirement, however, is panoramic measurement of the object, or at least measurement over an angular range larger than that obtained from a single view ("3D wide angle measurement"). This problem is usually solved by recording the object from a number of directions and combining (registering) the various views. The object or the sensor is usually repositioned for this purpose.
A further requirement is to keep the time required for a measurement sequence as short as possible and to save the working step of repositioning. Moreover, quick measurement is required in the medical field because of possible movement of the person being measured, including inadvertent movement.
In the case of many systems, repositioning is therefore performed by motorized movement. The G-Scan system [Fraunhofer 04] of the Fraunhofer-Gesellschaft is used to measure human faces. No repositioning of the sensor or of the object is performed here. Rather, only a "virtual" repositioning of the sensor is performed by rotating a mirror into four positions sequentially in time; in each position a beam path arises that, with the aid of a respective further mirror, reaches a new virtual position of the sensor in relation to the measurement object (face). Disadvantages of the system are the necessarily sequential cycle of the measurements and the complicated movement mechanism of the mirrors. In addition, folding the mirrors over takes time.
A further requirement in technology and medicine is the measurement of the texture of the surface of objects in addition to the 3D shape.
This object can be achieved in part by modifying the 3D sensor, for example by using color cameras instead of black and white cameras. Color information is then also obtained for each measurement pixel. Such a unit is built by 3dMD [3dMD 06] using the stereo method.
There are likewise methods for acquiring the texture that make use of a color camera in addition to the sensor. As a rule, the color camera is calibrated photogrammetrically in relation to the sensor coordinate system. The color camera is used to take a picture once the 3D measurement has been performed, or during a pause between images. This picture is projected mathematically onto the 3D data ("mapped"), and the texture is calculated in this way [contento 05].
A mechanical movement of the sensor, of the object or of mirrors for 3D wide angle measurement requires too much time for some problems. For example, human faces should be measured in less than one second, since inadvertent movements lead to measuring errors. Elderly people, children, Parkinson's patients and, in general, people in a poor state of health in particular cannot remain at rest for long.
Furthermore, the mirror arrangement of the G-Scan product, for example, does not achieve panoramic measurement, but only measurement of the front half-space around the human head. The unit can be used to measure the face, but not a complete head.
Thirdly, the quality of the recorded textures is generally low. Using color video cameras instead of black and white video cameras is itself disadvantageous: the resolution (pixel count) of these color video cameras is mediocre, because they are optimized for outputting images quickly and continuously. The quality of the 3D measurement is thereby impaired.
When the illumination is performed by the (coding) illuminating system of the 3D sensor, this has the disadvantage that this illuminating system is designed for producing a structured illumination, that is to say it has a small aperture. It is known from photography that illumination with a small aperture leads to inhomogeneities and strong local variations in the observed object brightness, for example through shadowing and reflections. The problem of acquiring the object texture in the 3D image therefore cannot be solved effectively in this way, because the object texture should be a property that is as far as possible independent of the illumination, whereas, owing to the small illuminating aperture, the measured brightness depends strongly on the illumination.
In its preferred design, the invention is intended to solve the problems described: panoramic measurement without mechanical movement and, in addition, the acquisition of a high-resolution color texture that is largely independent of the illumination. The invention is preferably to be used to measure human heads, faces and other body parts, but it is, of course, also possible to acquire works of art, jewelry or technical objects in three dimensions.
An optical 3D sensor is to be used for the measurement; laser triangulation, coded illumination or the stereo method preferably come into consideration here. Two of these methods, laser triangulation and coded illumination, comprise an illuminating system and an observing system that have a common image field. The stereo method comprises at least two cameras that have a common image field and can be supported by active illumination with fringe projection. The respective common image field is denoted as the "image field of the 3D sensor". An axis that begins in the middle of the triangulation base of the sensor and ends in the middle of the image field of the 3D sensor is denoted as the "optical axis of the 3D sensor". The triangulation base here is either the distance between the pupil of the observing system and the pupil of the illuminating system, as illustrated in the corresponding figure, or, in the case of the stereo method, the distance between the pupils of the two cameras.
One feature of the invention is the division of the image field of the 3D sensor into a number of parts.
The division of the image field makes it possible to position the object (2) in one part of the image field and to place mirrors (3a, 3b, 3c) in other parts of the image field, in which other viewing directions of the object are respectively visible from the perspective of the sensor.
Since the positions of the mirrors and of the 3D sensor are known, the position of the associated real 3D view can be calculated from each virtual 3D view. These 3D views can be combined with the directly measured 3D view to form a complete 3D object (registration).
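Purely as an illustration (not part of the original disclosure): converting a virtual 3D view into the associated real 3D view amounts to reflecting every measured point through the known mirror plane. A minimal numpy sketch, assuming the mirror plane is given by a unit normal and one point on the plane:

```python
import numpy as np

def reflect_through_mirror(points, n, p):
    """Reflect 3D points (N x 3) through a mirror plane.

    The plane is given by a normal n and any point p lying on it.
    Points of the virtual view "behind" the mirror are mapped to
    their real positions in front of it.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)          # ensure a unit normal
    d = (points - p) @ n               # signed distance of each point to the plane
    return points - 2.0 * d[:, None] * n

# Example: a small virtual view reflected through a mirror tilted by 45 degrees
virtual = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.2]])
real = reflect_through_mirror(virtual, n=[0.0, 0.7071, 0.7071], p=[0.0, 0.0, 0.5])
print(real)
```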
The described procedure leads, however, to a problem which is demonstrated in the corresponding figure: the illumination reaches the object simultaneously over a number of optical paths, directly and via the mirrors, so that illuminations from various directions are superposed on the object.
The solution to this problem is a further feature of the invention. In a simple form of the solution, the superposition is avoided according to the invention in that the measurement is performed in a number of temporally consecutive phases. In each phase, only a part of the image field of the illuminating system is illuminated. In one of the phases, only the part of the image field (1) in which the object (2) is located is illuminated. In the other phases, one mirror (3a, 3b, 3c) is illuminated in each case. The remaining parts of the image field must be switched to dark. As a result, there is in each case only a single optical path from the illumination to the object, and no superposition of illumination from various directions arises.
In order to implement this solution, it is necessary to be able to address the illumination completely in spatial terms. This is the case, for example, when a video projector is used. A laser triangulation system with a light line traced by a scanner mirror can be used when, for example, the reversal points of the scanner movement are varied under control. Another possibility is to switch the light source on and off under control such that only the selected parts of the field of view are illuminated in each case.
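As a sketch only (the projector resolution and the rectangular image-field parts are assumed values, not taken from the disclosure), the phase control for an addressable illumination can be expressed as one binary projector mask per phase, in which exactly one part of the image field is switched to bright and all other parts remain dark:

```python
import numpy as np

# Assumed projector resolution and rectangular image-field parts
# (x0, y0, x1, y1) in projector pixels; purely illustrative values.
PROJ_W, PROJ_H = 1024, 768
FIELD_PARTS = {
    "object":   (342, 256, 682, 576),    # direct view of the object (2)
    "mirror_a": (0,   0,   341, 576),    # mirror 3a
    "mirror_b": (683, 0,   1024, 576),   # mirror 3b
    "mirror_c": (342, 0,   682, 255),    # mirror 3c
}

def phase_mask(part):
    """Binary projector mask that illuminates only one image-field part."""
    mask = np.zeros((PROJ_H, PROJ_W), dtype=np.uint8)
    x0, y0, x1, y1 = FIELD_PARTS[part]
    mask[y0:y1, x0:x1] = 255            # bright only inside the selected part
    return mask

# One measurement phase per image-field part, all other parts dark
phases = [phase_mask(name) for name in FIELD_PARTS]
```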
The introduction of the described temporally consecutive phases of the measurement admittedly lengthens the time required for measurement, but no mechanical movement is needed between the phases, so that a substantial gain in time remains as an advantage of the invention.
An acceleration of the measurement is possible with the aid of another inventive solution: with a specific arrangement of the mirrors, two 3D views can be measured simultaneously. This is the case whenever the corresponding parts of the image field of the illuminating system can be switched to bright simultaneously without the object regions illuminated thereby overlapping.
This can be achieved, for example, when both parts of the image field of the illumination are imaged onto the object each via a mirror, and the direction vectors of the light incident via these two mirrors oppose one another at an angle of approximately 180°, as shown in the corresponding figure.
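A simple numerical criterion for this condition (an illustrative sketch, not prescribed by the disclosure) is to test whether the two incident illuminating directions are approximately anti-parallel, that is, whether they enclose an angle close to 180°:

```python
import numpy as np

def directions_oppose(d1, d2, tol_deg=15.0):
    """True if two incident illuminating directions oppose one another,
    i.e. enclose an angle of approximately 180 degrees."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    angle = np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))
    return abs(angle - 180.0) <= tol_deg

# Light arriving at the object from the left and from the right mirror
print(directions_oppose([1.0, 0.0, -0.1], [-1.0, 0.0, -0.1]))  # True
```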
When the sensor is equipped with this feature, measurement thus comprises a phase with direct measurement, at least one phase with simultaneous measurement of two 3D views as described, and possibly further phases of the measurement of individual 3D views via a mirror.
The desired effect that various parts of the object can be simultaneously illuminated and measured can also be achieved in principle by controlling the illuminated fields of view such that they do not overlap one another. This is possible, in particular, when the object and its position are roughly known. It is then also not necessary for the illuminating directions to oppose one another.
A particular arrangement of the mirrors is illustrated in the corresponding figure.
This measuring geometry is particularly well suited to the acquisition of faces. The face can be acquired in three dimensions including the sides as far as the ears. A seating facility (9) on which the person (10) to be measured sits can be located below the mirror construction. As described above, measurement via the mirrors can be performed simultaneously. The measurement is therefore performed in a total of two or three phases.
A further particular arrangement of the mirrors is illustrated in the corresponding figure.
This measuring geometry is suitable for acquiring parts of the human head, in particular when, for example, a helmet is to be fitted. Otherwise than in the case of the design according to
A further particular arrangement of the mirrors, in which three mirrors are arranged around the measurement object in addition to the direct view, is illustrated in the corresponding figure.
The angle between the measuring directions is approximately 104°, close to the tetrahedral angle of approximately 109.5° known from the chemistry of the carbon atom. In accordance with the law of reflection, the mirrors are therefore tilted relative to the optical axis of the illumination of the sensor by approximately 52° and are positioned azimuthally in a regular 120° arrangement. The arrangement exhibits high symmetry and permits a 3D panoramic measurement with as large an overlap of the four 3D views as possible. It is suitable, for example, for the panoramic measurement of human heads; here, as well, the person is positioned on a seating facility below the mirror construction. Measurement is performed in a total of four phases.
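The geometry can be checked numerically; the following sketch (illustrative only) computes the direct measuring direction and the three mirror directions for a given polar angle and 120° azimuthal spacing and prints all pairwise angles, both for the tetrahedral angle of about 109.5° and for the 104° stated above:

```python
import numpy as np

def measuring_directions(polar_deg):
    """Direct measuring direction plus three directions obtained via mirrors
    arranged azimuthally at 120 degree spacing, all at the given polar angle
    from the direct axis."""
    t = np.radians(polar_deg)
    dirs = [np.array([0.0, 0.0, -1.0])]          # direct view along the optical axis
    for az in (0.0, 120.0, 240.0):
        a = np.radians(az)
        dirs.append(np.array([np.sin(t) * np.cos(a),
                              np.sin(t) * np.sin(a),
                              -np.cos(t)]))
    return np.array(dirs)

def pairwise_angles(dirs):
    angles = []
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            c = np.clip(np.dot(dirs[i], dirs[j]), -1.0, 1.0)
            angles.append(np.degrees(np.arccos(c)))
    return angles

# With the tetrahedral angle of ~109.47 deg all six pairwise angles are equal;
# with ~104 deg the mirror-to-mirror angles come out somewhat larger.
for polar in (109.47, 104.0):
    print(polar, [round(a, 1) for a in pairwise_angles(measuring_directions(polar))])
```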
A further embodiment of the invention relates to checking for movement artifacts. When measuring objects that may move, such as humans, a suitable algorithm can be used after each phase of measurement to check whether the person being measured has moved too much, or whether the data of this phase cannot be used for other reasons (for example a malfunction of the 3D sensor). The phase of measurement in question can then be repeated immediately before the next phase proceeds. For this purpose, the partial images of each measurement are checked for known properties, as described in [Creath 86], for example.
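The concrete check depends on the measuring principle of the 3D sensor. As one hedged illustration, assuming a four-step phase-shifting fringe sensor (which the disclosure does not prescribe), a low or largely absent fringe modulation in the partial images of a phase is a typical indicator of movement or sensor malfunction:

```python
import numpy as np

def fringe_modulation(i1, i2, i3, i4):
    """Per-pixel fringe modulation for a 4-step phase-shifting sequence
    (phase shifts of 0, 90, 180, 270 degrees)."""
    return 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)

def phase_looks_valid(images, min_modulation=5.0, min_valid_fraction=0.8):
    """Crude movement/malfunction check: a sufficient fraction of pixels must
    show a usable fringe modulation, otherwise the phase should be repeated.
    The thresholds are illustrative values for 8-bit images."""
    i1, i2, i3, i4 = (img.astype(float) for img in images)
    mod = fringe_modulation(i1, i2, i3, i4)
    valid = np.count_nonzero(mod > min_modulation) / mod.size
    return valid >= min_valid_fraction
```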
It is therefore to be expected when operating such a measuring apparatus that the position of the mirrors varies slightly in the course of time, for example when a patient bumps against the mirror construction. Such small variations of the mirror positions would have the effect that the registration of the 3D views no longer functions exactly in later operation, and that discontinuous transitions become visible in the data of the 3D wide angle measurement. In order to avoid this problem, the data of the 3D views need to be registered relative to one another during operation of the measuring apparatus after each measurement, and this registration is integrated into the evaluation in accordance with the invention.
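The registration procedure itself is not specified further here. As a sketch, assuming corresponding points in the overlap region of two 3D views have already been established (for example by an ICP-type procedure), the small rigid correction can be computed by a least-squares fit:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) that maps src points onto dst points.

    src, dst: (N, 3) arrays of corresponding points taken from the overlap
    of two 3D views. Returns R (3x3) and t (3,) with dst ~ src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```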
The concept of the mirror construction can be combined with the concept of rotating the measurement object. Thus, a measurement object can first be measured from a number of directions with the aid of the mirror construction, then rotated or displaced, and then measured again. The process can be repeated several times. This produces a number of 3D wide angle views which can be registered relative to one another and combined to form a comprehensive 3D wide angle view. This can serve, in particular, to fill gaps (shadowed regions) in the 3D view.
The invention can also be used to record a texture of the object to be measured in addition to the 3D information. Two methods come into consideration in order to implement the advantageous recording of a texture in accordance with the invention.
Firstly, when use is made of an illuminating unit of the 3D sensor with an addressable illumination, this illuminating unit can be enhanced such that it can also illuminate the object in various colors, preferably the three primary colors of red, green and blue. Video projectors are suitable for implementation. The sensor is further equipped with black and white cameras. The method has the advantage that no further hardware is required for texture measurement when use is made of a video projector or another controllable light source.
By projecting the various colors and recording with the black and white camera, the hue can be calculated pixel by pixel in correspondence with the 3D views. The parts of the image field are assigned to the various 3D views, and these 3D views are thereafter cut apart from one another and registered, as explained.
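As an illustrative sketch (the normalization against a white reference is an added assumption, not part of the disclosure), the three black and white captures taken under red, green and blue illumination can be combined into an RGB texture, from which the hue of any pixel follows:

```python
import numpy as np
import colorsys  # used only for the per-pixel hue example below

def rgb_texture(img_r, img_g, img_b, white_r, white_g, white_b):
    """Combine three black and white captures taken under red, green and blue
    illumination into an RGB texture, normalized by captures of a white
    reference under the same three illuminations (illustrative normalization)."""
    eps = 1e-6
    r = img_r.astype(float) / (white_r + eps)
    g = img_g.astype(float) / (white_g + eps)
    b = img_b.astype(float) / (white_b + eps)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def hue_at(texture, row, col):
    """Hue of a single texture pixel, in the range [0, 1)."""
    r, g, b = texture[row, col]
    return colorsys.rgb_to_hsv(r, g, b)[0]
```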
The illuminating system of the 3D sensor generally has a very small illuminating aperture. According to the invention, this very small illuminating aperture is additionally increased decisively by the mirror construction, because additional virtual images of the illuminating device are produced which are at a large angle to the actual illuminating device (synthetic aperture). Texture pictures with a larger illuminating aperture reproduce the texture better than pictures with a small aperture. In particular, the illumination becomes more homogeneous, and fewer shadows and highlights are produced. The brightness in the observed image also no longer depends so strongly on the local inclination of the observed surface element.
A further and improved possibility for measuring a texture in combination with the mirror construction consists in positioning an additional color camera (14) in the vicinity of the optical 3D sensor, see the corresponding figure.
When the 3D sensor is based on a triangulation method, an advantageous position for accommodating the color camera is a location in the vicinity of the triangulation base of the triangulation sensor, because then the color camera does not further enlarge the angular range of the illuminating and observing beams, and smaller mirrors suffice.
The image field of the color camera is intended to correspond approximately to the image field of the 3D sensor. The color camera can be optimized for recording a single image and need not be designed for recording image sequences. Thus, digital color photographic cameras can be used whose resolution exceeds the resolution of the 3D measurement video cameras.
In any case, the usual software models for managing textured 3D data envisage the possibility of representing and managing textures that have a higher resolution than the 3D data. According to the prior art, a color digital photographic camera with 12 megapixels is available, for example.
These color photographic cameras further have an optimized system for color rendition with automatic white balance, and can therefore record and reproduce colors better. The position of the color camera and the position of all its observing beams in space must be known, this purpose being served by a camera calibration that is to be carried out.
In this implementation of the invention, the illumination for the color recording with the aid of an additional color camera is not intended to be performed, or not to be performed only, by the illuminating system of the 3D sensor (although this is possible), since its aperture is relatively small and the quality of the texture measurement can be improved further. This holds even when the illuminating aperture is increased by the reflection.
Rather, the illumination is intended to be performed by a separate illuminating system with a high aperture. For this purpose it is preferred, alternatively or in addition, to use flash systems that flash indirectly, for example via a reflector disk, and thereby to enlarge the aperture. Here, too, the illuminating aperture is additionally increased decisively by the mirror construction. Experiments have shown that very good texture measurement is possible with a mirror construction and an illuminating aperture of 5°×5° for the color recording.
When this illumination is used, the various views of the measurement object are simultaneously visible at various points in the image field of the color camera and can be recorded at the same time with a single recording. Thereafter, the views can be extracted individually for evaluation. According to the invention, in a further method step the individual views are imaged algorithmically onto the measured 3D wide angle measurement with the aid of the camera calibration. It is thereby possible to add a texture with an extended angular range to the 3D wide angle measurement.
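As a sketch of this mapping step, assuming a simple pinhole model for the calibrated color camera (lens distortion is ignored here), each measured 3D point can be projected into the color picture and assigned the color found there:

```python
import numpy as np

def sample_texture(points, image, K, R, t):
    """Map colors from a calibrated color camera onto measured 3D points.

    points: (N, 3) points of the 3D wide angle measurement (world frame).
    image:  (H, W, 3) color picture from the calibrated color camera.
    K, R, t: intrinsic matrix, rotation and translation of the camera
             (world -> camera), as obtained from the camera calibration.
    Returns an (N, 3) array of colors (nearest-neighbour sampling,
    no lens distortion model).
    """
    cam = points @ R.T + t                      # transform into the camera frame
    uvw = cam @ K.T                             # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    colors = image[v, u].astype(float)
    colors[cam[:, 2] <= 0] = np.nan             # points behind the camera get no color
    return colors
```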
Just as in the case of the 3D measurement, unavoidable small variations in the mirror construction would lead to a spatially false assignment of the texture to the object surface. After the above-described registration of the 3D views relative to the 3D wide angle measurement has taken place, however, the variations in the mirror construction can be assumed to be known. The registration supplies a small correction to the position of the mirrors that can be taken into account in the mathematical imaging of the texture onto the measurement object.
A direct view of the object and, in general, a number of views recorded via mirrors are located in the image field of the color camera. The latter views will appear slightly darker, because the reflection coefficient of the mirror is less than one. An aluminum mirror has a reflection coefficient of approximately 70%. The known numerical value for the reflection coefficient of the mirrors can be used to correct the brightness of the various views.
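As a small illustrative sketch (the reflectance value and the number of reflections in the light path are assumptions to be adapted to the actual setup), the brightness of a view recorded via mirrors can be corrected as follows:

```python
import numpy as np

MIRROR_REFLECTANCE = 0.70   # reflection coefficient assumed for the aluminum mirrors

def correct_mirror_view(view, bounces=1, reflectance=MIRROR_REFLECTANCE):
    """Brighten a view recorded via one or more mirror reflections so that it
    matches the directly recorded view. `bounces` counts the reflections in
    the light path that attenuate the recorded brightness (8-bit values)."""
    return np.clip(view.astype(float) / (reflectance ** bounces), 0.0, 255.0)
```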
A particular exemplary embodiment of the 3D measuring arrangement described is illustrated in a side view in the corresponding figure.
Priority: German patent application 102006042311.9, filed September 2006 (DE, national).