Exemplary embodiments concern a method for operating a time-of-flight camera, or ToF camera for short, specifically a method for processing a raw image of a ToF camera. Further exemplary embodiments relate to an image processing apparatus for a ToF camera, to a ToF camera and to a computer program product for a ToF camera.
A ToF camera can be used to produce a depth image, on which the respective distances of the imaged objects or of parts of the imaged objects can be presented. To this end, in a phase-based distance determination, at least four individual recordings are produced per depth image. The individual recordings can be referred to as individual images, phase images or ToF raw images and are required for the reconstruction of the respective distances.
For each raw image, the ToF camera, or a light transmission unit or light source of the ToF camera, can send one light signal or light ray, which is reflected by an object. The reflected signal is received by the ToF camera, or by a light receiving unit of the ToF camera, with a phase shift or time shift that depends on the time of flight. The phase shifts of the individual images contain the information on the distance of the respective objects, with the distance being ascertained by combining the respective individual images.
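Purely as an illustrative aid, and not as part of the described embodiments, the following Python sketch shows one common form of the conventional four-phase reconstruction outlined above; the modulation frequency F_MOD and all names are assumptions, not taken from the description.

```python
import numpy as np

C = 299_792_458.0   # speed of light in m/s
F_MOD = 20e6        # assumed modulation frequency in Hz (not from the description)

def depth_from_four_phases(a0, a90, a180, a270):
    """Reconstruct per-pixel distance from four ToF raw images.

    Each argument is a 2D array of auto-correlation samples recorded at
    phase offsets of 0, 90, 180 and 270 degrees, respectively.
    """
    phase = np.arctan2(a270 - a90, a0 - a180)   # phase shift per pixel
    phase = np.mod(phase, 2.0 * np.pi)          # map into [0, 2*pi)
    return C * phase / (4.0 * np.pi * F_MOD)    # distance in metres
```

The sketch illustrates why four exposures are needed per depth image in this scheme, which is exactly the energy and latency cost addressed by the embodiments described below.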
In particular when ToF cameras are used in mobile devices, in which the electric power available for the ToF camera may be limited by a predetermined battery capacity, the recording of the respective individual images for creating a depth image can represent an energy requirement that is too great for a specified operating time of the mobile device. This is a disadvantage especially when tracking the position of an object, for which depth images must be recorded continuously, because a continuously great energy requirement results. Furthermore, because the individual images must be recorded successively, latency can arise that leads to a delay in the image output of the ToF camera.
Methods for operating ToF cameras and for processing raw images of ToF cameras are known.
There is a need for a method for operating a ToF camera for tracking an object, by way of which even previously unknown objects can be tracked and which provides a reduction in an energy requirement and latency when tracking.
One exemplary embodiment relates to a method for processing a raw image of a time-of-flight (ToF) camera. The method comprises determining a phase-related value within a tracking region of the raw image and using calibration data to determine a distance that corresponds to the phase-related value.
A raw image processed in accordance with the method can also be referred to as a phase image recorded by the ToF camera with a single exposure. For example, a raw image can have been recorded based on exposure with modulated light of an individual phase position, that is to say an individual phase shift or phase displacement. It is also possible for the phase position of the modulated light to be modified during the exposure.
Such a raw image has at least one tracking region, or it can have a plurality of tracking regions or be divided into such tracking regions. A tracking region can be, for example, a specific contiguous region within an object, the distance of which from the ToF camera is to be determined. The at least one tracking region has at least one phase-related value. A phase-related value can also be referred to as a phase value of the raw image or a value of an auto-correlation function between a light signal emitted by the ToF camera and a light signal received by the ToF camera.
A phase-related value of a tracking region can correspond to a distance to be determined of the tracking region from the ToF camera. Furthermore, calibration data are used or utilized to determine this distance. It is possible with the calibration data to assign for example the respective corresponding distance to a phase-related value. The distance thus determined can be the distance of the tracking region, within which the phase-related value is determined.
The method can have the effect that only one raw image is required to determine the distance. In this way it is possible to reduce the energy requirement of a ToF camera, in particular by up to 75%, for example, compared with a conventional method for determining a distance. A ToF camera that is operated in this way can consequently be used, for example, for object tracking or object locating in a mobile device having limited energy storage. A further effect can be that a distance determination in accordance with the method reduces the latency time because, instead of a plurality of raw images, only one raw image needs to be recorded by the ToF camera to determine the distance. In contrast to a distance determination using a depth image consisting of four raw images, the latency time can decrease by up to 75%. A further effect of the method can be that movement artefacts during the distance determination of moving objects can be completely avoided because only a single raw image is required in each case to determine the distance.
An exemplary embodiment is concerned with an image processing apparatus for a ToF camera. An image processing apparatus comprises an image reading device and a distance determination device. The image reading device is configured to provide a phase-related value within a tracking region of a raw image. The distance determination device is configured to determine a distance corresponding to the phase-related value by using calibration data. The image processing apparatus can receive at least one raw image from a ToF camera, or such a raw image can be available to the image processing apparatus.
The image processing apparatus can capture a tracking region of the raw image, or ascertain or utilize a predetermined tracking region, in order to ascertain and to output, provide or store a phase-related value within the tracking region.
The image processing apparatus can furthermore determine a distance corresponding to the phase-related value using the distance determination device. Calibration data can be available in, or be made available to, the image processing apparatus or the distance determination device. The distance determination device can utilize the calibration data to assign a distance to the phase-related value that is provided by the image processing apparatus or to determine said distance. In accordance with a few further exemplary embodiments, the image processing apparatus can furthermore be set up to store or output the distance that was thus determined.
Further exemplary embodiments concern a ToF camera having an image processing apparatus, and a computer program product. The ToF camera can have such an image processing apparatus or be combined with an image processing apparatus. Such a ToF camera can have the effect that, when locating an object at a frame rate of 60 Hz for example, that is to say with 60 determined distances per second, the exposure time for producing a raw image can, at constant latency, be longer, in particular up to four times longer, than in the case of a ToF camera that requires, for example, four raw images for determining a distance. Due to the longer exposure time, a high signal-to-noise ratio can be ensured.
Further exemplary embodiments concern a computer program product. The computer program product has a program code that can effect performance of a method for processing a raw image of a time-of-flight (ToF) camera, in particular determining a distance, when the program code is executed on a programmable hardware component. Such a computer program product can furthermore avoid the correspondingly elaborate calculations in which depth images are composed of a plurality of phase images. Owing to the computer program product, the CPU utilization in the determination of a distance can be reduced in some exemplary embodiments, which can save costs and energy.
A few examples of apparatuses and/or methods will be explained in more detail below, merely by way of example, with reference to the appended figures.
Various examples will now be described in more detail with reference to the appended figures, in which a number of examples are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarification.
Further examples are amenable to various modifications and alternative forms, and consequently a few specific examples thereof are shown in the figures and will be described in detail below. However, this detailed description does not limit further examples to the described specific forms. Further examples can cover all modifications, correspondences and alternatives that fall within the scope of the disclosure. The same reference signs relate throughout the description of the figures to the same or similar elements, which upon comparison can be implemented identically or in modified form, while providing the same or a similar function.
It is to be understood that where an element is referred to as being “connected” or “coupled” to another element, the elements can be connected or coupled directly or via one or more intermediate elements. When two elements A and B are combined using an “or,” this is to be understood to mean that all possible combinations are disclosed, i.e. only A, only B, and also A and B. An alternative wording for the same combinations is “at least one of A and B.” The same applies to combinations of more than two elements.
The terms used here to describe specific examples are not intended to be limiting for further examples. If a singular form, e.g. “a,” “an” and “the,” is used and the use of only a single element is defined as being neither explicitly nor implicitly binding, further examples can also use plural elements to implement the same function. When a function is described below as being implemented using a plurality of elements, further examples can implement the same function using a single element or a single processing entity. Furthermore, it is understood that the terms “comprises,” “comprising,” “has” and/or “having,” when used, specify the presence of the indicated features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more further features, integers, steps, operations, processes, elements, components and/or a group thereof.
Unless defined otherwise, all terms (including technical and scientific terms) are used here with their typical meaning in the field to which the examples belong.
In the exemplary embodiments described below it is possible, in contrast to the above-described method which produces and uses a plurality of raw images to calculate distance information or a 3D image, to produce in an additional operating mode distance information or a 3D image which requires only the recording of a single raw image, i.e. only a single phase shift between the modulation signal and the reference signal. To this end, at least one tracking region can be provided or selected in the raw image. The method 100 comprises determining a phase-related value 106 within the tracking region. The phase-related value can, for example, correspond to a value of the auto-correlation signal measured using a pixel of a ToF camera, which value can be referred to as a phase value and can have, for example, a specific intensity. The phase-related value 106 determined in this way is utilized, using calibration data 108, to determine a distance.
Optionally, the raw image can be captured or provided in an image capturing step 102. The tracking region can be determined in the raw image in an optional tracking step 104. The determined distance can be output to a subsequent system in an optional output step 110.
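Merely as a non-binding sketch of how steps 102 to 110 could interact, the following Python fragment determines a distance from a single raw image; the dictionary-based look-up table for the calibration data and all names are illustrative assumptions, not part of the embodiments.

```python
import numpy as np

def determine_distance(raw_image, tracking_region, calibration_lut):
    """Method 100: determine a distance from a single raw image.

    raw_image       -- 2D array of phase-related values (one per image point)
    tracking_region -- pair of slices selecting the tracked region (step 104)
    calibration_lut -- calibration data 108: dict mapping phase-related
                       values 200 to distances 202 (assumed form)
    """
    # Step 106: determine the phase-related value at the geometric
    # centre of the tracking region.
    region = raw_image[tracking_region]
    phase_value = region[region.shape[0] // 2, region.shape[1] // 2]

    # Step 108: assign the distance of the nearest stored phase-related value.
    nearest = min(calibration_lut, key=lambda p: abs(p - phase_value))
    return calibration_lut[nearest]   # step 110: output to a subsequent system

# Illustrative use with stand-in data for a captured raw image (step 102):
raw = np.full((240, 320), 0.42)
lut = {0.2: 1.5, 0.4: 1.0, 0.6: 0.5}   # illustrative example values
print(determine_distance(raw, (slice(100, 140), slice(150, 190)), lut))
```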
The ToF camera 204 can be configured in a special exemplary embodiment such that it emits pulsed intensity-modulated infrared light and captures light that is reflected by an object or by a plurality of different objects in the image region using a PMD (photonic mixer device) image sensor. A PMD image sensor can be provided in each image point of the ToF camera 204 and produce in each case a value that indicates a phase shift difference between the emitted and received light. Owing to an auto-correlation of the emitted and received light it is possible, as described above, for a PMD image sensor to produce for each image point a phase-related value, e.g. an auto-correlation signal. Such an image consisting of phase-related values is also referred to, as described above, as a phase image or a raw image. A function 218 illustrated in the figures can represent, in a fundamental illustration, the dependence of the phase-related value on the distance.
If this fundamental illustration were considered alone, it would be possible to assign a distance to each phase-related value between the two maxima. However, the phase-related values ascertained by the ToF camera in an individual raw image frequently do not depend on the distance alone, but also on other factors, such as for example on the unknown reflectivity of the region of the object that is imaged on the respectively considered pixel. By using calibration data, it is possible for possible additional dependencies to be taken into account, such that, in accordance with some exemplary embodiments, the use of an individual raw image is sufficient to assign a distance to a phase-related value in connection with the calibration data.
This can be the case for example if the calibration data were determined for that tracking region for which the phase-related value is also determined. Possibilities for determining suitable tracking regions and for suitably determining calibration data that correspond thereto will be described with reference to a number of the subsequent figures.
A distance can be determined for example with respect to an extended tracking region on the object 210 or individually for each image point (pixel) of a sensor used in the ToF camera. In an exemplary embodiment, the tracking region can also be extended to the entire object 210 or comprise it, such that the resulting distance is the distance of the ToF camera from the entire object 210.
To determine the phase-related value 106 within the tracking region, a phase-related value of an image point of the tracking region can be selected. For example, an image point located in the geometric center of the tracking region can be used for this purpose. To ascertain the distance of the tracking region and consequently of the object 210, the phase-related value is used together with calibration data. Calibration data can have been generated previously, in particular for the respectively current image scene with the object 210. The calibration data can thus be adapted in each case to a recording environment. A possibility for generating calibration data will be described below. Calibration data can be available for example in the form of a look-up table, which assigns in each case a distance 202 to a respective phase-related value 200.
In the exemplary embodiment, the tracking region of the raw image comprising the object 210 can have the phase-related value 216. By using calibration data or a look-up table, it is possible to assign a distance 214 to this phase-related value 216. In this way, by simply looking up a single phase-related value 216 in, for example, a look-up table, the distance 214 of the object 210 from the ToF camera 204 can be determined, or an output 110 of the distance can be effected. Owing to the method, the distance determination can be quick, energy-efficient and possible with little calculation outlay.
In a special exemplary embodiment, an object 210 can be identified on a raw image, wherein the object 210 can be assigned a tracking region. The tracking region can have a phase-related value that can be assigned to a distance by way of calibration data. The distance can correspond to the distance of the object 210 from the ToF camera at the time of the recording of the raw image.
The method 100 can, in particular when performed repeatedly, provide a tracking function which ascertains a respectively current or updated distance of the object. The object 210 can thus be tracked, or the location of the object 210 can be tracked. In particular, according to the method, 3D object tracking, or 3D tracking, is possible, which can comprise a distance of the object 210.
In accordance with a further exemplary embodiment, the method 100 comprises determining one or more tracking regions as contiguously identifiable partial regions of an object 210 imaged on the raw image, the partial regions being similar with respect to their reflection properties. This can have the result that, in an object having a single identifiable partial region, said one partial region can be used as a tracking region, while in objects having a plurality of identifiable partial regions, for example one of said plurality of partial regions can be used as a tracking region.
The tracking regions 400 and 404 of the example shown in the figures can be such contiguously identifiable partial regions of an object 210 imaged on the raw image.
In another exemplary embodiment, a further partial region can be a pair of glasses or sunglasses worn by the user. In this case, the partial region is identifiable because its reflection properties can differ strongly from those of the user's skin, which can form the remaining region of the object 210. Identification of such a partial region can thus be possible with simple means. Further partial regions 400 and 404 can also be earrings of the user which, similarly to a pair of sunglasses, can have reflection properties that are distinguishable from the remaining region due to their surface or their material.
In some exemplary embodiments, as has just been shown, at least two tracking regions in the object 210 are determined. This can make it possible for the object 210 to be resolved in the manner of a relief with respect to its surface shape or three-dimensional form, because one distance per tracking region can be assigned via the respective phase-related value. It is thus possible, when the object 210 is a head, for the distance of a nose to be less than the distance of an ear. The flexible number of tracking regions can have the effect that a number which is optimally adapted to the application, with respect to object resolution and depending on the object type, can be selected. A further effect can be that, in the case of at least two tracking regions, rotations of the object 210 can be established. For example, when the distance 110 of the first tracking region 400 between two raw images 408, 408′ is constant, but the distance of the second tracking region 402 varies, a rotation and the respective direction of rotation of the object 210 can be determined, as sketched below.
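As a minimal sketch of the rotation detection just described, assuming two tracked distances per raw image and an arbitrarily chosen noise tolerance, the following Python fragment could be used; none of the names stem from the embodiments.

```python
def detect_rotation(d1_prev, d1_curr, d2_prev, d2_curr, tol=0.005):
    """Infer a rotation of the object 210 from two tracking regions.

    If the distance of the first tracking region stays constant while the
    distance of the second tracking region changes, a rotation and its
    direction can be reported; tol is an assumed noise tolerance in metres.
    """
    first_constant = abs(d1_curr - d1_prev) <= tol
    second_changed = abs(d2_curr - d2_prev) > tol
    if first_constant and second_changed:
        return "rotation towards camera" if d2_curr < d2_prev else "rotation away from camera"
    return "no rotation detected"
```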
A further exemplary embodiment makes provision for a further tracking region on a further object that is imaged on the raw image to be determined. Two different objects, for example a head and a hand of a user, can be imaged on the raw image; the user can, for example, point his hand at the ToF camera. According to the method, a dedicated tracking region can be assigned in each case to the hand and to the head. This can have the effect that both the distance of the hand and the distance of the head can be determined. To this end, the respective phase-related values of the respective tracking regions of the respective objects can be used, with the use of calibration data that in each case correspond to the tracking regions, for determining the respective distances. It is thus possible, for example, to capture that a user changes the position of his hand while the position of his head is unchanged. It is also possible to track two users independently of one another.
In a further exemplary embodiment, provision is made for the phase-related value to be determined at a single image point of the tracking region. This can have the effect, in particular in the case of tracking regions having a large number of image points, of a considerable reduction in computational outlay. For example, a central image point can be selected for determining the phase-related value to determine the distance of an object 210. It is also possible first to determine the distance of a plurality of image points of the tracking region of the raw image 408 and then to select, for subsequent raw images 408′, 408″, that image point of the tracking region which has the shortest or farthest distance in the raw image 408.
In a further exemplary embodiment, the phase-related values of a plurality of image points of the tracking region are averaged, and the averaged phase-related value is used for the distance determination for the tracking region with the use of the calibration data 108. This can have the effect that an average distance of the object 210 from the ToF camera can be determined. It can furthermore have the effect that, from the average distance value and a maximum or minimum distance of the tracking region, which can be determined by way of a phase-related value of the tracking region that has an extreme value, an object unevenness of the object 210 can be determined.
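One possible reading of this averaging is sketched below in Python, under the assumption that the phase-related values of the region are simply arithmetically averaged; estimating the unevenness from the spread of the extreme values is likewise only an assumption.

```python
import numpy as np

def region_phase_statistics(raw_image, tracking_region):
    """Average the phase-related values of a tracking region.

    Returns the averaged phase-related value (to be used with the
    calibration data 108) and the spread between the extreme values,
    from which an object unevenness could be estimated.
    """
    region = raw_image[tracking_region]
    mean_phase = float(np.mean(region))
    spread = float(np.ptp(region))   # difference between extreme phase values
    return mean_phase, spread
```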
In some exemplary embodiments, when determining the phase-related value, a correction term is taken into account that is dependent on the respective image point to which the phase-related value corresponds. Such a correction term can be applied to a phase-related value by way of a post-processing operation, in which the raw image is post-processed. For example, intensity fluctuations of image points or pixels of the ToF camera can be compensated in each case here, or systematic sensor errors can be compensated. If, for example, a different phase-related value were generated with the same light incidence at a first image point than at a second image point, one of the two phase-related values can be adapted by way of the correction term, or both values can be corrected. It is also possible to introduce as the correction term a further correction factor, which compensates for differing light incidence per image point caused by an inhomogeneous illumination unit of the ToF camera. Such a further correction factor can have been ascertained during production or in a test method of the ToF camera or of an image sensor of the ToF camera; it is dependent on the image sensor and consequently adapted in each case specifically to the ToF camera. A correction term can also have the value zero at some image points of the image sensor, which means that a phase-related value is not post-processed if this is not necessary at the corresponding image point.
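The correction could, for example, be applied as sketched below; whether the compensation is additive, multiplicative or both is not fixed by the description, so both variants are shown purely as assumptions.

```python
import numpy as np

def apply_corrections(raw_image, correction_term, illumination_factor=None):
    """Post-process a raw image with per-image-point corrections.

    correction_term     -- per-pixel additive term, assumed to compensate
                           intensity fluctuations; zero entries leave the
                           phase-related value unchanged
    illumination_factor -- optional per-pixel factor, assumed to compensate
                           an inhomogeneous illumination unit
    """
    corrected = raw_image + correction_term
    if illumination_factor is not None:
        corrected = corrected * illumination_factor
    return corrected
```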
In some exemplary embodiments, calibration data are generated for the tracking region in dependence on a respective recording situation. For each tracking region of a raw image, in each case corresponding calibration data can be generated that can be appropriate for a current situation. If the object 210 is located with the tracking region at a first distance, or in a first distance region, from the ToF camera, first calibration data can be determined for example for the tracking region, with which data the distance of the object 210 can then be determined. If the object 210 is located with the tracking region at a second distance, or in a second distance region, from the ToF camera, second calibration data can be correspondingly determined for the tracking region which are appropriate for or correspond to the recording situation with the second distance. A recording situation can here be a position of an object within a predetermined region, in particular distance region, around the respective distance. It is also possible for example for respective calibration data to be generated for a plurality of tracking regions, which can be at different distances.
Calibration data can have a plurality of support points or calibration data elements.
The calibration 501 can be started in different ways. In one exemplary embodiment, the calibration 501 is initialized 504 when an object is located in a new recording situation in front of the ToF camera 204. This can be the case when the object first appears in the image region of the ToF camera 204. Calibration data for the phase position 600 are generated, for example. The object is located at a first distance from the ToF camera, and the tracking region comprising the object has a first phase-related value. By combined storing 510, a support point for the calibration data is generated, which can be referred to as a calibration data element for the phase position 600. In the exemplary embodiment, the object moves during the calibration 501 in a region between the first and a second distance. In the calibration 501, calibration data elements, each with a respective distance and a respective phase-related value of the tracking region, are stored 510. The calibration data elements are combined to form calibration data. The calibration data generated in this way are then valid for the tracking region, in a region between the first and the second distance, for the raw images of the phase position 600. In this way, calibration data can be provided 502 with which raw images of the phase position 600 can be processed in the method 100. The calibration 501 can alternatively be started by a request 506, for example with respect to a post-calibration, as will be described below.
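The storing 510 of support points during the calibration 501 could look like the following Python sketch, which assumes that the distance is taken from a conventionally reconstructed depth image and the phase-related value from the raw image of the phase position being calibrated; all names are illustrative.

```python
import numpy as np

calibration_elements = []   # support points: (phase-related value, distance)

def store_calibration_element(depth_image, raw_image, tracking_region):
    """Step 510: store one calibration data element for the tracking region."""
    distance = float(np.mean(depth_image[tracking_region]))
    phase_value = float(np.mean(raw_image[tracking_region]))
    calibration_elements.append((phase_value, distance))

def calibration_sufficient(min_elements=10):
    """Termination 516, e.g. ten support points for a 50 cm distance region."""
    return len(calibration_elements) >= min_elements
```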
In one exemplary embodiment, the calibration data 706 can include the calibration data element 700. In the method 100, it is thus possible to process a raw image of the first phase position with the phase-related value and to ascertain, by way of the calibration data element 700, the distance of the tracking region. However, since the phase-related value of a raw image in the method 100 can have different values, it can be necessary to be able to use respectively suitable calibration data for this. Consequently, in some exemplary embodiments, at least a second calibration data element 704 is determined. It is ascertained for the same phase position as the calibration data element 700 in order to be able thereby to generate calibration data of this phase position. To this end, a second depth image of the object with the tracking region is captured 508, for example when the distance of the object from the ToF camera has changed. The second recorded depth image can also be used for generating the calibration data element 704 only when a distance change of the tracking region is established on the depth image; otherwise, a further depth image which has a corresponding distance change can be recorded and used. The distance of the tracking region of the object on the second depth image can correspond to a distance 712. The raw image of the second depth image having the first phase position can have a phase-related value 714 within the tracking region. The phase-related value 714 and the distance 712 can be stored in the calibration data element 704.
In the same way, in one exemplary embodiment, at least one further calibration data element 702 is additionally generated if the distance of the tracking region of the object 210 changes. The calibration data element 702 assigns a respective phase-related value to a distance between the first and the second distance. Calibration data elements can thus be generated, for example, until a sufficient number of calibration data elements for the termination 516 is available. Such a number can represent, for example, a predetermined density of calibration data elements for a given distance region. For example, ten different calibration data elements can represent a sufficient number for a distance region of 50 cm. A sufficient number can also be specified such that at least one calibration data element is available per distance interval of, for example, 10 cm or 2 cm within a given distance region for which the calibration data should be applicable.
In a further exemplary embodiment, the first and the second calibration data element 700, 704 are stored together or in combination as calibration data 706, with further calibration data elements, for example 10 or 100 further elements, additionally being stored. This can have the effect of an increased resolution of the calibration data 706. The calibration data elements 700, 704 can be stored in an interpolable look-up table representing the calibration data 706. The look-up table can include, for example, five, ten or more further calibration data elements between the calibration data elements 700, 704. In one exemplary embodiment, it is possible in the method 100 to use a raw image with a phase-related value for which no calibration data element exists. If, for a tracking region of a raw image to be processed, a phase-related value is ascertained which is, for example, not stored in the look-up table, it is nevertheless possible to ascertain a corresponding distance for said phase-related value. For this, the two calibration data elements whose phase-related values bracket the phase-related value of the tracking region can be used, with the distances assigned to them being averaged or proportionally interpolated, and the result can be assigned to the missing phase-related value as a distance. With an interpolable look-up table, there is thus no need to generate a calibration data element for each distance; a finite number of calibration data elements 700, 702, 704 suffices to assign a distance to a tracking region within a distance region of the calibration data 706.
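An interpolable look-up table as just described could be realized as follows in Python, with proportional interpolation between the two bracketing calibration data elements; the representation as sorted value pairs is an assumption.

```python
import numpy as np

def lookup_distance(phase_value, elements):
    """Assign a distance via an interpolable look-up table.

    elements -- calibration data 706 as (phase-related value, distance)
                pairs, e.g. the calibration data elements 700, 702, 704;
                values between two support points are interpolated.
    """
    elements = sorted(elements)                    # sort by phase-related value
    phases = np.array([p for p, _ in elements])
    distances = np.array([d for _, d in elements])
    return float(np.interp(phase_value, phases, distances))
```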
Calibration data can also have a form other than a look-up table. In a further exemplary embodiment, a look-up function is adapted to the calibration data elements 700, 702, 704. This look-up function represents the calibration data 706. A look-up function can be, for example, a mathematical function or a third-order or higher-order polynomial which connects the respective individual calibration data elements with one another, or continuously connects them with one another. It can be adapted, at least within a distance region between the calibration data elements 700, 704, to the further calibration data elements 702 located between said calibration data elements. This can have the effect that calibration data 706 generated in this way make possible a particularly accurate assignment of distances, because a look-up function continuously covers the respective region of phase-related values 200. In particular with a lower-order look-up function, calculation of a respective distance 202 for a given phase-related value 200 can be effected quickly and efficiently, such that the distance of the tracking region can be provided in a simple manner.
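Adapting such a look-up function could, under the assumption of a third-order polynomial fit, be sketched as follows; a higher order or another function class would work analogously.

```python
import numpy as np

def fit_lookup_function(elements, order=3):
    """Adapt a polynomial look-up function to the calibration data elements.

    Returns a callable that continuously maps phase-related values 200 to
    distances 202; order must be smaller than the number of elements for a
    well-determined fit.
    """
    phases = np.array([p for p, _ in elements])
    distances = np.array([d for _, d in elements])
    coefficients = np.polyfit(phases, distances, order)
    return np.poly1d(coefficients)
```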
In a further exemplary embodiment, the phase position of the calibration data 706 corresponds to the phase position of the raw image. The phase position of the raw image can be, for example, the phase position 600. What is meant hereby is that, when a raw image of the phase position 600 is to be analysed with respect to the distance of its tracking region, calibration data 706 of said same phase position 600 are used. These can be generated by the request 506, for example when the raw image of the phase position 600 is to be processed, or can have been generated possibly even before the recording of the raw image with the phase position 600. It is likewise possible for a raw image which is to be processed in accordance with the method to be recorded with that phase position for which calibration data 706 are already available. This can have the effect that calibration data 706 do not need to be generated for all phase positions 600, 602, 604, 606 of the respective raw images of the depth images; rather, it may be sufficient for calibration data 706 to be generated for a single phase position 600 if only raw images of the phase position 600 are to be processed. In this way, the method can be performed particularly efficiently.
Provision is made in a further exemplary embodiment for a distance of the tracking region obtained from the depth image to be assigned to a phase-related value of a raw image of a different phase position 602, 604, 606. In this way, calibration data 706 can be generated for different phase positions 600, 602, 604, 606. It is thus possible for calibration data 706 for different phase positions to be available, such that raw images of the respective different phase positions can be processed immediately, without a request 506 for generating respective calibration data 706 being necessary. In some exemplary embodiments, calibration data 706 can be generated for a single further or for a plurality of further phase positions 602, 604, 606.
In a specific exemplary embodiment, it may be the case that the object moves out of a distance region of available calibration data. If, for example, the object is at a distance closer to the ToF camera than any distance stored in the calibration, for example the distance 708, the phase-related value of the tracking region lies outside a validity region between the phase-related values 714 and 710 of the calibration data 706. For example, the phase-related value is then greater than the greatest phase-related value 710 of the calibration data 706. In an exemplary embodiment, the distance of said object can nevertheless be ascertained by using 108 the calibration data 706.
In a further exemplary embodiment, if an object exceeds a validity region of calibration data, further calibration data elements are produced, that is to say a post-calibration is performed for the new recording situation. For example, at a distance 800 outside the available calibration data 706, the calibration 501 can be effected for the new distance region around the distance 800 by way of the request 506. It is possible here to continue to keep the original calibration data 706 available, for example in a memory, and to use them again when the object moves back into the distance region in which the calibration data 706 are valid. In one exemplary embodiment, extrapolation 802 of the calibration data 706 is effected until the post-calibration is at least partially complete.
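The extrapolation 802 could, as one possible interpretation, be carried out linearly from the outermost support points until the post-calibration delivers new calibration data elements; the following Python sketch assumes support points sorted by increasing phase-related value.

```python
import numpy as np

def lookup_with_extrapolation(phase_value, phases, distances):
    """Assign a distance even outside the validity region of the
    calibration data 706 by linear extrapolation 802.

    phases    -- phase-related values of the support points, increasing
    distances -- the distances assigned to them
    """
    if phases[0] <= phase_value <= phases[-1]:
        return float(np.interp(phase_value, phases, distances))
    # Extrapolate from the two outermost support points on the exceeded side.
    if phase_value > phases[-1]:
        p0, p1, d0, d1 = phases[-2], phases[-1], distances[-2], distances[-1]
    else:
        p0, p1, d0, d1 = phases[0], phases[1], distances[0], distances[1]
    slope = (d1 - d0) / (p1 - p0)
    return float(d1 + slope * (phase_value - p1))
```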
Owing to the production of raw images with modulated light, a phase-related value changes periodically in the case of a distance change of an object, with the result that the same phase-related value can be assigned to a plurality of distances. In one example, a possible result of said periodicity is that a phase-related value is ambiguous.
In one exemplary embodiment, a raw image of a different phase position is used when the phase-related value 906 of the raw image of the first phase position fulfils a predetermined criterion. The predetermined criterion can be fulfilled, for example, when the phase-related value 906 is too close to the extreme point 908, that is to say is closer to the extreme point 908 than a predetermined distance. For example, the predetermined distance can be the mean value between the phase-related value of the extreme point 908 and the average value 900.
In a further exemplary embodiment, provision is made for the predetermined criterion to be fulfilled when the phase-related value of the first raw image of the first phase position is greater than the phase-related value of a second raw image of a different phase position. For example, the tracking region of the object 210 has, when using the first raw image, a phase-related value 906, while the same tracking region of the same object 210 in the second raw image, with another phase position that differs from the first phase position, has the phase-related value 1002. The phase-related value 1002 is here closer to the average value 900 than the phase-related value 906. In this case, raw images of the other phase position can be used for the distance determination of the object 210 in subsequent performances of the method 100.
In one example, a raw image of a first phase position 600 is used to determine a phase-related value 906, to which a distance 902 is assigned. The distance 902 can be inserted into inverted calibration data of the first phase position 600 and of a second phase position 602, with the result that one phase-related value per phase position 600, 602 can be ascertained, for example the phase-related value 906 and the phase-related value 1002. In the example, the phase-related value 1002 is closer to the average value 900, and as a consequence the second phase position 602 is selected in the selection 1102. In subsequent performances of the method 100, raw images of the phase position 602 can be processed, and ambiguous assignment of distances can thus be avoided. In order to select 1102 an optimum phase position 600, 602, 604, 606, it is possible in one exemplary embodiment for a plurality of inverted calibration data of different phase positions, in particular of four different or all available different phase positions 600, 602, 604, 606, to be used to select the phase position by way of which the phase-related value that is closest to the average value is obtained.
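The selection 1102 of an optimum phase position via inverted calibration data could be sketched as follows; representing the inverted calibration data as callables mapping a distance to an expected phase-related value is an assumption made for illustration only.

```python
def select_phase_position(distance, inverted_calibrations, average_value):
    """Selection 1102: choose the phase position whose phase-related value
    at the given distance lies closest to the average value 900.

    inverted_calibrations -- dict mapping each phase position (e.g. 600,
    602, 604, 606) to a callable distance -> expected phase-related value
    """
    def deviation(phase_position):
        return abs(inverted_calibrations[phase_position](distance) - average_value)
    return min(inverted_calibrations, key=deviation)
```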
In one exemplary embodiment, the image processing apparatus 1200 can be attached to a ToF camera 204 or be connected thereto, in particular electrically connected for the purposes of signal transmission. The image processing apparatus 1200 can also be integrated in such a ToF camera 204.
In a further exemplary embodiment, a ToF camera 204 can have an image processing apparatus 1200. An image processing apparatus 1200 can be integrated in the ToF camera or be connected to the ToF camera. Such a ToF camera can be configured for use in a mobile device or be installed in a mobile device.
A further exemplary embodiment can comprise a computer program product with a program code. The latter can effect performance of the method 100 or of the generation 500 of calibration data, in particular when it is installed on the ToF camera 204 or on a mobile device with the ToF camera 204. The program code can effect the same when it is executed on the image processing apparatus 1200.
The examples show how the phase value or phase-related value of objects can be recorded together with distance measurements and can later serve as calibration data or as a look-up function to reconstruct depth or distance from individual phase images. Since ToF cameras provide all necessary input data, the system can adapt to new or changing objects during normal operation. When a number of depth measurements of the same object have been performed, the system can stop performing depth measurements for generating calibration data and instead simply use individual phase images. A simple look-up operation with respect to the phase response function learned in the calibration yields an assigned depth value or a corresponding distance. This allows a simple implementation with great accuracy.
An alternative possibility for generating calibration data can be an approach based entirely on machine learning, in which such a system learns not only the phase response function or calibration data of specific regions of an object, but the mapping from the complete phase image or raw image of the object to the respective distance values, in a structure such as a neural network that outputs the different distance values.
The aspects and features which are described together with one or more of the previously detailed examples and figures can also be combined with one or more of the other examples in order to replace an identical feature of the other example or in order to additionally introduce the feature in the other example.
Examples can furthermore be a computer program with a program code for performing one or more of the above methods, or can relate thereto, when the computer program is executed on a computer or a processor. Steps, operations or processes of different methods described above can be performed by programmed computers or processors. Examples can also cover program storage apparatuses, e.g. digital data storage media, which are machine-readable, processor-readable or computer-readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform some or all of the steps of the above-described methods or effect their performance. The program storage apparatuses can comprise or be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives or optically readable digital data storage media. Further examples can also cover computers, processors or control units which are programmed to perform the steps of the above-described methods, or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs), which are programmed to perform the steps of the above-described methods.
Only the principles of the disclosure are illustrated by the description and drawings. Furthermore, all examples mentioned here are expressly intended in principle only to serve teaching purposes, so as to support the reader in understanding the principles of the disclosure and the concepts provided by the inventor(s) for further refining the technology. All statements made here relating to principles, aspects and examples of the disclosure and concrete examples thereof are intended to encompass the counterparts thereof.
A function block designated as “Means for . . . ” carrying out a specific function can relate to a circuit configured for carrying out a specific function. Consequently a “Means for something” can be implemented as a “Means configured for or suitable for something”, e.g. a component or a circuit configured for or suitable for the respective task.
Functions of different elements shown in the figures, including those function blocks designated as “Means”, “Means for providing a signal”, “Means for generating a signal”, etc., can be implemented in the form of dedicated hardware, e.g. “a signal provider”, “a signal processing unit”, “a processor”, “a controller” etc., and as hardware capable of executing software in conjunction with associated software. When provided by a processor, the functions can be provided by a single dedicated processor, by a single jointly used processor or by a plurality of individual processors, some or all of which can be used jointly. However, the term “processor” or “controller” is far from being limited to hardware capable exclusively of executing software, but rather can encompass digital signal processor (DSP) hardware, network processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), read only memory (ROM) for storing software, random access memory (RAM) and non-volatile memory apparatus (storage). Other hardware, conventional and/or customized, can also be included.
A block diagram can illustrate for example a rough circuit diagram which implements the principles of the disclosure. In a similar manner, a flow diagram, a flow chart, a state transition diagram, a pseudo-code and the like can represent various processes, operations or steps which are represented for example substantially in a computer-readable medium and are thus performed by a computer or processor, regardless of whether such a computer or processor is explicitly shown. Methods disclosed in the description or in the patent claims can be implemented by a component having a means for performing each of the respective steps of said methods.
It is to be understood that the disclosure of a plurality of steps, processes, operations or functions disclosed in the description or the claims should not be interpreted as being in a specific order, unless this is explicitly or implicitly indicated otherwise, e.g. for technical reasons. The disclosure of a plurality of steps or functions therefore does not limit them to a specific order, unless said steps or functions are not interchangeable for technical reasons. Furthermore, in some examples, an individual step, function, process or operation can include a plurality of partial steps, functions, processes or operations and/or be subdivided into them. Such partial steps can be included and be part of the disclosure of said individual step, provided that they are not explicitly excluded.
Furthermore, the claims that follow are hereby incorporated in the detailed description, where each claim can be representative of a separate example by itself. While each claim can be representative of a separate example by itself, it should be taken into consideration that—although a dependent claim can refer in the claims to a specific combination with one or more other claims—other examples can also encompass a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are explicitly proposed here, provided that no indication is given that a specific combination is not intended. Furthermore, features of a claim are also intended to be included for any other independent claim, even if this claim is not made directly dependent on the independent claim.
Number | Date | Country | Kind |
---|---|---|---|
102017126378.0 | Nov 2017 | DE | national |