The present invention relates to detection of changes in relation to a tracked object in a video sequence and handling of tracking the object, and specifically to detecting a change of ratio of occlusion of a tracked object in a video sequence and/or deciding to refrain from using re-identification in tracking the object.
When tracking an object in a video sequence it may be useful to detect when a part of the tracked object goes from being visible in an image frame to becoming occluded in a successive image frame of the video sequence or vice versa, i.e., a change of the ratio of occlusion of the tracked object between two successive image frames, such as when the tracked object moves behind or out from behind something in the scene captured in the video sequence. Such detection is useful, e.g., when deciding whether to use functions based on the appearance of a detected object, such as re-identification, in object tracking when identifying which object detection in an image frame corresponds to an object detection in a previous image frame. In prior art such detection has been suggested. However, the suggested methods have drawbacks such as being complex. Hence, there is still a need for improvements in detecting a change of ratio of occlusion of a tracked object between two successive image frames.
An object of the present invention is to overcome or at least mitigate the problems of prior art.
According to a first aspect, a computer implemented method of detecting a change of ratio of occlusion of a tracked object in a video sequence is provided. For each of a plurality of image frames of the video sequence, a bounding box or a mask of the tracked object is determined. For each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames is calculated. Similarly, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames is calculated. On condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, it is detected that a ratio of occlusion of the tracked object has changed.
The invention makes use of the realization that when an object is moving, an intersection over union, IoU, between bounding boxes or masks in a first and a second image frame of a video sequence will be different when a part of the object that is visible in the first image frame becomes occluded in the second image frame, or vice versa, as compared to an IoU between bounding boxes or masks in a first and a second image frame of a video sequence where the ratio of occlusion of the object does not change between the first and second image frames.
The calculation of the further IoU of the further pair of successive image frames and of the IoUs of the plurality of pairs of successive image frames as well as the comparison to determine if the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount are straightforward and non-complex. Hence, the method according to the first aspect enables straightforward and non-complex detection that a ratio of occlusion of a tracked object has changed.
Furthermore, when a ratio of occlusion of a tracked object changes considerably between the further pair of successive image frames as compared to the ratio of occlusion of the tracked object in the plurality of pairs of successive image frames, the further IoU will differ considerably from the calculated IoUs of the plurality of pairs of successive image frames. Hence, the method according to the first aspect enables reliable detection that a ratio of occlusion of a tracked object has changed.
By a ‘bounding box of the tracked object’ is meant a box that encloses the tracked object. The term ‘bounding box’ is used as it is generally understood within video analytics in relation to object detection. Typically, a bounding box of a tracked object is the minimal box that encloses the tracked object. Methods for detecting an object and determining a bounding box are well known in the art.
By a ‘mask of the tracked object’ is meant a mask that encloses the tracked object. The term ‘mask’ is used as it is generally understood within video analytics in relation to object detection. Typically, a mask of a tracked object coincides with or is close to the outline of the tracked object. Methods for detecting an object and determining a mask, e.g., by segmentation, are well known in the art.
By ‘intersection over union’, IoU, is meant the ratio of the area of the intersection between a first area, e.g., a first bounding box or mask, and a second area, e.g., a second bounding box or mask, to the area of the union between the first area and the second area.
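By way of a non-limiting illustration, the IoU of two axis-aligned bounding boxes may be computed as in the following sketch; the corner-coordinate convention (x1, y1, x2, y2) is an assumption made for the example only.

```python
def iou_boxes(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tuples."""
    # Intersection rectangle; width/height are clamped to zero when the
    # boxes do not overlap.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 1 thus indicates identical boxes, and an IoU of 0 indicates disjoint boxes.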
It is to be noted that both the plurality of pairs of successive image frames and the further pair of successive image frames belong to the plurality of image frames. Hence, since the further pair of successive image frames is subsequent to the plurality of pairs of successive image frames, it is implicit that the plurality of pairs of successive image frames does not constitute all of the plurality of image frames.
The method according to the first aspect may further comprise calculating a mean or median of the IoUs of the plurality of pairs of successive image frames. The act of detecting then comprises, on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detecting that a ratio of occlusion of the tracked object has changed.
The mean may either be an unweighted mean, or it may be a weighted mean. For example, a weighted mean may be used where a weight of an IoU increases with how late in the plurality of pairs of successive image frames, i.e., the closer to the further pair of successive image frames, the pair of successive image frames occurs for which the IoU is calculated. For example, the weights may increase exponentially.
By using a weighted mean, the difference between the further IoU and the calculated IoUs of the plurality of pairs of successive image frames can be made more dependent on IoUs of pairs of successive image frames of the plurality of pairs of successive image frames that are closer to the further pair of successive image frames in the plurality of image frames, i.e., in the video sequence.
In addition to calculating a mean or median, the method according to the first aspect may further comprise calculating a variance of the IoUs of the plurality of pairs of successive image frames, and setting the threshold using the calculated variance.
By setting the threshold using the calculated variance, the threshold may be set such that change of a ratio of occlusion of the tracked object is only detected if the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames more than normal variation within the plurality of pairs of successive image frames as indicated by the variance.
According to a second aspect, a computer implemented method of deciding to refrain from using re-identification in tracking an object in image frames of a video sequence is provided. For each of a plurality of image frames of the video sequence, a bounding box or a mask of the tracked object is determined. For each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames is calculated. For a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames is calculated. On condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, it is decided to refrain from using re-identification in tracking the object in the further pair of successive image frames.
The above-mentioned optional additional features of the method according to the first aspect, when applicable, apply to the method according to the second aspect as well. In order to avoid undue repetition, reference is made to the above.
According to a third aspect, a non-transitory computer-readable storage medium is provided having stored thereon instructions for implementing the method according to the first aspect or the method according to the second aspect, when executed in a device having a processor.
The above-mentioned optional additional features of the method according to the first aspect, when applicable, apply to the non-transitory computer-readable storage medium according to the third aspect as well. In order to avoid undue repetition, reference is made to the above.
According to a fourth aspect, a device is provided. The device comprises circuitry configured to execute a determining function, a first calculating function, a second calculating function, and a detecting function. The determining function is configured to determine, for each of a plurality of image frames of a video sequence, a bounding box or a mask of a tracked object. The first calculating function is configured to calculate, for each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames. The second calculating function is configured to calculate, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames. The detecting function is configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detect that a ratio of occlusion of the tracked object has changed.
The above-mentioned optional additional features of the method according to the first aspect, when applicable, apply to the device according to the fourth aspect as well. In order to avoid undue repetition, reference is made to the above.
According to a fifth aspect, a device is provided. The device comprises circuitry configured to execute a determining function, a first calculating function, a second calculating function, and a deciding function. The determining function is configured to determine, for each of a plurality of image frames of a video sequence, a bounding box or a mask of a tracked object. The first calculating function is configured to calculate, for each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames. The second calculating function is configured to calculate, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames. The deciding function is configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, decide to refrain from using re-identification in tracking the object in the further pair of successive image frames.
The above-mentioned optional additional features of the method according to the first aspect, when applicable, apply to the device according to the fifth aspect as well. In order to avoid undue repetition, reference is made to the above.
A further scope of applicability of the present invention will become apparent from the detailed description given below. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the scope of the invention will become apparent to those skilled in the art from this detailed description.
Hence, it is to be understood that this invention is not limited to the particular component parts of the device described or acts of the methods described as such device and method may vary. It is also to be understood that the terminology used herein is for purpose of describing particular embodiments only and is not intended to be limiting. It must be noted that, as used in the specification and the appended claims, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements unless the context clearly dictates otherwise. Thus, for example, reference to “a unit” or “the unit” may include several devices, and the like. Furthermore, the words “comprising”, “including”, “containing” and similar wordings do not exclude other elements or steps.
The above and other aspects of the present invention will now be described in more detail, with reference to appended figures. The figures should not be considered limiting but are instead used for explaining and understanding.
The present invention will now be described hereinafter with reference to the accompanying drawings, in which currently preferred embodiments of the invention are illustrated. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
The invention is applicable to scenarios where an object is tracked in a video sequence. In such scenarios it may be beneficial to detect when a ratio of occlusion of the tracked object changes. For example, in some embodiments, so-called frame-by-frame re-identification, or re-identification for short, may be used to assist in the tracking of the object, e.g., originally only based on a Kalman state, by also taking into account how likely it is based on re-identification that a candidate object in an image frame is the same object as an object detected in a previous frame by the object tracker. An example of such re-identification is DeepSORT described in https://arxiv.org/pdf/1703.07402.pdf. In such embodiments, weighting can be used indicating to what extent a detection should be based on re-identification and a Kalman state, respectively. In case a ratio of occlusion of a tracked object changes between a pair of successive image frames of a video sequence, e.g., because a part of the object visible in one of the image frames of the pair becomes occluded in the next image frame of the pair or vice versa, re-identification may not be useful to assist the tracking. Hence, in such a case it may be best to refrain from using re-identification, or at least reduce the weight of results from re-identification, when assisting the tracking in relation to the pair of successive image frames in which the change of the ratio of occlusion of the tracked object is detected.
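Such weighting between re-identification and a Kalman state may be illustrated schematically as follows; the linear blend and the function name are assumptions made for illustration only and do not reproduce the exact DeepSORT formulation.

```python
def association_cost(motion_cost, reid_cost, reid_weight):
    """Blend a motion-based (e.g., Kalman-state) association cost with a
    re-identification cost. Setting reid_weight to 0 corresponds to
    refraining from using re-identification entirely."""
    return (1.0 - reid_weight) * motion_cost + reid_weight * reid_cost
```

Reducing `reid_weight` towards zero when a change of ratio of occlusion is detected thus reduces, or removes, the influence of re-identification on the association.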
Other methods that make use of comparison of pixel data between an object and candidate objects are also affected by such a change of ratio of occlusion of a tracked object in a video sequence and thus, detecting change of ratio of occlusion of the tracked object may be a trigger to avoid using such methods or reducing the weight of the results of such methods in relation to a pair of successive image frames in which a change of the ratio of occlusion of the tracked object is detected.
According to embodiments of the invention, change of ratio of occlusion is detected by calculating IoUs between bounding boxes or masks of the tracked object in a plurality of pairs of successive image frames and detecting a difference over a threshold value of an IoU in a further pair of successive image frames in relation to the IoUs of the plurality of pairs of successive image frames. Additionally, such difference over a threshold value may also be due to other reasons than change of ratio of occlusion. Hence, when detecting a difference over a threshold value of an IoU in a further pair of successive image frames in relation to the IoUs of the plurality of pairs of successive image frames it may be generally beneficial to refrain from use of re-identification or other methods that make use of comparison of pixel data between an object and candidate objects, or at least reduce the weight thereof, when assisting tracking of an object.
Embodiments of a method 100 of detecting a change of ratio of occlusion of a tracked object in a video sequence will now be described in relation to the flow chart in
In the method 100 a bounding box of the tracked object is determined S110 for each of a plurality of image frames of the video sequence. The bounding boxes may be determined using any known method for detecting objects and providing a bounding box that defines the extension of the detected object. Typically, a bounding box of a tracked object is the minimal or close to minimal box that encloses the tracked object. Known algorithms for detecting objects and providing bounding boxes are YOLO, MobileNet, and R-CNN. Variants of R-CNN, such as Mask R-CNN, may also provide masks.
Turning now to
When selecting a bounding box in the second image of the pair of successive images from a number of candidate bounding boxes, the bounding box closest to the bounding box of the first image of the pair of successive images is typically selected. Similarly, when selecting a bounding box in the second image of the further pair of successive images from a number of candidate bounding boxes, the bounding box closest to the bounding box of the first image of the further pair of successive images is typically selected.
For each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an IoU of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames is calculated S120. Typically, the plurality of pairs of successive image frames are overlapping such that each image frame is included in a pair with the preceding image frame and a pair with the subsequent image frame. This will be the result if for each image frame an IoU is calculated for a pair consisting of the image frame and the previous image in the video sequence. However, it is also possible for the plurality of pairs of successive image frames to be non-overlapping such that each image frame is included only in one pair but such that there is no image frame in a sequence that is not included in any pair. This will be the result if for every second image frame an IoU is calculated for a pair consisting of the image frame and the previous image in the video sequence. Preferably, either all pairs are overlapping, or all pairs are non-overlapping. However, a combination is feasible.
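The two ways of forming pairs of successive image frames described above may be sketched as follows; the helper function is a hypothetical illustration and not part of the claimed method.

```python
def successive_pairs(frames, overlapping=True):
    """Form pairs of successive frames. Overlapping pairs share a frame
    (each frame is paired with both its predecessor and its successor);
    non-overlapping pairs cover the sequence without sharing frames."""
    step = 1 if overlapping else 2
    return [(frames[i], frames[i + 1]) for i in range(0, len(frames) - 1, step)]
```

For frames 0..3, the overlapping variant yields (0, 1), (1, 2), (2, 3), whereas the non-overlapping variant yields (0, 1), (2, 3).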
Turning now to
For a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames is calculated S130. The further pair of successive image frames typically consists of the last image frame of the last pair of successive image frames of the plurality of image frames and the subsequent image frame of the plurality of image frames. Alternatively, the further pair of successive image frames consists of the first and second image frame subsequent to the last image frame of the last pair of successive image frames of the plurality of image frames. The further pair of successive image frames may also comprise a pair of successive image frames even later in the plurality of image frames. However, it is preferred that the further pair of successive image frames is close to the last image frame of the last pair of successive image frames of the plurality of image frames.
Turning now to
On condition C140 that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, it is detected S150 that a ratio of occlusion of the tracked object has changed. It is to be noted that it is the absolute value of the difference that is relevant, as a change to a higher ratio of occlusion will result in a difference of opposite sign compared to a change to a lower ratio of occlusion.
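The condition C140 may, for instance, be expressed as in the following sketch, where the comparison to the previous IoUs is made via their unweighted mean; this choice of reference is an assumption for the example, and a median or weighted mean may equally be used.

```python
def occlusion_change_detected(further_iou, previous_ious, threshold):
    """Sketch of C140/S150: a change of ratio of occlusion is detected
    when the absolute deviation of the further IoU from the mean of the
    previous IoUs exceeds the threshold."""
    reference = sum(previous_ious) / len(previous_ious)
    return abs(further_iou - reference) > threshold
```

The use of `abs` reflects that both an increase and a decrease of the ratio of occlusion should trigger a detection.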
Turning to
The method 100 may further comprise calculating a mean or median of the IoUs of the plurality of pairs of successive image frames. The act of detecting S150 then comprises detecting that a ratio of occlusion of the tracked object has changed on condition that the further IoU differs from the determined mean or median of the IoUs of the plurality of pairs of successive image frames by more than the threshold amount.
The mean may either be an unweighted mean, or it may be a weighted mean. For example, a weighted mean may be used where a weight of an IoU increases with how late in the plurality of pairs of successive image frames, i.e., the closer to the further pair of successive image frames, the pair of successive image frames occurs for which the IoU is calculated. For example, the weights may increase exponentially.
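An exponentially increasing weighting as mentioned above may, for example, be implemented as in the following sketch; the decay factor 0.5 is an arbitrary assumption made for the example.

```python
def weighted_mean_iou(ious, decay=0.5):
    """Weighted mean of IoUs ordered oldest to newest; the weight of an
    IoU increases exponentially the closer its pair of successive image
    frames is to the further pair (the newest IoU gets weight 1)."""
    n = len(ious)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * v for w, v in zip(weights, ious)) / sum(weights)
```

With two IoUs [0.0, 1.0] and decay 0.5, the newer IoU dominates and the weighted mean is 2/3 rather than the unweighted 1/2.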
In addition to calculating a mean or median, the method 100 may further comprise calculating a variance of the IoUs of the plurality of pairs of successive image frames, and setting the threshold using the calculated variance. By setting the threshold using the calculated variance, the threshold may be set such that change of a ratio of occlusion of the tracked object is only detected if the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames more than normal variation within the plurality of pairs of successive image frames as indicated by the variance. Other measures indicating typical variation of the IoU of the pairs of successive image frames of the plurality of pairs of successive image frames are of course applicable when setting the threshold.
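Setting the threshold using the calculated variance may, for example, take the form of a multiple of the standard deviation of the previous IoUs; the factor k = 3 is an assumption made for the sketch.

```python
import statistics

def variance_based_threshold(ious, k=3.0):
    """Threshold as k standard deviations of the previous IoUs, so that
    a change is only detected when the further IoU deviates by more than
    the normal variation within the plurality of pairs."""
    return k * statistics.pstdev(ious)
```

A larger k makes the detection more conservative; other dispersion measures, such as an interquartile range, could be substituted in the same place.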
The method 100 has been described in relation to a particular instance in detection of a change in ratio of occlusion of a tracked object in a video sequence where it is to be determined whether a change in ratio of occlusion occurs in a further pair of successive image frames and where IoUs are first determined for a plurality of pairs of successive image frames occurring before the further pair of successive image frames in the video sequence. Typically, the detection is performed for each pair of successive image frames in a video sequence, such that for each pair of successive image frames in the video sequence a comparison of the IoU calculated for that pair of successive image frames is made to calculated IoUs for previous pairs of successive image frames in the video sequence in order to detect any change of ratio of occlusion. The comparison may be made to the IoUs of a predetermined number of previous pairs of successive image frames, i.e., the plurality of pairs of successive image frames are a predetermined number of pairs of successive image frames constituting a rolling window of previous pairs of successive image frames whose IoUs are used in the comparison.
The plurality of pairs of successive image frames should preferably be such that no change of ratio of occlusion has been detected for any pair of successive image frames of the plurality of pairs of successive image frames, i.e., such that the IoU of each pair of successive image frames of the plurality of pairs of successive image frames does not differ by more than the threshold amount from the IoUs of a plurality of pairs of successive image frames before that pair of successive image frames.
When a change of ratio of occlusion is detected for a (further) pair of successive image frames, the IoU will differ from the IoUs of the plurality of previous pairs of successive image frames. Hence, for a next pair of successive image frames after the further pair of successive image frames, a change of ratio of occlusion should preferably be detected by comparison to the pair of successive image frames for which the change of ratio of occlusion was detected. Hence, when a change of ratio of occlusion is detected for a further pair of successive image frames, the plurality of pairs of successive image frames in the method should be selected starting from the pair of successive image frames for which the change of ratio of occlusion was detected. The plurality of pairs of successive image frames may thus be selected as all or a subset of the pairs of successive image frames since the previous pair of successive image frames for which a change of ratio of occlusion was detected or, if no change of ratio of occlusion has been detected, since the first pair of successive image frames for which the object is tracked.
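The rolling-window behaviour described above, including restarting the window from the pair for which a change is detected, may be sketched as follows; the class and parameter names are assumptions made for illustration, and the comparison uses an unweighted mean.

```python
from collections import deque

class RollingIoUDetector:
    """Keep the IoUs of a predetermined number of previous pairs in a
    rolling window and compare each new IoU to their mean."""

    def __init__(self, window_size, threshold):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def update(self, iou):
        """Return True if a change of ratio of occlusion is detected for
        the pair of successive image frames yielding this IoU."""
        changed = False
        if self.window:
            reference = sum(self.window) / len(self.window)
            changed = abs(iou - reference) > self.threshold
        if changed:
            # Restart the window from the pair for which the change of
            # ratio of occlusion was detected.
            self.window.clear()
        self.window.append(iou)
        return changed
```

With a window of three IoUs around 0.8 and a threshold of 0.2, a sudden IoU of 0.3 triggers a detection, after which subsequent comparisons are made against the new level.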
As an alternative to determining bounding boxes and basing the IoUs calculated in the method 100 on the determined bounding boxes, masks can be determined and the IoUs may be calculated based on the determined masks. The masks may be determined using any known method for detecting objects and providing a mask that defines the extension of the detected object. Typically, a mask of a tracked object coincides with or is close to the outline of the tracked object.
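For masks, the IoU may be calculated analogously; in the following sketch a mask is represented as a set of (x, y) pixel coordinates, a representation chosen for the example only, binary images being equally common.

```python
def iou_masks(mask_a, mask_b):
    """IoU of two masks given as sets of (x, y) pixel coordinates."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union > 0 else 0.0
```

Because a mask follows the outline of the object more closely than a bounding box, mask-based IoUs may react more sharply to a change of ratio of occlusion.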
Embodiments of a method 400 of deciding to refrain from using re-identification in tracking an object in image frames of a video sequence will now be described in relation to the flow chart in
The method 400 comprises the act of determining S110, the act of calculating S120, and the further act of calculating S130 in the same way as the method 100 as described in relation to
The method 400 differs from the method 100 in that the act of detecting S150 is replaced by an act of deciding S450. Specifically, on condition C140 that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, it is decided S450 to refrain from using re-identification in tracking the object in the further pair of successive image frames. To this end, the condition C140 that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, is identical to the condition C140 as described in relation to
Although a change of ratio of occlusion of the tracked object is the most common reason for a change of IoU in relation to previous IoUs by more than the threshold value, there are other reasons for such a change. Specifically, everything that causes a considerable change of the size, shape or location of a bounding box or mask of a tracked object between successive image frames will typically result in such a change of IoU in relation to previous IoUs by more than the threshold value. Such other reasons are, for example, an abrupt change of pose of the tracked object, an abrupt change of shape of the tracked object, etc.
It is to be noted that even if there are other reasons than change of ratio of occlusion of the tracked object for a change of IoU in relation to previous IoUs by more than the threshold value, the other reasons constitute a small proportion of the total number of such changes of IoU in relation to previous IoUs by more than the threshold value. Hence, the small proportion of false detections of a change of ratio of occlusion of the tracked object in the method 100 is generally acceptable.
In further embodiments, on condition that the further IoU differs from the IoUs of the plurality of pairs of successive image frames by more than a threshold amount, it may, alternatively or additionally to deciding to refrain from using re-identification, be decided to refrain from using, or reducing the weight of, one or more of other methods that make use of comparison of pixel data between an object and candidate objects, e.g., in relation to assisting tracking of the object.
The optional additional features of the method 100 described in relation to
The device 500 comprises a circuitry 510. The circuitry 510 is configured to carry out functions of the device 500. The circuitry 510 may include a processor 512, such as for example a central processing unit (CPU), graphical processing unit (GPU), tensor processing unit (TPU), microcontroller, or microprocessor. The processor 512 is configured to execute program code. The program code may for example be configured to carry out the functions of the device 500.
The device 500 may further comprise a memory 520. The memory 520 may be one or more of a buffer, a flash memory, a hard drive, a removable media, a volatile memory, a non-volatile memory, a random access memory (RAM), or another suitable device. In a typical arrangement, the memory 520 may include a non-volatile memory for long term data storage and a volatile memory that functions as device memory for the circuitry 510. The memory 520 may exchange data with the circuitry 510 over a data bus. Accompanying control lines and an address bus between the memory 520 and the circuitry 510 also may be present.
Functions of the device 500 may be embodied in the form of executable logic routines (e.g., lines of code, software programs, etc.) that are stored on a non-transitory computer readable medium (e.g., the memory 520) of the device 500 and are executed by the circuitry 510 (e.g., using the processor 512). Furthermore, the functions of the device 500 may be a stand-alone software application or form a part of a software application that carries out additional tasks related to the device 500. The described functions may be considered a method that a processing unit, e.g., the processor 512 of the circuitry 510 is configured to carry out. Also, while the described functions may be implemented in software, such functionality may as well be carried out via dedicated hardware or firmware, or some combination of hardware, firmware and/or software.
The circuitry 510 is configured to execute a determining function 521, a first calculating function 522, a second calculating function 523, and a detecting function 524.
The determining function 521 is configured to determine, for each of a plurality of image frames of a video sequence, a bounding box or a mask of the tracked object.
The first calculating function 522 is configured to calculate, for each pair of successive image frames of a plurality of pairs of successive image frames of the plurality of image frames, an intersection over union, IoU, of a first bounding box in a first image frame of the pair of successive image frames and a second bounding box in a second image frame of the pair of successive image frames or of a first mask in the first image frame of the pair of successive image frames and a second mask in the second image frame of the pair of successive image frames.
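The IoU calculation performed by the first calculating function 522 can be illustrated with a short sketch. The following is a minimal, illustrative implementation for axis-aligned bounding boxes given as (x1, y1, x2, y2) corner coordinates; the function name and the box representation are assumptions made for illustration only and are not part of the claimed method.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).

    Illustrative sketch: boxes use top-left/bottom-right corner coordinates.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Intersection area is zero if the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For masks rather than bounding boxes, the same quantity can be obtained by counting the pixels in the intersection and in the union of the two masks.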
The second calculating function 523 is configured to calculate, for a further pair of successive image frames subsequent to the plurality of pairs of successive image frames in the plurality of image frames, a further intersection over union, IoU, of a first bounding box in a first image frame of the further pair of successive image frames and a second bounding box in a second image frame of the further pair of successive image frames or of a first mask in the first image frame of the further pair of successive image frames and a second mask in the second image frame of the further pair of successive image frames.
The detecting function 524 is configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, detect that a ratio of occlusion of the tracked object has changed.
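As a sketch of the detection condition of the detecting function 524, the further IoU may be compared against a summary of the earlier calculated IoUs. The use of the mean as the summary statistic, and the function name, are illustrative assumptions; the method only requires that the further IoU differ from the calculated IoUs by more than a threshold amount.

```python
def occlusion_changed(prior_ious, further_iou, threshold):
    """Detect a change of the ratio of occlusion of the tracked object.

    Illustrative sketch: the earlier IoUs are summarized by their mean,
    and a change is detected when the further IoU deviates from that
    mean by more than the threshold amount.
    """
    reference = sum(prior_ious) / len(prior_ious)
    return abs(further_iou - reference) > threshold
```

A sudden drop of the further IoU relative to stable earlier IoUs typically indicates that a part of the tracked object has become occluded (or has reappeared) between the two image frames of the further pair.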
The detailed description of the acts of the method 100 described in relation to
The device 600 is similar to the device 500 described in relation to
The device 600 comprises a circuitry 510 configured to execute the determining function 521, the first calculating function 522, and the second calculating function 523 as described in relation to
The deciding function 624 is configured to, on condition that the further IoU differs from the calculated IoUs of the plurality of pairs of successive image frames by more than a threshold amount, decide to refrain from using re-identification in tracking the object in the further pair of successive image frames.
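The decision of the deciding function 624 can be sketched analogously: re-identification is refrained from when the IoU change indicates a changed ratio of occlusion. Again, the mean as the aggregate of the earlier IoUs and the function name are illustrative assumptions, not part of the claimed method.

```python
def use_reidentification(prior_ious, further_iou, threshold):
    """Decide whether to use re-identification for the further pair.

    Hypothetical helper: returns False (i.e., refrain from using
    appearance-based re-identification) when the further IoU differs
    from the mean of the earlier IoUs by more than the threshold amount,
    since the appearance of a partly occluded object is unreliable.
    """
    mean_iou = sum(prior_ious) / len(prior_ious)
    return abs(further_iou - mean_iou) <= threshold
```

Tracking may then fall back on motion-based association alone for the further pair of successive image frames, resuming appearance-based re-identification once the IoU stabilizes.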
The detailed description of the acts of the method 100 described in relation to
Even if it is disclosed herein that an IoU is calculated for each of a plurality of pairs of successive image frames and an IoU of a further pair of successive image frames is compared to the IoUs of the plurality of pairs of successive image frames, it is feasible to determine an IoU of only one pair of successive image frames and then compare the IoU of the further pair of successive image frames to the IoU of the one pair of successive image frames. However, the methods and devices will be less accurate using only one pair of successive image frames for comparison. Furthermore, even if it is disclosed herein that the pairs of image frames are pairs of successive image frames, it is feasible also to use pairs of image frames that are not successive but which are separated by one or more intermediate image frames in the plurality of image frames. However, the methods and devices will be less accurate the further apart the image frames are in each pair of image frames.
A person skilled in the art realizes that the present invention is not limited to the embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims. Such modifications and variations can be understood and effected by a skilled person in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
23176361.6 | May 2023 | EP | regional |