METHOD FOR CODING A SEQUENCE OF VIDEO IMAGES

Information

  • Patent Application
  • 20230036743
  • Publication Number
    20230036743
  • Date Filed
    July 07, 2022
  • Date Published
    February 02, 2023
  • CPC
    • G06V10/82
    • G06V10/42
    • G06V10/761
  • International Classifications
    • G06V10/82
    • G06V10/42
    • G06V10/74
Abstract
A method for coding a predefined time sequence of video images into a machine-evaluable representation made up of stationary features and nonstationary features. In the method: at least one function parameterized using trainable parameters is provided, which maps sequences of video images onto representations; from the sequence of video images, N adjoining, nonoverlapping short extracts and one long extract, which contains all N short extracts, are selected; using the parameterized function, a representation of the long extract and multiple representations of the short extracts are ascertained; the parameterized function is assessed using a predefined cost function; the parameters of the function are optimized with the goal that the assessment of the cost function for representations ascertained in the future is expected to improve; using the function parameterized by the finished optimized parameters, the predefined time sequence of video images is mapped onto the sought representation.
Description
CROSS REFERENCE

The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2021 207 468.5 filed on Jul. 14, 2021, which is expressly incorporated herein by reference in its entirety.


FIELD

The present invention relates to the coding of a sequence of video images into a representation that facilitates downstream machine evaluation.


BACKGROUND INFORMATION

When guiding vehicles in road traffic, observations of the vehicle surroundings are an important source of information. In particular, the dynamic behavior of other road users is often evaluated from a sequence of video images.


German Patent Application No. DE 10 2018 209 388 A1 describes a method using which a region in the surroundings of a vehicle may be ascertained from video images, in which a situation relevant for the travel and/or safety of this vehicle is present.


SUMMARY

Within the scope of the present invention, a method is provided for coding a predefined time sequence x of video images into a representation ξ=(ψ, ϕ) made up of stationary features ψ and nonstationary features ϕ. Such a representation can be further evaluated by machine with respect to many downstream tasks. The processing of sequence x of video images into representation ξ=(ψ, ϕ) is thus somewhat similar to the processing of chemical raw materials containing carbon and hydrogen into a synthesis gas, which may in turn be used as a universal base material for manufacturing a variety of products.


In accordance with an example embodiment of the present invention, within the scope of the method, at least one function ƒθ(x̃) parameterized using trainable parameters θ is provided, which maps sequences x̃ of video images onto representations ƒθ(x̃)=ξ=(ψ, ϕ). These parameters θ are trained in a self-supervised manner on the basis of the predefined time sequence x of video images. Once parameters θ are optimized to their final values θ*, function ƒθ* is thereby also defined, using which the predefined time sequence x of video images is mapped onto the sought representation ƒθ*(x)=ξ=(ψ, ϕ).
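The application does not fix a concrete architecture for ƒθ. The following is a minimal sketch, in Python with PyTorch, of one plausible realization: a per-frame backbone whose time-averaged output feeds a stationary head for ψ and whose frame sequence feeds a recurrent head for ϕ. All layer choices and sizes are illustrative assumptions, not taken from the application.

```python
# Minimal sketch (assumption, not from the application): an encoder f_theta
# that maps a video clip onto a representation xi = (psi, phi).
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    def __init__(self, feat_dim: int = 256, rep_dim: int = 128):
        super().__init__()
        # Per-frame feature extractor; stands in for any CNN backbone.
        self.frame_net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # Stationary head: a time average, insensitive to frame order.
        self.psi_head = nn.Linear(feat_dim, rep_dim)
        # Nonstationary head: a recurrent net, sensitive to dynamics.
        self.phi_rnn = nn.GRU(feat_dim, rep_dim, batch_first=True)

    def forward(self, clip: torch.Tensor):
        # clip: (batch, time, channels, height, width)
        b, t = clip.shape[:2]
        frames = self.frame_net(clip.flatten(0, 1)).view(b, t, -1)
        psi = self.psi_head(frames.mean(dim=1))   # stationary features psi
        _, h_n = self.phi_rnn(frames)
        phi = h_n.squeeze(0)                      # nonstationary features phi
        return psi, phi
```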


The self-supervised training begins by selecting, from sequence x of video images, N adjoining, nonoverlapping short extracts xs(1), . . . , xs(N) and a long extract xl, which contains all N short extracts xs(1), . . . , xs(N). Using parameterized function ƒθ, whose behavior is characterized by the present state of parameters θ, a representation ƒθ(xl)=ξl=(ψl, ϕl) of long extract xl and multiple representations ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) for i=1, . . . , N are ascertained. Parameters θ may, for example, be randomly initialized at the beginning of the training and then change in the course of the optimization.
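A minimal sketch of this extract selection; the random placement of the long extract and a fixed short-extract length are assumptions.

```python
# Minimal sketch of the extract selection; random placement and a fixed
# short-extract length are assumptions.
import torch

def select_extracts(x: torch.Tensor, n: int, short_len: int):
    """x: one video as a tensor (time, channels, height, width)."""
    total = n * short_len
    assert x.shape[0] >= total, "sequence too short for n short extracts"
    start = torch.randint(0, x.shape[0] - total + 1, (1,)).item()
    x_long = x[start:start + total]   # long extract x_l containing all shorts
    # N adjoining, nonoverlapping short extracts x_s(1), ..., x_s(N)
    x_short = [x_long[i * short_len:(i + 1) * short_len] for i in range(n)]
    return x_long, x_short
```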


Parameterized function ƒθ is assessed using a predefined cost function ℒ as to the extent to which representation ξl=(ψl, ϕl) of long extract xl is consistent with representations ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) with regard to at least one predefined consistency condition. The self-supervised optimization of parameters θ is directed to the goal that the assessment of the cost function is expected to improve for representations ƒθ(xl)=ξl=(ψl, ϕl) and ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) ascertained in the future.


The self-supervised character of this optimization lies in the fact that only the at least one consistency condition between representation ξl of long extract xl, on the one hand, and representations ξs(i) of short extracts xs(i), on the other hand, is utilized; both are ascertained from the identical predefined sequence x of video images. No "ground truth" supplied from an external source is required, which would "label" training sequences of video images with setpoint representations onto which function ƒθ(x̃) should ideally map these training sequences. On the one hand, such labeling generally requires additional manual work and is therefore costly. On the other hand, in such supervised training the question arises to what extent the training completed on one sequence of video images is also transferable to other sequences of video images.


Several examples of consistency conditions, and of contributions to cost function ℒ in which these consistency conditions may manifest themselves, are indicated below. These consistency conditions each contain similarity comparisons between features of long extract xl, on the one hand, and features of short extracts xs(i) for i=1, . . . , N, on the other hand.


For these similarity comparisons, a similarity measure is required which maps two features z1 and z2 onto a numeric value for their similarity. One example of such a similarity measure is the cosine similarity








$$\mathrm{sim}_h(z_1, z_2) \;=\; \frac{1}{\tau}\,\frac{h(z_1)^{T}\,h(z_2)}{\lVert h(z_1)\rVert\,\lVert h(z_2)\rVert}\,.$$






Herein, h is a predefined transformation, and τ is a temperature parameter for the scaling. Transformation h may in particular be a trained transformation, for example.
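As a sketch, this temperature-scaled cosine similarity with a (possibly trained) transformation h might look as follows; the linear projection standing in for h is an illustrative assumption.

```python
# Minimal sketch of the temperature-scaled cosine similarity sim_h; the
# linear projection standing in for the trained transformation h is an
# illustrative assumption.
import torch
import torch.nn.functional as F

def sim_h(z1: torch.Tensor, z2: torch.Tensor, h, tau: float = 0.1):
    """Cosine similarity of h(z1) and h(z2), scaled by 1/tau."""
    return F.cosine_similarity(h(z1), h(z2), dim=-1) / tau

h = torch.nn.Linear(128, 64)              # h as a small trainable head
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
scores = sim_h(z1, z2, h)                 # shape (8,)
```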


The similarity measured by cost function ℒ may in this case in particular be set in relation, in each case, to similarities obtained by comparing particular features ψs(i) or g(ϕs(1), . . . , ϕs(N)) of short extracts xs(i) for i=1, . . . , N, on the one hand, to features ψ̄l or ϕ̄l of a randomly generated sequence x̄l of video images, on the other hand. The latter similarity should ideally be zero, but in practice is not. Relating the two similarities in the cost function is a step toward measuring a signal-to-noise ratio, instead of only a signal strength, as in communication engineering.


From a randomly generated sequence x̄l of video images, parameterized function ƒθ generates a representation ξneg=(ψneg, ϕneg). Representations ξneg obtained for a predefined set of randomly generated sequences x̄l may be combined into a set 𝒩, with 𝒩ψ being the set of all stationary features ψneg and 𝒩ϕ being the set of all nonstationary features ϕneg of representations ξneg.
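A minimal sketch of building the negatives sets 𝒩ψ and 𝒩ϕ; the encoder signature (returning a (ψ, ϕ) pair for a batched clip) and all sizes are assumptions carried over from the encoder sketch above.

```python
# Minimal sketch of building the negatives sets N_psi and N_phi; the
# encoder signature (returning a (psi, phi) pair for a batched clip) and
# all sizes are assumptions.
import torch

def build_negatives(encoder, num_neg: int, frames: int = 8):
    psis, phis = [], []
    for _ in range(num_neg):
        x_rand = torch.rand(frames, 3, 112, 112)     # random video sequence
        psi_neg, phi_neg = encoder(x_rand.unsqueeze(0))
        psis.append(psi_neg.squeeze(0))
        phis.append(phi_neg.squeeze(0))
    return torch.stack(psis), torch.stack(phis)      # N_psi, N_phi

# Demo with a stub encoder producing random (psi, phi) pairs:
stub = lambda clip: (torch.randn(1, 128), torch.randn(1, 128))
N_psi, N_phi = build_negatives(stub, num_neg=32)
```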


In one particularly advantageous embodiment of the present invention, the at least one consistency condition includes that stationary features ψl of long extract xl are similar to stationary features ψs(i) of short extracts xs(i) for i=1, . . . , N. If these are actually stationary features, they have to remain stationary both on the time scale of short extracts xs(i) and on the time scale of long extract xl. This consistency condition may contribute, for example, a contribution








$$\mathcal{L}_s \;=\; -\log \frac{\exp\!\left(\mathrm{sim}_{h_s}\!\left(\psi_s^{(j)}, \psi_l\right)\right)}{\sum_{\bar{\psi}_l \,\in\, \mathcal{N}_\psi \cup \{\psi_l\}} \exp\!\left(\mathrm{sim}_{h_s}\!\left(\psi_s^{(j)}, \bar{\psi}_l\right)\right)}$$








to cost function ℒ. In this case, in similarity measure simhs, hs is a trained transformation h which is used specifically for the examination of the stationary features. ψs(j) is the stationary feature of a randomly selected short extract xs(j).
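A minimal sketch of this contribution ℒs; tensor shapes, the temperature, and a linear layer standing in for hs are illustrative assumptions.

```python
# Minimal sketch of the contribution L_s; tensor shapes, the temperature,
# and a linear layer standing in for h_s are illustrative assumptions.
import torch
import torch.nn.functional as F

def l_s(psi_s_j, psi_l, psi_neg, h_s, tau: float = 0.1):
    """psi_s_j, psi_l: (dim,); psi_neg: (num_neg, dim), the set N_psi."""
    # Candidates: the matching psi_l first, then all negatives psi_bar_l.
    candidates = torch.cat([psi_l.unsqueeze(0), psi_neg], dim=0)
    logits = F.cosine_similarity(
        h_s(psi_s_j).unsqueeze(0), h_s(candidates), dim=-1) / tau
    # -log softmax at index 0 is exactly -log(exp(pos) / sum(exp(all))).
    return -F.log_softmax(logits, dim=0)[0]

h_s = torch.nn.Linear(128, 64)
loss = l_s(torch.randn(128), torch.randn(128), torch.randn(32, 128), h_s)
```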


In a further particularly advantageous embodiment of the present invention, the at least one consistency condition includes that the nonstationary features ϕl of long extract xl are similar to an aggregation g(ϕs(1), . . . , ϕs(N)), formed using an aggregation function g, of nonstationary features ϕs(1), . . . , ϕs(N) of short extracts xs(1), . . . , xs(N). The net effect of the changes in the video images caused by the nonstationary features does not depend on whether the sequence of video images is played back in one pass or is paused after each short extract xs(i). This consistency condition may contribute, for example, a contribution








$$\mathcal{L}_n \;=\; -\log \frac{\exp\!\left(\mathrm{sim}_{h_n}\!\left(\phi_g, \phi_l\right)\right)}{\sum_{\bar{\phi}_l \,\in\, \mathcal{N}_\phi \cup \{\phi_l\}} \exp\!\left(\mathrm{sim}_{h_n}\!\left(\phi_g, \bar{\phi}_l\right)\right)}$$








to cost function ℒ. Therein, ϕg=g(ϕs(1), . . . , ϕs(N)) is the aggregated version of the nonstationary features. In similarity measure simhn, hn is a trained transformation h which is used specifically for the examination of nonstationary features.


Aggregation function g may include in particular, for example (see the sketch following this list):

    • a summation, and/or
    • a linear mapping, and/or
    • a mapping by a multilayer perceptron, MLP, and/or
    • a mapping by a recurrent neural network, RNN.
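Which of these variants is used is left open by the application; the following sketch illustrates all four options on assumed feature sizes.

```python
# Minimal sketch of the four aggregation options for g; N=4 short extracts
# and 128-dimensional features are assumptions.
import torch
import torch.nn as nn

phi_s = torch.randn(4, 128)            # phi_s(1..N), stacked

# Summation
phi_g_sum = phi_s.sum(dim=0)

# Linear mapping over the concatenated features
linear = nn.Linear(4 * 128, 128)
phi_g_lin = linear(phi_s.reshape(-1))

# Multilayer perceptron (MLP)
mlp = nn.Sequential(nn.Linear(4 * 128, 256), nn.ReLU(), nn.Linear(256, 128))
phi_g_mlp = mlp(phi_s.reshape(-1))

# Recurrent neural network (RNN); the last hidden state aggregates the sequence
rnn = nn.GRU(128, 128, batch_first=True)
_, h_n = rnn(phi_s.unsqueeze(0))
phi_g_rnn = h_n.squeeze()
```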


In a further particularly advantageous embodiment of the present invention, cost function ℒ additionally measures the similarity between representation ξl of long extract xl, on the one hand, and the corresponding representation ξ̂l for a modification x̂l of long extract xl including the same semantic content, on the other hand. This may be quantified, for example, in a contribution








$$\mathcal{L}_i \;=\; -\log \frac{\exp\!\left(\mathrm{sim}_{h_i}\!\left(\xi_l, \hat{\xi}_l\right)\right)}{\sum_{\bar{\xi}_l \,\in\, \mathcal{N} \cup \{\hat{\xi}_l\}} \exp\!\left(\mathrm{sim}_{h_i}\!\left(\xi_l, \bar{\xi}_l\right)\right)}$$








to cost function ℒ. This contribution fulfills the function of typical contrastive training on images or videos. Modification x̂l of long extract xl including the same semantic content corresponds to positive examples of that which is to be assessed as similar to representation ξl of long extract xl. In contrast, representations ξ̄l obtained for randomly generated sequences x̄l correspond to negative examples of that which is not to be assessed as similar to representation ξl of long extract xl. In similarity measure simhi, hi is a trained transformation h which is used specifically for the comparison to modification x̂l of long extract xl including the same semantic content.


Modification x̂l including the same semantic content may be generated from long extract xl in particular, for example, by the following operations (a sketch follows the list):

    • selection of a random image detail and enlargement back to the original size, and/or
    • reflection, and/or
    • color change


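A minimal sketch of generating such a modification x̂l, with torchvision transforms as one possible realization; sizes and jitter strengths are assumptions.

```python
# Minimal sketch of generating x_hat_l; torchvision transforms are one
# possible realization, and sizes/strengths are assumptions. Note that each
# frame is augmented independently here; a strictly consistent clip would
# sample the crop/flip/color parameters once and apply them to all frames.
import torch
import torchvision.transforms as T

augment = T.Compose([
    # Random image detail, enlarged back to the original size
    T.RandomResizedCrop(size=(112, 112), scale=(0.5, 1.0), antialias=True),
    # Reflection
    T.RandomHorizontalFlip(p=0.5),
    # Color change
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
])

x_l = torch.rand(16, 3, 112, 112)                  # long extract, 16 frames
x_hat_l = torch.stack([augment(frame) for frame in x_l])
```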


As explained above, representation ƒθ*(x)=ξ=(ψ, ϕ) trained in a self-supervised manner is the starting material for many further evaluations of time sequence x of video images. In one particularly advantageous embodiment, the recognition of at least one action which time sequence x of video images shows is evaluated from representation ƒθ*(x)=ξ=(ψ, ϕ). Alternatively, or in combination therewith, for example, different actions which time sequence x of video images shows may be delimited from one another. In this way, for example, large amounts of video material may be broken down in an automated manner into sections which show specific actions. If, for example, a film is to be compiled which shows specific actions, it is possible in this way to search in an automated manner for suitable starting material. This may save a significant amount of working time in relation to a manual search.
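A minimal sketch of such downstream action recognition on top of the frozen representation ξ=(ψ, ϕ); the class count and feature sizes are assumptions.

```python
# Minimal sketch of downstream action recognition on the frozen
# representation xi = (psi, phi); class count and sizes are assumptions.
import torch
import torch.nn as nn

num_actions = 10
classifier = nn.Linear(128 + 128, num_actions)     # operates on [psi, phi]

psi, phi = torch.randn(1, 128), torch.randn(1, 128)
logits = classifier(torch.cat([psi, phi], dim=-1))
action = logits.argmax(dim=-1)                     # recognized action index
```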


In a further advantageous embodiment of the present invention, a sequence x* of video images similar to predefined time sequence x of video images is ascertained from a database on the basis of representation ƒθ*(x)=ξ=(ψ, ϕ). This search operates detached from simple features of the individual images included in sequence x, on the level of the actions visible in sequence x. This search may also save a large amount of working time in relation to a manual search, for example, when compiling a video. Furthermore, sequences x* similar to a predefined sequence x of video images may be used, for example, to enlarge a training data set for a classifier or another machine learning application.
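A minimal sketch of such a similarity search over a database of precomputed representations; the database layout and the use of cosine similarity for retrieval are assumptions.

```python
# Minimal sketch of retrieving a similar sequence x* by nearest-neighbor
# search over precomputed representations; the database layout and cosine
# retrieval are assumptions.
import torch
import torch.nn.functional as F

database = torch.randn(1000, 256)      # xi vectors of stored sequences
query = torch.randn(256)               # xi = (psi, phi) of the query x

scores = F.cosine_similarity(query.unsqueeze(0), database, dim=-1)
best = scores.argmax().item()          # index of the most similar sequence x*
```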


In a further advantageous embodiment of the present invention, an activation signal is ascertained from representation ƒθ*(x)=ξ=(ψ, ϕ) and/or from a processing product evaluated therefrom. A vehicle, a system for the quality control of products, and/or a system for the monitoring of areas can be activated using this activation signal. As explained above, the processing of original sequence x of video images into representation ƒθ*(x)=ξ=(ψ, ϕ) facilitates the downstream processing. The probability is therefore increased that the reaction triggered by the activation signal at the particular activated technical system is appropriate to the situation represented by sequence x of video images.


The method may in particular be entirely or partially computer-implemented. The present invention therefore also relates to a computer program including machine-readable instructions which, when they are executed on one or multiple computers, prompt the computer or computers to carry out the described method. In this sense, controllers for vehicles and embedded systems for technical devices, which are likewise capable of executing machine-readable instructions, are also to be considered computers.


The present invention also relates to a machine-readable data medium and/or a download product including the computer program. A download product is a digital product transferable via a data network, i.e., downloadable by a user of the data network, which may be offered for sale, for example, in an online shop for immediate download.


Furthermore, a computer may be equipped with the computer program, the machine-readable data medium, or the download product.


Further measures improving the present invention are described in greater detail hereinafter together with the description of the preferred exemplary embodiments of the present invention on the basis of figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an exemplary embodiment of method 100 of the present invention for coding a sequence x of video images into a representation ξ=(ψ, ϕ) which is evaluable by machine.



FIG. 2 shows an illustration of the self-supervised training on the basis of the example of a scene 10 in a chemical laboratory, according to the present invention.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS


FIG. 1 is a schematic flowchart of an exemplary embodiment of method 100 for coding a sequence x of video images into a representation ξ=(ψ, ϕ) which is evaluable by machine.


In step 110, at least one function ƒθ(x̃) parameterized using trainable parameters θ is provided, which maps sequences x̃ of video images onto representations ƒθ(x̃)=ξ=(ψ, ϕ).


In step 120, from predefined sequence x of video images, N adjoining, nonoverlapping short extracts xs(1), . . . , xs(N) and one long extract xl, which contains all N short extracts xs(1), . . . , xs(N), are selected. In this case, in particular, for example, long extract xl may correspond to complete sequence x of video images.


In step 130, using parameterized function ƒθ, a representation ƒθ(xl)=ξl=(ψl, ϕl) of long extract xl and multiple representations ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) for i=1, . . . , N are ascertained. If there are multiple such parameterized functions ƒθ, long extract xl and the different short extracts xs(i) may also be processed using different functions ƒθ.


In step 140, parameterized function ƒθ is assessed using a predefined cost function ℒ as to the extent to which representation ξl=(ψl, ϕl) of long extract xl is consistent, with regard to at least one predefined consistency condition, with representations ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i).


In this case, in particular, for example, according to block 141, the at least one consistency condition may include that stationary features ψl of long extract xl are similar to stationary features ψs(i) of short extracts xs(i) for i=1, . . . , N.


According to block 142, the at least one consistency condition may include, for example, that nonstationary features ϕl of long extract xl are similar to an aggregation g(ϕs(1), . . . , ϕs(N)), formed using an aggregation function g, of nonstationary features ϕs(1), . . . , ϕs(N) of short extracts xs(1), . . . , xs(N). In this case, according to block 142a, aggregation function g may in particular include, for example:

    • a summation, and/or
    • a linear mapping, and/or
    • a mapping by a multilayer perceptron, MLP, and/or
    • a mapping by a recurrent neural network, RNN,




According to block 143, cost function ℒ may, for example, additionally measure the similarity between representation ξl of long extract xl, on the one hand, and the corresponding representation ξ̂l for a modification x̂l of long extract xl including the same semantic content, on the other hand. According to block 143a, modification x̂l including the same semantic content may be generated from long extract xl in particular, for example, by

    • selection of a random image detail and enlargement back to the original size, and/or
    • reflection, and/or
    • color change




According to block 144, a similarity measured by cost function ℒ can be set in relation, in each case, to similarities obtained by comparing particular features ψs(i) or g(ϕs(1), . . . , ϕs(N)) of short extracts xs(i) for i=1, . . . , N, on the one hand, to features ψ̄l or ϕ̄l of a randomly generated sequence x̄l of video images, on the other hand.


According to block 145, at least one similarity between features z1 and z2 may be measured using a cosine similarity.


In step 150, parameters θ of function ƒθ are optimized with the goal that the assessment of the cost function for representations ƒθ(xl)=ξl=(ψl, ϕl) and ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) ascertained in the future is expected to improve.
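A minimal, self-contained sketch of this optimization loop, with a toy encoder and a single InfoNCE-style consistency term standing in for the combined contributions ℒs, ℒn, ℒi described above; every component here is a simplifying assumption.

```python
# Minimal, self-contained sketch of step 150: a toy encoder f_theta is
# trained so that short-extract representations are similar to the
# long-extract representation and dissimilar to that of a randomly
# generated sequence. The encoder, sizes, and the single InfoNCE-style
# term are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(3 * 16 * 16, 64)
    def forward(self, clip):                    # clip: (time, 3, 16, 16)
        return self.lin(clip.flatten(1)).mean(dim=0)

enc, tau = TinyEncoder(), 0.1
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(24, 3, 16, 16)               # toy video sequence
    x_long = x                                  # long extract = full sequence
    shorts = list(x.chunk(3))                   # N=3 adjoining short extracts
    xi_l = enc(x_long)
    xi_s = torch.stack([enc(s) for s in shorts])
    xi_neg = enc(torch.rand(24, 3, 16, 16))     # randomly generated negative
    cands = torch.stack([xi_l, xi_neg])         # positive first, then negative
    logits = F.cosine_similarity(
        xi_s.unsqueeze(1), cands.unsqueeze(0), dim=-1) / tau
    loss = F.cross_entropy(logits, torch.zeros(3, dtype=torch.long))
    opt.zero_grad()
    loss.backward()                             # improve future assessments
    opt.step()
```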


In step 160, using function ƒθ* parameterized by the finished optimized parameters θ*, predefined time sequence x of video images is mapped onto sought representation ƒθ*(x)=ξ=(ψ, ϕ). As explained above, this representation ƒθ*(x)=ξ=(ψ, ϕ) may be used, similarly to a synthesis gas in chemistry, for further processing into a variety of results relevant for the particular application.


In step 170, the recognition of at least one action A, which time sequence x of video images shows, is evaluated from representation ƒθ*(x)=ξ=(ψ, ϕ).


In step 175, on the basis of changes of representation ƒθ*(x)=ξ=(ψ, ϕ), different actions A, B, C, which time sequence x of video images shows, are delimited from one another.


In step 180, on the basis of representation ƒθ*(x)=ξ=(ψ, ϕ), a sequence x* of video images similar to predefined time sequence x is ascertained from a database.


In step 190, an activation signal 190a is ascertained from representation ƒθ*(x)=ξ=(ψ, ϕ), and/or from a processing product evaluated therefrom.


In step 195, a vehicle 50, a system 60 for the quality control of products, and/or a system 70 for monitoring areas is activated using this activation signal 190a.



FIG. 2 illustrates the above-described self-monitored training on the basis of the example of a scene 10 in a chemical laboratory.


On the left in FIG. 2, complete time sequence x of video images is plotted, which also corresponds here to long extract xl. On the right in FIG. 2, three short extracts xs(1), xs(2), xs(3) are plotted, into which time sequence x was broken down.


Scene 10 includes pouring two substances 11a, 12a from test tubes 11, 12 into a beaker 13 and the subsequent reaction of substances 11a, 12a to form a product 14. At the beginning of scene 10, test tube 11 is picked up and its content 11a is poured into beaker 13. Empty test tube 11 is then put down again. Next, test tube 12 is picked up and its content 12a is also poured into beaker 13, where it first accumulates above substance 11a already located there as a separate layer. Empty test tube 12 is put down again, and the two substances 11a and 12a mix thoroughly in beaker 13 to react to form product 14.


Stationary component s of this scene 10 is that a laboratory scene including two test tubes 11 and 12 and a beaker 13 is present at all. Nonstationary component n is that test tubes 11 and 12 are picked up, their particular content 11a or 12a is poured into beaker 13, and the reaction to form product 14 takes place in beaker 13.


Short extract xs(1) includes the time period in which first test tube 11 is picked up, substance 11a is poured into beaker 13, and first test tube 11 is put down again. These actions accordingly represent nonstationary component n of short extract xs(1).


Short extract xs(2) includes the time period in which second test tube 12 is picked up, substance 12a is poured into beaker 13, and second test tube 12 is put down again. These actions accordingly represent nonstationary component n of short extract xs(2).


Short extract xs(3) includes the time period in which both test tubes 11 and 12 stand at their location and the reaction of the two substances 11a and 12a to form product 14 takes place inside beaker 13. This reaction accordingly represents nonstationary component n of short extract xs(3).


The above-described contrastive training provides a reward if the aggregation of nonstationary components n of all short extracts xs(1), xs(2), xs(3) using aggregation function g is similar to nonstationary component n of long extract xl. Ultimately, the division of long extract xl into short extracts xs(1), xs(2), xs(3) changes nothing with regard to what happens overall in the course of scene 10.


The contrastive training likewise provides a reward if stationary component s, namely the fundamental presence of two test tubes 11, 12, one beaker 13, and a certain amount of chemicals 11a, 12a, or 14, in all short extracts xs(1), xs(2), xs(3) corresponds to the stationary component of long extract xl.

Claims
  • 1. A method for coding a predefined time sequence of video images in a sought representation which is evaluable by a machine, the sought representation being made up of stationary features and nonstationary features, the method comprising the following steps: providing at least one function parameterized using trainable parameters, the parameterized function configured to map sequences of video images on representations made up of stationary features and non-stationary features; selecting, from the predefined time sequence of video images, N adjoining, nonoverlapping short extracts and one long extract which contains all N short extracts; ascertaining, using the parameterized function, a representation of the long extract and multiple representations of the short extracts; assessing the parameterized function using a predefined cost function about an extent to which the representation of the long extract is consistent with the representations of the short extracts with regard to at least one predefined consistency condition; optimizing the parameters of the parameterized function with a goal that the assessment of the cost function for the representation of the long extract and the representations of the short extracts ascertained in future is expected to improve; and mapping, using the function parameterized by the optimized parameters, the predefined time sequence of video images on the sought representation.
  • 2. The method as recited in claim 1, wherein the at least one consistency condition includes that the stationary features of the long extract are similar to the stationary features of the short extracts.
  • 3. The method as recited in claim 1, wherein the at least one consistency condition includes that the nonstationary features of the long extract are similar to an aggregation formed using an aggregation function of the nonstationary features of the short extracts.
  • 4. The method as recited in claim 3, wherein the aggregation function includes: a summation, and/or a linear mapping, and/or a mapping by a multilayer perceptron (MLP), and/or a mapping by a recurrent neural network (RNN).
  • 5. The method as recited in claim 1, wherein the cost function additionally measures a similarity between the representation of the long extract, on the one hand, and the representation corresponding thereto for a modification of the long extract including the same semantic content, on the other hand.
  • 6. The method as recited in claim 5, wherein the modification including the same semantic content is generated by: selection of a random image detail and enlargement back to an original size, from the long extract, and/or reflection, from the long extract, and/or color change, from the long extract.
  • 7. The method as recited in claim 2, wherein the similarity measured by the cost function is related in each case to similarities, which supply a comparison of the stationary and non-stationary features of the short extracts, on the one hand, to stationary and non-stationary features of a randomly generated sequence of video images.
  • 8. The method as recited in claim 2, wherein at least one similarity between features z1 and z2 is ascertained using a cosine similarity of the form
    $$\mathrm{sim}_h(z_1, z_2) \;=\; \frac{1}{\tau}\,\frac{h(z_1)^{T}\,h(z_2)}{\lVert h(z_1)\rVert\,\lVert h(z_2)\rVert}.$$
  • 9. The method as recited in claim 1, wherein from the sought representation, a recognition of at least one action, which is shown in the predefined time sequence of video images, is evaluated.
  • 10. The method as recited in claim 1, wherein, based on the sought representation, a sequence of video images similar to the predefined time sequence of video images is ascertained from a database.
  • 11. The method as recited in claim 1, wherein, based on changes in the sought representation, different actions, which are shown in the time sequence of video images, are delimited from one another.
  • 12. The method as recited in claim 1, wherein: an activation signal is ascertained from the sought representation and/or from a processing product evaluated from the sought representation, and a vehicle and/or a system for quality control of products and/or a system for monitoring areas, is activated using the activation signal.
  • 13. A non-transitory machine-readable data medium on which is stored a computer program for coding a predefined time sequence of video images in a sought representation which is evaluable by a machine, the sought representation being made up of stationary features and nonstationary features, the computer program, when executed by one or more computers, causing the one or more computers to perform the following steps: providing at least one function parameterized using trainable parameters, the parameterized function configured to map sequences of video images on representations made up of stationary features and non-stationary features; selecting, from the predefined time sequence of video images, N adjoining, nonoverlapping short extracts and one long extract which contains all N short extracts; ascertaining, using the parameterized function, a representation of the long extract and multiple representations of the short extracts; assessing the parameterized function using a predefined cost function about an extent to which the representation of the long extract is consistent with the representations of the short extracts with regard to at least one predefined consistency condition; optimizing the parameters of the parameterized function with a goal that the assessment of the cost function for the representation of the long extract and the representations of the short extracts ascertained in future is expected to improve; and mapping, using the function parameterized by the optimized parameters, the predefined time sequence of video images on the sought representation.
  • 14. One or multiple computers configured to code a predefined time sequence of video images in a sought representation which is evaluable by a machine, the sought representation being made up of stationary features and nonstationary features, the one or multiple computers configured to: provide at least one function parameterized using trainable parameters, the parameterized function configured to map sequences of video images on representations made up of stationary features and non-stationary features; select, from the predefined time sequence of video images, N adjoining, nonoverlapping short extracts and one long extract which contains all N short extracts; ascertain, using the parameterized function, a representation of the long extract and multiple representations of the short extracts; assess the parameterized function using a predefined cost function about an extent to which the representation of the long extract is consistent with the representations of the short extracts with regard to at least one predefined consistency condition; optimize the parameters of the parameterized function with a goal that the assessment of the cost function for the representation of the long extract and the representations of the short extracts ascertained in future is expected to improve; and map, using the function parameterized by the optimized parameters, the predefined time sequence of video images on the sought representation.
Priority Claims (1)
Number Date Country Kind
10 2021 207 468.5 Jul 2021 DE national