The present application claims the benefit under 35 U.S.C. § 119 of German Patent Application No. DE 10 2021 207 468.5 filed on Jul. 14, 2021, which is expressly incorporated herein by reference in its entirety.
The present invention relates to the coding of a sequence of video images in a representation which facilitates the downstream machine evaluation.
When guiding vehicles in road traffic, observations of the vehicle surroundings are an important source of information. In particular, the dynamic behavior of other road users is often evaluated from a sequence of video images.
German Patent Application No. DE 10 2018 209 388 A1 describes a method with which a region in the surroundings of a vehicle, in which a situation relevant to the travel and/or safety of this vehicle is present, may be ascertained from video images.
Within the scope of the present invention, a method is provided for coding a predefined time sequence x of video images into a representation ξ=(ψ, ϕ) made up of stationary features ψ and nonstationary features ϕ. Such a representation may be further evaluated by machine with respect to many downstream tasks. The processing of sequence x of video images to form representation ξ=(ψ, ϕ) is thus somewhat similar to the processing of chemical raw materials containing carbon and hydrogen to form a synthesis gas, which may in turn be used as a universal base material for manufacturing a variety of products.
In accordance with an example embodiment of the present invention, within the scope of the method, at least one function ƒθ({tilde over (x)}) parameterized using trainable parameters θ is provided, which maps sequences {tilde over (x)} of video images on representations ƒθ({tilde over (x)})=ξ=(ψ, ϕ). These parameters θ are trained in a self-monitored manner on the basis of predefined time sequence x of video images. When parameters θ are optimized to their final values θ*, function ƒθ* is hereby also defined, using which predefined time sequence x of video images is mapped on sought representation ƒθ*(x)=ξ=(ψ, ϕ).
The self-monitored training begins in that, from sequence x of video images, N adjoining, nonoverlapping short extracts xs(1), . . . , xs(N) and a long extract xl, which contains all N short extracts xs(1), . . . , xs(N), are selected. Using parameterized function ƒθ, whose behavior is characterized by the present state of parameters θ, a representation ƒθ(xl)=ξl=(ψl, ϕl) of long extract xl and multiple representations ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) for i=1, . . . , N are ascertained. Parameters θ may, for example, be randomly initialized at the beginning of the training and then change in the course of the optimization.
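By way of illustration, the selection of the extracts may be sketched as follows; the function name, the equal length of the short extracts, and the frame-first array layout are assumptions for this sketch and not specified by the application:

```python
import numpy as np

def select_extracts(x, n_short):
    """Split sequence x of T frames into n_short adjoining, nonoverlapping
    short extracts and one long extract that contains all of them."""
    t = x.shape[0]
    step = t // n_short            # frames per short extract (assumed equal length)
    used = step * n_short          # frames covered by all short extracts together
    shorts = [x[i * step:(i + 1) * step] for i in range(n_short)]
    x_long = x[:used]              # the long extract contains all N short extracts
    return shorts, x_long
```

The long extract may in particular correspond to the complete sequence, as noted for step 120 below.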
Parameterized function ƒθ is assessed using a predefined cost function ℒ as to the extent to which representation ξl=(ψl, ϕl) of long extract xl is consistent with representations ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) with regard to at least one predefined consistency condition. The self-monitored optimization of parameters θ is directed toward the goal that the assessment by cost function ℒ is expected to improve for representations ƒθ(xl)=ξl=(ψl, ϕl) and ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) ascertained in the future.
The self-monitored character of this optimization lies in the fact that only the at least one consistency condition between representation ξl of long extract xl, on the one hand, and representations ξs(i) of short extracts xs(i), on the other hand, is utilized, both of which are in turn ascertained from the same predefined sequence x of video images. No “ground truth” supplied from an external source is required, which “labels” training sequences of video images using setpoint representations onto which function ƒθ({tilde over (x)}) should ideally map these training sequences. On the one hand, such labeling generally requires additional manual work and is therefore costly. On the other hand, in such monitored training the question arises to what extent the training completed on one sequence of video images is also transferable to other sequences of video images.
Several examples of consistency conditions and contributions to cost function ℒ, in which these consistency conditions may manifest themselves, are indicated hereinafter. These consistency conditions each contain similarity comparisons between features of long extract xl, on the one hand, and features of short extracts xs(i) for i=1, . . . , N, on the other hand.
For these similarity comparisons, a similarity measure is required which maps two features z1 and z2 on a numeric value for their similarity. One example of such a similarity measure is the cosine similarity

simh(z1, z2)=h(z1)·h(z2)/(τ·∥h(z1)∥·∥h(z2)∥).

Herein, h is a predefined transformation, and τ is a temperature parameter for the scaling. Transformation h may in particular be a trained transformation, for example.
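A minimal sketch of this similarity measure follows; the identity default for transformation h and the concrete value of temperature τ are assumptions made only for this sketch:

```python
import numpy as np

def sim_h(z1, z2, h=lambda z: z, tau=0.5):
    """Cosine similarity of features after transformation h, scaled by
    temperature tau (h = identity here is a placeholder assumption)."""
    a, b = h(z1), h(z2)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(cos / tau)
```

Parallel feature vectors thus receive the maximum similarity 1/τ, while orthogonal feature vectors receive similarity 0.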
The similarity measured by cost function ℒ may in this case in particular be related in each case to similarities which supply a comparison of particular features ψs(i) or g(ϕs(1), . . . , ϕs(N)) of short extracts xs(i) for i=1, . . . , N, on the one hand, to features ψl or ϕl of long extract xl, on the other hand. From a randomly generated sequence of video images, features may furthermore be ascertained which act as negative examples, to which the similarity is to be as small as possible.
In one particularly advantageous embodiment of the present invention, the at least one consistency condition includes that stationary features ψl of long extract xl are similar to stationary features ψs(i) of short extracts xs(i) for i=1, . . . , N. If these are actually stationary features, they have to remain stationary both on the time scale of short extracts xs(i) and also on the time scale of long extract xl. This consistency condition may contribute, for example, a contribution

ℒψ=−Σi=1, . . . , N log [exp(simh(ψl, ψs(i)))/Σψ′ exp(simh(ψl, ψ′))]

to cost function ℒ. In this case, in similarity measure simh, the sum over ψ′ in the denominator runs over positive example ψs(i) as well as the available negative examples.
In a further particularly advantageous embodiment of the present invention, the at least one consistency condition includes that the nonstationary features ϕl of long extract xl are similar to an aggregation g(ϕs(1), . . . , ϕs(N)), formed using an aggregation function g, of nonstationary features ϕs(1), . . . , ϕs(N) of short extracts xs(1), . . . , xs(N). The result of the changes in the video image caused by the nonstationary features is not dependent on whether the sequence of video images is played back in one stroke or is paused after each short extract xs(i). This consistency condition may contribute, for example, a contribution

ℒϕ=−log [exp(simh(ϕl, ϕg))/Σϕ′ exp(simh(ϕl, ϕ′))]

to cost function ℒ. Therein, ϕg=g(ϕs(1), . . . , ϕs(N)) is an aggregated version of the nonstationary features. In similarity measure simh, the sum over ϕ′ in the denominator runs over aggregated positive example ϕg as well as the available negative examples.
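Such consistency contributions are commonly realized as InfoNCE-style terms; the following sketch assumes this form (the exact form of the contributions, the temperature value, and the identity transformation are assumptions of this sketch). The same function serves the stationary condition with pairs (ψl, ψs(i)) and the nonstationary condition with the pair (ϕl, ϕg):

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def info_nce(anchor, positive, negatives, tau=0.5):
    """-log of the softmax score of the positive pair among the
    positive and all negative examples (InfoNCE-style contribution)."""
    sims = np.array([cosine(anchor, positive)]
                    + [cosine(anchor, n) for n in negatives]) / tau
    sims -= sims.max()                 # numerical stability of the softmax
    return float(-np.log(np.exp(sims[0]) / np.exp(sims).sum()))
```

The contribution becomes small when the anchor is much more similar to the positive example than to the negative examples, and large in the opposite case.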
Aggregation function g may be chosen in various ways; in particular, for example, it may combine nonstationary features ϕs(1), . . . , ϕs(N) of the short extracts into one overall feature.
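One simple choice for such an aggregation, assumed here purely for illustration, is the mean of the nonstationary features:

```python
import numpy as np

def g_mean(phis):
    """One possible aggregation function g: the mean of the
    nonstationary features of the short extracts (an assumption)."""
    return np.mean(np.stack(phis), axis=0)
```

The mean is permutation invariant and keeps the aggregated feature ϕg in the same feature space as the individual ϕs(i), so it can be compared to ϕl with the same similarity measure.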
In a further particularly advantageous embodiment of the present invention, cost function ℒ additionally measures the similarity between representation ξl of long extract xl, on the one hand, and representation {circumflex over (ξ)}l corresponding thereto for a modification {circumflex over (x)}l of long extract xl including the same semantic content, on the other hand. This may be quantified, for example, in a contribution

ℒc=−log [exp(simh(ξl, {circumflex over (ξ)}l))/Σξ′ exp(simh(ξl, ξ′))]

to cost function ℒ. This contribution fulfills the function of the typical contrastive training with respect to images or videos. Modification {circumflex over (x)}l of long extract xl including the same semantic content corresponds to positive examples of that which is to be assessed as similar to representation ξl of long extract xl. In contrast, representations of other, for example randomly selected, sequences of video images act as negative examples, to which the similarity is to be as small as possible.
Modification {circumflex over (x)}l including the same semantic content may be generated in particular, for example, by applying image transformations to long extract xl which leave the semantic content unchanged.
As explained above, self-monitored trained representation ƒθ*(x)=ξ=(ψ, ϕ) is the starting material for many further evaluations of time sequence x of video images. In one particularly advantageous embodiment, the recognition of at least one action which time sequence x of video images shows is evaluated from representation ƒθ*(x)=ξ=(ψ, ϕ). Alternatively or also in combination therewith, for example, different actions which time sequence x of video images shows may be delimited from one another. In this way, for example, large amounts of video material may be broken down in an automated manner into sections which show specific actions. If, for example, a film is to be compiled which shows specific actions, it is possible in this way to search in an automated manner for suitable starting material. This may save working time to a significant extent in relation to a manual search.
In a further advantageous embodiment of the present invention, a sequence x* of video images similar to predefined time sequence x of video images is ascertained from a database on the basis of representation ƒθ*(x)=ξ=(ψ, ϕ). This search operates on the level of actions visible in sequence x, detached from simple features of the individual images included in sequence x. This search may also save a large amount of working time in relation to the manual search, for example, when compiling a video. Furthermore, sequences x* similar to a predefined sequence x of video images may be used, for example, to enlarge a training data set for a classifier or another machine learning application.
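Such a similarity search over stored representations may be sketched as a nearest-neighbor lookup; the use of cosine similarity and a flat linear scan over the database are assumptions made for this sketch:

```python
import numpy as np

def most_similar(query, database):
    """Index of the stored representation closest to the query
    representation (cosine similarity, flat linear scan)."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return int(np.argmax([cosine(query, xi) for xi in database]))
```

For large databases, the linear scan would typically be replaced by an approximate nearest-neighbor index; the retrieval principle stays the same.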
In a further advantageous embodiment of the present invention, an activation signal is ascertained from representation ƒθ*(x)=ξ=(ψ, ϕ), and/or from a processing product evaluated therefrom. A vehicle, a system for the quality control of products, and/or a system for the monitoring of regions can be activated using this activation signal. As explained above, the processing of original sequence x of video images to form representation ƒθ*(x)=ξ=(ψ, ϕ) facilitates the downstream processing. The probability is therefore increased that the reaction triggered by the activation signal at the particular activated technical system is appropriate to the situation represented by sequence x of video images.
The method may in particular be entirely or partially computer implemented. The present invention therefore also relates to a computer program including machine-readable instructions which, when they are executed on one or multiple computer(s), prompt the computer or computers to carry out the described method. In this sense, controllers for vehicles and embedded systems for technical devices which are also capable of executing machine-readable instructions are also to be considered computers.
The present invention also relates to a machine-readable data medium and/or a download product including the computer program. A download product is a digital product transferable via a data network, i.e., downloadable by a user of the data network, which may be offered for sale, for example, in an online shop for immediate download.
Furthermore, a computer may be equipped with the computer program, the machine-readable data medium, or the download product.
Further measures improving the present invention are described in greater detail hereinafter together with the description of the preferred exemplary embodiments of the present invention on the basis of figures.
In step 110, at least one function ƒθ({tilde over (x)}) parameterized using trainable parameters θ is provided, which maps sequences {tilde over (x)} of video images on representations ƒθ({tilde over (x)})=ξ=(ψ, ϕ).
In step 120, from predefined sequence x of video images, N adjoining, nonoverlapping short extracts xs(1), . . . , xs(N) and one long extract xl, which contains all N short extracts xs(1), . . . , xs(N), are selected. In this case, in particular, for example, long extract xl may correspond to complete sequence x of video images.
In step 130, using parameterized function ƒθ, a representation ƒθ(xl)=ξl=(ψl, ϕl) of long extract xl and multiple representations ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i) for i=1, . . . , N are ascertained. If there are multiple such parameterized functions ƒθ, long extract xl and different short extracts xs(i) may also be processed using different functions ƒθ.
In step 140, parameterized function ƒθ is assessed using a predefined cost function ℒ as to the extent to which representation ξl=(ψl, ϕl) of long extract xl is consistent, with regard to at least one predefined consistency condition, with representations ξs(i)=(ψs(i), ϕs(i)) of short extracts xs(i).
In this case, in particular, for example, according to block 141, the at least one consistency condition may include that stationary features ψl of long extract xl are similar to stationary features ψs(i) of short extracts xs(i) for i=1, . . . , N.
According to block 142, the at least one consistency condition may include, for example, that nonstationary features ϕl of long extract xl are similar to an aggregation g(ϕs(1), . . . , ϕs(N)), formed using an aggregation function g, of nonstationary features ϕs(1), . . . , ϕs(N) of short extracts xs(1), . . . , xs(N). According to block 142a, aggregation function g may in this case in particular, for example, combine nonstationary features ϕs(1), . . . , ϕs(N) into one overall feature.
According to block 143, cost function ℒ may, for example, additionally measure the similarity between representation ξl of long extract xl, on the one hand, and representation {circumflex over (ξ)}l corresponding thereto for a modification {circumflex over (x)}l of long extract xl including the same semantic content, on the other hand. According to block 143a, modification {circumflex over (x)}l including the same semantic content may be generated in particular, for example, by applying image transformations to long extract xl which leave the semantic content unchanged.
According to block 144, a similarity measured by cost function ℒ can be related in each case to similarities which supply a comparison of particular features ψs(i) or g(ϕs(1), . . . , ϕs(N)) of short extracts xs(i) for i=1, . . . , N, on the one hand, to features ψl or ϕl of long extract xl, on the other hand.
According to block 145, at least one similarity between features z1 and z2 may be measured using a cosine similarity.
In step 150, parameters θ of function ƒθ are optimized with the goal that the assessment by cost function ℒ for representations ƒθ(xl)=ξl=(ψl, ϕl) and ƒθ(xs(i))=ξs(i)=(ψs(i), ϕs(i)) ascertained in the future is expected to improve.
In step 160, using function ƒθ* parameterized by finished optimized parameters θ*, predefined time sequence x of video images is mapped on sought representation ƒθ*(x)=ξ=(ψ, ϕ). As explained above, this representation ƒθ*(x)=ξ=(ψ, ϕ) may be used similarly to a synthesis gas in chemistry for further processing into a variety of further results relevant for the particular application.
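The optimization in step 150 and the final mapping in step 160 may be sketched schematically as follows. The toy linear encoder, the split of its output into ψ and ϕ, and the use of random search in place of gradient-based optimization are all assumptions made only for this sketch:

```python
import numpy as np

def f_theta(x_seq, theta):
    """Toy encoder: linear map of the mean frame; the first half of the
    output is read as psi, the second half as phi (purely illustrative)."""
    feat = theta @ x_seq.mean(axis=0)
    d = feat.shape[0] // 2
    return feat[:d], feat[d:]

def train(x, cost, theta0, steps=200, sigma=0.1, seed=0):
    """Schematic optimization loop: keep a candidate theta whenever it
    improves the cost (random search stands in for gradient descent)."""
    rng = np.random.default_rng(seed)
    theta, best = theta0, cost(x, theta0)
    for _ in range(steps):
        cand = theta + sigma * rng.standard_normal(theta.shape)
        c = cost(x, cand)
        if c < best:
            theta, best = cand, c
    return theta, best
```

After the loop, the returned parameters play the role of θ*, and applying the encoder once more to the predefined sequence x yields the sought representation (ψ, ϕ).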
In step 170, the recognition of at least one action A, which time sequence x of video images shows, is evaluated from representation ƒθ*(x)=ξ=(ψ, ϕ).
In step 175, on the basis of changes of representation ƒθ*(x)=ξ=(ψ, ϕ), different actions A, B, C, which time sequence x of video images shows, are delimited from one another.
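Delimiting actions on the basis of changes of the representation may be sketched as a simple change-point criterion on the nonstationary features; the thresholded feature-distance rule used here is an assumption, not a method prescribed by the application:

```python
import numpy as np

def delimit_actions(phis, threshold):
    """Boundaries between actions: time steps where consecutive
    nonstationary features change by more than the given threshold."""
    return [t for t in range(1, len(phis))
            if np.linalg.norm(phis[t] - phis[t - 1]) > threshold]
```

Small fluctuations within one action stay below the threshold, while the transition to a different action produces a large feature change and is marked as a boundary.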
In step 180, on the basis of representation ƒθ*(x)=ξ=(ψ, ϕ), a sequence x* of video images similar to predefined time sequence x is ascertained from a database.
In step 190, an activation signal 190a is ascertained from representation ƒθ*(x)=ξ=(ψ, ϕ), and/or from a processing product evaluated therefrom.
In step 195, a vehicle 50, a system 60 for the quality control of products, and/or a system 70 for monitoring areas is activated using this activation signal 190a.
On the left in the figure, scene 10 described hereinafter is shown.
Scene 10 includes pouring two substances 11a, 12a from test tubes 11, 12 into a beaker 13 and the subsequent reaction of substances 11a, 12a to form a product 14. At the beginning of scene 10, test tube 11 is picked up and its content 11a is poured into beaker 13. Empty test tube 11 is then put down again. Next, test tube 12 is picked up and its content 12a is also poured into beaker 13, where it first accumulates above substance 11a already located there as a separate layer. Empty test tube 12 is put down again, and the two substances 11a and 12a mix thoroughly in beaker 13 to react to form product 14.
Stationary component s of this scene 10 is that a laboratory scene including two test tubes 11 and 12 and a beaker 13 is present at all. Nonstationary component n is that test tubes 11 and 12 are picked up, their particular content 11a or 12a is poured into beaker 13, and the reaction to form product 14 takes place in beaker 13.
Short extract xs(1) includes the time period in which first test tube 11 is picked up, substance 11a is poured into beaker 13, and first test tube 11 is put down again. These actions accordingly represent nonstationary component n of short extract xs(1).
Short extract xs(2) includes the time period in which second test tube 12 is picked up, substance 12a is poured into beaker 13, and second test tube 12 is put down again. These actions accordingly represent nonstationary component n of short extract xs(2).
Short extract xs(3) includes the time period in which both test tubes 11 and 12 stand at their location and the reaction of the two substances 11a and 12a to form product 14 takes place inside beaker 13. This reaction accordingly represents nonstationary component n of short extract xs(3).
The above-described contrastive training rewards the case in which the aggregation of nonstationary components n of all short extracts xs(1), xs(2), xs(3) using aggregation function g is similar to nonstationary component n of long extract xl. Ultimately, the division of long extract xl into short extracts xs(1), xs(2), xs(3) changes nothing with regard to what is done overall in the course of scene 10.
The contrastive training also rewards the case in which stationary component s, namely the fundamental presence of two test tubes 11, 12, one beaker 13, and a certain amount of chemicals 11a, 12a, or 14, in all short extracts xs(1), xs(2), xs(3) corresponds to the stationary component of long extract xl.