COMPUTER IMPLEMENTED METHOD AND APPARATUS FOR UNSUPERVISED REPRESENTATION LEARNING

Information

  • Patent Application
  • Publication Number
    20230359940
  • Date Filed
    April 04, 2023
  • Date Published
    November 09, 2023
  • CPC
    • G06N20/10
  • International Classifications
    • G06N20/10
Abstract
An apparatus and a computer implemented method for unsupervised representation learning. The method includes: providing an input data set comprising samples of a first domain and samples of a second domain; providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain; providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding; providing a similarity kernel for determining a similarity between embeddings; determining with the encoder embeddings of samples from the first domain and embeddings of samples from the second domain; determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain; determining at least one parameter of the encoder depending on a loss.
Description
CROSS REFERENCE

The present application claims the benefit under 35 U.S.C. § 119 of European Patent Application No. EP 22 17 2147.5 filed on May 6, 2022, which is expressly incorporated herein by reference in its entirety.


FIELD

The present invention relates to a computer implemented method and an apparatus for unsupervised representation learning.


BACKGROUND INFORMATION

Representation learning is required for multiple tasks ranging from autonomous driving to speech understanding. At the core of representation learning is a network architecture that produces embeddings, i.e., representations, for input objects and a task-driven loss function which requires representations to be informative.


Common loss functions for representation learning, such as cross-entropy for example, rely on annotated label supervision. When such supervision is not available, self-supervised methods are employed. The central idea of most of the self-supervised methods is to maximize the mutual information between different views of the original data, e.g., as described in Ting Chen, Calvin Luo, Lala Li; “Intriguing Properties of Contrastive Losses.” https://arxiv.org/abs/2011.02803.


Practically, the different views can be achieved by augmenting the original data to provide augmented data and then maximizing a similarity between the augmented data and the original data, e.g., as described in Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton; “A Simple Framework for Contrastive Learning of Visual Representations.” https://arxiv.org/abs/2002.05709.


The resulting supervision method ends up performing surprisingly well on a range of tasks. It was noticed, however, e.g., in Yazhe Li, Roman Pogodin, Danica J. Sutherland, Arthur Gretton; “Self-Supervised Learning with Kernel Dependence Maximization;” https://arxiv.org/abs/2106.08320, that such good performance cannot be explained from the mutual information perspective alone.


SUMMARY

A computer implemented method and apparatus for unsupervised representation learning according to the present invention provide a perspective based on structured assignment learning.


According to an example embodiment of the present invention, the computer implemented method of unsupervised representation learning comprises providing an input data set comprising samples of a first domain and samples of a second domain, providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain, providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding, providing a similarity kernel for determining a similarity between embeddings, in particular a kernel for determining a preferably Euclidean distance between the embeddings, determining with the encoder embeddings of samples from the first domain and embeddings of samples from the second domain, determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain, and determining at least one parameter of the encoder depending on a loss, wherein the loss depends on a first cost for the similarities of pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain. The loss depends on a discrepancy between a ground truth cost and an induced cost. The ground truth cost, i.e., the first cost, is the cost of the reference assignment. The ground truth cost depends on the reference assignment of the underlying assignment problem. The induced cost, i.e., the second cost, is an estimate for a solution to the assignment problem. The assignment problem may be a linear or a quadratic assignment problem. The induced cost depends on at least one possible assignment of the assignment problem. Using an estimate for the induced cost allows the parameter to be determined in cases where it may be computationally intractable otherwise. In the absence of a large amount of annotated data, this provides efficient unsupervised or self-supervised representation learning.


According to an example embodiment of the present invention, the first cost preferably depends on a sum of the similarities between the embeddings that are assigned to each other according to the reference assignment.


According to an example embodiment of the present invention, the loss preferably comprises a difference between the first cost and the estimate for the second cost. The loss is either a sparsely-smoothed structured linear assignment loss or a higher-order representation loss function comprising the estimate. This loss is used to train the encoder, e.g., a neural architecture representing the encoder, in cases when no annotated data is available. This approach may be used as an add-on in any existing deep-learning perception network for various down-stream applications. Since the method operates on the representation, it does not require modifying or interfering with the internal mechanics of the backbone algorithm or the down-stream task based on it.


According to an example embodiment of the present invention, preferably, the method comprises providing a function that is configured to map a plurality of sums of the similarities between embeddings that are assigned to each other according to different possible assignments to a possible cost for the plurality of possible assignments, wherein the possible cost is weighted by a weight, wherein the function is configured to map a plurality of sums of negatives of the similarities between the embeddings that are assigned to each other according to the different possible assignments to a virtual cost, wherein the weight depends on a projection, in particular a minimum distance Euclidean projection, of the virtual cost to a simplex that has one dimension less than the plurality of possible assignments, and wherein the second cost depends on the possible cost that is weighted by the weight. This facilitates the computation of the estimate for the second cost.


According to an example embodiment of the present invention, the method preferably comprises determining with the similarity kernel a first matrix comprising as its elements similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the first domain, and a second matrix comprising as its elements similarities for pairs of one embedding of a sample from the second domain and one embedding of a sample from the second domain, wherein the second cost depends on a sum of the similarities between the embeddings within the first domain that are assigned according to the reference assignment, and the similarities between the embeddings within the second domain that are assigned according to the reference assignment, and on a maximum scalar product between the eigenvalues of the first matrix and the eigenvalues of the second matrix. This facilitates the computation of the estimate for the quadratic assignment problem.


According to an example embodiment of the present invention, the method preferably comprises providing a matrix comprising as its elements the reference assignment, providing a matrix comprising as its elements the possible assignment, providing a matrix comprising as its elements the similarities between respective pairs of embeddings. This facilitates the computation of the estimate for the assignment problem.


The method may comprise determining the at least one parameter of the encoder depending on a solution to an optimization problem that is defined depending on the loss.


The method may comprise providing samples of the first domain and of the second domain, wherein providing the input data set comprises determining a first number of first samples that is a subset of the samples comprising samples of the first domain and determining a second number of second samples that is a subset of the samples comprising samples of the second domain. This way different views of the samples are used which improves the result of the unsupervised representation learning.


According to an example embodiment of the present invention, the apparatus for unsupervised representation learning comprises at least one processor and at least one memory for storing computer readable instructions, that when executed by the at least one processor cause the apparatus to perform steps in the method, wherein the at least one processor is configured to execute the computer readable instructions. This apparatus has the advantages of the method.


According to an example embodiment of the present invention, a computer program may be provided, wherein the computer program comprises computer readable instructions, that when executed by a computer, cause the computer to perform steps in the method according to the present invention.


Further embodiments of the present invention are derived from the following description and the figures.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 schematically depicts an apparatus for representation learning, according to an example embodiment of the present invention.



FIG. 2 depicts a method of representation learning, according to an example embodiment of the present invention.



FIG. 3 depicts a method of operating a technical system, according to an example embodiment of the present invention.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS


FIG. 1 depicts an apparatus 100 for unsupervised representation learning schematically. The apparatus 100 comprises at least one processor 102 and at least one memory 104.


The at least one processor 102 is configured to execute computer readable instructions.


The at least one memory 104 is configured for storing computer readable instructions.


The computer readable instructions are configured so that when they are executed by the at least one processor 102, they cause the apparatus 100 to perform steps in a method of unsupervised representation learning that is described below.


In one example, a computer program comprises the computer readable instructions.


The computer program is in one example stored on the at least one memory 104. The at least one processor 102 in one example executes the computer program.


The apparatus 100 may comprise a capturing device 106 configured to capture an input. The input may comprise an audio signal or a digital image. The capturing device 106 may comprise a microphone or a camera or a radar sensor or a LiDAR sensor or an ultrasound sensor or an infrared sensor or a motion sensor. The digital image may be a representation of a scene that is visible to a human. The digital image may be a representation of the scene by a radar image or a LiDAR image or an ultrasound image or an infrared image.


The apparatus 100 may comprise an actuator 108 configured to determine an action depending on the input. The action may be an instruction for operating a technical system 110. The technical system 110 may be capable of autonomous operation.


The technical system 110 may be a robot, in particular a vehicle, a manufacturing machine, a household appliance, a power tool, a personal assistance device or an access control system. The technical system 110 may comprise the apparatus 100 or be configured to be operated according to its instructions.



FIG. 2 depicts steps in the method of unsupervised representation learning. The method is described below for the linear and the quadratic assignment problem.


The method comprises a step 202.


In step 202, the method comprises providing samples D. The samples D are e.g. provided from the at least one memory 104. The samples D comprise samples of a first domain D1 and of a second domain D2. The samples D may comprise samples of other domains as well.


The samples D comprise at least N1 samples D1i, i=1, . . . , N1, of the first domain D1. The samples D comprise at least N2 samples D2j, j=1, . . . , N2, of the second domain D2. The first domain D1 may comprise more than N1 samples of the first domain D1. The second domain D2 may comprise more than N2 samples of the second domain D2.


The method comprises a step 204.


In step 204, the method comprises providing an input data set.


The input data set comprises N1 samples D1i, i=1, . . . , N1, of the first domain D1. The input data set comprises N2 samples D2j, j=1, . . . , N2, of the second domain D2.


In an example, providing the input data set comprises determining a first number N1 of first samples D1={D1i}, i=1, . . . , N1, that is a subset of the samples D. The subset represents a first view on the samples D.


In an example, providing the input data set comprises determining a second number N2 of second samples D2={D2j}, j=1, . . . , N2, that is a subset of the samples D. The subset represents a second view on the samples D.


In the example, the subsets are of the same size N=N1=N2.
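For illustration, a minimal sketch of how two such equally sized views might be drawn from a common sample set D; the augmentation function augment is an assumption of this sketch and not part of the method as specified:

    import numpy as np

    def make_views(D, N, augment, seed=0):
        """Draw N samples from D and derive two views D1 and D2 of equal size N."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(D), size=N, replace=False)
        batch = D[idx]
        # Each view is an independently augmented copy of the same batch,
        # so the i-th sample of D1 and the i-th sample of D2 correspond.
        return augment(batch), augment(batch)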


The steps 202 and 204 are optional. The method may start by providing the input data set, e.g. from the at least one memory 104.


Afterwards a step 206 is executed.


In step 206, the method comprises providing a reference assignment Ygt between pairs of one sample D1i from the first domain and one sample D2j from the second domain.


The method may comprise providing a matrix Ygt comprising as its elements the reference assignment.
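When the two views are index-aligned, as in the sketch above, one natural reference assignment, assumed here purely for illustration, is the identity permutation matrix:

    import numpy as np

    N = 4              # number of samples per view (illustrative)
    # [Ygt]_{i,j} = 1 if sample i of D1 is assigned to sample j of D2, else 0.
    Ygt = np.eye(N)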


Afterwards a step 208 is executed.


In step 208, the method comprises providing an encoder f(x) that is configured to map a sample x of the input data set depending on at least one parameter θ of the encoder f(x) to an embedding.


In one example the encoder is a function ƒθ: D→R^{N×E}, which outputs an embedding vector of dimension E for each of its objects, i.e., each sample x.
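A minimal encoder sketch, assuming a single linear layer with a tanh nonlinearity; any map with trainable parameters θ would serve:

    import numpy as np

    class Encoder:
        """Maps samples x in R^F to embeddings in R^E; theta = (W, b)."""

        def __init__(self, F, E, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=1.0 / np.sqrt(F), size=(F, E))
            self.b = np.zeros(E)

        def __call__(self, X):
            # X: (N, F) batch of samples -> (N, E) batch of embeddings
            return np.tanh(X @ self.W + self.b)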


Afterwards a step 210 is executed.


In step 210, the method comprises providing a similarity kernel s(x,y) for determining a similarity between embeddings x and y.


The kernel is in one example configured for determining a distance between the embeddings x and y. Preferably, the kernel is configured for determining a Euclidean distance between the embeddings x and y.


Afterwards a step 212 is executed.


In step 212, the method comprises determining with the encoder f(x) embeddings ƒ(D1i) of samples D1i from the first domain D1 and embeddings ƒ(D2j) of samples D2j from the second domain D2.


Afterwards a step 214 is executed.


In step 214, the method comprises determining with the similarity kernel s the similarities Si,j=s(ƒ[D1i],ƒ[D2j]) for pairs of one embedding ƒ[D1i] of a sample from the first domain D1 and one embedding ƒ[D2j] of a sample from the second domain D2.


The method may comprise providing a matrix S comprising as its elements the similarities s(ƒ[D1i],ƒ[D2j]) between respective pairs of embeddings.
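A sketch of the matrix S under the preferred Euclidean-distance kernel; the helper name is illustrative:

    import numpy as np

    def similarity_matrix(F1, F2):
        """[S]_{i,j} = ||F1[i] - F2[j]||_2 for embeddings F1: (N1, E), F2: (N2, E)."""
        diff = F1[:, None, :] - F2[None, :, :]
        return np.linalg.norm(diff, axis=-1)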


Afterwards a step 216 is executed.


In step 216, the method comprises determining at least one parameter θ of the encoder ƒθ(x) depending on a loss for either the linear or the quadratic assignment problem.


The task is to learn ƒθ(x) such that embeddings of objects from the same class are pushed together, while embeddings of objects from different classes are pushed apart.


The method may comprise determining the at least one parameter θ of the encoder ƒ(x) depending on a solution to an optimization problem that is defined depending on the loss.


Linear Assignment Problem:


Given two input sets ƒ(D1)={ƒ(d1i)}_{i=1}^{N1} and ƒ(D2)={ƒ(d2j)}_{j=1}^{N2}, the matrix S∈R^{N1×N2} is an inter-set similarity matrix between each element in input set ƒ(D1) and each element in input set ƒ(D2).


This means, the similarity matrix S measures the distance between the elements of the sets, i.e., [S]_{i,j}=ϕ(ƒ(d1i),ƒ(d2j)), where ϕ is a metric function, e.g., the Euclidean distance.


The goal of the linear assignment problem is to find a one-to-one assignment ŷ(S), such that the sum of distances between assigned elements is minimized:


ŷ(S) = argmin_{Y∈Π} tr(SY^T)

where Π corresponds to a plurality of possible assignments, in particular a set of all N1×N2 permutation matrices.
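For a concrete matrix S, this minimizer can be computed in polynomial time with the Hungarian algorithm; a sketch using SciPy, assuming N1=N2 (the wrapper name solve_lap is illustrative):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def solve_lap(S):
        """argmin over permutation matrices Y of tr(S Y^T), and its cost."""
        row, col = linear_sum_assignment(S)   # Hungarian algorithm on the cost matrix S
        Y = np.zeros_like(S)
        Y[row, col] = 1.0
        return Y, S[row, col].sum()           # S[row, col].sum() equals tr(S Y^T)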


Quadratic Assignment Problem:


Given two input sets ƒ(D1)={ƒ(d1i)}_{i=1}^{N1} and ƒ(D2)={ƒ(d2j)}_{j=1}^{N2}, the matrix S∈R^{N1×N2} is an inter-set similarity matrix between each element in input set ƒ(D1) and each element in input set ƒ(D2).


This means, the similarity matrix S measures the distance between the elements of the sets, i.e., [S]_{i,j}=ϕ(ƒ(d1i),ƒ(d2j)), where ϕ is a metric function, e.g., the Euclidean distance.


Additionally, the first matrix SD1 is an intra-set similarity matrix SD1∈R^{N1×N1} within the set D1 and the second matrix SD2 is an intra-set similarity matrix SD2∈R^{N2×N2} within the set D2.


This means, the first matrix SD1 measures the distance between the elements within the set, i.e., [SD1]_{i,j}=ϕ(ƒ(d1i),ƒ(d1j)), where ϕ is a metric function, e.g., the Euclidean distance.


This means, the second matrix SD2 measures the distance between the elements within the set, i.e., [SD2]_{i,j}=ϕ(ƒ(d2i),ƒ(d2j)), where ϕ is a metric function, e.g., the Euclidean distance.


The goal of the quadratic assignment problem is to find a one-to-one assignment ŷQ(S,SD1,SD2), such that the sum of distances between assigned elements is minimized:


Q_α(S, SD1, SD2) = min_{Y∈Π} {tr(SY^T) + α·tr(SD1 Y SD2^T Y^T)}

where Π corresponds to a plurality of possible assignments, in particular a set of all permutation matrices, and α≥0 is a scalar. The linear assignment problem is derived from this quadratic assignment problem by setting α=0.
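The objective can be evaluated directly for any candidate assignment and, for tiny N, minimized by exhaustive search; the sketch below (helper names are illustrative) also shows why an estimate is needed in practice, since the feasible set grows as N!:

    import numpy as np
    from itertools import permutations

    def qap_cost(S, S1, S2, Y, alpha):
        """Objective tr(S Y^T) + alpha * tr(S1 Y S2^T Y^T) for one assignment Y."""
        return np.trace(S @ Y.T) + alpha * np.trace(S1 @ Y @ S2.T @ Y.T)

    def solve_qap_bruteforce(S, S1, S2, alpha):
        """Exhaustive minimum over all N! permutation matrices; tiny N only."""
        N = S.shape[0]
        best_Y, best_c = None, np.inf
        for perm in permutations(range(N)):
            Y = np.eye(N)[list(perm)]         # Y[i, perm[i]] = 1
            c = qap_cost(S, S1, S2, Y, alpha)
            if c < best_c:
                best_Y, best_c = Y, c
        return best_Y, best_c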


The loss for the linear assignment problem is in one example a sparsely-smoothed structured linear assignment loss:






L(S,Ygt) = tr(SYgt^T) − sparsemax[x̃(−S)]^T x̃(S)


wherein tr is the trace, x̃(S)=[tr(SY1^T), . . . , tr(SYK^T)] is a function x̃: R^{N×N}→R^{N!} that yields a realization of all K possible costs, and sparsemax is defined as a minimum distance Euclidean projection of x̃(−S) onto a (K−1)-dimensional simplex, e.g., as described in Martins, A. F. T. and Astudillo, R. F.; “From softmax to sparsemax: A sparse model of attention and multi-label classification;” In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, 2016:





sparsemax(z) = argmin_{p∈Δ^{K−1}} ‖p−z‖²


wherein p∈Δ^{K−1} and Δ^{K−1} is the (K−1)-dimensional simplex.
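A minimal sketch of this projection, following the algorithm of the cited Martins and Astudillo paper:

    import numpy as np

    def sparsemax(z):
        """Minimum distance Euclidean projection of z onto the probability simplex."""
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]               # descending order
        k = np.arange(1, z.size + 1)
        cssv = np.cumsum(z_sorted)
        support = z_sorted + 1.0 / k > cssv / k   # candidate support sizes
        k_z = k[support][-1]                      # k(z): largest feasible support
        tau = (cssv[k_z - 1] - 1.0) / k_z         # threshold so the output sums to 1
        return np.maximum(z - tau, 0.0)

Unlike softmax, the result can contain exact zeros, which is what makes the smoothing sparse.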







The loss LHO for the quadratic assignment problem is in one example a higher-order representation loss:


LHO = L(S,Ygt) − (α/2)·[tr(SD1 Ygt SD2^T Ygt^T) + ⟨λD1, λD2⟩+]

wherein L(S,Ygt) is the linear assignment loss and ⟨λD1, λD2⟩+ is a maximum scalar product between the eigenvalues of the matrices SD1 and SD2.
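For tiny instances, both losses can be written out directly; a minimal numpy sketch, enumerating all K=N! permutations, which is feasible only for very small N and is precisely why an estimate is used in practice (helper names are illustrative):

    import numpy as np
    from itertools import permutations

    def sparsemax(z):
        """Euclidean projection onto the simplex (compact form of the sketch above)."""
        z = np.asarray(z, dtype=float)
        zs = np.sort(z)[::-1]
        k = np.arange(1, z.size + 1)
        cssv = np.cumsum(zs)
        k_z = k[zs + 1.0 / k > cssv / k][-1]
        return np.maximum(z - (cssv[k_z - 1] - 1.0) / k_z, 0.0)

    def x_tilde(S):
        """All K = N! assignment costs tr(S Yk^T)."""
        N = S.shape[0]
        return np.array([S[np.arange(N), list(p)].sum() for p in permutations(range(N))])

    def linear_assignment_loss(S, Ygt):
        """Sparsely-smoothed structured linear assignment loss L(S, Ygt)."""
        return np.trace(S @ Ygt.T) - sparsemax(x_tilde(-S)) @ x_tilde(S)

    def higher_order_loss(S, S1, S2, Ygt, alpha):
        """L_HO = L(S, Ygt) - alpha/2 * [tr(S1 Ygt S2^T Ygt^T) + <lam1, lam2>_+]."""
        # The intra-set matrices are symmetric; by the rearrangement inequality,
        # sorting both eigenvalue vectors alike maximizes their scalar product.
        lam1 = np.sort(np.linalg.eigvalsh(S1))
        lam2 = np.sort(np.linalg.eigvalsh(S2))
        quad = np.trace(S1 @ Ygt @ S2.T @ Ygt.T) + lam1 @ lam2
        return linear_assignment_loss(S, Ygt) - 0.5 * alpha * quad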


This means, for these assignment problems, the loss depends on a first cost for the similarities Si,j=s(ƒ[D1i],ƒ[D2j]) for pairs of one embedding ƒ[D1i] of a sample from the first domain D1 and one embedding ƒ[D2j] of a sample from the second domain D2 that are assigned to each other according to the reference assignment Ygt.


The first cost depends on a sum tr(SYgt^T) of the similarities Si,j=s(ƒ[D1i],ƒ[D2j]) between the embeddings ƒ[D1i], ƒ[D2j] that are assigned to each other according to the reference assignment Ygt.


The loss comprises a difference between the first cost and an estimate for a second cost.


For these assignment problems, the loss depends on an estimate for the second cost for the similarities Si,j=s(ƒ[D1i],ƒ[D2j]) for pairs of one embedding ƒ[D1i] of a sample from the first domain D1 and one embedding ƒ[D2j] of a sample from the second domain D2 that are assigned to each other according to a possible assignment Y∈Π of the plurality Π of possible assignments between pairs of one sample from the first domain D1 and one sample from the second domain D2. Preferably, a matrix Y∈Π is provided that comprises as its elements the possible assignment.


In the linear assignment problem, the method comprises providing the function x̃. The function x̃ is configured to map a plurality of sums tr(SY1^T), . . . , tr(SYK^T) of the similarities Si,j=s(ƒ[D1i],ƒ[D2j]) between embeddings that are assigned to each other according to different possible assignments Y1, . . . , YK to a possible cost for the plurality of possible assignments Π. The function x̃ is configured to map a plurality of sums of the negatives of the similarities −Si,j=−s(ƒ[D1i],ƒ[D2j]) between the embeddings that are assigned to each other according to the different possible assignments Y1, . . . , YK to a virtual cost.


In the linear assignment problem, the second cost depends on the possible cost that is weighted by a weight sparsemax[x̃(−S)]^T.


The weight sparsemax[x̃(−S)]^T depends on the projection, in particular the minimum distance Euclidean projection, e.g., sparsemax, of the virtual cost x̃(−S) onto the (K−1)-dimensional simplex. This simplex has one dimension less than the plurality of possible assignments Π.


In the quadratic assignment problem, the method comprises determining with the similarity kernel s the first matrix SD1 and the second matrix SD2.


In the quadratic assignment problem, the second cost depends on a sum tr(SD1 Ygt SD2^T Ygt^T) of the similarities SD1 between the embeddings within the first domain D1 that are assigned according to the reference assignment Ygt, and the similarities SD2 between the embeddings within the second domain D2 that are assigned according to the reference assignment Ygt, and on the maximum scalar product between the eigenvalues λ(SD1) of the first matrix SD1 and the eigenvalues λ(SD2) of the second matrix SD2.


Afterwards the step 202 may be repeated e.g. for other samples.


Afterwards the encoder may be used in a step 218 for operating the technical system 110.



FIG. 3 depicts the method of operating the technical system 110.


The method of operating the technical system 110 comprises a step 302 of determining in particular with the capturing device 106 the input.


In a step 304, the method comprises determining in particular with the encoder a representation of the input, i.e. the embeddings.


In a step 306, the method comprises determining the output for operating the technical system 110 depending on the representation of the input.


Operating comprises in one example determining with the trained encoder ƒ, for an in particular previously unseen input, e.g., samples Di, the embeddings ƒ(Di)={ƒ(di)}_{i=1}^{N}, wherein N is the number of samples.


The operating is preferably applied in computer vision or autonomous driving.


In one example, the embeddings ƒ(Di) represent objects in a digital image. In this example the goal is, e.g., to determine a classification for the objects in the digital image. To this end, the embeddings ƒ(Di) are classified according to where they are mapped.
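One hypothetical realization of such a mapping-based classification is nearest-prototype assignment in embedding space; the class centroids here are an assumption for illustration, not part of the method as described:

    import numpy as np

    def classify_embeddings(F, centroids):
        """Assign each embedding in F: (N, E) to its nearest centroid in (C, E)."""
        d = np.linalg.norm(F[:, None, :] - centroids[None, :, :], axis=-1)
        return np.argmin(d, axis=1)   # (N,) class indices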


This classification may be used for tracing objects or for determining actions (i.e., operating, step 308). In autonomous driving, an action of the technical system may be determined depending on the classification.

Claims
  • 1. A computer implemented method of unsupervised representation learning, the method comprising the following steps: providing an input data set including samples of a first domain and samples of a second domain; providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain; providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding; providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings; determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain; determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain; and determining at least one parameter of the encoder depending on a loss, wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain.
  • 2. The method according to claim 1, wherein the first cost depends on a sum of the similarities between the embeddings that are assigned to each other according to the reference assignment.
  • 3. The method according to claim 1, wherein the loss includes a difference between the first cost and the estimate for the second cost.
  • 4. The method according to claim 1, further comprising: providing a function that is configured to map a plurality of sums of the similarities between embeddings that are assigned to each other according to different possible assignments to a possible cost for the plurality of possible assignments, wherein the possible cost is weighted by a weight, wherein the function is configured to map a plurality of sums of negatives of the similarities between the embeddings that are assigned to each other according to the different possible assignments to a virtual cost, wherein the weight depends on a projection including a minimum distance Euclidean projection, of the virtual cost to a simplex that has one dimension less than the plurality of possible assignments, and wherein the second cost depends on the possible cost that is weighted by the weight.
  • 5. The method according to claim 1, further comprising: determining with the similarity kernel a first matrix including as its elements similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the first domain, and a second matrix including as its elements similarities for pairs of one embedding of a sample from the second domain and one embedding of a sample from the second domain, wherein the second cost depends on a sum of the similarities between the embeddings within the first domain that are assigned according to the reference assignment, and the similarities between the embeddings within the second domain that are assigned according to the reference assignment, and on a maximum scalar product between the eigenvalues of the first matrix and the eigenvalues of the second matrix.
  • 6. The method according to claim 1, further comprising: providing a matrix including as its elements the reference assignment; providing a matrix including as its elements the possible assignment; and providing a matrix including as its elements the similarities between the pairs of embeddings.
  • 7. The method according to claim 1, further comprising: determining the at least one parameter of the encoder depending on a solution to an optimization problem that is defined depending on the loss.
  • 8. The method according to claim 1, further comprising: providing samples of the first domain and of the second domain, wherein the providing of the input data set includes determining a first number of first samples that is a subset of the samples including samples of the first domain and determining a second number of second samples that is a subset of the samples including samples of the second domain.
  • 9. The method according to claim 1, further comprising: operating a technical system, wherein the operating of the technical system includes: determining with a capturing device an input; determining with the encoder a representation of the input; and determining an output for operating the technical system depending on the representation of the input.
  • 10. An apparatus configured for unsupervised representation learning, the apparatus comprising: at least one processor; and at least one memory configured to store computer readable instructions, that when executed by the at least one processor cause the apparatus to perform: providing an input data set including samples of a first domain and samples of a second domain, providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain, providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding, providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings, determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain, determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain, and determining at least one parameter of the encoder depending on a loss, wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain; wherein the at least one processor is configured to execute the computer readable instructions.
  • 11. A non-transitory computer-readable medium on which is stored a computer program for unsupervised representation learning, the computer program, when executed by a computer, causing the computer to perform the following steps: providing an input data set including samples of a first domain and samples of a second domain; providing a reference assignment between pairs of one sample from the first domain and one sample from the second domain; providing an encoder that is configured to map a sample of the input data set depending on at least one parameter of the encoder to an embedding; providing a similarity kernel for determining a similarity between embeddings, the kernel being for determining a Euclidean distance between the embeddings; determining with the encoder embeddings of the samples from the first domain and embeddings of the samples from the second domain; determining with the similarity kernel similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain; and determining at least one parameter of the encoder depending on a loss, wherein the loss depends on a first cost for the similarities of the pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to the reference assignment and an estimate for a second cost for the similarities for pairs of one embedding of a sample from the first domain and one embedding of a sample from the second domain that are assigned to each other according to a possible assignment of a plurality of possible assignments between pairs of one sample from the first domain and one sample from the second domain.
Priority Claims (1)
Number        Date      Country  Kind
22 17 2147.5  May 2022  EP       regional