The present disclosure generally concerns signal detection during the acquisition of tensor data, and in particular during the acquisition of image representations.
Sensors, such as multi-sensors, configured to acquire data such as multi-channel images, videos, hyper-spectral images, etc., are subject to noise of internal and/or external origin. The acquired data are then corrupted by noise, the amount of which is quantified by a signal-to-noise ratio.
Signal detection methods exist when data are represented in the form of matrices, which is for example the case of gray-scale images. However, the application of these methods for data in the form of tensors of an order greater than or equal to 3 requires data reduction in order to convert them into matrices or vectors. This data reduction accordingly causes a loss of performance in signal detection.
It is desirable to improve signal detection methods for data represented in the form of a tensor of order greater than or equal to 3.
An embodiment provides a method of detecting a useful signal, the method comprising:
According to an embodiment, the sensor is configured to acquire representations of images and the circuit configured to process the raw signal is an image processing circuit.
According to an embodiment, the invariance value associated with the tensor is a linear combination of a plurality of trace invariants Ij for the orthogonal group for tensors of order d, the combination being in the form ΣjαjIj, where values αj are weighting coefficients.
According to an embodiment, the linear combination comprises melonic and/or “tadpole”-type and/or tetrahedral and/or “pillow”-type trace invariants.
According to an embodiment, the weighting coefficient of a “pillow”-type trace invariant is equal to the inverse of the variance of the invariant for a pure noise tensor.
According to an embodiment, the weighting coefficient of a melonic and/or “tadpole” and/or tetrahedral-type trace invariant is equal to the symmetrization weight of the invariant.
According to an embodiment, the first reference value is a function of the expectation of the invariance value for a pure noise tensor.
According to an embodiment, the first reference value is equal to E[I]+2√(Var(I)), where E[I] and Var(I) respectively are the expectation and the variance of the invariance value for a pure noise tensor.
According to an embodiment, if the invariance value is greater than the first reference value, the processing device is configured to determine that the raw signal acquired by the sensor comprises a useful signal.
According to an embodiment, the above method further comprises, when it is determined that the invariance value is greater than the first reference value:
An embodiment provides a device comprising:
According to an embodiment, the sensor is configured to acquire image representations and the circuit configured to perform raw signal processing operations is an image processing circuit.
An embodiment provides a method of determining a combination of trace invariants, the combination being in the form ΣjαjIj, where the Ij are trace invariants and values αj are weighting coefficients, adapted to a device, the method comprising:
According to an embodiment, the weights associated with each trace invariant in each combination are determined by performing a gradient descent on an objective function measuring a distance between the distribution of the invariance value associated with the combination for a pure noise tensor and the corresponding distribution for a tensor having a non-zero signal-to-noise ratio.
According to an embodiment, each trace invariant is a melonic and/or “tadpole”-type and/or tetrahedral and/or “pillow”-type trace invariant.
The foregoing features and advantages, as well as others, will be described in detail in the rest of the disclosure of specific embodiments given as an illustration and not limitation with reference to the accompanying drawings, in which:
Like features have been designated by like references in the various figures. In particular, the structural and/or functional features that are common among the various embodiments may have the same references and may have identical structural, dimensional, and material properties.
For clarity, only those steps and elements which are useful to the understanding of the described embodiments have been shown and are described in detail.
Unless indicated otherwise, when reference is made to two elements connected together, this signifies a direct connection without any intermediate elements other than conductors, and when reference is made to two elements coupled together, this signifies that these two elements can be connected or they can be coupled via one or more other elements.
In the following description, where reference is made to absolute position qualifiers, such as “front”, “back”, “top”, “bottom”, “left”, “right”, etc., or relative position qualifiers, such as “top”, “bottom”, “upper”, “lower”, etc., or orientation qualifiers, such as “horizontal”, “vertical”, etc., reference is made unless otherwise specified to the orientation of the drawings.
Unless specified otherwise, the expressions "about", "approximately", "substantially", and "in the order of" signify plus or minus 10%, preferably plus or minus 5%.
The image representations acquired by sensors such as sensor 102 are generally corrupted by noise. As an example, this noise comes from outside the device and/or is noise internal to sensor 102. Generally, the amount of noise in an image is assessed by a signal-to-noise ratio. A piece of data having a signal-to-noise ratio equal to 0 is pure noise data, that is, a piece of data only comprising noise. In particular, in the case where an image acquired by sensor 102 is pure noise data, this means that no signal has been measured during its acquisition. The higher the signal-to-noise ratio of a piece of data, the less the piece of data is corrupted by noise.
As an example, the noise present in the data acquired by sensor 102 is modeled by Gaussian noise. Thus, in the following, when reference is made to noise present in a tensor, this noise has the form of a tensor of same order and dimension and is formed of elements, each following a standard normal distribution, each element being independent of the others. This model is realistic, since noise sources are so diverse and numerous that the central limit theorem applies.
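As an illustration, such a pure noise tensor, and a noisy observation combining it with a useful signal, can be generated as follows. The rank-one "spiked" form of the signal, and all names and dimensions, are illustrative assumptions, not the only model covered by the present disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # dimension of each mode (illustrative choice)

# Pure noise tensor of order 3: i.i.d. standard normal entries.
Z = rng.standard_normal((n, n, n))

# A noisy observation with signal-to-noise ratio beta: one common model
# adds a rank-one useful signal beta * v (x) v (x) v to the noise.
beta = 2.0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)  # unit-norm signal direction
T = beta * np.einsum('i,j,k->ijk', v, v, v) + Z
```

A signal-to-noise ratio `beta` equal to 0 reduces `T` to the pure noise tensor `Z`.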
Device 100 further comprises a non-volatile memory 104 (NV MEM), a volatile memory 106 (RAM), and a processor 108 (CPU) coupled to sensor 102 via a bus 110. As an example, memory 106 is a random access memory (RAM) and processor 108 is a central processing unit (CPU). In another example, memory 106 is a video memory (VRAM, "Video Random Access Memory"), and processor 108 is a graphics processing unit (GPU). When sensor 102 acquires a piece of data, represented by a tensor, processor 108 is configured to execute instructions 112 (INSTRUCTIONS). As an example, instructions 112 are stored in non-volatile memory 104 and are loaded into volatile memory 106 to be executed. According to an embodiment, instructions 112 are instructions enabling to implement a method of signal detection in image representations, in the form of tensors of order greater than or equal to 3, acquired by sensor 102. In particular, the data manipulated during the implementation of the signal detection method are not converted into matrices or, more broadly, into tensors of order lower than 3.
Trace invariants are generally formed by contraction of one or a plurality of copies of a tensor. The number of copies determines the degree of the invariant. Thus, the trace of a matrix is an order-1 invariant. In particular, trace invariants admit representations in the form of graphs.
Graphs 200 and 202 illustrate examples of order-1 trace invariants. Graphs 204 and 206 illustrate examples of order-2 trace invariants.
Copies of tensors are represented by vertices 208 for order-3 tensors and vertices 210 for a matrix. Each edge, numbered 1, 2, or 3 in
Graph 200 represents a copy of an order-3 tensor. In particular, the copy represented by graph 200 is a copy Ti,j,k of a tensor T. The edges numbered 1, 2, and 3 respectively represent indices i, j, and k.
Graph 202 shows the trace of a matrix. The two edges 1 and 2 meet, which means that the sum is performed on the diagonal elements of the matrix. Thus, the trace of a matrix is equal to Ti,i.
Graphs 204 and 206 represent order-2 trace invariants for order-3 tensors. In particular, graph 204 represents the contraction of two copies 208 and the shown invariant is Ti,j,kTi,j,k. Graph 206 illustrates a contraction of two copies 208, the contraction taking place on indices having different positions. The junction of the two edges at the top means that the sum is performed on the first copy having its first index equal to the second index of the second copy. The shown trace invariant then is Ti,j,kTk,i,j.
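Written with `numpy.einsum`, these two degree-2 contractions are one-liners; repeated subscripts are summed over, and the sample tensor below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
T = rng.standard_normal((n, n, n))

# Invariant of graph 204: full contraction of two copies on matching
# index positions, i.e. the sum of the squared entries of T.
I_204 = np.einsum('ijk,ijk->', T, T)

# Invariant of graph 206: contraction with cyclically shifted index
# positions, T[i,j,k] * T[k,i,j] summed over all indices.
I_206 = np.einsum('ijk,kij->', T, T)

# I_204 is a sum of squares, hence non-negative.
assert I_204 >= 0
```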
In the context of graph representation, a tensor symmetrization method consists of summing all possible edge permutations, the sum being weighted by the inverse of the number of possible permutations. Thus, in the example of graph 200, and when tensor T is a cubic order-3 tensor, each element
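The symmetrization described here can be sketched as follows, averaging a cubic tensor over all permutations of its modes; the function name is illustrative.

```python
import itertools
import numpy as np

def symmetrize(T):
    """Average a cubic order-d tensor over all d! permutations of its modes."""
    d = T.ndim
    perms = list(itertools.permutations(range(d)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 5, 5))
S = symmetrize(T)

# The result is invariant under any further permutation of its modes.
assert np.allclose(S, np.transpose(S, (1, 0, 2)))
```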
In particular,
Graph 310 is an example of a so-called "tadpole" graph. The invariant represented by graph 310 is equal to Tj,j,kTi,i,k. Each graph in the "tadpole" category has its vertices coupled by a central edge corresponding to a single index, and the other two edges of each copy meet. The "tadpole" category comprises 6 different graphs, and the symmetrization weights are {1,1,1,2,2,2}. The 6 invariants represented by tadpole graphs are I′1=Tj,j,kTi,i,k, I′2=Ti,j,iTk,j,k, I′3=Ti,j,jTi,k,k, I′4=Ti,j,iTk,k,j, I′5=Ti,j,jTk,k,i, and I′6=Ti,j,iTj,k,k.
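As an illustration, these six tadpole contractions can be evaluated with `numpy.einsum`, where repeated subscripts are summed and the subscript patterns are chosen so that every index is contracted; the sample tensor is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
T = rng.standard_normal((n, n, n))

# Six degree-2 "tadpole" contractions: each copy carries one self-trace,
# and the remaining free slots are joined by the central edge.
tadpoles = {
    "I'1": np.einsum('jjk,iik->', T, T),
    "I'2": np.einsum('iji,kjk->', T, T),
    "I'3": np.einsum('ijj,ikk->', T, T),
    "I'4": np.einsum('iji,kkj->', T, T),
    "I'5": np.einsum('ijj,kki->', T, T),
    "I'6": np.einsum('iji,jkk->', T, T),
}
```

For instance, the first pattern equals the squared norm of the partial trace vector a[k] = sum over j of T[j,j,k].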
Graph 312 is an example of a tetrahedral graph. Tetrahedral graphs represent trace invariants involving 4 copies of a tensor. The trace invariants represented by these graphs are accordingly order-4 trace invariants. The invariant represented by graph 312 is equal to Ti,j,kTi,j′,k′Ti′,j,k′Ti′,j′,k. The tetrahedral graphs represent 60 different trace invariants, among the 216 invariants without taking symmetries into account, with weights varying between the values {1,2,4}.
Graph 314 is an example of a so-called "pillow" graph, formed by double contraction between two pairs of tensor copies. The trace invariants represented by pillow graphs are order-4 trace invariants. In particular, graph 314 represents an invariant equal to Ti,j,kTi,j,k′Ti′,j′,kTi′,j′,k′. Pillow graphs represent 99 different trace invariants, from among 348, with weights varying between the values {1,2,4}.
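Both degree-4 families can be evaluated directly with `numpy.einsum`, writing the primed indices as capital letters. The pillow pattern below is one representative choice of double contraction between two pairs of copies; other index placements within each family exist.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
T = rng.standard_normal((n, n, n))

# Tetrahedral degree-4 contraction: each pair of copies shares exactly
# one index.
I_tetra = np.einsum('ijk,iJK,IjK,IJk->', T, T, T, T)

# Representative "pillow" contraction: a double contraction inside each
# pair of copies, the two pairs being linked by the remaining indices.
I_pillow = np.einsum('ijk,ijK,IJk,IJK->', T, T, T, T)

# This pillow value equals the squared Frobenius norm of the Gram-like
# matrix M[k, K] = sum_{i,j} T[i,j,k] * T[i,j,K], hence is non-negative.
M = np.einsum('ijk,ijK->kK', T, T)
assert I_pillow >= 0
```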
According to an embodiment, instructions 112 enable to compute an invariance value by calculating a trace invariant, or a combination of trace invariants, represented by a single graph category or a plurality of graph categories. Instructions 112 are further configured so that the invariance value is compared with the invariance value distribution for pure noise tensors. In particular, the invariance value is compared with a value based on the expectation of the invariance value for a tensor formed of pure noise only. In another example, the invariance value is compared with a value based on the expectation and the variance of the invariance value for a tensor formed of pure noise only. Such a tensor can then be written as T=Z where Z is, for example, a Gaussian tensor, each element of which is independent of the others and follows a standard normal distribution. In another example, tensor Z is a tensor modeling a noise having its elements, for example, correlated and/or following a distribution different from a Gaussian distribution.
In particular,
The computing of the expectation of an invariant represented by a graph is based on the recognition of cycles in the spanning graph. Starting from a first vertex v1, an edge, made up of two indexed half-edges, is followed up to a second vertex v2, after which the propagator coupling vertex v2 to a vertex v3 is followed. From vertex v3, the half-edge having the same index as the half-edge arriving at vertex v2 is followed. The reading of the graph continues in this way until one returns to the first edge followed, that is, the edge running from vertex v1 to vertex v2. Spanning graph 400 thus comprises two cycles 402 and 404.
Each cycle of the spanning graph then contributes a factor n to the value of the moment, where n is the dimension of the tensor. Thus, the value of a moment of order m, for an invariant of a tensor T represented by a graph G, is given by the equation:

E[(IG)^m] = ΣG′ n^{nb cycle}
where the sum is performed on all the spanning graphs G′ of graph G, and value {nb cycle} corresponds to the number of cycles on each spanning graph. This relation is verified in the Gaussian case, but has a universal aspect. Indeed, this relation is also verified for other noise distributions. The universality of this relation is, for example, discussed in “Universality for Random Tensors” published in the Annales de l'I.H.P. Probabilités et statistiques by Gurau, R. in 2014.
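This relation can be checked numerically in the Gaussian case for the simplest melonic invariant I = Ti,j,kTi,j,k: its single spanning graph has three cycles, one per index, so E[I] = n^3 for a cubic tensor of dimension n. A Monte Carlo sketch, with illustrative sample counts:

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 6, 2000

# Empirical mean of I = sum of squared entries over pure noise tensors;
# by the spanning-graph formula it should be close to n**3 (= 216 here).
vals = []
for _ in range(trials):
    Z = rng.standard_normal((n, n, n))
    vals.append(np.einsum('ijk,ijk->', Z, Z))
emp = float(np.mean(vals))
```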
In the case of tensors of dimensions n1× . . . ×nd, the computing of moments is generalized. In an example, in this case, only the invariants associated with graphs having a single index on each edge are used in trace invariant computing. In another example, a matrix A is constructed from the initial tensor T. For example, for an order-3 tensor T, of dimension n1×n2×n3, matrix A is a square matrix of dimension n1×n1 and each component Ai,l of this matrix is equal to the sum Σ1≤j≤n2 Σ1≤k≤n3 Ti,j,kTl,j,k.
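Under this reading of the construction, where the contraction runs over the last two modes so that A is square of dimension n1×n1, the matrix can be formed as follows; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n1, n2, n3 = 4, 5, 6
T = rng.standard_normal((n1, n2, n3))

# A[i, l] = sum over j and k of T[i, j, k] * T[l, j, k]
A = np.einsum('ijk,ljk->il', T, T)

# A is the Gram matrix of the mode-1 unfolding of T, hence square,
# symmetric, and positive semi-definite.
assert A.shape == (n1, n1)
```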
The following tables gather the values of expectation, variance, as well as the symmetrization weights for melonic and “tadpole” graphs. In particular, Table 1 gathers melonic graphs and Table 2 gathers “tadpole” graphs.
In order to compute the moments of an invariant for tensors having a signal-to-noise ratio strictly greater than 0, each vertex of the associated graph is written in the form:

Ti1, . . . ,id = βvi1 . . . vid + Zi1, . . . ,id,

where part βvi1 . . . vid corresponds to the contribution of the useful signal, weighted by signal-to-noise ratio β, and part Zi1, . . . ,id corresponds to the noise.
Graph 500 shows distributions of a “pillow”-type trace invariance value.
A curve 502 illustrates the distribution of “pillow”-type trace invariance values for a pure noise tensor. Curves 504 and 506 respectively illustrate the distributions of these same invariance values when signal-to-noise ratio β is equal to 1.6 and 2.6. Distributions 504 and 506 then correspond to distribution 502 shifted by a value, depending on the signal-to-noise ratio.
In the case of “tadpole”, tetrahedral, or melonic graphs, the trace invariance values distributions have the same shape, and a shift depending on the value of the signal-to-noise ratio can be observed.
In the rest of the disclosure and unless otherwise specified, a moment, in particular the expectation, or the variance of a trace invariant, corresponds to the moment, in particular to the expectation, or to the variance, of the invariant for a pure noise tensor.
At a step 600 (RECEIVE TENSOR), a tensor is supplied to processor 108. As an example, the tensor is a digital object, obtained by conversion of analog data measured by sensor 102. As an example, after the reception of the tensor, processor 108 is configured to symmetrize it.
At a step 601 (NORMALIZATION), the variance of the tensor components is computed. The tensor is then normalized based on the computed variance.
At a step 602 (COMPUTE INVARIANT I), an invariance value I for the symmetrized tensor is computed by processor 108. As an example, the invariance value corresponds to a trace invariant IG, represented by a graph G. In another example, the invariance value is a linear combination of a plurality of trace invariants I=ΣjαjIGj.
At a step 603 (COMPUTE MEAN AND VARIANCE), the expectation and the variance of the invariance value for a pure noise tensor are computed. The computed expectation and variance correspond to the expectation E[I] and/or the variance Var(I) of the trace invariant, or of the combination of trace invariants, considered at step 602. In the case where a single invariant is considered, expectation E[I] is equal to E[IG] and variance Var(I) to Var(IG). In the case where the invariance value is a combination of a plurality of invariants I=ΣjαjIGj, expectation E[I] is equal to ΣjαjE[IGj] and variance Var(I) is computed from the covariances of the invariants IGj.
At a step 604 (I&lt;Iref), processor 108 is configured to, by executing instructions 112, compare the invariance value with a reference value. As an example, the reference value is equal to E[I]+2√(Var(I)). As an example, the reference value is included in memory 104. In other examples, the reference value is another value, representing the invariance value distribution for the considered invariant or combination of invariants. In another example, the reference value is equal to expectation E[I]. Still in another example, the reference value is equal to E[I]+3√(Var(I)).
In the case where the invariance value is greater, for example strictly greater, than the reference value (branch Y at the output of block 604), processor 108 is configured to determine, at a step 605 (SIGNAL), that the tensor comprises a useful signal, that is, that signal-to-noise ratio β is strictly greater than 0. In the case where the invariance value is lower than the reference value (branch N at the output of block 604), processor 108 is configured to determine, at a step 606 (NOISE), that the tensor is a pure noise tensor, that is, that signal-to-noise ratio β is equal to 0.
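The flow of steps 600 to 606 can be sketched as follows. The choice of a single "tadpole" invariant and its closed-form noise moments are illustrative assumptions; any invariant or combination with known noise moments could be substituted.

```python
import numpy as np

def detect_signal(T, E_I, Var_I):
    """Sketch of steps 600 to 606 using a single "tadpole" invariant.

    E_I and Var_I are the expectation and variance of the invariant for
    a pure noise tensor of the same shape, assumed computed beforehand.
    """
    T = T / T.std()                      # step 601: normalization
    a = np.einsum('jjk->k', T)           # partial trace over two modes
    I = float(a @ a)                     # step 602: invariance value I'1
    I_ref = E_I + 2.0 * np.sqrt(Var_I)   # step 604: reference value
    return I > I_ref                     # step 605 (Y) / step 606 (N)

n = 10
# Noise moments of I'1 = sum_k (sum_j T[j,j,k])**2 for a unit-variance
# Gaussian tensor: each inner sum is N(0, n), so E[I] = n**2 and
# Var(I) = 2 * n**3.
E_I, Var_I = float(n**2), 2.0 * n**3

# A strong rank-one signal (any non-constant unit vector v) is detected,
# since the invariance value far exceeds the reference value.
v = np.arange(1.0, n + 1.0)
v /= np.linalg.norm(v)
spike = 100.0 * np.einsum('i,j,k->ijk', v, v, v)
assert detect_signal(spike, E_I, Var_I)
```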
As an example, when processor 108 determines that the tensor is a pure noise tensor, the latter is removed from device 100. On the contrary, when processor 108 determines that the tensor comprises a useful signal, the tensor is, for example, delivered to a data processing circuit. As an example, the data processing circuit is configured to perform denoising operations on the tensor. In particular, when the tensor is an image representation, such as a color image, or a video, the data processing circuit is configured to perform image processing operations.
As an example, the method described in relation with
At a step 700 (COMPUTE MEAN AND VARIANCE FOR DIFFERENT B), a value of expectation Eβ[I] and of variance Varβ(I) of an invariance value of a tensor having a signal-to-noise ratio β are computed. As an example, step 700 is carried out for a plurality of values of β, where β may, for example, be zero. These values are computed upstream, for example during the programming of instructions 112. In another example, these values are computed on the fly, during the execution by processor 108 of instructions 112.
At a step 702 (DETERMINE B), processor 108 is configured to determine the value of the signal-to-noise ratio of the tensor received at step 600. As an example, invariance value I is compared with the different expectations Eβ[I] computed during step 700. The retained signal-to-noise ratio is the value of β minimizing value |I−Eβ[I]|. In other examples, invariance value I is compared with the different values Eβ[I]+2√(Varβ(I)). The retained signal-to-noise ratio is the value of β minimizing value |I−(Eβ[I]+2√(Varβ(I)))|.
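A minimal sketch of this selection rule follows; the candidate β values and the associated expectations are made-up toy numbers, not values from the disclosure.

```python
import numpy as np

def estimate_snr(I, betas, expectations):
    """Return the beta whose expected invariance value is closest to I.

    `expectations` holds E_beta[I] for each candidate beta, assumed
    precomputed (e.g. by Monte Carlo on tensors with a planted signal).
    """
    betas = np.asarray(betas, dtype=float)
    expectations = np.asarray(expectations, dtype=float)
    return float(betas[np.argmin(np.abs(I - expectations))])

# Toy usage: expectations grow with beta, and I = 175 is closest to the
# expectation associated with beta = 2.
betas = [0.0, 1.0, 2.0, 3.0]
E_beta = [100.0, 120.0, 180.0, 260.0]
beta_hat = estimate_snr(175.0, betas, E_beta)
```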
As an example, when it is determined that the signal-to-noise ratio is different from 0, the tensor is supplied to a data processing circuit, for example configured to perform image processing operations, in association with the estimated value of the signal-to-noise ratio.
However, as shown in
According to an embodiment, the invariance value is computed based on a combination of at least one trace invariant for which the overlap interval between distributions is reduced.
For an invariance value I, a distance between the distributions associated with a pure noise tensor and a tensor comprising a useful signal with a signal-to-noise ratio β is represented by an objective function:
where mS(I, β) and σS(I, β) respectively are the expectation and the standard deviation of the distribution of an invariant, or combination of invariants, for a tensor having a signal-to-noise ratio equal to β, and where mN(I) and σN(I) are the expectation and the standard deviation of the distribution of the same invariant, or same combination of invariants, for a pure noise tensor. The idea is then to look for an invariance value in the form of a combination I=ΣjαjIGj minimizing the objective function.
The parameter space describing invariant I is then the t-simplex Δt, where t is the number of invariants IGj in the combination.
In the case of invariants of degree 4, the numerator is more complex and has the form PI(n)nβ²+n²β⁴. For certain graphs, such as tetrahedral graphs and certain "pillow" graphs, value PI(n) is lower than nβ² when dimension n is sufficiently large. Thus, contribution n²β⁴ factorizes in the numerator. Looking at the denominator, and more particularly σS(I, β), the largest contribution can be approximated by 2σN(I). These approximations show that, for invariants of order 2, or of order 4 satisfying condition PI(n)&lt;n²β⁴, quantity σN(ΣjαjIGj) is the only part of the objective function depending on the weighting coefficients, and it is therefore this quantity which is minimized.
A curve 800 is the curve of equation n²β⁴ as a function of β.
In particular, when the term PI is negligible when n is large, the objective function is such that
where s corresponds to the order of the invariants. In the case where the invariance value is computed from invariants of different orders, the value of s is an approximation. For example, the value of s corresponds to the average of the orders of the trace invariants used. The minimization of the objective function consists in searching for a combination of values for each αi, under the condition that Σαi=1 and so that, for this combination of coefficients, the value of the objective function is as low as possible.
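Such a minimization under the condition Σαj=1 can be sketched as follows, using the approximation discussed above that only the noise standard deviation of the combination depends on the weights. Empirical noise samples of a few invariants are combined, and a gradient descent with a softmax parameterization keeps the weights on the simplex; the choice of invariants, sample counts, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, trials = 8, 400

# Empirical samples, for pure noise tensors, of three invariants that
# could enter the combination (an illustrative choice).
samples = np.empty((trials, 3))
for t in range(trials):
    Z = rng.standard_normal((n, n, n))
    samples[t] = [
        np.einsum('jjk,iik->', Z, Z),   # a "tadpole" invariant
        np.einsum('iji,kjk->', Z, Z),   # another "tadpole" invariant
        np.einsum('ijk,ijk->', Z, Z),   # the simplest melonic invariant
    ]
cov = np.cov(samples, rowvar=False)

# Gradient descent on sigma_N(sum_j alpha_j I_j), with a softmax
# parameterization so that the simplex constraint (sum = 1) holds
# automatically at every step.
theta = np.zeros(3)
for _ in range(500):
    alpha = np.exp(theta) / np.exp(theta).sum()
    grad_alpha = cov @ alpha / np.sqrt(alpha @ cov @ alpha)
    jac = np.diag(alpha) - np.outer(alpha, alpha)  # softmax Jacobian
    theta -= 0.01 * jac @ grad_alpha
alpha = np.exp(theta) / np.exp(theta).sum()
```

Starting from uniform weights, the descent can only lower the noise standard deviation of the combination.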
In particular,
The weight values used for
In particular,
The distributions illustrated in
The distributions illustrated in
The distributions illustrated in
The overlap interval is smaller when the invariance value is computed from a combination of invariants of all types.
As an example, the success rates have been computed based on a large number of samples, for example between 500 and 2,000, of tensors having a known signal-to-noise ratio. The method described in relation with
As an example, the method described in relation with
At a step 1600 (INFORMATION PROVISION), information relative to device 100 is provided to the external device. The information includes, for example, the memory resource of device 100. As an example, the memory resource corresponds to the capacity of volatile memory 106. The information further comprises, for example, the performance of processor 108. The information comprises, for example, the order of the tensors acquired by sensor 102 as well as the dimensions of the tensors. As an example, the information further comprises an indication of the time, for example in the form “short”, “medium”, or “long”, within which the computing of the invariance value, or the signal detection method such as described in relation with
At a step 1601 (INVARIANT ESTIMATION), the external device estimates a combination of invariants adapted to device 100. As an example, the estimate is made by reading of the list stored in the non-volatile memory. The retained combination of invariants is, for example, that most closely corresponding to the information delivered during step 1600. In another example, certain criteria, such as memory resources, are prioritized.
At a step 1602 (OPERATION ESTIMATION) a number of operations required for the computing of each of the trace invariants comprised in the combination is estimated. In another example, a number of operations required for the computing of the invariance value, associated with the combination, is estimated at step 1602. As an example, the number of operations is estimated based on the dimensions and on the order of the tensor.
At a step 1603 (INSTRUCTION PROGRAMMING) device 100 is, for example, programmed to implement the method described in relation with
Curve 1702 illustrates the success rates (DETECTION SUCCESS) for the implementation of a signal detection method such as described in relation with
An advantage of the described embodiments is that they enable to implement a signal detection method on data represented by tensors of order greater than or equal to 3, without converting them into matrices.
Another advantage of the described embodiments is that they enable to adapt the computing of the invariance value so as to minimize the failure rate in the detection method.
Various embodiments and variants have been described. Those skilled in the art will understand that certain features of these various embodiments and variants could be combined, and other variants will become apparent to those skilled in the art. The estimation of a combination of invariants enabling to compute the invariance value may be based on criteria other than memory resource and/or time criteria.
Finally, the practical implementation of the described embodiments and variants is within the abilities of those skilled in the art, based on the functional indications given hereabove.
Number | Date | Country | Kind |
---|---|---|---|
2315434 | Dec 2023 | FR | national |