Method for Training a ML System, ML System, Computer Program, Machine-Readable Storage Medium and Device

Information

  • Publication Number
    20240028891
  • Date Filed
    December 15, 2021
  • Date Published
    January 25, 2024
Abstract
A method is provided for training an artificial neural network for classifying sensor data as a function of a first loss function and a second loss function. The first loss function is calculated as a function of an output of the artificial neural network. The second loss function is configured such that the output of the artificial neural network is essentially normalized.
Description

The present invention relates to a method for training a machine learning (ML) system, in particular an artificial neural network, especially for classification of sensor data.


Furthermore, the present invention relates to a corresponding ML system, computer program, machine-readable storage medium, and a corresponding apparatus.


PRIOR ART

When training an artificial neural network, a common loss function (e.g., for a classification task) is the cross-entropy loss function. Typically, this loss function is preceded by a softmax function or layer that normalizes the incoming data by using the following function:







$$\sigma(z_j) = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$$

The softmax function ensures that each value of the output data or output vector lies in the interval [0, 1] and that the values of the output vector sum to 1.
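For illustration, a minimal, numerically stable softmax can be written in a few lines of Python; this is purely an illustrative sketch, not something prescribed by the text:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = z - z.max(axis=-1, keepdims=True)  # guard against overflow in exp
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())  # each value lies in [0, 1]; the sum equals 1
```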


This softmax function is often expensive or impossible to compute on inference hardware because it has exponential terms.


When running the trained neural network on the inference hardware, i.e., when executing only the forward pass, the loss function computation is no longer needed. The softmax function could likewise be omitted, but this would result in unnormalized output ranges.


In particular, in a classification task (e.g., pixel-wise classification in semantic segmentation, or the classification of objects in bounding boxes), a normalized output is needed. For example, in pixel-by-pixel classification, each pixel is normalized individually; after this normalization, the class values can be compared between pixels. If a semantic segmentation network outputs 5 classes, there is a class score for each of these 5 classes at every pixel. If these pixel values are not normalized, it is difficult to compare them between pixels, because there is no guarantee that the scores of different pixels lie in the same range. For the classification of bounding-box objects, it is likewise important that the scores are normalized, since there is usually a threshold that excludes boxes in which not a single object class has a score exceeding it.
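A toy numeric illustration of this thresholding problem (the scores are hypothetical values, not from the text):

```python
import numpy as np

# Raw, unnormalized class scores for two detected boxes (hypothetical values).
scores = np.array([[4.2, 1.3, 0.5],   # box A: large overall scale
                   [0.9, 0.7, 0.4]])  # box B: small overall scale

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

threshold = 0.5
print(scores.max(axis=1) > threshold)           # raw scores: [ True  True ], both boxes pass
print(softmax(scores).max(axis=1) > threshold)  # normalized: [ True False ], box B is excluded
```

With raw scores the decision depends on each box's arbitrary scale; only after normalization are the scores comparable against a fixed threshold.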


DISCLOSURE OF THE INVENTION

With this in mind, the present invention provides a method for training an ML system as a function of a first loss function and as a function of a second loss function, where the first loss function is computed as a function of the output of the artificial neural network.


The method is characterized in that the second loss function is configured in such a way that the output of the artificial neural network is essentially normalized.


A machine learning (ML) system can be understood as a system for the artificial creation of knowledge from information, e.g., training data. Such a system “learns” from pairs of input data and the output data expected for that input data.


For example, an artificial intelligence can be counted among machine learning systems. In particular, artificial neural networks are among the machine learning (ML) systems.


An artificial neural network can be understood as a network of artificial neurons for information processing. Artificial neural networks essentially go through three phases. In an initial phase, a basic topology is specified, usually depending on the task. This is followed by a training phase in which the basic topology is taught to solve the task efficiently using training data. Within the training phase, the topology of the network can also be adjusted. In the subsequent use phase, the output data of the trained network then represent the output data sought according to the task.


The ML system of the present invention, in particular an artificial neural network, is suitable for the classification of sensor data.


The sensor data can be data from sensors in the automotive sector. These include video, radar, lidar, ultrasound and infrared sensors as well as thermal imaging cameras.


In this regard, the method of the present invention solves the problem of ensuring that the training of the ML system already normalizes the output of the ML system. This means, for example, that the sum of the output values along a dimension (in the case of a classification task or semantic segmentation to be solved by the ML system) is 1 or comes close to the value 1.


This is achieved in particular by introducing the second loss function.


According to one embodiment of the method according to the present invention, another artificial neural network, which approximates a softmax function, is applied to the output of the artificial neural network in order to calculate the second loss function.


This embodiment has the advantage that a network approximating a softmax function can omit the exponential terms.


According to one embodiment of the method according to the present invention, the output of the artificial neural network is added up along at least one dimension to calculate the second loss function.


According to one embodiment of the method according to the present invention, the second loss function is configured in such a way that the output of the artificial neural network adds up to 1.


According to one embodiment of the method according to the present invention, another artificial neural network, which approximates a softmax function, is applied to the output of the artificial neural network in order to calculate the first loss function.


According to one embodiment of the method according to the present invention, a softmax function is applied to the output of the artificial neural network to calculate the second loss function.


This embodiment is characterized in that the second loss function is configured in such a way that the output of the artificial neural network approximates the output of the softmax function.


Another aspect of the present invention is an ML system trained according to the method according to the present invention.


A machine learning (ML) system can be understood as a system for the artificial creation of knowledge from information, e.g., training data. Such a system “learns” from matching input data and expected output data.


For example, an artificial intelligence can be counted among machine learning systems. In particular, artificial neural networks are among the machine learning (ML) systems.


The output of the ML system according to the present invention can be used to control an actuator or generate a control signal to control an actuator.


In the present context, an actuator can be understood as a robot. Such a robot can be an at least partially automated vehicle or part of such a vehicle, such as a longitudinal or transverse control system.


For clarification, the method for training an ML system according to the present invention may be part of a method comprising, in a first step, training an ML system and, in a second step, controlling an actuator or robot in response to the output of the ML system.


In another aspect of the present invention, there is provided a computer program configured to carry out the method according to the present invention.


Another aspect of the present invention is a machine-readable storage medium on which the computer program according to the present invention is stored.


Another aspect of the present invention is an apparatus configured to carry out the method according to the present invention.





Embodiments of the invention are explained in more detail in the following with reference to the accompanying drawings. The drawings show:



FIG. 1 a flowchart of an embodiment of the training method according to the present invention;



FIG. 2 a flowchart of an embodiment of the manufacturing method according to the present invention;



FIG. 3 a block diagram of a first embodiment of the present invention;



FIG. 4 a block diagram of a second embodiment of the present invention;



FIG. 5 a block diagram of a third embodiment of the present invention.






FIG. 1 shows a flowchart of an embodiment of the training method (100) according to the present invention. This flowchart describes one way of introducing a second loss function according to the present invention into the training of an ML system in order to achieve the object of the present invention.


In step 101, the usual loss function for training an ML system for a classification task is computed. This common loss function can be, for example, the cross-entropy loss function.


In step 102, the output data of the network to be trained is captured before a softmax function is applied. This output data can be present as a tensor with the dimensions H×W×C.


In step 103, a 1×1 convolution with a filter of dimensions 1×1×C is applied to the output data captured in step 102. The coefficients of the filter may each be 1. This step adds up the output data along the dimension C. The resulting feature map has the dimensions H×W.
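Step 103 can be realized, for example, as a 1×1 convolution whose coefficients are fixed to 1. The following PyTorch sketch (framework and channels-first tensor layout are assumptions; the text prescribes neither) verifies that this sums the data along C:

```python
import torch
import torch.nn as nn

C = 5  # number of classes (illustrative)
# 1x1 convolution with a single 1x1xC filter whose coefficients are all 1,
# so applying it sums the input along the channel dimension C (step 103).
sum_conv = nn.Conv2d(in_channels=C, out_channels=1, kernel_size=1, bias=False)
with torch.no_grad():
    sum_conv.weight.fill_(1.0)
sum_conv.weight.requires_grad_(False)  # coefficients stay fixed at 1

x = torch.randn(1, C, 8, 8)            # one H x W x C output tensor, channels first
feature_map = sum_conv(x)              # shape (1, 1, H, W)
assert torch.allclose(feature_map, x.sum(dim=1, keepdim=True), atol=1e-5)
```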


In step 104, a matrix with dimensions H×W is subtracted from the resulting feature map. The coefficients of this matrix each have the value 1; it is therefore a matrix of ones with the dimensions H×W.


In step 105, a norm, for example the L2 norm, is applied to the result of the subtraction of step 104.


In step 106, the network to be trained is trained as a function of a total loss function composed of the usual loss function from step 101 and the result of applying the norm in step 105. Furthermore, a suitably selected weighting factor w can be used to appropriately account for the result of the norm from step 105 in the composition of the total loss function.


It is conceivable that the weighting factor remains constant throughout the training. Likewise, it is conceivable that the weighting factor increases over the course of the training. Furthermore, it is conceivable that the weighting factor is adjusted over the training in such a way that the influence of the result of the norm from step 105 is stronger in the last training epochs.
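Putting steps 101 to 106 together, a training loss of this shape could look as follows in PyTorch. This is a sketch under assumptions: channels-first logits, cross entropy as the usual loss, and a linearly increasing weighting factor, which is only one of the conceivable schedules mentioned above:

```python
import torch
import torch.nn.functional as F

def total_loss(logits: torch.Tensor, targets: torch.Tensor,
               epoch: int, num_epochs: int, w_max: float = 1.0) -> torch.Tensor:
    """Total loss of steps 101-106 (illustrative sketch).

    logits:  raw network output of shape (N, C, H, W), before any softmax.
    targets: integer class labels of shape (N, H, W).
    """
    # Step 101: the usual classification loss; cross entropy applies
    # log-softmax internally, so the logits are normalized for this term.
    ce = F.cross_entropy(logits, targets)

    # Steps 102-105: sum the raw logits along C (equivalent to the 1x1
    # convolution of ones), subtract a feature map of ones, and take the
    # L2 norm of the difference.
    channel_sum = logits.sum(dim=1)
    l_add = torch.linalg.vector_norm(channel_sum - torch.ones_like(channel_sum))

    # Step 106: weighting factor w; here it grows linearly so the
    # normalization term weighs more in the last epochs.
    w = w_max * (epoch + 1) / num_epochs
    return ce + w * l_add
```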



FIG. 2 shows a flowchart of a method according to the present invention.


In step 201, the ML system, e.g., an artificial neural network, is trained according to the training method of the present invention.


In step 202, the output of the trained ML system is used to control an actuator.


In this context, an actuator can be understood as a robot. Such a robot can be an at least partially automated vehicle or part of such a vehicle, such as a longitudinal or transverse control system.



FIG. 3 shows a block diagram of a first embodiment of the present invention.


Input data 30 is supplied to the artificial neural network 3 to be trained. The network 3 converts the input data 30 into output data 35. In the representation, the output data 35 is depicted as a tensor with dimensions H×W×C.


For example, the network 3 may be trained to classify image data. In that case, the possible classes can be plotted along the dimension C. In the dimensions H×W, a probability of belonging to the respective class can be entered for each pixel of the input data.


In order to feed the output data 35 to a first loss function Lce, a softmax function is applied to the output data 35 to obtain normalized output data 35′.


The normalized output data 35′ is fed to the first loss function Lce. A common loss function, such as the cross-entropy loss function, can be used as the first loss function Lce.


The embodiment of the present invention is based on the realization that for the subsequent inference of the trained network 3, the application of the softmax function can be omitted if, as part of the training, a second loss function Ladd is provided which is configured in such a way that the values of the output data 35 add up to 1 along the dimension C.


This is achieved by feeding the output data 35 to a second loss function Ladd without applying a softmax function, as shown in the block diagram of FIG. 3.


According to the representation, the second loss function Ladd is an L2 norm, represented by the double bars, which returns the distance to a matrix of ones 36 with the dimensions H×W.


For this purpose, a filter 37 with dimensions 1×1×C is applied to the output data 35. The filter is configured in such a way that the output data 35 is added up along the dimension C. For this purpose, the coefficients of the filter can be 1. It is also conceivable that the coefficients of the filter are trained as well; in that case, it is advisable to initialize the coefficients with the value 1.


The introduction of the second loss function Ladd results in the normalization of the output data 35 of the trained network 3.


For inference, according to this embodiment, the trained network 3 is transferred to the inference hardware.



FIG. 4 shows a block diagram of a second embodiment of the present invention.


In the illustrated second embodiment, the application of the softmax function is omitted in the training of the artificial neural network 3.


To normalize the output data 35, it is fed to another artificial neural network 4, which is trained to output an approximation of the softmax function.


The approximated output data 35″ is fed to both a first loss function Lce and a second loss function Ladd. The first loss function Lce can be a common loss function, e.g., the cross-entropy loss function.
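The text does not specify the architecture of the approximation network 4. Purely as an illustration, an exponential-free approximation could be built from 1×1 convolutions and piecewise-linear activations, which are cheap on typical inference hardware; the following class and its layer sizes are hypothetical:

```python
import torch
import torch.nn as nn

class SoftmaxApprox(nn.Module):
    """Hypothetical stand-in for the softmax-approximating network 4.

    Uses only 1x1 convolutions and ReLU activations, i.e., no exponential
    terms. During training, both Lce and Ladd are applied to its output 35''.
    """
    def __init__(self, num_classes: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(hidden, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

approx = SoftmaxApprox(num_classes=5)
out = approx(torch.randn(1, 5, 8, 8))  # approximated output data 35''
```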


According to the representation, the second loss function Ladd is an L2 norm, represented by the double bars, which returns the distance to a matrix of ones 36 with the dimensions H×W.


For this purpose, a filter 37 with dimensions 1×1×C is applied to the approximated output data 35″. The filter is configured in such a way that the approximated output data 35″ is added up along the dimension C. For this purpose, the coefficients of the filter can be 1. It is also conceivable that the coefficients of the filter are trained as well; in that case, it is advisable to initialize the coefficients with the value 1.


The introduction of the second loss function Ladd results in the normalization of the approximated output data 35″ of the trained network 3.


According to this embodiment, the trained network 3 and the artificial neural network for approximating a softmax function 4 are transferred to the inference hardware.



FIG. 5 shows a block diagram of a third embodiment of the present invention.


According to this embodiment, a softmax function is applied to the output data 35 of the network 3 to be trained in order to obtain normalized output data 35′ for feeding to the first loss function Lce.


For this purpose, a common loss function, such as the cross-entropy loss function, can be used as the first loss function Lce.


For feeding to a second loss function Ladd, the output data 35 is fed to another artificial neural network 4 trained to output an approximation of the softmax function.


According to the embodiment shown, the second loss function Ladd is fed not only the output data 35″ thus approximated but also the normalized output data 35′ that is likewise fed to the first loss function Lce. The second loss function Ladd may be the L2 norm, as in the previously described embodiments. Here, it is used to cause the approximated output data 35″ to approximate the normalized output data 35′.
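In code, the second loss of this third embodiment could take the following shape. This is a sketch; stopping the gradient on the softmax target is an implementation choice assumed here, not stated in the text:

```python
import torch
import torch.nn.functional as F

def second_loss(logits: torch.Tensor, approx_net) -> torch.Tensor:
    """Ladd of the third embodiment (illustrative sketch).

    Drives the approximated output data 35'' toward the normalized
    output data 35' = softmax(logits), so that the exponential softmax
    can be dropped at inference time.
    """
    normalized = F.softmax(logits, dim=1).detach()  # 35', used as a fixed target
    approximated = approx_net(logits)               # 35''
    return torch.linalg.vector_norm(approximated - normalized)
```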




According to this embodiment, the trained network 3 and the artificial neural network for approximating a softmax function 4 are transferred to the inference hardware.

Claims
  • 1. A method for training an artificial neural network, comprising: training the artificial neural network as a function of a first loss function; and training the artificial neural network as a function of a second loss function, wherein the first loss function is calculated as a function of an output of the artificial neural network, and wherein the second loss function is configured such that the output of the artificial neural network is essentially normalized.
  • 2. The method according to claim 1, wherein: another artificial neural network is configured to approximate a softmax function, and the other artificial neural network is applied to the output of the artificial neural network to calculate the second loss function.
  • 3. The method according to claim 1, further comprising: calculating the second loss function by adding up the output of the artificial neural network along at least one dimension.
  • 4. The method according to claim 1, wherein the second loss function is further configured such that the output of the artificial neural network adds up to 1.
  • 5. The method according to claim 1, wherein: another artificial neural network is configured to approximate a softmax function, and the first loss function is calculated by applying the other artificial neural network to the output of the artificial neural network.
  • 6. The method according to claim 2, wherein: the softmax function is applied to the output of the artificial neural network to compute the first loss function, and the second loss function is further configured such that the output of the artificial neural network approximates an output of the softmax function.
  • 7. An artificial neural network for classification of sensor data, wherein training the artificial neural network comprises: training the artificial neural network as a function of a first loss function; and training the artificial neural network as a function of a second loss function, wherein the first loss function is calculated as a function of an output of the artificial neural network, and wherein the second loss function is configured such that the output of the artificial neural network is essentially normalized.
  • 8. The method according to claim 1, wherein a computer program is configured to execute the method.
  • 9. The method according to claim 8, wherein the computer program is stored on a non-transitory machine-readable storage medium.
  • 10. The method according to claim 1, wherein an apparatus is configured to carry out the method.
Priority Claims (1)
  • Number: 10 2020 215 945.9; Date: Dec 2020; Country: DE; Kind: national

PCT Information
  • Filing Document: PCT/EP2021/085951; Filing Date: 12/15/2021; Country: WO