The present application claims the benefit under 35 U.S.C. § 119 of European Patent Application No. EP 19186797.7 filed on Jul. 17, 2019, which is expressly incorporated herein by reference in its entirety.
The present disclosure relates to devices and methods for operating a neural network.
Neural networks are becoming more and more widely used to classify images into a pre-defined number of classes. Understanding how a neural network has reached its results can greatly help in determining how trustworthy the classification is, but is notoriously difficult.
The document “Deep inside convolutional networks: Visualising image classification models and saliency maps” by Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman, in International Conference on Learning Representations (ICLR), 2013, describes interpreting a network decision making process by generating low-level visual explanations. Such and similar approaches mostly focus on the task of image classification and can be divided into two categories: gradient-based and perturbation-based methods.
Gradient-based methods compute a saliency map that visualizes the sensitivity of each image pixel to the specific class prediction, which is obtained by backpropagating the gradient for this prediction with respect to the image and estimating how moving along the gradient influences the class output. Gradient-based methods mostly rely on heuristics for backpropagation and may provide explanations which are not faithful to the model or data. Perturbation-based methods evaluate the class prediction change with respect to a perturbed image, e.g., for which specific regions of the image are replaced with mean image values or removed by applying blur or Gaussian noise.
Efficient approaches for saliency determination, which are not limited to explanations of image classification networks but are also applicable to other neural networks such as dense prediction networks, are desirable.
An example method and device in accordance with the present invention allow providing spatially coherent explanations for neural networks such as dense prediction (or semantic segmentation) networks. For a semantic segmentation neural network, spatial and semantic correlations in the training data picked up by the neural network may thus be discovered and, for example, taken into account in further processing, such as vehicle control for automated driving.
Further examples are described in the following.
An example method for operating a neural network in accordance with the present invention, performed by one or more processors, may include determining, for neural network input sensor data, neural network output data using the neural network, wherein the neural network input sensor data includes a multiplicity of input data points, each input data point being assigned one or more input data point values, wherein the neural network output data includes a multiplicity of output data points, each output data point being assigned one or more output data point values, and wherein each output data point is associated with one or more input data points, selecting a portion of output data points out of the multiplicity of output data points to form a region of interest, wherein the region of interest includes a plurality of output data points, and determining, for each of at least some output data points outside the region of interest, a contribution value representing a contribution of the one or more input data points associated with the output data point to the neural network's determination of the output data point values assigned to the output data points in the region of interest. The method mentioned in this paragraph provides a first example.
Each output data point may be associated with one or more input data points by a mapping of input data point coordinates to output data point coordinates. The features mentioned in this paragraph in combination with the first example provide a second example.
The input data points may be structured as an input array and the output data points may be structured as an output array and each output data point is associated with one or more input data points by a mapping of positions in the input array to positions in the output array. The features mentioned in this paragraph in combination with any one of the first example to second example provide a third example.
The input data points may be structured as an input image and the output data points may be structured as an output image and each output data point may be associated with one or more input data points by a mapping of pixel positions in the input image to pixel positions in the output image. The features mentioned in this paragraph in combination with any one of the first example to third example provide a fourth example.
The method may include that the output data points with contribution values and the output data points of the region of interest are presented to a user. The method may further include that the relative positions of the output data points with contribution values and the output data points of the region of interest are compared. Both procedures have the advantage that the output of the neural network can be verified based on the contribution values.
The method may further include determining the contribution value of an output data point based on a measure of the effect that a perturbation of input data point values of an input data point associated with the output data point has on the one or more output data point values of the output data point. The features mentioned in this paragraph in combination with any one of the first example to fourth example provide a fifth example.
The contribution values may be determined based on a trade-off between a total measure of the contribution values and a preservation loss which occurs in the determination of the output data point values assigned to output data points in the region of interest when information in the input data values is disregarded based on the contribution values. The features mentioned in this paragraph in combination with any one of the first example to fifth example provide a sixth example.
The portion of output data points selected to form a region of interest may be a proper subset of the multiplicity of output data points of the output data. The features mentioned in this paragraph in combination with any one of the first example to sixth example provide a seventh example.
The output data point value of each output data point may specify a data class of the input data point values of the one or more input data points associated with the output data point. The features mentioned in this paragraph in combination with any one of the first example to seventh example provide an eighth example.
The contribution value for an output data point may represent a contribution of the one or more input data points associated with the output data point to the decision of the neural network to set the output data point values of the output data point to specify the data class. The features mentioned in this paragraph in combination with the eighth example provide a ninth example.
The neural network input sensor data may include one or more images. The features mentioned in this paragraph in combination with any one of the first example to ninth example provide a tenth example.
The neural network output data may include a result image. The features mentioned in this paragraph in combination with any one of the first example to tenth example provide an eleventh example.
The region of interest may be an image region in the result image. The features mentioned in this paragraph in combination with the eleventh example provide a twelfth example.
The neural network may be trained for image segmentation wherein the result image represents a semantic segmentation. The features mentioned in this paragraph in combination with any one of the eleventh example to twelfth example provide a thirteenth example.
The region of interest may correspond to one or more segments of the semantic segmentation. The features mentioned in this paragraph in combination with the thirteenth example provide a fourteenth example.
The result image may be a depth image or a motion image. The features mentioned in this paragraph in combination with any one of the eleventh example to twelfth example provide a fifteenth example.
The method may include generating a saliency map representing the contribution values. The features mentioned in this paragraph in combination with any one of the first example to fifteenth example provide a sixteenth example.
The contribution values may be the pixel values of the saliency map. The features mentioned in this paragraph in combination with the sixteenth example provide a seventeenth example.
Each input data point may be associated with exactly one output data point. The features mentioned in this paragraph in combination with any one of the first example to seventeenth example provide an eighteenth example.
The portion of output data points may be selected such that the output data point values of the output data points lie within a predetermined range. The features mentioned in this paragraph in combination with any one of the first example to eighteenth example provide a nineteenth example.
The method may further include controlling training of the neural network based on the contribution values. The features mentioned in this paragraph in combination with any one of the first example to nineteenth example provide a twentieth example.
The method may further include controlling an actuator based on the contribution values. The features mentioned in this paragraph in combination with any one of the first example to twentieth example provide a twenty-first example.
The method may further include evaluating the performance of the neural network based on the contribution values. The features mentioned in this paragraph in combination with any one of the first example to twenty-first example provide a twenty-second example.
A device in accordance with the present invention may be configured to perform a method of any one of the first example to twenty-second example. The features mentioned in this paragraph provide a twenty-third example.
A vehicle in accordance with the present invention may include at least one image sensor configured to provide digital image data and a driver assistance system including a neural network operated according to any one of the first example to twenty-second example, wherein the neural network is configured to classify the digital image data and wherein the driver assistance system is configured to control the vehicle based on the classified digital image data and the contribution values. The features mentioned in this paragraph provide a twenty-fourth example.
A computer program may have program instructions that are configured, when executed by one or more processors, to make the one or more processors perform the method according to any one of the first example to twenty-second example.
The computer program may be stored in a machine-readable storage medium.
In the figures, like reference characters generally refer to the same parts throughout the different views. The figures are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various aspects are described with reference to the figures.
The following detailed description refers to the figures that show, by way of illustration, specific details and aspects of this disclosure in which the present invention may be practiced.
Other aspects may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The various aspects of this disclosure are not necessarily mutually exclusive, as some aspects of this disclosure can be combined with one or more other aspects of this disclosure to form new aspects.
In the following, various examples will be described in more detail.
In the example of FIG. 1, a vehicle 101, e.g. a car, is provided with a vehicle controller 102.
The vehicle controller 102 includes data processing components, e.g., a processor (e.g. a CPU (central processing unit)) 103 and a memory 104 for storing control software according to which the vehicle controller 102 operates and data on which the processor 103 operates.
For example, the stored control software includes instructions that, when executed by the processor 103, make the processor implement a neural network 107.
The data stored in memory 104 can include image data from one or more image sources 105, for example acquired by one or more cameras. An image can include a collection of data representing one or more objects or patterns. The one or more image sources 105 may for example output greyscale or colour pictures of the vehicle's environment. The one or more image sources 105 may be responsive to visible light or non-visible light such as infrared or ultraviolet light, ultrasonic or radar waves, or other electromagnetic or sonic signals.
The vehicle controller 102 may determine the presence of objects, e.g. fixed objects, such as traffic signs or road markings, and/or moving objects, such as pedestrians, animals and other vehicles, based on the image data.
The vehicle 101 may then be controlled by the vehicle controller 102 in accordance with the results of the object determination. For example, the vehicle controller 102 may control an actuator 106 to control the vehicle's speed, e.g., to actuate the brakes of the vehicle.
The control may be performed on the basis of an object classification performed by the neural network 107.
In this example, the neural network 200 includes one input layer 201, two layers 202a and 202b and one output layer 203.
It should be noted that the neural network 200 is a simplified example of an actual deep neural network, e.g., a deep feed forward neural network, used for classification purposes, which may include many more processing nodes and layers.
The input data corresponds to the input layer 201, and can generally be seen as a multi-dimensional array of values, e.g. an input image can be seen as a 2-dimensional array of values corresponding to the pixel values of the image.
The inputs from the input layer 201 are then connected to processing nodes 204. A typical node 204 multiplies each input with a weight and sums up the weighted values. Additionally, a node 204 may add a bias to the sum.
The nodes 204 are typically each followed by a non-linear activation function 205, e.g., a Rectified Linear Unit (ReLU), ƒ(x)=max(x, 0), or a sigmoid function, σ(x)=1/(1+e^(−x)).
The resulting value is then output to the next layer.
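For illustration, the computation performed by such a node followed by a ReLU activation may be sketched as follows (a minimal Python/NumPy sketch; the concrete weights, bias and input values are chosen purely for illustration):

```python
import numpy as np

# Each node multiplies its inputs with weights, sums up the weighted
# values, adds a bias, and applies a non-linear activation (here ReLU).
def relu(x):
    return np.maximum(x, 0.0)

def dense_layer(inputs, weights, bias):
    # inputs: (n_in,), weights: (n_out, n_in), bias: (n_out,)
    return relu(weights @ inputs + bias)

inputs = np.array([0.5, -1.0, 2.0])
weights = np.array([[0.1, 0.2, -0.3],
                    [0.4, -0.5, 0.6]])
bias = np.array([0.05, -0.05])
print(dense_layer(inputs, weights, bias))  # values passed to the next layer
```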
Layers 202a and 202b may be hidden layers, e.g., fully connected layers, as shown in FIG. 2.
The layers may also be (or be supplemented by) non-fully connected layers, e.g., convolutional or pooling layers in case of a convolutional neural network, CNN (typically followed by one or more hidden layers).
In a convolutional layer, the inputs are modified by convolutional filters. These filters operate on a subset of the input data, and may help to extract features of the input data, e.g., a particular shape or pattern. A filter implemented by the convolutional layer causes several corresponding nodes 204 of the convolutional layer to receive inputs from only a portion of the previous layer.
A pooling layer can be seen as a form of non-linear down-sampling, reducing the dimensions of the data by combining the outputs of several nodes into a single node in the next layer, e.g., by taking the maximum value of the outputs.
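For illustration, such a max-pooling operation may be sketched as follows (a minimal NumPy sketch assuming a 2×2 pooling window and an input with even dimensions):

```python
import numpy as np

# 2x2 max pooling: the outputs of a 2x2 neighbourhood of nodes are
# combined into a single node of the next layer by taking their maximum.
def max_pool_2x2(x):
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1., 2., 0., 1.],
              [3., 4., 1., 0.],
              [0., 1., 5., 2.],
              [1., 0., 2., 3.]])
print(max_pool_2x2(x))  # [[4. 1.]
                        #  [1. 5.]]
```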
In a neural network designed for classification such as neural network 200, the output layer 203 receives values from at least one of the preceding layers, e.g. from a hidden layer 202b. These values may then be turned into probabilities by the output layer, e.g. by applying the softmax function

softmax(v)i = e^(vi) / (Σ_{j=1}^{K} e^(vj)),

where vi, i=1, . . . , K, are the values received by the output layer, or the sigmoid function on them. The highest probability value contained in an output vector corresponds to a class prediction.
In the following, class predictions may also be referred to as predictions, predicted class labels or predicted classification labels.
An output vector of output layer 203 is thus a probability vector indicating, for each of the pre-defined classes, the probability that an image corresponds to the pre-defined class, e.g., that it shows a predefined object. For example, assuming there are 10 pre-defined classes (0, 1, . . . , 9) for the input image of a digit, the output vector is a vector consisting of 10 elements where each element corresponds to the probability for a digit. The class prediction will be the digit corresponding to the highest probability in the output vector. The output layer 203 may output the entire vector consisting of probability values, or only output the class predictions.
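For illustration, the mapping from output-layer values to a class prediction for the digit example may be sketched as follows (a minimal NumPy sketch; the received values are random stand-ins):

```python
import numpy as np

# The output layer turns the K received values into probabilities via
# softmax; the class prediction is the class with the highest probability.
def softmax(v):
    e = np.exp(v - v.max())  # subtract the maximum for numerical stability
    return e / e.sum()

v = np.random.randn(10)                # values for the 10 digit classes
probabilities = softmax(v)             # probability vector, sums to 1
class_prediction = int(probabilities.argmax())
print(probabilities, class_prediction)
```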
For being able to classify an image, the neural network 200 is first trained accordingly. In the case of automated driving, this may be done based on a collection of traffic scenes such as the Cityscapes dataset.
It should be noted that in the above example, one image is classified as a whole, e.g., an image is classified as showing a pedestrian. However, e.g., in automated driving scenarios, an image (e.g., taken by camera 105) typically includes a plurality of objects. Therefore, in such an application, dense prediction (or semantic segmentation) may be used, which can be seen to classify each pixel of an image. For example, certain pixels may be classified to show a pedestrian while others are classified to show another vehicle. Such a dense prediction may be performed analogously using a neural network as explained above for image classification, with the difference that the output includes a class prediction per pixel of an image instead of a class prediction per image. The output for an image x may thus be another image which indicates the class prediction for each pixel (e.g., coded by colour, for example pedestrians green, vehicles red, background grey, etc.), i.e., ƒ(x) is itself an image with a plurality of channels corresponding to the possible class predictions. For semantic segmentation, a CNN may be used.
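For illustration, the step from an output image ƒ(x) with one channel per class to a per-pixel class prediction may be sketched as follows (a minimal NumPy sketch with random stand-in scores and three hypothetical classes):

```python
import numpy as np

# Dense prediction: the output f(x) has one channel per class; the
# per-pixel class prediction is the channel with the highest score at
# each pixel position.
H, W, C = 4, 6, 3                    # toy resolution, 3 classes (e.g.
                                     # 0=background, 1=pedestrian, 2=vehicle)
f_x = np.random.rand(H, W, C)        # stand-in for per-pixel class scores
segmentation = f_x.argmax(axis=-1)   # (H, W) map of class predictions
print(segmentation)
```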
In many real-world scenarios, the presence of an object, its location and appearance are highly correlated with the contextual information surrounding this object, such as the presence of other nearby objects or more global scene semantics. For example, in the case of an urban street scene, a cyclist is more likely to co-occur with a bicycle and a car is more likely to appear on a road below sky and buildings. These semantic correlations are inherently present in real-world data. A data-driven model, such as a deep neural network 200, is prone to exploit these statistical biases in order to improve its prediction performance. An effective and safe utilization of deep learning models for real-world applications, e.g. autonomous driving, requires a good understanding of these contextual biases inherent in the data and the extent to which a learned model has incorporated them into its decision making process. Otherwise, there is a risk that an object is wrongly classified, e.g., because it occurs in an unusual position.
A saliency method may explain predictions of a trained model (e.g., a trained neural network) by highlighting parts of the input that presumably have a high relevance for the model's predictions, i.e., by identifying the image pixels that contribute the most to the network prediction.
According to various embodiments, an approach for saliency determination, i.e., a saliency method, is provided which can be seen as an extension of a saliency method for image classification towards (pixel-level) dense prediction tasks and which allows generating spatially coherent explanations by exploiting spatial information in the dense predictions (and spatially differentiating between prediction explanations).
In the following examples, this approach is referred to as grid saliency, which is a perturbation-based saliency method, based on a formulation as an optimization problem of identifying the minimum unperturbed area of the image needed to retain the network predictions inside a target object region. As grid saliency allows differentiating between objects and their associated context areas in the saliency map, it allows producing context explanations for semantic segmentation networks, discovering which contextual information most influences the class predictions inside a target object area.
Let ƒ: I→O denote the prediction function, e.g. implemented by a deep neural network 200, which maps a grid input space I=R^(HI×WI×CI) (e.g., images with height HI, width WI and CI channels) to a grid output space O=R^(HO×WO×CO) (e.g., dense per-pixel predictions with one channel per class).
In the following, for ease of explanation, only images are considered as input and per-pixel dense predictions of the network are considered as output, i.e., the input is an image, x∈I and the output is a per-pixel dense prediction ƒ(x).
Furthermore, it is for simplicity assumed that the input and output spatial dimensions are the same.
According to an embodiment, the grid saliency determination for the input image x can be seen to be based on finding the smallest saliency (map) M∈[0,1]^(HO×WO) that contains the relevant (salient) parts of the input image x needed to retain the network prediction in a requested region of interest, given by a binary request mask R∈{0,1}^(HO×WO).
According to various embodiments of the present invention, grid saliency is based on a perturbation saliency approach. This means that a salient image region most responsible for a classifier decision (in a request area) is determined by replacing parts of the image with uninformative pixel values, i.e., perturbing the image, and evaluating the corresponding class prediction change.
Let p denote a perturbation function that removes information from an image x outside of the saliency M (wherein outside means pixels for which M is not 1). For example, such a perturbation function can be the interpolation between x and a∈I, where a can be a constant colour image, a Gaussian blur of x, or random noise. In this case, p(x, M)=x∘M+a∘(1−M), wherein ‘∘’ denotes the Hadamard (element-wise) product.
It should be noted that in practice the saliency M may be defined on a lower resolution than the input image to avoid adversarial artefacts and may be later upsampled to the input image resolution. In addition, the pixel values of the perturbed image p(x,M) may be clipped to preserve the range of the original image pixel value space.
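For illustration, such a perturbation function, including the upsampling of a low-resolution saliency and the clipping of pixel values, may be sketched as follows (a minimal PyTorch sketch; the tensor layout (batch, channels, height, width) and the bilinear upsampling are assumptions made for illustration):

```python
import torch
import torch.nn.functional as F

# p(x, M) = x ∘ M + a ∘ (1 - M): keep x where M is 1, replace it by the
# uninformative image a where M is 0, and interpolate in between.
def perturb(x, M_low, a):
    # x: (1, C, H, W) input image; a: same shape (e.g. constant colour
    # image); M_low: (1, 1, h, w) saliency at a lower resolution.
    M = F.interpolate(M_low, size=x.shape[-2:], mode='bilinear',
                      align_corners=False)
    p = x * M + a * (1.0 - M)
    # clip to preserve the range of the original pixel value space
    return p.clamp(float(x.min()), float(x.max()))
```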
Based on the above notation, the task of finding the saliency map M for the prediction of class c can be formulated as the optimization

M*(x, c) = argmin_{M} λ∥M∥1 + ∥max(ƒc(x) − ƒc(p(x, M)), 0)∥1,  (1)

where ∥.∥1 denotes the l1 norm and ƒc(x) is the network prediction for class c, i.e. the pixel value of the output image for the class (channel) c.
The first term on the right hand side of equation (1) can be considered as a mask loss that minimizes the salient image area such that the original image is perturbed as much as possible.
The second term serves as a preservation loss that ensures that the network prediction ƒc(p(x,M)) for class c on the perturbed image p(x,M) reaches at least the confidence of the network prediction ƒc(x) on the original unperturbed input image. Thus, the second loss term can be considered as a penalty for not meeting the constraint ƒc(p(x,M))>ƒc(x), hence the use of max(•,0) in equation (1). The parameter λ controls the sparsity of M, i.e. controls how strongly the size of the saliency is penalized. An exemplary value of λ is 0.05, but other values are possible to generate smaller or larger saliencies.
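For illustration, the two loss terms of equation (1) may be sketched as follows (a minimal PyTorch sketch operating on the saliency and on precomputed predictions; the names are chosen for illustration):

```python
import torch

# Mask loss lambda * ||M||_1 plus preservation loss
# ||max(f_c(x) - f_c(p(x, M)), 0)||_1 as in equation (1).
def saliency_loss(M_low, f_c_perturbed, f_c_original, lam=0.05):
    mask_loss = lam * M_low.abs().sum()  # l1 norm penalizing saliency size
    preservation_loss = torch.clamp(f_c_original - f_c_perturbed,
                                    min=0).sum()
    return mask_loss + preservation_loss
```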
It is further possible to spatially disentangle explanations given in the saliency map M for the network predictions in the requested area of interest R from the explanations for the other predictions, by restricting the preservation loss to the request mask R in equation (1) according to

M*(x, R, c) = argmin_{M} λ∥M∥1 + ∥R ∘ max(ƒc(x) − ƒc(p(x, M)), 0)∥1.  (2)
In the following, the result of equation (2) is referred to as a grid saliency map.
The grid saliency formulation from equation (2) may be adapted to specifically provide context explanations for the requested area of interest R. Context explanations are of particular interest for semantic segmentation, as context often serves as one of the main cues for semantic segmentation networks.
Thus, according to various embodiments of the present invention, there is a focus on context explanations for semantic labelling predictions and it is assumed that R is the area covering an object of interest in the input image x. To optimize for salient parts of the object context, the object request mask R is integrated into the perturbation function. For the request mask R, the perturbed image p(x,R)∈I contains only the object information inside R and all the context information outside R is removed (e.g., with a constant colour image a).
For optimization (i.e., determination of the saliency) this new perturbed image p(x,R) is used instead of the maximally perturbed image p(x, M=0)=a and the context perturbation function is pcontext(x, R, M)=x∘M+p(x, R)∘(1−M). In other words, the image information within R is retained when the input image is perturbed (therefore, the saliency does not “need” to include the request area R).
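For illustration, the context perturbation function may be sketched on top of the perturbation sketch given above (a sketch under the same assumptions; `perturb` is the hypothetical helper from the earlier code):

```python
# p_context(x, R, M) = x ∘ M + p(x, R) ∘ (1 - M): the object information
# inside the request mask R is always preserved, so the saliency M only
# needs to select context outside of R.
def perturb_context(x, R, M_low, a):
    x_object_only = perturb(x, R, a)   # keep x inside R, remove all context
    return perturb(x, M_low, x_object_only)
```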
The context saliency map for class c and request object R is then given by

M*context(x, R, c) = argmin_{M} λ∥M∥1 + ∥R ∘ max(ƒc(x) − ƒc(pcontext(x, R, M)), 0)∥1.  (3)
This can be seen as optimizing the saliency map to select the minimal context necessary to at least yield the original prediction for class c inside the request mask R.
It should be noted that the context saliency map may be seen as a special case of a grid saliency map since the perturbation takes a special form.
In 301, a neural network is trained for dense prediction, denoted by a function ƒ.
In 302, given the network trained for the dense prediction task ƒ, an input image x and the prediction (map) ƒc(x) of the network for this input image and class c (e.g., output image channel c), a target area R in the prediction map for which the visual explanation is required is selected.
In 303, these components are used to define an optimization problem according to equation (3) which is solved to get a saliency map as a post-hoc explanation for the prediction in the target area R.
The optimization problem can be solved (i.e., the saliency map can be optimized) using various optimization techniques, such as stochastic gradient descent (SGD) with momentum or Adam. For example, for SGD, a momentum of 0.5 and a learning rate of 0.2 for 100 steps may be used and the saliency map may be initialized with 0.5 for each pixel. Another example is an optimization of a coarse 16 by 32 pixel mask using SGD with a learning rate of 1 for 80 steps.
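For illustration, the whole optimization of equation (3) may be sketched as follows (a minimal PyTorch sketch combining the perturbation sketches above with the exemplary hyperparameters; in particular, the interface `net(x)` returning per-pixel class probabilities of shape (1, num_classes, H, W) and a request mask R of shape (1, 1, H, W) are assumptions):

```python
import torch

def grid_saliency(net, x, R, a, c, lam=0.05, steps=100, lr=0.2,
                  momentum=0.5):
    # coarse 16x32 saliency, initialized with 0.5 for each pixel
    M_low = torch.full((1, 1, 16, 32), 0.5, requires_grad=True)
    target = (net(x)[:, c] * R).detach()   # original prediction inside R
    opt = torch.optim.SGD([M_low], lr=lr, momentum=momentum)
    for _ in range(steps):
        opt.zero_grad()
        pred = net(perturb_context(x, R, M_low, a))[:, c]
        preservation = torch.clamp(target - pred * R, min=0).sum()
        loss = lam * M_low.abs().sum() + preservation  # equation (3)
        loss.backward()
        opt.step()
        with torch.no_grad():
            M_low.clamp_(0.0, 1.0)         # keep the saliency in [0, 1]
    return M_low.detach()
```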
In 304, the resulting saliency map is used for various purposes, e.g., as a visual explanation for the network predictions, for debugging errors in the trained network, and for determining the reliability of a classification.
For example, a controller 102 may determine the reliability of a classification based on the saliency map and may further process a class prediction based on the saliency map. For example, a class prediction may be accepted or discarded for further processing (e.g., vehicle control) depending on the determined context. For example, the controller 102 may discard a classification if the context of the classification does not make sense, e.g., when the determined context does not include a region of an image which can be expected to be relevant as explained in the following.
As can be seen, grid saliency can be used to contextually explain correct predictions, as illustrated in the second picture 402 of FIG. 4.
This is a case where the controller 102 may discard the classification: Since the bike is not salient for the classification of the rider, but is salient for the rider's legs, the controller may suspect that the bike has been ignored by mistake for the classification of the rider as pedestrian and may ignore this classification.
Grid saliency can for example be used to enhance a trained model (e.g., any dense prediction network) by detecting biases picked up by the trained network via the obtained grid saliency maps. Obtained grid saliency maps can be used for anomaly detection by checking inconsistencies across the saliency explanations for the same semantic object class and identifying the cause of the anomaly by the obtained saliency map. More generally, it can be used to explain any unexpected model behaviour, e.g., in case of erroneous predictions for corner cases, grid saliencies can be used to understand which part of the image (any data sample) is “an outlier” (abnormal) and contributes to the failure.
Grid saliency can in particular be used in an on-line diagnosis tool, e.g. in a controller such as vehicle controller 102.
It should be noted that while in the above examples the grid saliency is determined via perturbation it may also be determined based on other approaches such as in a gradient-based manner as explained in the following.
Let G(x, c)=∂ƒc(x)/∂x∈R^(HI×WI×CI) denote the gradient of the network prediction for class c with respect to the input x. A vanilla gradient (VG) saliency map may be obtained directly as M^VG(x, c)=|G(x, c)|, while integrated gradients (IG) and SmoothGrad (SG) saliency maps may be computed as

M^IG(x, c) = |(x/n) ∘ Σ_{k=1}^{n} G((k/n)·x, c)| and M^SG(x, c) = (1/n) Σ_{k=1}^{n} G(x + N(0, σ²), c),  (4)

respectively, wherein n is the number of approximation steps for IG or the number of samples for SG, and N(0, σ²) represents Gaussian noise with standard deviation σ.
Similarly to the perturbation-based approach, explanations given in the saliency M for the network predictions in the request area R are spatially disentangled from other predictions. For a given input x and a binary request mask R, the normalized network prediction score for class c in the request area R is denoted as
S(x, R, c) = ∥R∘ƒc(x)∥1 / ∥R∥1, with S(x, R, c)∈R.
Similarly to G(x, c), for grid saliency, Ggrid(x, R, c):=∂S(x, R, c)/∂x∈R^(HI×WI×CI) is used in place of G(x, c) in the above formulas, yielding grid saliency maps Mgrid^(VG/IG/SG)(x, R, c). The corresponding context saliency maps are obtained by masking out the request area:

Mcontext^(VG/IG/SG)(x, R, c) := (1−R)∘Mgrid^(VG/IG/SG)(x, R, c).  (5)
It should be noted that gradient-based saliency maps are prone to be noisy. Thus, to circumvent this, a spatial mean filter may be used on top of the saliency map with a (WI/WS)×(HI/HS) kernel and stride, where WS×HS is the resolution of the perturbation-based saliency map.
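For illustration, a gradient-based (vanilla gradient) variant of the context saliency may be sketched as follows (a minimal PyTorch sketch; aggregating the gradient over input channels by taking the maximum and the 8×8 mean-filter kernel are assumptions made for illustration):

```python
import torch
import torch.nn.functional as F

def vg_context_saliency(net, x, R, c, kernel=(8, 8)):
    x = x.clone().requires_grad_(True)
    f_c = net(x)[:, c:c + 1]               # (1, 1, H, W) prediction map
    S = (R * f_c).sum() / R.sum()          # normalized score S(x, R, c)
    S.backward()                           # gradient G_grid(x, R, c)
    M_grid = x.grad.abs().max(dim=1, keepdim=True).values
    M_context = (1.0 - R) * M_grid         # mask out request area, eq. (5)
    # spatial mean filter to reduce gradient noise (also downsamples)
    return F.avg_pool2d(M_context, kernel_size=kernel, stride=kernel)
```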
In summary, according to various embodiments of the present invention, a method is provided as illustrated in FIG. 6.
In 601, neural network output data is determined for neural network input sensor data using the neural network, wherein the neural network input sensor data includes a multiplicity of input data points, each input data point being assigned one or more input data point values, wherein the neural network output data includes a multiplicity of output data points, each output data point being assigned one or more output data point values, and wherein each output data point is associated with one or more input data points.
In 602, a portion of output data points is selected out of the multiplicity of output data points to form a region of interest, wherein the region of interest includes a plurality of output data points.
In 603, for each of at least some output data points outside the region of interest, a contribution value is determined representing a contribution of the one or more input data points associated with the output data point to the neural network's determination of the output data point values assigned to the output data points in the region of interest.
According to various embodiments of the present invention, in other words, a saliency of input data values is determined for a certain region of interest in the output. For example, a preservation loss used as a basis to determine the contribution values (which may be seen as a saliency determination result, e.g., may form a (grid) saliency map) is restricted to the region of interest.
For example, context explanations for a semantic segmentation in terms of the contribution values are determined. The contribution values may for example be the values of the saliency map M∈[0,1]^(HO×WO) described above.
The method of FIG. 6 may be performed by one or more processors.
While in the above examples the neural network is a dense prediction network for a vehicle having camera images as input data, the approach of FIG. 6 may also be applied to other neural networks and other kinds of input sensor data.
It should in particular be noted that the approach is not limited to images as input data but can also be applied to any image-like data (e.g. data structured in the form of one or more two-dimensional or also higher-dimensional arrays) such as spectrograms of sounds, radar spectra, ultrasound images, etc. Moreover, raw 1D (e.g. audio) or 3D data (video, or RGBD (Red Green Blue Depth) data) can also be used as input.
A generated saliency determination result, e.g., a grid saliency map, may be used as a basis for the computation of a control signal for controlling a physical system, e.g., a computer-controlled machine such as a robot, a vehicle, a domestic appliance, a power tool, a manufacturing machine, a personal assistant or an access control system, or a system for conveying information such as a surveillance system or a medical (imaging) system, in order to interpret and understand the decision making process of the dense prediction network used in the physical system. It does so by generating low-level visual explanations (e.g., grid saliency maps). In particular, the result image allows identifying the cause of an anomaly by analysing the explanations given by the result image.
According to various embodiments, the input data is in the form of images (or image-like data structures). Thus, according to various embodiments, a method for analysing a neural network, performed by one or more processors, is provided, including determining, for an input image, a result image by means of the neural network, wherein the result image includes a multiplicity of pixels which each have one or more pixel values, selecting a region of interest of the result image, and determining regions of the result image outside the region of interest in dependence on which the neural network has determined the pixel values of the pixels in the region of interest.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein.