This application claims the benefit of EP18171788.5, filed on May 11, 2018, which is hereby incorporated by reference in its entirety.
Embodiments describe a method of creating an image chain.
In imaging techniques such as X-ray imaging, initial or “raw” data may be subject to several processing steps to obtain an image that may be presented to a user, for example for diagnostic purposes. The initial processing steps may be referred to collectively as “image pre-processing”, since the steps are necessary to obtain an image that may be viewed by a user. A sequence of image processing steps may be performed. The sequence or chain of steps or “method blocks” may be referred to as the “imaging chain” or “image chain”. An image chain may include several method blocks, for example Laplace pyramid decomposition, shrinkage, and re-composition.
Each method block may involve several linear and/or non-linear operations or functions. Examples of such functions are filtering, padding, edge detection, edge preservation, convolution, wavelet shrinkage, etc. An image chain may include a specific sequence of imaging method blocks, each with a specific sequence of image processing functions.
To operate correctly, an image processing function may be configured or set up using appropriate parameters. Many parameters may be necessary to configure a single processing step, for example cut-off frequencies for a Laplace pyramid, standard deviation of Gaussian or bilateral filters, ε (epsilon) for parameter shrinkage, etc. The results of one image processing function may affect an image processing function further downstream in the image chain, and may also need to be taken into account when choosing the input parameter set for the image processing functions of a method block. However, it may be very difficult to determine the extent to which a specific parameter will affect the overall image quality.
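As a concrete illustration of the kind of conventional chain and parameter set described above, the following Python sketch applies a Laplace-pyramid-style decomposition, a shrinkage step, and a re-composition to a raw image. The structure and the parameter values (number of levels, Gaussian sigma, shrinkage threshold epsilon) are assumptions chosen only to show how many hand-selected settings even a very small chain accumulates; the sketch is not the embodiment described in this application.

```python
# Minimal sketch of a conventional image chain (illustrative only):
# Laplace-pyramid-style decomposition, coefficient shrinkage, re-composition.
import numpy as np
from scipy.ndimage import gaussian_filter

def laplace_decompose(image, levels=3, sigma=2.0):
    """Split an image into band-pass detail levels plus a low-pass residual."""
    bands, current = [], image.astype(float)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)   # cut-off controlled by sigma
        bands.append(current - low)             # band-pass detail
        current = low
    bands.append(current)                       # low-pass residual
    return bands

def shrink(band, eps=0.5):
    """Soft-threshold detail coefficients (wavelet-shrinkage-style de-noising)."""
    return np.sign(band) * np.maximum(np.abs(band) - eps, 0.0)

def recompose(bands):
    """Re-composition: summing all levels restores the (processed) image."""
    return sum(bands)

raw = np.random.rand(256, 256) * 100.0          # stand-in for raw detector data
bands = laplace_decompose(raw, levels=3, sigma=2.0)
bands[:-1] = [shrink(b, eps=0.5) for b in bands[:-1]]
processed = recompose(bands)
```

Even this toy chain exposes three coupled settings (levels, sigma, eps); a full clinical chain multiplies this across many method blocks.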
For these reasons, it is difficult and time-consuming to identify a satisfactory input parameter set for each method block of an image chain. A customer of an imaging system may expect this step to be taken care of by the manufacturer. However, it is difficult for the manufacturer of an imaging system to configure the image chain in such a way that all customers will be equally satisfied with the results. One possible approach may be to allow the customer to take care of parameter selection to some extent, for example using a multiple-choice approach, but this might require the customer to obtain an in-depth understanding of the entire image chain. It may be expected that such an additional level of effort would be unacceptable to most customers.
The scope of the present invention is defined solely by the appended claims and is not affected to any degree by the statements within this summary. The present embodiments may obviate one or more of the drawbacks or limitations in the related art.
Embodiments provide an image processing method.
In an embodiment, a method of creating an image chain includes the steps of identifying a set of image processing functions required by the image chain; replacing each image processing function by a corresponding neural network; determining a sequence of execution of the neural networks; and applying backpropagation to adjust the performance of the neural networks.
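A minimal sketch of these four steps, assuming a PyTorch-style framework, is shown below. The module sizes, the execution order, and the use of a mean-squared-error loss against a reference image are assumptions made for illustration; they are not the specific networks of any particular embodiment.

```python
# Sketch (under the assumptions stated above): each image processing function
# is replaced by a small neural network, the networks are chained in a fixed
# execution order, and backpropagation adjusts all of them jointly.
import torch
import torch.nn as nn

def make_block():
    """Stand-in for one image processing function realized as a small network."""
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(8, 1, kernel_size=3, padding=1),
    )

# Steps 1 and 2: one network per identified image processing function.
# Step 3: the execution sequence is the order of the modules in the chain.
image_chain = nn.Sequential(make_block(), make_block(), make_block())

optimizer = torch.optim.Adam(image_chain.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

raw = torch.rand(1, 1, 256, 256)        # stand-in for a raw input image
reference = torch.rand(1, 1, 256, 256)  # stand-in for a desired processing result

# Step 4: backpropagation adjusts the behavior of all networks jointly.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(image_chain(raw), reference)
    loss.backward()
    optimizer.step()
```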
Instead of a sequence of functions, each of which operates independently and separately, the performance or behavior of each neural network in the image chain is adjusted according to the other neural networks. This mutual adjustment results in significantly fewer parameters being required for the image chain. A conventional image chain may require several hundred parameters to be specified for its sequence of independent and separate image processing functions.
In an embodiment, the image processing method includes the steps of creating such an image chain, and passing an image through the image chain to obtain an image processing result. The image quality obtained by the image processing method compares favorably with the image quality obtained by a conventional image chain, but may be achieved with significantly less effort.
In an embodiment, the imaging system includes an input for obtaining an image generated by an imaging device; a processor realized to carry out the image processing method; and a display unit for presenting the image processing results to a user.
Units or modules of the imaging system mentioned above, for example the image chain, may be completely or partially realized as software modules running on a processor. A realization largely in the form of software modules may have the advantage that an image processing application already installed on an existing imaging system may be updated, with relatively little effort, to implement an image chain in the image processing method. Embodiments also provide a computer program product with a computer program that is directly loadable into the memory of a control unit of an imaging system, and that includes program units to perform the steps of the method when the program is executed by the control unit. In addition to the computer program, such a computer program product may also include further parts such as documentation and/or additional components, as well as hardware components such as a hardware key (e.g., a dongle) to facilitate access to the software. A computer-readable medium such as a memory stick, a hard disk, or another transportable or permanently installed carrier may serve to transport and/or to store the executable parts of the computer program product so that the parts may be read by a processor unit of an imaging system. A processor unit may include one or more microprocessors or their equivalents.
As indicated above, the terms “imaging chain” and “image chain” are synonymous and may be used interchangeably in the following. The method may be used to create an image chain for any kind of image processing task. The image chain may be used in a medical imaging system. The imaging device used to generate the initial or raw data may be, for example, an X-ray imaging device.
An image chain may be regarded as a sequence of basic image processing blocks or “method blocks”. Each method block serves to complete a certain image processing task such as Laplace decomposition, bilateral filtering, Gaussian filtering, shrinkage, Fourier transformation, or median/quantile filtering. In a conventional image chain, each method block may perform a task using a sequence of image processing functions, and the same image processing function may be used by one or more method blocks. A conventional image chain may therefore be associated with a pool or set of image processing functions.
In an embodiment, the step of identifying the set of image processing functions includes identifying method blocks of the image chain and identifying any image processing functions implemented in each method block of the image chain.
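One simple way to represent the outcome of this identification step is sketched below as a mapping from method blocks to the image processing functions each block implements; the block and function names are purely illustrative.

```python
# Hypothetical result of identifying method blocks and their functions.
image_chain_blocks = {
    "laplace_decomposition": ["gaussian_filter", "downsample", "subtract"],
    "shrinkage": ["wavelet_shrinkage"],
    "recomposition": ["upsample", "add"],
}

# The set of image processing functions required by the image chain is the
# union over all method blocks (the same function may appear in several blocks).
required_functions = {f for funcs in image_chain_blocks.values() for f in funcs}
```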
An image processing function may be a linear operation such as a Gauss filter operation, a Vesselness filter operation to enhance threadlike structures, a wavelet shrinkage operation to perform smoothing or de-noising, etc. A linear operation may be modeled in a mathematically exact manner by a neural network.
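For example, a Gauss filter is a linear convolution and can therefore be written directly as a fixed-weight convolution layer. The sketch below, assuming PyTorch, builds such a layer from a truncated Gaussian kernel; exactness holds up to the kernel truncation and the chosen boundary handling, and the sigma and radius values are illustrative.

```python
# Sketch: a Gauss filter expressed as a convolution layer with fixed weights.
import torch
import torch.nn as nn

def gaussian_conv_layer(sigma=1.5, radius=4):
    """Build a 2D convolution layer whose kernel is a (truncated) Gaussian."""
    coords = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g1d = torch.exp(-coords**2 / (2 * sigma**2))
    kernel = torch.outer(g1d, g1d)
    kernel = kernel / kernel.sum()          # normalize so the mean is preserved
    layer = nn.Conv2d(1, 1, kernel_size=2 * radius + 1,
                      padding=radius, bias=False)
    with torch.no_grad():
        layer.weight.copy_(kernel.view(1, 1, 2 * radius + 1, 2 * radius + 1))
    return layer

blurred = gaussian_conv_layer(sigma=1.5)(torch.rand(1, 1, 128, 128))
```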
An image processing function may be a non-linear operation such as an erosion filter operation, a dilation filter operation, a median filter operation, etc. A sub-gradient descent technique may be used to identify a neural network for such an image processing function.
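As a sketch of why such operations remain trainable with sub-gradient methods: grey-scale dilation with a flat structuring element is a sliding-window maximum, and erosion is the corresponding minimum, both of which are piecewise linear and therefore admit sub-gradients. The pooling-based formulation below, assuming PyTorch, is one illustrative way to express them; the window size is an assumption.

```python
# Sketch: flat-structuring-element dilation/erosion as (sub-differentiable)
# max-/min-pooling with stride 1.
import torch
import torch.nn.functional as F

def dilate(image, size=3):
    """Grey-scale dilation = sliding-window maximum."""
    return F.max_pool2d(image, kernel_size=size, stride=1, padding=size // 2)

def erode(image, size=3):
    """Grey-scale erosion = sliding-window minimum = -max(-x)."""
    return -F.max_pool2d(-image, kernel_size=size, stride=1, padding=size // 2)

x = torch.rand(1, 1, 64, 64, requires_grad=True)
erode(x, 3).sum().backward()   # sub-gradients flow through the min/max operations
```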
Not all non-linear functions may be represented in a mathematically exact manner by a neural network. In an embodiment, the method of creating an image chain includes a step of applying a universal approximation theorem to obtain a neural network for the non-linear operation.
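A minimal sketch of this idea: a small multi-layer perceptron is trained to approximate a non-linear point-wise operation, here an illustrative soft-threshold function of the kind used for shrinkage. The network size, training schedule, and target function are assumptions chosen only to show the approximation step.

```python
# Sketch: approximating a non-linear (point-wise) operation with a small MLP,
# in the spirit of the universal approximation theorem.
import torch
import torch.nn as nn

def soft_threshold(x, eps=0.5):
    """Illustrative non-linear target operation (shrinkage-style thresholding)."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - eps, min=0.0)

mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(mlp.parameters(), lr=1e-2)

for _ in range(2000):
    x = torch.rand(256, 1) * 4 - 2                 # samples in [-2, 2]
    loss = nn.functional.mse_loss(mlp(x), soft_threshold(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```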
The set of image processing functions or operations required to construct the conventional image chain is replaced by an equivalent set of neural networks.
As indicated above, significantly fewer parameters are required to configure the image chain, compared to a conventional image chain. In an embodiment, the method includes a step of identifying an initial parameter set for the neural networks of the image chain. The initial parameter set may include a set of parameters chosen as an “intelligent guess”, without any significant effort to choose the set of parameters with the aim of obtaining an optimal image processing result. Instead, in an embodiment, the parameters are fine-tuned or adjusted in the backpropagation step.
One way of creating the image chain includes replacing each image processing function by its neural network equivalent, as explained above. This may have the advantage of requiring less effort in choosing parameters for the image chain method blocks. However, the image chain may be optimized even further by making use of a property of neural networks, namely that a cascade or chain of many neural networks may be “collapsed” into a much shorter chain that approximates the behavior of the original chain. In an embodiment, the method of creating an image chain includes a step of re-arranging the order of image processing functions to obtain an image chain approximation. This results in even fewer parameters and fewer computation steps to arrive at comparable results. In an embodiment, the image chain approximation includes at most a single instance of each neural network of the set of neural networks originally identified for the image chain.
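For the linear parts of the chain, this collapsing is exact: chaining two convolutions is itself a single convolution whose kernel is the convolution of the two kernels. The sketch below, assuming SciPy, collapses two illustrative smoothing filters into one kernel; the kernel values are assumptions, and the results agree away from the image border, where boundary handling differs.

```python
# Sketch: two successive linear filters collapse into a single equivalent filter,
# because convolution is associative: k2 * (k1 * image) == (k2 * k1) * image.
import numpy as np
from scipy.signal import convolve2d

k1 = np.outer([1, 2, 1], [1, 2, 1]) / 16.0          # first smoothing kernel
k2 = np.outer([1, 4, 1], [1, 4, 1]) / 36.0          # second smoothing kernel
image = np.random.rand(64, 64)

two_steps = convolve2d(convolve2d(image, k1, mode="same"), k2, mode="same")
collapsed_kernel = convolve2d(k1, k2)                # single 5x5 kernel
one_step = convolve2d(image, collapsed_kernel, mode="same")

# Away from the image border the two results agree.
assert np.allclose(two_steps[4:-4, 4:-4], one_step[4:-4, 4:-4])
```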
As explained above, an initial parameter set may be identified for the image chain. The initial parameter set may be adjusted after performing image processing on one or more test images, for example by comparing a result with an expected or desired result and adjusting the parameter set accordingly. In an embodiment, a calibration step may be carried out before using the image chain in actual real-life imaging procedures. In the calibration step, an image (for example, any image previously obtained by that imaging system or a comparable imaging system) is passed through the image chain multiple times, using a different parameter set each time, to obtain a plurality of image processing results. The plurality of image processing results may be shown to a user, who then selects the best image (the user effectively takes on the role of the “loss function”). Subsequently, backpropagation is performed on the basis of the selected, i.e., optimally processed, image to identify an optimal set of parameters for the image chain. A calibration sequence might involve N first passes using N variants of a “rough” set of parameters, and the process may be repeated for successive adjustments of the parameter set. For example, a calibration sequence may include four first passes using four variants of a “rough” set of parameters. Of the four candidate result images, the user selects the best one, and the parameter set is adjusted accordingly. In a subsequent step, four second passes are made, using four variants of that updated parameter set. This may be repeated a number of times, resulting in convergence to an optimal parameter set. An advantage of this calibration step is that it is simple and intuitive from the user's point of view: the user may easily identify which image is “best” without having to understand the significance of the parameters actually being used by the image chain.
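A minimal sketch of one such calibration round, assuming a PyTorch-style chain like the one sketched earlier, is shown below. The number of variants, the perturbation scheme, and the selection mechanism (a console prompt standing in for the user viewing the candidate images) are all assumptions made for illustration.

```python
# Sketch of one calibration round: N parameter variants, the user picks the
# best result (acting as the loss function), and backpropagation pulls the
# current parameters toward the selected variant.
import copy
import torch
import torch.nn as nn

def calibration_round(image_chain, image, n_variants=4, noise=0.05):
    # Generate N candidate results from N perturbed copies of the parameters.
    variants = []
    for _ in range(n_variants):
        candidate = copy.deepcopy(image_chain)
        with torch.no_grad():
            for p in candidate.parameters():
                p.add_(noise * torch.randn_like(p))
        variants.append(candidate)
    results = [v(image) for v in variants]

    # Placeholder for the user's choice of the best-looking image.
    best = int(input(f"Index of best image (0..{n_variants - 1}): "))

    # Backpropagation toward the selected result.
    optimizer = torch.optim.Adam(image_chain.parameters(), lr=1e-3)
    target = results[best].detach()
    for _ in range(50):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(image_chain(image), target)
        loss.backward()
        optimizer.step()
```

Repeating such rounds, optionally with a shrinking perturbation, corresponds to the successive four-pass adjustments described above.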
In the figures, like numbers refer to like objects throughout. Objects in the diagrams are not necessarily drawn to scale.
After completion of the image chain 10, input parameters to the method blocks MI, MII, MIII are adjusted by applying a back-propagation algorithm as indicated by the arrow BP.
As explained above, an initial training step may be performed by a user to optimize the results. Initial parameters P_initial for an image chain 10 may have been set automatically or may have been set by the user.
The image chain 10 of
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope. For example, although the method has been described in the context of processing 2D X-ray images, the method may equally be applied to the processing of 3D, 2D-plus-time and also 3D-plus-time images. It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent, and that such new combinations are to be understood as forming a part of the present specification.
While the present invention has been described above by reference to various embodiments, it may be understood that many changes and modifications may be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.
Number | Date | Country | Kind
---|---|---|---
18171788.5 | May 2018 | EP | regional