The present disclosure relates to a medical ultrasound imaging system, and more particularly to a method for optimizing parameters of the ultrasound imaging system.
Ultrasound imaging has been widely used in clinical practice because it is non-invasive, inexpensive, fast, and so on. However, the physical characteristics of ultrasound imaging give rise to many imaging parameters. To obtain a desired ultrasound image, many parameters need to be adjusted, which is relatively cumbersome for a user.
At present, most ultrasonic diagnostic apparatuses provide presets for configuring various imaging parameters. A preset is a set that contains all adjustable imaging parameters. Commonly used imaging parameters can be roughly divided into three categories: image acquisition parameters, display parameters, and signal processing parameters. The image acquisition parameters mainly control front-end modules, such as a transmission circuit, a receiving circuit, a transducer, and a beamformer. These parameters can control the brightness, contrast, resolution, penetration, and other properties of an image. For example, when the image is relatively dark, a gain parameter may be increased appropriately to brighten the entire image. If the brightness of particular depth ranges on the image needs to be precisely controlled, a plurality of time gain compensation values can be adjusted to control the brightness of the image in different ranges. The display parameters mainly control back-end modules, such as an image processor and a display. These parameters mainly affect the brightness, contrast, zoom ratio, pseudo-color display, and the like of the final displayed image. The signal processing parameters mainly control a signal processor module and an image processor module, and are configured to perform various filtering processing on a beamformed signal. The values of these parameters have a relatively high impact on the image effect.
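As a minimal illustration of the gain parameters described above, the sketch below applies a total gain plus per-depth-zone time gain compensation (TGC) to a B-mode image. The function name, image dimensions, and gain values are invented for illustration and do not come from the disclosure.

```python
import numpy as np

def apply_gain(image, total_gain_db, tgc_db_per_zone):
    """Scale a B-mode image (depth x lateral) by a total gain plus a
    per-depth-zone time gain compensation, both given in decibels."""
    depth = image.shape[0]
    zones = len(tgc_db_per_zone)
    out = image.astype(float)
    for z, g in enumerate(tgc_db_per_zone):
        rows = slice(z * depth // zones, (z + 1) * depth // zones)
        out[rows] *= 10 ** ((total_gain_db + g) / 20)  # dB -> linear amplitude
    return np.clip(out, 0, 255)

img = np.full((100, 64), 40.0)  # uniform, relatively dark image
brighter = apply_gain(img, total_gain_db=6, tgc_db_per_zone=[0, 3, 6, 9])
```

Increasing TGC with depth, as here, is the typical way to offset attenuation so that deeper ranges do not appear darker than shallow ones.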
With the development of technology, deep learning has made progress in other fields. However, in the field of medical ultrasound, deep learning still has shortcomings in medical devices due to the complexity of an ultrasound system, and still cannot adjust the current presets quickly, accurately, and intelligently according to a current image.
The objective of the present disclosure is to overcome the above shortcomings and to provide a deep learning-based method for optimizing parameters of an ultrasound imaging system, which may be applied to ultrasound imaging. In the present disclosure, a mapping relation is established between the quality of an image and the parameters of the ultrasound imaging system by training an artificial neural network, thereby improving the quality of an ultrasound image by optimizing the parameters of the ultrasound imaging system. The technical solution adopted by the present disclosure is as follows.
A deep learning-based method for optimizing parameters of an ultrasound imaging system includes the following steps:
step 1: collecting samples for training a neural network, wherein the samples include ultrasound image samples I and corresponding ultrasound imaging system parameter vector samples P used by an ultrasound imaging system during collection of the ultrasound image samples;
step 2: building a neural network model, and training the neural network by using the samples collected in step 1 until the neural network converges, so as to obtain a trained neural network system onn; and
step 3: inputting an original ultrasound imaging system parameter vector p or an original ultrasound image into the neural network system onn trained in step 2, wherein the parameter obtained from the output end of onn is an optimized ultrasound imaging system parameter vector ep = onn(p).
The present disclosure has the advantages that the present disclosure optimizes the ultrasound imaging system parameters by means of deep learning, and establishes the mapping relation between the quality of the image and the ultrasound imaging system parameters, thereby achieving the objective of improving the quality of the image by optimizing the ultrasound imaging system parameters.
The present disclosure is further described below in combination with specific drawings and embodiments.
The system block diagram of the technical solution of the present disclosure is as shown in
A deep learning-based method for optimizing parameters of the ultrasound imaging system can include the following steps.
In step 1, N groups of samples for training a neural network can be collected. The samples include, but are not limited to, ultrasound image samples I and corresponding parameter vector samples P used by an ultrasound imaging system during collection of the ultrasound image samples.
In step 2, a neural network model can be created and trained by using the samples collected in step 1 until the neural network converges, so as to obtain a trained neural network system "onn."
In step 3, an original parameter vector p of the ultrasound imaging system or an original ultrasound image can be input into the neural network system "onn" trained in step 2. The parameter obtained from the output end of "onn" is then an optimized ultrasound imaging system parameter vector ep = onn(p).
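The input-output contract of step 3 can be sketched as below. The small two-layer network and its random weights are illustrative stand-ins for the trained system "onn"; only the calling convention ep = onn(p) mirrors the text, and the 12-element vector size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(12, 16)), np.zeros(16)   # hidden layer (stand-in weights)
W2, b2 = rng.normal(size=(16, 12)), np.zeros(12)   # output layer (stand-in weights)

def onn(p):
    """Map an original parameter vector p to an optimized vector ep."""
    h = np.maximum(p @ W1 + b1, 0)   # ReLU hidden activation
    return h @ W2 + b2               # optimized parameter vector ep

p = rng.uniform(size=12)             # original preset parameter vector
ep = onn(p)                          # application stage: ep = onn(p)
```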
Embodiment 1: specifically, this technical solution is divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 101, presets of the parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 1000 images) and a preset parameter sample set OP (e.g., including 1000 groups of preset parameters).
In step 102, an optimized image sample set EI (e.g., including 1000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 103, a Deep Convolutional Generative Adversarial Network (DCGAN) as shown in
The DCGAN includes a generator network and a discriminator network. An OI sample is input into the generator network, and an image corresponding to the OI sample can be generated through the generator network. The discriminator network then performs a consistency comparison between the image generated by the generator network and the corresponding EI sample.
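The adversarial objective behind this comparison can be sketched with the standard GAN losses; this is a generic formulation, not necessarily the disclosure's exact one. The discriminator D is trained to score the enhanced sample EI as real and the generator output G(OI) as fake, while the generator is trained to fool D.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-9):
    """D wants real (EI) scores near 1 and fake (G(OI)) scores near 0."""
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-9):
    """G wants D to score its generated images as real."""
    return -np.mean(np.log(d_fake + eps))
```

A confident, correct discriminator (real ≈ 0.9, fake ≈ 0.1) yields a lower discriminator loss than an undecided one, which is what drives both networks during training.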
In step 104, a multi-mode Deep Boltzmann Machine (DBM) as shown in
The multi-mode DBM includes a convolutional DBM, a common DBM, and a shared hidden layer for connecting the convolutional DBM to the common DBM. The OI is input into the convolutional DBM, and the OP is input into the common DBM. The shared hidden layer establishes a connection between OI information and OP information.
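The role of the shared hidden layer can be illustrated with a single joint projection: features from the image pathway and from the parameter pathway feed the same hidden units. This is a forward-pass sketch only, with invented dimensionalities; it does not implement DBM training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_img = rng.normal(size=(64, 32))   # image-pathway weights (convolutional DBM side)
W_par = rng.normal(size=(12, 32))   # parameter-pathway weights (common DBM side)

def shared_hidden(img_features, preset_vector):
    """Joint hidden units that see both modalities at once, linking
    image (OI) information to preset parameter (OP) information."""
    return sigmoid(img_features @ W_img + preset_vector @ W_par)

h = shared_hidden(rng.uniform(size=64), rng.uniform(size=12))
```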
An application stage can include steps as below.
In step a101, an artificial neural network system as shown in
In step a102, an original ultrasound image is input into the input end (i.e., an image input end) of the generator network in
Specifically, the ultrasound imaging system parameter set of step 101 includes: a transmission power p1, a transmission frequency p2, a receiving frequency p3, a beam density p4, a penetration depth p5, a total gain p6, a time gain compensation p7, a focus position p8, a pulse repetition frequency p9, a dynamic range p10, an image resolution p11, an edge enhancement p12, and the like. Here, p1 to p12 represent the values of the respective parameters under the selected preset when the corresponding ultrasound image is obtained.
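Before such a preset can be fed to a network, each parameter is typically scaled to a common range. The sketch below encodes p1 to p12 as a fixed-order vector normalized to [0, 1]; the parameter names follow the list above, but every numeric range is an invented placeholder, not a value from the disclosure.

```python
import numpy as np

# (name, assumed_min, assumed_max) — ranges are illustrative placeholders
PRESET_RANGES = [
    ("transmit_power", 0.0, 100.0),    # p1
    ("transmit_freq_mhz", 1.0, 15.0),  # p2
    ("receive_freq_mhz", 1.0, 15.0),   # p3
    ("beam_density", 64.0, 512.0),     # p4
    ("depth_cm", 2.0, 30.0),           # p5
    ("total_gain_db", 0.0, 100.0),     # p6
    ("tgc_db", 0.0, 30.0),             # p7
    ("focus_mm", 5.0, 150.0),          # p8
    ("prf_khz", 1.0, 20.0),            # p9
    ("dynamic_range_db", 30.0, 90.0),  # p10
    ("resolution", 0.0, 1.0),          # p11
    ("edge_enhance", 0.0, 1.0),        # p12
]

def normalize_preset(values):
    """Map raw parameter values into [0, 1] per their assumed ranges."""
    return np.array([(v - lo) / (hi - lo)
                     for (name, lo, hi), v in zip(PRESET_RANGES, values)])

p = normalize_preset([50, 5, 5, 128, 10, 60, 12, 40, 4, 60, 0.5, 0.5])
```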
Embodiment 2: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 201, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 202, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 203, a multi-mode Deep Boltzmann Machine (DBM) as shown in
The multi-mode DBM includes a convolutional DBM, a common DBM, and a shared hidden layer for connecting the convolutional DBM to the common DBM. The OI is input into the convolutional DBM, and the OP is input into the common DBM. The shared hidden layer establishes a connection between OI information and OP information.
In step 204, the sample set EI is input to the input end of the convolutional DBM of the trained multi-mode DBM in step 203. At this moment, a result output by the multi-mode DBM from the preset parameter end (i.e., the parameter vector end of the common DBM) is a corresponding optimized preset parameter vector EP, and the process is as shown in
In step 205, a fully-connected neural network DNN as shown in
An application stage can include steps as below.
In step a201, a preset parameter vector is input to the input end of the trained fully-connected neural network DNN obtained in step 205 in the training stage. At this moment, a vector obtained at the output end is an optimized parameter vector (i.e., the optimized preset parameter vector) of the ultrasound imaging system.
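The training and application stages of this embodiment reduce to fitting a network from OP vectors to EP vectors and then evaluating it on a new preset. As a toy stand-in for the fully-connected DNN of step 205, the sketch below fits a single linear layer by least squares on synthetic data; only the input-output contract mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)
OP = rng.uniform(size=(200, 12))    # original preset parameter vectors
W_true = rng.normal(size=(12, 12))  # hidden relation used to fabricate targets
EP = OP @ W_true                    # synthetic "optimized" preset vectors

# "Training": fit the OP -> EP mapping (a linear stand-in for the DNN)
W, *_ = np.linalg.lstsq(OP, EP, rcond=None)

# Application stage (step a201): feed one preset, read the optimized preset
ep = OP[0] @ W
```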
Embodiment 3: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 301, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 302, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 303, a fully-connected autoencoder DNN-AutoEncoder is trained by using the preset parameter sample set OP, as shown in
The fully-connected autoencoder includes a fully-connected encoder and a fully-connected decoder, which are cascaded with each other. The fully-connected encoder is configured to compress high-dimensional input information into a low-dimensional space, and the fully-connected decoder is configured to map the compressed low-dimensional information back to the original high-dimensional space.
In step 304, a convolution-type autoencoder CNN-AutoEncoder is trained by using the image sample set OI, as shown in
The convolution-type autoencoder includes a convolution-type encoder and a convolution-type decoder, which are cascaded with each other. The convolution-type encoder is configured to compress high-dimensional input information into a low-dimensional space, and the convolution-type decoder is configured to map the compressed low-dimensional information back to the original high-dimensional space.
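The compress-then-reconstruct behavior of such an encoder/decoder pair can be demonstrated with a linear autoencoder (PCA via SVD), a simplified stand-in for the trained autoencoders; the data dimensions are invented. When the data truly lie in a low-dimensional subspace, the round trip is lossless.

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 12))
X = rng.normal(size=(500, 3)) @ basis   # 12-dim data that lives in 3 dimensions

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Vk = Vt[:3].T                           # keep a 3-dimensional code

encode = lambda x: (x - mean) @ Vk      # high-dimensional -> low-dimensional
decode = lambda z: z @ Vk.T + mean      # low-dimensional -> high-dimensional

recon = decode(encode(X))               # round trip through the bottleneck
```

A trained nonlinear autoencoder plays the same role but learns a curved, rather than linear, low-dimensional representation.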
In step 305, the OI is input into the convolution-type encoder of the CNN-AutoEncoder trained in step 304 to obtain an output MI, and the OP is input into the fully-connected encoder of the DNN-AutoEncoder in step 303 to obtain an output MP. A fully-connected neural network DNN-T is trained by using the MI as an input and using the MP as an output, as shown in
In step 306, a neural network system as shown in
The convolution-type encoder part of the CNN-AutoEncoder is connected with the DNN-T, and the DNN-T is connected with the fully-connected decoder part of the DNN-AutoEncoder.
The sample set EI is input to the convolution-type encoder end of the neural network system CNN-AutoEncoder, and an optimized preset parameter sample set EP is obtained at the output end of the fully-connected decoder of the neural network system DNN-AutoEncoder.
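The wiring of this composed system can be sketched as a chain of three stages: convolution-type encoder, then DNN-T, then fully-connected decoder. Each stage below is an illustrative stub with made-up dimensionalities; only the order of composition mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)
Wc = rng.normal(size=(4096, 64))   # stub convolution-type encoder: image -> MI code
Wt = rng.normal(size=(64, 16))     # stub DNN-T: MI code -> MP code
Wd = rng.normal(size=(16, 12))     # stub fully-connected decoder: MP code -> preset

def optimize_from_image(image):
    """Chain the three trained parts: encoder -> DNN-T -> decoder."""
    mi = image.ravel() @ Wc        # convolution-type encoder part (stub)
    mp = mi @ Wt                   # DNN-T part (stub)
    return mp @ Wd                 # decoder outputs an optimized preset vector

ep = optimize_from_image(np.zeros((64, 64)))
```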
In step 307, a fully-connected neural network DNN as shown in
An application stage can include steps as below.
In step a301, a preset parameter vector is input to the DNN obtained in step 307 in the training stage, and an optimized parameter vector of the ultrasound imaging system is obtained at the output end.
Embodiment 4: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 401, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 402, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 403, a DCGAN as shown in
The DCGAN includes a generator network and a discriminator network. An OI sample is input into the generator network, and an image corresponding to the OI sample is generated through the generator network. The discriminator network then performs a consistency comparison between the image generated by the generator network and the corresponding EI sample.
In step 404, a fully-connected autoencoder DNN-AutoEncoder is trained by using the preset parameter sample set OP, as shown in
The fully-connected autoencoder includes a fully-connected encoder and a fully-connected decoder, which are cascaded with each other. The fully-connected encoder is configured to compress high-dimensional input information into a low-dimensional space, and the fully-connected decoder is configured to map the compressed low-dimensional information back to the original high-dimensional space.
In step 405, a convolution-type autoencoder CNN-AutoEncoder is trained by using the image sample set OI, as shown in
The convolution-type autoencoder includes a convolution-type encoder and a convolution-type decoder, which are cascaded with each other. The convolution-type encoder is configured to compress high-dimensional input information into a low-dimensional space, and the convolution-type decoder is configured to map the compressed low-dimensional information back to the original high-dimensional space.
In step 406, the OI is input into the convolution-type encoder of the CNN-AutoEncoder trained in step 405 to obtain an output MI, and the OP is input into the fully-connected encoder of the DNN-AutoEncoder in step 404 to obtain an output MP. The fully-connected neural network DNN-T is trained by using the MI as an input and using the MP as an output, as shown in
An application stage can include steps as below.
In step a401, a neural network system as shown in
The generator network of the DCGAN is connected with the convolution-type encoder of the CNN-AutoEncoder. The convolution-type encoder of the CNN-AutoEncoder is connected with the DNN-T. The DNN-T is connected with the fully-connected decoder of the DNN-AutoEncoder.
In step a402, an original ultrasound image is input to the neural network system as shown in
Embodiment 5: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 501, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 502, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 503, a generator network part of a DCGAN is used.
In step 504, the generator network of the DCGAN is set to include N convolution layers, and an OI sample and an EI sample are input to the generator network in sequence to obtain outputs OIO and EIO of the nth layer, wherein 1 ≤ n ≤ N.
In step 505, the OIO and EIO obtained in step 504 are each a set of m matrices, and the m matrices are all vectorized and then respectively combined into a matrix MO (corresponding to the OIO) and a matrix ME (corresponding to the EIO).
In step 506, the generator network of the DCGAN is trained by taking loss = (1/m)(OIO − EIO)^2 + (MO^2 − ME^2)^2 as an optimization target until the network converges, as shown in
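The step 506 optimization target can be transcribed directly in numpy, reading the squares element-wise; this reading is an assumption, since the disclosure does not fully specify how the squared terms are reduced over the m feature maps.

```python
import numpy as np

def layer_loss(OIO, EIO):
    """loss = (1/m)(OIO - EIO)^2 + (MO^2 - ME^2)^2, with OIO/EIO given as
    stacks of m feature-map matrices and MO/ME their vectorized forms."""
    m = OIO.shape[0]                     # number of feature-map matrices
    MO = OIO.reshape(m, -1)              # each map vectorized into a row
    ME = EIO.reshape(m, -1)
    return (((OIO - EIO) ** 2).sum() / m
            + ((MO ** 2 - ME ** 2) ** 2).sum())
```

The loss is zero exactly when the generator's layer-n features for the OI sample match those for the enhanced EI sample, which is the convergence condition the training targets.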
In step 507, a multi-mode DBM as shown in
The multi-mode DBM includes a convolutional DBM, a common DBM, and a shared hidden layer for connecting the convolutional DBM to the common DBM. The OI is input to the convolutional DBM, and the OP is input to the common DBM. The shared hidden layer establishes a connection between OI information and OP information.
An application stage can include steps as below.
In step a501, a neural network system as shown in
The generator network of the DCGAN is connected with the convolutional DBM in the multi-mode DBM.
In step a502, an ultrasound image is input to the generator network of the DCGAN in the neural network system as shown in
Embodiment 6: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 601, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 602, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 603, a convolutional network part of a VGG network is used, wherein the VGG network is one of the network structures used in deep learning.
In step 604, the convolutional network of the VGG is set to include N layers, and an OI sample and an EI sample are input to the convolutional network in sequence to obtain outputs OIO and EIO of the nth layer, wherein 1 ≤ n ≤ N.
In step 605, the OIO and EIO obtained in step 604 are each a set of m matrices, and the m matrices are all vectorized; the vectorized matrices are then arranged in rows and respectively combined into a matrix MO (corresponding to the OIO) and a matrix ME (corresponding to the EIO), as shown in
In step 606, a deconvolution network is trained by taking the OP as an input and taking (OIO − EIO)^2 and (MO^2 − ME^2)^2 as outputs, as shown in
An application stage can include steps as below.
In step a601, an ultrasound system preset parameter vector is input to the deconvolution network trained in step 606. The preset parameter value at the network input end is optimized, with the network weights kept unchanged, under the condition that the sum of the two outputs of the deconvolution network is driven to 0, until the network converges. Upon convergence, the modified preset parameter vector at the network input end is an optimized parameter vector of the ultrasound imaging system (i.e., an optimized preset parameter vector), and the process is as shown in
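This application stage inverts the usual training roles: the weights stay frozen and gradient descent modifies only the input vector, driving the output toward zero. The sketch below demonstrates the mechanism with a frozen linear layer W standing in for the trained deconvolution network; the dimensions and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(12, 12))        # frozen "network" weights (stand-in)
W /= np.linalg.norm(W, 2)            # scale so gradient descent is stable
p = rng.normal(size=12)              # original preset parameter vector
initial = np.linalg.norm(W @ p)      # output magnitude before optimization

lr = 0.1
for _ in range(500):
    out = W @ p                      # forward pass; weights never change
    p -= lr * (W.T @ (2 * out))      # gradient of sum(out**2) w.r.t. the INPUT

optimized_preset = p                 # the modified input is the result
```

The same loop applies unchanged to a deep network if the gradient with respect to the input is obtained by backpropagation while all weight updates are disabled.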
Embodiment 7: this technical solution is specifically divided into a training stage and an application stage which are performed in sequence. Steps of each of the stages are as follows.
A training stage can include steps as below.
In step 701, presets of parameters of the ultrasound imaging system are randomly selected from an ultrasound apparatus, and one ultrasound image is acquired under each selected preset. The ultrasound image is stored, and the preset parameters used for acquiring the ultrasound image are recorded at the same time, so as to respectively obtain an image sample set OI (e.g., including 2000 images) and a preset parameter sample set OP (e.g., including 2000 groups of preset parameters).
In step 702, an optimized image sample set EI (e.g., including 2000 images, each of which has the same content as the corresponding image in the OI but higher quality) corresponding to the OI is obtained.
In step 703, a convolutional network part of a LeNet network is used, wherein the LeNet network is one of the network structures used in deep learning.
In step 704, an OI sample and an EI sample are input into the convolutional network of the LeNet in sequence to obtain outputs OIO and EIO of the last layer, wherein the obtained OIO and EIO are both m matrices, as shown in
In step 705, a deconvolution network is trained by using the OP as an input and using the corresponding residual res = OIO − EIO as an output, as shown in
In an application stage, a preset parameter vector of an ultrasound system is input to the deconvolution network trained in step 705. The preset parameter value at the network input end is optimized, with the network weights kept unchanged, under the condition that the output of the deconvolution network is driven to 0, until the network converges. Upon convergence, the modified preset parameter vector at the network input end is an optimized parameter vector of the ultrasound imaging system (i.e., an optimized preset parameter vector), and the process is as shown in
At last, it should be noted that the above specific implementations are merely illustrative of the technical solutions of the present disclosure, and are not intended to be limiting. Although the present disclosure is described in detail with reference to the examples, it should be understood that those of ordinary skill in the art can make modifications or equivalent replacements to the technical solutions of the present disclosure without departing from the spirit and scope of the technical solutions of the present disclosure. These modifications and equivalent replacements shall fall within the scope of claims of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
201711224993.0 | Nov 2017 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2018/093561 | 6/29/2018 | WO |

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2019/100718 | 5/31/2019 | WO | A

Number | Name | Date | Kind
---|---|---|---
20030045797 | Christopher et al. | Mar 2003 | A1
20100189329 | Mo et al. | Jul 2010 | A1
20170143312 | Hedlund et al. | May 2017 | A1
20190018933 | Oono | Jan 2019 | A1

Number | Date | Country
---|---|---
102184330 | Sep 2011 | CN
104572940 | Apr 2015 | CN
105574820 | May 2016 | CN
WO 2017122785 | Jul 2017 | WO

Entry
---
Extended European Search Report dated Aug. 4, 2021, in corresponding European application No. 18880759.8, 11 pages.

Number | Date | Country | Kind
---|---|---|---
20200345330 A1 | Nov 2020 | US |