Embodiments of the present disclosure relate to the field of image processing technologies, and specifically, to a training method and apparatus of an image reconstruction model, a device, a medium, and a program product.
A high-quality three-dimensional image can present detailed information clearly. For example, in the medical field, a high-quality three-dimensional medical image is helpful for medical diagnosis and analysis. However, in a process of forming, recording, processing, and transmitting an image, image quality deteriorates due to imperfections of the imaging system, the recording device, the transmission medium, and the processing method.
In a related technology, a deep convolutional neural network can be used to learn mapping relationships between pairs of low-quality images and high-quality images, and a high-quality three-dimensional image can be generated by using the trained neural network based on a low-quality three-dimensional image.
However, when the low-quality three-dimensional image has a plurality of damaged parts, for example, noise damage and blur damage, the foregoing method usually reconstructs the damaged parts of the low-quality three-dimensional image in sequence. Consequently, a reconstruction error in a first phase is propagated to subsequent phases, and the overall image reconstruction error is large.
The present disclosure provides a training method and apparatus of an image reconstruction model, a device, a medium, and a program product, to obtain an accurate image reconstruction result. The technical solutions are as follows:
According to one aspect of the present disclosure, a training method of an image reconstruction model is provided. The method includes: obtaining a first sample image and at least two types of second sample images, where each second sample image has a single damage type, and image quality of the first sample image is greater than image quality of the second sample images; respectively adding at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images; fusing the at least two types of single degradation images, to obtain a multiple degradation image corresponding to the first sample image, where the multiple degradation image has at least two damage types; performing image reconstruction processing on the multiple degradation image, to generate a predicted reconstruction image corresponding to the multiple degradation image; calculating a loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image; and updating a model parameter of the image reconstruction model based on the loss function value.
According to one aspect of the present disclosure, a training apparatus of an image reconstruction model is provided. The apparatus includes: an obtaining module, configured to obtain a first sample image and at least two types of second sample images, where the second sample image is an image having a single damage type, and image quality of the first sample image is greater than image quality of the second sample image; a degradation module, configured to respectively add at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images; a fusion module, configured to fuse the at least two single degradation images, to obtain a multiple degradation image corresponding to the first sample image, where the multiple degradation image is an image having at least two damage types; a reconstruction module, configured to perform image reconstruction processing on the multiple degradation image, to generate a predicted reconstruction image corresponding to the multiple degradation image; a calculation module, configured to calculate a loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image; and an updating module, configured to update a model parameter of the image reconstruction model based on the loss function value.
According to another aspect of the present disclosure, a computer device is provided. The computer device includes a processor and a memory. The memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the training method of the image reconstruction model according to the foregoing aspect.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided. The computer-readable storage medium stores at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the training method of the image reconstruction model according to the foregoing aspect.
Beneficial effects brought by the technical solutions provided in the present disclosure at least include the following: the first sample image and the at least two types of second sample images are obtained; the at least two damage features corresponding to the second sample images are respectively added to the first sample image, to generate the at least two types of single degradation images; the at least two types of single degradation images are fused, to obtain the multiple degradation image corresponding to the first sample image; then image reconstruction processing is performed on the multiple degradation image, to generate the predicted reconstruction image corresponding to the multiple degradation image; and a computer device calculates the loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image, and updates the model parameter of the image reconstruction model based on the loss function value. According to the training method of the image reconstruction model provided in the present disclosure, damage of a plurality of damage types is simultaneously applied to the first sample image, to obtain the multiple degradation image corresponding to the first sample image, and the multiple degradation image having the plurality of damage types is reconstructed. When the model obtained through training by using the foregoing method is used, a plurality of damage types of a low-quality image may be simultaneously reconstructed. This avoids an accumulated error caused by reconstructing the damage types of the low-quality image in sequence, and further improves image reconstruction precision of the trained image reconstruction model.
Embodiments of the present disclosure provide a technical solution of a training method of an image reconstruction model. As shown in
For example, the computer device obtains a first sample image 101 and at least two second sample images. The at least two second sample images include at least two of a blur sample image 102, a bias sample image 103, and a noise sample image 104. The second sample image is an image having a single damage type, and image quality of the first sample image is greater than image quality of the second sample image.
In some embodiments, the damage type includes at least one of a blur damage type, a noise damage type, and a bias damage type. This is not specifically limited in this embodiment of the present disclosure.
For example, the first sample image 101 is an image that has a high resolution, or that has a damage type that does not affect expression of the image content, or that has a damage type that has only a small impact on expression of the image content. The blur sample image 102 is an image having blur content. The noise sample image 104 is an image containing content that is unnecessary for, or has a negative impact on, analysis and understanding of the image content, that is, image noise. The bias sample image 103 is an image having a luminance difference caused by bias. The blur damage type, the noise damage type, and the bias damage type of the second sample images may each be set randomly.
For example, the computer device extracts a first feature corresponding to the first sample image 101 by using a first degradation encoder 105, and respectively extracts at least two second features corresponding to the at least two types of the second sample images.
The computer device extracts, based on the first feature and each second feature, a damage feature from the second feature by using a corresponding damage kernel extractor. The computer device adds the damage feature to the first feature of the first sample image 101, to obtain an intermediate first feature, and inputs the intermediate first feature to a first degradation decoder 109 for decoding processing, to obtain a single degradation image corresponding to the first sample image 101.
For example, the computer device extracts a feature of the first sample image 101 by using the first degradation encoder 105, to obtain the first feature. The computer device extracts features of the blur sample image 102, the noise sample image 104, and the bias sample image 103 respectively by using the first degradation encoder 105, to obtain a blur sample feature, a noise sample feature, and a bias sample feature respectively.
The computer device inputs the first feature and the blur sample feature to a blur kernel extractor 106 for feature extraction, to obtain a blur damage feature of the blur sample feature. The computer device inputs the first feature and the bias sample feature to a bias kernel extractor 107 for feature extraction, to obtain a bias damage feature of the bias sample feature. The computer device inputs the first feature and the noise sample feature to a noise kernel extractor 108 for feature extraction, to obtain a noise damage feature of the noise sample feature.
The computer device fuses the first feature corresponding to the first sample image 101 and the blur damage feature, to generate an intermediate first blur feature. The computer device performs decoding processing on the intermediate first blur feature by using the first degradation decoder 109, to generate a blur degradation image 110 corresponding to the first sample image 101.
The computer device fuses the first feature corresponding to the first sample image 101 and the bias damage feature, to generate an intermediate first bias feature. The computer device performs decoding processing on the intermediate first bias feature by using the first degradation decoder 109, to generate a bias degradation image 111 corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image 101 and the noise damage feature, to generate an intermediate first noise feature. The computer device performs decoding processing on the intermediate first noise feature by using the first degradation decoder 109, to generate a noise degradation image 112 corresponding to the first sample image.
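For illustration only, the foregoing single-degradation flow may be sketched in PyTorch as follows. The plain convolutional stacks standing in for the first degradation encoder 105, the kernel extractors 106 to 108, and the first degradation decoder 109, as well as the additive injection of the damage feature, are assumptions of the sketch rather than the disclosed architecture.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    # Stands in for the first degradation encoder 105 (structure is an assumption).
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, img):
        return self.net(img)

class KernelExtractor(nn.Module):
    # Stands in for the blur/bias/noise kernel extractors 106, 107, and 108.
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, first_feat, second_feat):
        # Compare the clean feature with the damaged feature to isolate the damage.
        return self.net(torch.cat([first_feat, second_feat], dim=1))

class ConvDecoder(nn.Module):
    # Stands in for the first degradation decoder 109.
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, feat):
        return self.net(feat)

encoder, decoder = ConvEncoder(), ConvDecoder()
extractors = {kind: KernelExtractor() for kind in ("blur", "bias", "noise")}

x = torch.randn(1, 1, 128, 128)  # first sample image 101
second_samples = {kind: torch.randn(1, 1, 128, 128) for kind in extractors}

first_feat = encoder(x)
single_degradations = {}
for kind, img in second_samples.items():
    damage_feat = extractors[kind](first_feat, encoder(img))  # damage feature
    # Add the damage feature to the first feature, then decode to an image.
    single_degradations[kind] = decoder(first_feat + damage_feat)
```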
For example, the computer device obtains at least two third features corresponding to the single degradation images by using a second degradation encoder 113, and fuses the third features, to obtain a degradation fusion feature. The computer device performs decoding processing on the degradation fusion feature by using a second degradation decoder 114, to generate a multiple degradation image 115 corresponding to the first sample image 101.
For example, the computer device performs feature extraction on the blur degradation image 110, the bias degradation image 111, and the noise degradation image 112 respectively by using the second degradation encoder 113, and performs feature fusion on a feature corresponding to the blur degradation image 110, a feature corresponding to the bias degradation image 111, and a feature corresponding to the noise degradation image 112, to obtain the degradation fusion feature. The computer device performs decoding processing on the degradation fusion feature by using the second degradation decoder 114, to generate the multiple degradation image 115 corresponding to the first sample image 101.
The computer device performs image reconstruction processing on the multiple degradation image based on a reconstruction encoder 116 and a reconstruction decoder 117 in a reconstruction network layer of the image reconstruction model, to generate a predicted reconstruction image 118 corresponding to the multiple degradation image 115.
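Continuing the sketch, the fusion and reconstruction stages described above may be outlined as follows. The single-convolution stand-ins for the second degradation encoder 113, the second degradation decoder 114, the reconstruction encoder 116, and the reconstruction decoder 117, and the summation used as the fusion operator, are assumptions.

```python
import torch
import torch.nn as nn

second_encoder = nn.Conv2d(1, 32, 3, padding=1)  # stands in for second degradation encoder 113
second_decoder = nn.Conv2d(32, 1, 3, padding=1)  # stands in for second degradation decoder 114
recon_encoder = nn.Conv2d(1, 32, 3, padding=1)   # stands in for reconstruction encoder 116
recon_decoder = nn.Conv2d(32, 1, 3, padding=1)   # stands in for reconstruction decoder 117

# Single degradation images 110, 111, and 112 (random stand-ins here).
blur_img, bias_img, noise_img = (torch.randn(1, 1, 128, 128) for _ in range(3))

# Third features for the three single degradation images, fused by summation
# (the actual fusion operator is not specified, so summation is an assumption).
third_feats = [second_encoder(img) for img in (blur_img, bias_img, noise_img)]
degradation_fusion = torch.stack(third_feats).sum(dim=0)

multi_degraded = second_decoder(degradation_fusion)       # multiple degradation image 115
predicted = recon_decoder(recon_encoder(multi_degraded))  # predicted reconstruction image 118
```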
For example, the computer device calculates a first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degradation image. The first loss function value includes a first blur loss function value, a first bias loss function value, and a first noise loss function value.
For example, the computer device calculates the first blur loss function value based on the second feature corresponding to the blur sample image 102 and the third feature corresponding to the blur degradation image 110. The computer device calculates the first bias loss function value based on the second feature corresponding to the bias sample image 103 and the third feature corresponding to the bias degradation image 111. The computer device calculates the first noise loss function value based on the second feature corresponding to the noise sample image 104 and the third feature corresponding to the noise degradation image 112. Each first loss function value is used to measure a similarity between one of the second sample images and one of the single degradation images corresponding to the one second sample image.
For example, the computer device calculates a second loss function value based on the first feature corresponding to the first sample image 101 and a fourth feature corresponding to the predicted reconstruction image 118. The second loss function value is used to measure authenticity of the predicted reconstruction image.
For example, the computer device calculates a third loss function value based on a structural feature corresponding to the multiple degradation image 115 and a structural feature corresponding to the first sample image 101. The structural feature corresponding to the multiple degradation image 115 is a structural feature of a non-content part of the multiple degradation image 115. The third loss function value is used to measure a similarity between a non-content part in the multiple degradation image 115 and a non-content part in the first sample image 101.
For example, the computer device calculates a fourth loss function value based on a content feature and a texture feature that correspond to the first sample image 101 and a content feature and a texture feature that correspond to the predicted reconstruction image 118.
The fourth loss function value is used to measure a similarity between the first sample image and the predicted reconstruction image.
For example, the computer device updates a model parameter of the image reconstruction model based on a sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
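A minimal sketch of this update step follows; the placeholder model and the Adam optimizer are assumptions, since the disclosure specifies only that the parameter is updated based on the sum of the loss function values.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)  # placeholder for the image reconstruction model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer and lr are assumptions

pred = model(torch.randn(1, 1, 8, 8))
# Dummy stand-ins for the first to fourth loss function values.
loss_1 = loss_2 = loss_3 = loss_4 = pred.abs().mean()

optimizer.zero_grad()
(loss_1 + loss_2 + loss_3 + loss_4).backward()  # update on the sum of the four values
optimizer.step()
```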
In conclusion, according to the method provided in this embodiment, the first sample image and the three second sample images are obtained, and the damage features corresponding to the three second sample images are respectively added to the first sample image in the degradation network layer, to generate three single degradation images. The three single degradation images are fused, to obtain the multiple degradation image corresponding to the first sample image. Then, image reconstruction processing is performed on the multiple degradation image in the reconstruction network layer, to generate the predicted reconstruction image corresponding to the multiple degradation image. The computer device calculates the loss function value based on the three second sample images, the three single degradation images, the first sample image, and the predicted reconstruction image; and updates the model parameter of the image reconstruction model based on the loss function value. According to the training method of the image reconstruction model provided in the present disclosure, damage of a plurality of damage types is simultaneously applied to the first sample image, to obtain the multiple degradation image corresponding to the first sample image, and the multiple degradation image having the plurality of damage types is reconstructed through the reconstruction network layer. When the model obtained through training by using the foregoing method is used, a plurality of damage types of a low-quality image may be simultaneously reconstructed. This avoids an accumulated error caused by reconstructing the damage types of the low-quality image in sequence, and further improves image reconstruction precision of the trained image reconstruction model.
The terminal 100 may be an electronic device such as a mobile phone, a tablet computer, an in-vehicle terminal (in-vehicle infotainment), a wearable device, a personal computer (PC), an intelligent voice interaction device, a smart home appliance, an aerial vehicle, or an unmanned vending terminal. A client running a target application may be installed in the terminal 100. The target application may be an application for image reconstruction, or may be another application that provides an image reconstruction function. This is not limited in the present disclosure. In addition, a form of the target application is not limited in the present disclosure, and includes but is not limited to an application (App), an applet, and the like that are installed in the terminal 100, or may be in a form of a web page.
The server 200 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an artificial intelligence platform. The server 200 may be a back-end server of the foregoing target application, and is configured to provide a background service for a client of the target application.
A cloud technology is a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network, to implement computing, storage, processing, and sharing of data. It is a general term for a network technology, an information technology, an integration technology, a management platform technology, and an application technology that are applied based on a cloud computing business model, and can form a resource pool to be used on demand. The cloud computing technology is an important support. A background service of a technical network system, such as a video website, an image website, or another portal website, requires a large quantity of computing and storage resources. With the development and application of the internet industry, each item may have its own identification flag, which needs to be transmitted to a background system for logic processing. Data of different levels is processed separately, and various industry data requires powerful system support, which can be implemented only through cloud computing.
In some embodiments, the server may alternatively be implemented as a node in a blockchain system. A blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, and an encryption algorithm. The blockchain is essentially a decentralized database, and is a string of data blocks generated in a cryptographic manner. Each data block contains information about a batch of network transactions, which is used to verify validity of the information (anti-counterfeiting) and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, and an application service layer.
The terminal 100 and the server 200 may communicate with each other by using a network, for example, a wired or wireless network.
According to the training method of an image reconstruction model provided in embodiments of the present disclosure, each operation may be performed by a computer device, and the computer device is an electronic device having data computing, processing, and storage capabilities. A solution implementation environment shown in
Operation 302: Obtain a first sample image and at least two types of second sample images.
The second sample image is an image having a single damage type, and image quality of the first sample image is greater than image quality of the second sample image. The single damage type means that only one damage type is included. The damage type includes image blur, image bias, image noise, and the like. In some embodiments, the second sample image is obtained by performing an image degradation operation on the first sample image. In this case, the first sample image and the second sample image are different only in the image quality. In some embodiments, the first sample image is obtained by performing an image quality enhancement operation on the second sample image. In this case, the first sample image and the second sample image are different only in the image quality.
In some embodiments, the first sample image and the second sample image are obtained by photographing a same object by using different photographing parameters. In this case, the first sample image and the second sample image are different only in the image quality. The first sample image is obtained through photographing by using a correct photographing parameter, and the second sample image is obtained through photographing by using an incorrect photographing parameter. The correct photographing parameter and the incorrect photographing parameter respectively correspond to a high-quality image and a low-quality image.
For example, the first sample image is a high-resolution image, and the second sample image is a low-resolution image. The image in embodiments of the present disclosure may be an internal tissue image of a biological or non-biological object that cannot be directly seen by human eyes and that is obtained in a non-intrusive manner. For example, in the biomedicine field, the image in embodiments of the present disclosure may be a biological image (for example, a medical image). The biological image is an image that is of an internal tissue of an organism or a part of the organism (for example, a human body or a part of the human body) and that is obtained in the non-intrusive manner for medicine or medical research. In an example, for the medical field, the image in embodiments of the present disclosure may be an image of a heart, a lung, a liver, a stomach, a large intestine, a small intestine, a human brain, a bone, a blood vessel, or the like; or may be an image of a part other than an organ, such as a tumor. In addition, the image in embodiments of the present disclosure may be an image generated based on an imaging technology such as an X-ray technology, a computed tomography (CT) technology, a positron emission tomography (PET) technology, a nuclear magnetic resonance imaging (NMRI) technology, or medical ultrasonography. In addition, the image in embodiments of the present disclosure may alternatively be a "what you see is what you get" image generated by using a visual imaging technology, for example, an image photographed by using a camera lens (for example, a camera lens of a camera or a camera lens of a terminal).
Operation 304: Respectively add at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images.
A feature is a corresponding (essential) trait or characteristic that distinguishes one type of object from another type of object, or a set of the traits or the characteristics. In one embodiment, the computer device may perform feature extraction on an image by using a machine learning model. The damage feature is a feature corresponding to a damaged part of the second sample image, for example, a feature corresponding to a blur part of the second sample image, and a feature corresponding to a noise part of the second sample image.
For example, the computer device extracts the at least two damage features corresponding to the second sample images, and respectively adds the damage feature corresponding to each second sample image to the first sample image, to generate the single degradation image, so that the generated single degradation image includes a damage feature that is the same as or similar to that of the second sample image. The single degradation image is an image obtained by adding the single damage type to the first sample image. For example, the computer device extracts a blur damage feature corresponding to the second sample image, and adds the blur damage feature to the first sample image, to obtain a blur degradation image corresponding to the first sample image.
Operation 306: Fuse the at least two types of the single degradation images, to obtain a multiple degradation image corresponding to the first sample image.
The multiple degradation image is an image having at least two damage types. For example, the computer device fuses the at least two single degradation images, to obtain an image having a plurality of damage types. For example, the single degradation images are a blur degradation image, a bias degradation image, and a noise degradation image. The computer device fuses the blur degradation image, the bias degradation image, and the noise degradation image, to generate the multiple degradation image, so that the generated multiple degradation image has blur, noise, and bias damage features that are the same as or similar to those of the single degradation images.
Operation 308: Perform image reconstruction processing on the multiple degradation image, to generate a predicted reconstruction image corresponding to the multiple degradation image.
The image reconstruction processing means processing a damage feature of the multiple degradation image, for example, to reduce or remove the blur damage feature, the noise damage feature, and the bias damage feature of the multiple degradation image. The predicted reconstruction image is an image obtained by reducing or removing the damage feature of the multiple degradation image.
For example, the computer device performs reconstruction processing on the multiple degradation image, to reduce or remove the blur damage feature, the noise damage feature, and the bias damage feature of the multiple degradation image, to generate the predicted reconstruction image corresponding to the multiple degradation image.
Operation 310: Calculate a loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image.
The loss function value obtained through calculation based on the second sample image, the single degradation image, the first sample image, and the predicted reconstruction image may be used to measure a training effect of the image reconstruction model. In some embodiments, the loss function value is at least one of a cross entropy, a mean square error, and an absolute difference, but is not limited thereto in embodiments of the present disclosure.
Operation 312: Update a model parameter of the image reconstruction model based on the loss function value.
Updating of the model parameter refers to updating a network parameter in the image reconstruction model, updating a network parameter of each network module in the model, or updating a network parameter of each network layer in the model. However, this is not limited thereto in embodiments of the present disclosure.
In some embodiments, the model parameter of the image reconstruction model is adjusted based on the loss function value of the image reconstruction model until the image reconstruction model or a training system of the image reconstruction model meets a training stop condition, to obtain a trained image reconstruction model. In some embodiments, before the image reconstruction model or the training system of the image reconstruction model meets the training stop condition, a model parameter of another learning model in the training system of the image reconstruction model is also continuously adjusted based on a training loss.
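For illustration, the overall loop over operations 302 to 312 may be sketched as follows, assuming a fixed epoch budget as the training stop condition and a data loader that yields paired sample images; both are assumptions of the sketch.

```python
import torch

def train(model, optimizer, loader, compute_losses, max_epochs=100):
    # Operations 302-312 repeated until the stop condition (here an epoch budget,
    # which is an assumption; the disclosure requires only "a training stop condition").
    for _ in range(max_epochs):
        for first_image, second_images in loader:      # operation 302
            losses = compute_losses(model, first_image, second_images)  # 304-310
            total = sum(losses)
            optimizer.zero_grad()
            total.backward()                           # operation 312
            optimizer.step()
    return model
```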
In conclusion, according to the method provided in this embodiment, the first sample image and the at least two second sample images are obtained. The computer device respectively adds the at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images. The computer device fuses the at least two single degradation images, to obtain the multiple degradation image corresponding to the first sample image; and performs the image reconstruction processing on the multiple degradation image, to generate the predicted reconstruction image corresponding to the multiple degradation image. The computer device calculates the loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image. The computer device updates the model parameter of the image reconstruction model based on the loss function value. According to the training method of the image reconstruction model provided in the present disclosure, damage of a plurality of damage types is simultaneously applied to the first sample image, to obtain the multiple degradation image corresponding to the first sample image, and the multiple degradation image having the plurality of damage types is reconstructed through a reconstruction network layer. When the model obtained through training by using the foregoing method is used, a plurality of damage types of a low-quality image may be simultaneously reconstructed. This avoids an accumulated error caused by reconstructing the damage types of the low-quality image in sequence, and further improves image reconstruction precision of the trained image reconstruction model.
An embodiment of the present disclosure provides an image reconstruction model. The image reconstruction model includes a degradation network layer and a reconstruction network layer.
A computer device obtains a first sample image and at least two types of second sample images; respectively adds, through the degradation network layer, at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images; and fuses the at least two single degradation images, to obtain a multiple degradation image corresponding to the first sample image. The computer device performs image reconstruction processing on the multiple degradation image through the reconstruction network layer, to generate a predicted reconstruction image corresponding to the multiple degradation image.
Based on the image reconstruction model, the following training method of the image reconstruction model is provided.
Operation 402: Obtain a first sample image and at least two types of second sample images.
The first sample image is an image that has a high resolution, or that has a damage type that does not affect expression of the image content, or that has a damage type that has only a small impact on expression of the image content. The second sample image is an image having a single damage type, and image quality of the first sample image is greater than image quality of the second sample image.
In some embodiments, the damage type includes at least one of a blur damage type, a noise damage type, and a bias damage type. This is not limited in this embodiment of the present disclosure.
The second sample image may be any one of a blur sample image, a noise sample image, or a bias sample image. The blur sample image is an image having blur content. The noise sample image is an image containing content that is unnecessary for, or has a negative impact on, analysis and understanding of the image content, that is, image noise. The bias sample image is an image having a luminance difference caused by bias. For example, an artifact (that is, image noise) generated due to interference of a metal object usually occurs in a medical image. This may affect a doctor's judgment.
Operation 404: Respectively add at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images.
The damage feature is a feature corresponding to a damaged part of the second sample image, for example, a feature corresponding to a blur part of the second sample image, and a feature corresponding to a noise part of the second sample image.
The single degradation image is an image obtained by adding the single damage type to the first sample image.
For example, the computer device obtains a first feature corresponding to the first sample image, and respectively obtains at least two second features corresponding to the at least two types of the second sample images. The computer device obtains, based on the first feature and one of the second features, a damage feature corresponding to that second sample image. The computer device adds the damage feature to the first feature of the first sample image, to obtain one of the single degradation images corresponding to the first sample image. The first feature represents an image feature of the first sample image, and each second feature represents an image feature of one of the second sample images.
For example, the image reconstruction model includes a degradation network layer, and the degradation network layer includes a first degradation encoder, a damage kernel extractor, and a first degradation decoder.
The computer device extracts the first feature corresponding to the first sample image by using the first degradation encoder, and respectively extracts at least two second features corresponding to the at least two types of the second sample images.
The computer device determines the damage feature by comparing the first feature with one of the second features, and performs decoupling by using the damage kernel extractor, to obtain, from that second feature, the damage feature corresponding to the corresponding second sample image. The computer device adds the damage feature to the first feature of the first sample image, to obtain an intermediate first feature; and inputs the intermediate first feature to the first degradation decoder for decoding processing, to obtain one of the single degradation images corresponding to the first sample image.
For example, the second sample images are a blur sample image, a noise sample image, and a bias sample image. The computer device extracts a feature of the first sample image by using the first degradation encoder, to obtain the first feature. The computer device extracts features of the blur sample image, the noise sample image, and the bias sample image respectively by using the first degradation encoder, to obtain a blur sample feature, a noise sample feature, and a bias sample feature respectively.
The computer device inputs the first feature and the blur sample feature to a blur kernel extractor for feature extraction, to obtain a blur damage feature of the blur sample feature. The computer device inputs the first feature and the bias sample feature to a bias kernel extractor for feature extraction, to obtain a bias damage feature of the bias sample feature. The computer device inputs the first feature and the noise sample feature to a noise kernel extractor for feature extraction, to obtain a noise damage feature of the noise sample feature.
The computer device fuses the first feature corresponding to the first sample image and the blur damage feature, to generate an intermediate first blur feature. The computer device performs decoding processing on the intermediate first blur feature by using the first degradation decoder, to generate a blur degradation image corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image and the bias damage feature, to generate an intermediate first bias feature. The computer device performs decoding processing on the intermediate first bias feature by using the first degradation decoder, to generate a bias degradation image corresponding to the first sample image.
The computer device fuses the first feature corresponding to the first sample image and the noise damage feature, to generate an intermediate first noise feature. The computer device performs decoding processing on the intermediate first noise feature by using the first degradation decoder, to generate a noise degradation image corresponding to the first sample image.
Operation 406: Obtain at least two third features corresponding to the single degradation images, and fuse the third features, to obtain the multiple degradation image corresponding to the first sample image.
The multiple degradation image is an image having at least two damage types.
The third feature represents an image feature of one of the single degradation images.
For example, the degradation network layer of the image reconstruction model further includes a second degradation encoder and a second degradation decoder. The computer device obtains the at least two third features corresponding to the single degradation images by using the second degradation encoder, and fuses the third features, to obtain a degradation fusion feature. The computer device performs decoding processing on the degradation fusion feature by using the second degradation decoder, to generate the multiple degradation image corresponding to the first sample image.
For example, the single degradation images are a blur degradation image, a bias degradation image, and a noise degradation image. The computer device performs feature extraction on the blur degradation image, the bias degradation image, and the noise degradation image respectively by using the second degradation encoder, and performs feature fusion on a feature corresponding to the blur degradation image, a feature corresponding to the bias degradation image, and a feature corresponding to the noise degradation image, to obtain a degradation fusion feature. The computer device performs decoding processing on the degradation fusion feature by using the second degradation decoder, to generate the multiple degradation image corresponding to the first sample image.
Operation 408: Perform image reconstruction processing on the multiple degradation image, to generate a predicted reconstruction image corresponding to the multiple degradation image.
The image reconstruction processing means processing a damage feature of the multiple degradation image, to reduce or remove the blur damage feature, the noise damage feature, and the bias damage feature of the multiple degradation image.
The predicted reconstruction image is an image obtained by reducing or removing the damage feature of the multiple degradation image.
For example, the image reconstruction model includes a reconstruction network layer, and the reconstruction network layer includes a reconstruction encoder and a reconstruction decoder. The computer device inputs the multiple degradation image to the reconstruction encoder for feature extraction, to obtain an image reconstruction feature. The computer device performs decoding processing on the image reconstruction feature by using the reconstruction decoder, to generate the predicted reconstruction image corresponding to the multiple degradation image.
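A minimal encoder-decoder sketch of such a reconstruction network layer follows; the two-level downsampling and upsampling structure and the channel widths are assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

class ReconstructionLayer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Reconstruction encoder: extract the image reconstruction feature.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Reconstruction decoder: decode the feature into the predicted image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, multi_degraded):
        feat = self.encoder(multi_degraded)  # image reconstruction feature
        return self.decoder(feat)            # predicted reconstruction image

recon = ReconstructionLayer()
predicted = recon(torch.randn(1, 1, 128, 128))  # output shape: (1, 1, 128, 128)
```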
Operation 410: Calculate first loss function values based on the second features corresponding to the second sample images and the third features corresponding to the single degradation images, and calculate a second loss function value based on the first feature corresponding to the first sample image and a fourth feature corresponding to the predicted reconstruction image.
The fourth feature represents an image feature of the predicted reconstruction image.
The first loss function value and the second loss function value that are obtained through calculation based on the second sample image, the single degradation image, the first sample image, and the predicted reconstruction image may be used to measure a training effect of the image reconstruction model.
In some embodiments, the loss function value is at least one of a cross entropy, a mean square error, and an absolute difference, but is not limited thereto in embodiments of the present disclosure.
The first loss function value is used to measure a similarity between the second sample image and the single degradation image corresponding to the second sample image.
For example, the computer device calculates the first loss function value based on the second feature corresponding to the second sample image and the third feature corresponding to the single degradation image. The first loss function value includes a first blur loss function value, a first bias loss function value, and a first noise loss function value.
In some embodiments, the computer device calculates an ith first loss function value of the first loss function values based on a second feature corresponding to an ith second sample image of the at least two second sample images and a third feature corresponding to an ith single degradation image of the at least two single degradation images, where i is a positive integer.
For example, as shown in
The computer device fuses the first feature corresponding to the first sample image 501 and the blur damage feature, to generate an intermediate first blur feature. The computer device performs decoding processing on the intermediate first blur feature by using a first degradation decoder 509, to generate a blur degradation image 510 corresponding to the first sample image 501. The computer device fuses the first feature corresponding to the first sample image 501 and the bias damage feature, to generate an intermediate first bias feature. The computer device performs decoding processing on the intermediate first bias feature by using the first degradation decoder 509, to generate a bias degradation image 511 corresponding to the first sample image 501. The computer device fuses the first feature corresponding to the first sample image 501 and the noise damage feature, to generate an intermediate first noise feature. The computer device performs decoding processing on the intermediate first noise feature by using the first degradation decoder 509, to generate a noise degradation image 512 corresponding to the first sample image 501.
The computer device calculates a first blur loss function value 513 based on a second feature corresponding to the blur sample image 502 and the third feature corresponding to the blur degradation image 510.
The computer device calculates a first bias loss function value 514 based on a second feature corresponding to the bias sample image 503 and a third feature corresponding to the bias degradation image 511.
The computer device calculates a first noise loss function value 515 based on a second feature corresponding to the noise sample image 504 and a third feature corresponding to the noise degradation image 512.
For example, a calculation formula of the first loss function value may be represented as:
In the formula, ℒ_de is the first loss function value, N is a quantity of data groups, R is a quantity of damage types, r represents the damage type, x is the first sample image, y is the second sample image, K is the damage kernel extractor, ψ is the first degradation decoder, and E is a Charbonnier loss function.
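For reference, the Charbonnier loss E named above has the following standard form; the value of the smoothing constant ε is an assumption.

```python
import torch

def charbonnier(pred, target, eps=1e-3):
    # E(a, b) = mean(sqrt((a - b)^2 + eps^2)), a smooth variant of the L1 loss.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
```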
The second loss function value is used to measure authenticity of the predicted reconstruction image.
For example, the computer device calculates the second loss function value based on the first feature corresponding to the first sample image and the fourth feature corresponding to the predicted reconstruction image.
To make a generated image closer to a real image, in embodiments of the present disclosure, a generative adversarial idea is used, and the first sample image and the predicted reconstruction image are input to a discriminator for discrimination, to obtain a discrimination result. The discriminator is configured to discriminate the first sample image and the predicted reconstruction image. An adversarial loss is determined based on the discrimination result, that is, the second loss function value is determined. Finally, when the discriminator cannot distinguish whether the given image is the first sample image or the predicted reconstruction image, that is, the predicted reconstruction image is close to the first sample image, training is completed.
For example, a calculation formula of the second loss function value may be represented as:
In the formula, ℒ_up is the second loss function value, E is the Charbonnier loss function, D_up is the discriminator, x is the first sample image, ŷ is the multiple degradation image, and G_up is the reconstruction network layer.
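For illustration, one way to realize this adversarial term is sketched below; a least-squares objective is assumed, since the exact adversarial formulation is not fixed here.

```python
import torch

def adversarial_losses(d_up, g_up, x, y_hat):
    # x: first sample image; y_hat: multiple degradation image.
    fake = g_up(y_hat)  # predicted reconstruction image
    # Discriminator: push scores for real images toward 1 and fakes toward 0.
    d_loss = ((d_up(x) - 1) ** 2).mean() + (d_up(fake.detach()) ** 2).mean()
    # Reconstruction layer (generator): try to make fakes indistinguishable.
    g_loss = ((d_up(fake) - 1) ** 2).mean()
    return d_loss, g_loss
```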
In one embodiment, the loss function value further includes a third loss function value and a fourth loss function value.
The third loss function value is used to measure a similarity between a non-content part in the multiple degradation image and a non-content part in the first sample image.
For example, the third loss function value is calculated based on a structural feature corresponding to the multiple degradation image and a structural feature corresponding to the first sample image.
For example, a calculation formula of the third loss function value may be represented as:
In the formula, ℒ_cde is the third loss function value, x_i is an i-th first sample image, ŷ_i is the i-th multiple degradation image, and T_de^l is a feature representation of an l-th layer of the image.
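A minimal sketch of such a layer-wise structural comparison follows; the choice of feature layers and the L1 distance are assumptions.

```python
import torch

def structural_loss(feature_layers, x, y_hat):
    # Accumulate an L1 distance between the l-th layer features T_de^l of the
    # first sample image x and the multiple degradation image y_hat.
    loss = torch.tensor(0.0)
    feat_x, feat_y = x, y_hat
    for layer in feature_layers:
        feat_x, feat_y = layer(feat_x), layer(feat_y)
        loss = loss + (feat_x - feat_y).abs().mean()
    return loss
```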
The fourth loss function value is used to measure a similarity between the first sample image and the predicted reconstruction image.
For example, the fourth loss function value is calculated based on a content feature and a texture feature that correspond to the first sample image and a content feature and a texture feature that correspond to the predicted reconstruction image. In some embodiments, the content feature is a pixel value of each pixel in the image, for example, luminance and brightness of each pixel.
For example, a calculation formula of the fourth loss function value may be represented as:
In the formula, ℒ_TS is the fourth loss function value, x_i is the i-th first sample image, x̂_i is an i-th predicted reconstruction image, G_up is the reconstruction network layer, ℒ_c^up is a content loss function value, ℒ_texture is a texture loss function value, and λ is a weight value. For example, λ is 0.9.
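For illustration, the content and texture terms may be sketched as follows, assuming a pixel-level L1 content term and Gram-matrix texture statistics; both choices are assumptions of the sketch.

```python
import torch

def gram(feat):
    # Texture statistics of a feature map (Gram matrix).
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def fourth_loss(x, x_hat, feat_net, lam=0.9):
    content = (x - x_hat).abs().mean()  # pixel-level content term
    texture = (gram(feat_net(x)) - gram(feat_net(x_hat))).abs().mean()
    return content + lam * texture      # lambda = 0.9, as in the example above
```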
Operation 412: Update a model parameter of the image reconstruction model based on a sum of the first loss function value and the second loss function value.
Updating of the model parameter refers to updating a network parameter in the image reconstruction model, updating a network parameter of each network module in the model, or updating a network parameter of each network layer in the model. However, this is not limited thereto in embodiments of the present disclosure.
In one embodiment, the computer device updates the model parameter of the image reconstruction model based on a sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
For example, a linear combination constructed based on the sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value may be represented as:
In the formula, ℒ is the total loss function value, ℒ_de is the first loss function value, ℒ_cde is the third loss function value, ℒ_up is the second loss function value, ℒ_TS is the fourth loss function value, and α and β are weight factors. For example, α=400, and β=100.
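A sketch of the combination follows; since the exact placement of the two weight factors is not spelled out here, the assignment below is an illustrative assumption.

```python
alpha, beta = 400.0, 100.0  # weight factors from the example above

def total_loss(l_de, l_up, l_cde, l_ts):
    # Which terms alpha and beta scale is not fully specified, so this
    # assignment is an assumption for illustration only.
    return l_de + alpha * l_cde + l_up + beta * l_ts
```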
The model parameter of the image reconstruction model includes at least one of a network parameter of the first degradation encoder, a network parameter of the damage kernel extractor, a network parameter of the first degradation decoder, a network parameter of the second degradation encoder, a network parameter of the second degradation decoder, a network parameter of the reconstruction encoder, and a network parameter of the reconstruction decoder.
When the loss function value is obtained, the computer device updates, based on the loss function value, the network parameter of the first degradation encoder, the network parameter of the damage kernel extractor, the network parameter of the first degradation decoder, the network parameter of the second degradation encoder, the network parameter of the second degradation decoder, the network parameter of the reconstruction encoder, and the network parameter of the reconstruction decoder that are of the image reconstruction model, to obtain an updated first degradation encoder, an updated damage kernel extractor, an updated first degradation decoder, an updated second degradation encoder, an updated second degradation decoder, an updated reconstruction encoder, and an updated reconstruction decoder, and to obtain the trained image reconstruction model.
In some embodiments, updating of the model parameter of the image reconstruction model includes that network parameters of all network modules in the image reconstruction model are updated, or that network parameters of some network modules of the image reconstruction model are fixed and network parameters of only the remaining network modules are updated. For example, when the model parameter of the image reconstruction model is updated, the network parameter of the first degradation encoder, the network parameter of the second degradation encoder, and the network parameter of the reconstruction encoder that are of the image reconstruction model are fixed, and only the network parameter of the damage kernel extractor, the network parameter of the first degradation decoder, the network parameter of the second degradation decoder, and the network parameter of the reconstruction decoder are updated.
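For illustration, such a partial update may be sketched as follows; the placeholder modules and the optimizer settings are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for parts of the image reconstruction model.
model = nn.ModuleDict({
    "first_degradation_encoder": nn.Conv2d(1, 32, 3, padding=1),
    "second_degradation_encoder": nn.Conv2d(1, 32, 3, padding=1),
    "reconstruction_encoder": nn.Conv2d(1, 32, 3, padding=1),
    "reconstruction_decoder": nn.Conv2d(32, 1, 3, padding=1),
})

# Fix the encoder parameters; only the remaining modules receive gradient updates.
for name in ("first_degradation_encoder", "second_degradation_encoder",
             "reconstruction_encoder"):
    for p in model[name].parameters():
        p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)  # learning rate is an assumption
```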
In conclusion, according to the method provided in this embodiment, the first sample image and the at least two second sample images are obtained. The computer device respectively adds the at least two damage features corresponding to the second sample images to the first sample image, to generate at least two types of single degradation images. The computer device fuses the at least two single degradation images, to obtain the multiple degradation image corresponding to the first sample image; and performs image reconstruction processing on the multiple degradation image, to generate the predicted reconstruction image corresponding to the multiple degradation image. The computer device calculates the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value based on the second sample images, the single degradation images, the first sample image, and the predicted reconstruction image. The computer device updates the model parameter of the image reconstruction model based on a sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value. According to the training method of the image reconstruction model provided in the present disclosure, damage of a plurality of damage types is simultaneously applied to the first sample image, to obtain the multiple degradation image corresponding to the first sample image, and the multiple degradation image having the plurality of damage types is reconstructed through the reconstruction network layer. When the model obtained through training by using the foregoing method is used, a plurality of damage types of a low-quality image may be simultaneously reconstructed. This avoids an accumulated error caused by reconstructing the damage types of the low-quality image in sequence, and further improves image reconstruction precision of the trained image reconstruction model.
In the medical field, a medical image is an important auxiliary tool for medical diagnosis. Image reconstruction is performed on the medical image, and detailed information in the medical image is displayed more clearly based on an image reconstruction result, so that medical personnel can be better assisted in medical diagnosis. As shown in
For example, in the reference solution 1, denoising processing, deblurring processing, and debiasing processing are performed on the multiple degradation image in different processing sequences, to finally obtain six groups of image reconstruction results. The six groups of image reconstruction results in the reference solution 1 are compared with a first sample image, and a predicted reconstruction image obtained based on the solution provided in the present disclosure is compared with the first sample image. The comparison shows that the solution provided in the present disclosure can generate a more realistic image reconstruction result with a better visual effect for the medical image.
In the field of image defect repair, a high-quality image can present a more realistic view and provide a better visual effect. As shown in
For example, for each reference solution, denoising processing, deblurring processing, and debiasing processing are performed on the multiple degradation image in different processing sequences, to finally obtain six groups of image reconstruction results for the reference solution. The six groups of image reconstruction results of the different reference solutions are compared with the first sample image, and the predicted reconstruction image obtained based on the solution provided in the present disclosure is compared with the first sample image. The comparison shows that the solution provided in the present disclosure achieves a better defect repair effect for a defective image.
In the medical field, a medical image is an important auxiliary tool for medical diagnosis. Image reconstruction is performed on the medical image, and detailed information in the medical image is displayed more clearly based on an image reconstruction result, so that medical personnel can be better assisted in medical diagnosis. As shown in
Image degradation refers to a phenomenon in which, in a process of forming, recording, processing, and transmitting an image, image quality deteriorates due to imperfections of the imaging system, the recording device, the transmission medium, and the processing method.
In this embodiment of the present disclosure, the image degradation processing performed on the first sample image 801 is to simultaneously add a plurality of damage types to the first sample image 801. For example, random blurring processing, random noising processing, and random bias processing are performed on the first sample image 801, that is, a plurality of types of defects are added to the first sample image 801.
When image reconstruction is performed on the multiple degradation image 804, a reconstruction network layer 805 simultaneously reconstructs the plurality of types of defects of the multiple degradation image 804, that is, deblurring, denoising, and debiasing. The reconstruction network layer 805 reconstructs the multiple degradation image 804 into a high-quality image, that is, a predicted reconstruction image 806.
In the image reconstruction processing, accuracy of the image reconstruction affects accuracy of information display in the medical image, and further affects accuracy of the medical diagnosis performed by medical personnel based on the displayed information. Therefore, in a medical image-assisted diagnosis scenario, the image reconstruction model obtained by using the training method of the image reconstruction model provided in the present disclosure can improve the accuracy of the image reconstruction performed on the medical image, display detailed information in the medical image more clearly, and improve accuracy of medical-assisted diagnosis.
The solution in the present disclosure may be implemented based on the image reconstruction model, and includes a generation phase of the image reconstruction model and an image reconstruction phase.
The device 910 for generating an image reconstruction model and the image reconstruction device 920 may each be a computer device. For example, the computer device may be a fixed computer device such as a personal computer or a server, or the computer device may be a mobile computer device such as a tablet computer or an e-book reader.
In some embodiments, the device 910 for generating an image reconstruction model and the image reconstruction device 920 may be a same device, or may be different devices. When they are different devices, the device 910 for generating an image reconstruction model and the image reconstruction device 920 may be devices of a same type. For example, both may be servers. Alternatively, they may be devices of different types. For example, the image reconstruction device 920 may be a personal computer or a terminal, and the device 910 for generating an image reconstruction model may be a server. Specific types of the device 910 for generating an image reconstruction model and the image reconstruction device 920 are not limited in this embodiment of the present disclosure.
The training method of the image reconstruction model is described in the foregoing embodiments, and the following describes the image reconstruction method.
Operation 1002: Obtain a first image.
The first image is an image having a plurality of damage types.
A method of obtaining the first image includes at least one of the following situations:
1. The computer device receives the first image. For example, a terminal initiates image scanning, scans a picture, and sends the first image to the server after completing the scanning.
2. The computer device obtains the first image from a stored database, for example, obtains at least one first image from an MNIST segmentation data set or a publicly available brain MRI data set (see the loading sketch following this list). The foregoing methods of obtaining the first image are merely examples. This is not limited in this embodiment of the present disclosure.
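For illustration, the following is a minimal loading sketch, assuming the torchvision package; the disclosure names the data sets but does not specify a loader.

```python
from torchvision import datasets, transforms

# Illustrative loader for the MNIST data set named above; torchvision is
# an assumption, since the disclosure does not specify how images are read.
to_tensor = transforms.ToTensor()
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=to_tensor)

first_image, _ = test_set[0]  # a (1, 28, 28) tensor with values in [0, 1]
```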
Operation 1004: Perform image reconstruction processing on the first image based on a trained reconstruction network layer, to obtain a first reconstruction image.
The computer device performs image reconstruction processing on the first image based on the trained reconstruction network layer. To be specific, the reconstruction network layer simultaneously reconstructs a plurality of types of defects of the first image, that is, performs deblurring, denoising, and debiasing. The reconstruction network layer reconstructs the first image into a high-quality image, that is, the first reconstruction image.
Operation 1006: Output the first reconstruction image.
The computer device outputs the first reconstruction image.
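Operations 1002 to 1006 can be summarized by the following hedged PyTorch sketch; the reconstruction_layer module stands in for the trained reconstruction network layer, whose exact architecture is not fixed here.

```python
import torch

@torch.no_grad()
def reconstruct(first_image: torch.Tensor, reconstruction_layer: torch.nn.Module) -> torch.Tensor:
    """Operations 1002-1006: reconstruct a degraded image and return it.

    `reconstruction_layer` is assumed to be a trained encoder-decoder
    module; a single forward pass removes the plurality of damage types
    (blur, noise, and bias) jointly rather than in sequence.
    """
    reconstruction_layer.eval()
    first_reconstruction_image = reconstruction_layer(first_image.unsqueeze(0))
    return first_reconstruction_image.squeeze(0)
```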
To verify an effect of the image reconstruction model obtained through training in the solutions provided in embodiments of the present disclosure, a comparison experiment is designed to compare the solution provided in the present disclosure with the reference solutions. The MNIST data set is used in the experiment. The MNIST data set includes 60,000 training samples and 10,000 test samples, and all images have a size of 28×28 and are pre-registered. The images are divided into 90% for training and 10% for testing, and 20% of the training images are retained as a validation set for the two data sets.
The reference solutions include a reference solution 1, a reference solution 2, and a reference solution 3. In each reference solution, denoising (DN) processing, deblurring (DB) processing, and N4 bias correction are performed on a low-quality image, but the processing sequences are different. The evaluation indicators are a peak signal-to-noise ratio (PSNR) and a structural similarity (SSIM), which are used to evaluate the image reconstruction effect.
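For reference, both indicators have standard implementations; the following sketch assumes scikit-image and images scaled to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, reconstructed: np.ndarray) -> tuple[float, float]:
    """Compute the two evaluation indicators used in the experiment.

    Both inputs are 2D float arrays in [0, 1]; a higher PSNR (in dB) and
    a higher SSIM (in [0, 1]) indicate a better reconstruction.
    """
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim
```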
Experiment results are shown in Table 1. It can be learned from Table 1 that, for the reconstruction result of the solution provided in the present disclosure, the PSNR achieved on the MNIST data set is 35.52 dB and the SSIM is 0.9482; the performance on the data set is better than that of the reference solutions, with high applicability and stability.
In conclusion, according to the method provided in this embodiment, the first image is obtained, and image reconstruction processing is performed on the first image based on the trained reconstruction network layer, to obtain a high-quality first reconstruction image. In the present disclosure, an accurate image reconstruction result may be obtained based on the trained reconstruction network layer.
A first front end 1101 receives a first image that needs to be reconstructed, where the first image is an image having a plurality of damage types. The first front end 1101 uploads the first image to a computer device 1102 for image reconstruction processing. For the image reconstruction processing performed by the computer device 1102 on the first image, refer to the description in the foregoing embodiments. Details are not described herein again.
After the computer device 1102 performs image reconstruction processing on the first image, the computer device 1102 outputs an image reconstruction result to a second front end 1103.
In some embodiments, the first front end 1101 and the second front end 1103 may be a same front end, or may be different front ends. This is not limited in this embodiment of the present disclosure.
In one embodiment, the degradation module 1202 is further configured to: obtain a first feature corresponding to the first sample image; respectively obtain at least two second features corresponding to the at least two types of the second sample images; obtain, based on the first feature and one of the second features, a damage feature corresponding to one of the second sample images; and add the damage feature to the first feature of the first sample image, to obtain one of the single degradation images corresponding to the first sample image.
The image reconstruction model includes a degradation network layer, and the degradation network layer includes a first degradation encoder, a damage kernel extractor, and a first degradation decoder.
In one embodiment, the degradation module 1202 is further configured to: extract the first feature corresponding to the first sample image by using the first degradation encoder; respectively extract the at least two second features corresponding to the at least two types of the second sample images by using the first degradation encoder; determine the damage feature by comparing the first feature with one of the second features, and perform decoupling by using the damage kernel extractor, to obtain, from that second feature, the damage feature corresponding to the corresponding second sample image; add the damage feature to the first feature of the first sample image, to obtain an intermediate first feature; and input the intermediate first feature to the first degradation decoder for decoding processing, to obtain one of the single degradation images corresponding to the first sample image.
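A minimal PyTorch sketch of this degradation network layer follows; the convolution widths and the form of the damage kernel extractor are assumptions for illustration, since the disclosure names the components but not their internals.

```python
import torch
import torch.nn as nn

class DegradationLayer(nn.Module):
    """Hedged sketch of the degradation network layer described above."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # First degradation encoder: shared by the first sample image
        # and the second sample images.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Damage kernel extractor: decouples the damage feature from the
        # concatenated (compared) clean and damaged features.
        self.kernel_extractor = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, channels, 1),
        )
        # First degradation decoder: maps features back to image space.
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, first_image: torch.Tensor, second_image: torch.Tensor) -> torch.Tensor:
        first_feature = self.encoder(first_image)    # clean features
        second_feature = self.encoder(second_image)  # damaged features
        # Compare the two feature maps to isolate the damage feature.
        damage_feature = self.kernel_extractor(
            torch.cat([first_feature, second_feature], dim=1))
        # Add the damage feature to the clean features (the intermediate
        # first feature), then decode into a single degradation image.
        return self.decoder(first_feature + damage_feature)
```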
In one embodiment, the fusion module 1203 is further configured to: obtain at least two third features corresponding to the single degradation images; and fuse the third features, to obtain the multiple degradation image corresponding to the first sample image.
A degradation network layer of the image reconstruction model further includes a second degradation encoder and a second degradation decoder.
In one embodiment, the fusion module 1203 is further configured to: obtain the at least two third features corresponding to the single degradation images by using the second degradation encoder; fuse the third features, to obtain a degradation fusion feature; and perform decoding processing on the degradation fusion feature by using the second degradation decoder, to generate the multiple degradation image corresponding to the first sample image.
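A corresponding sketch of the fusion step is given below; summation is used as the fusion operator for illustration, since the disclosure does not fix the operator, and the layer sizes are likewise assumptions.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Hedged sketch of the fusion step in the degradation network layer."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Second degradation encoder and second degradation decoder.
        self.encoder = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, single_degradation_images: list[torch.Tensor]) -> torch.Tensor:
        # Third features: one feature map per single degradation image.
        third_features = [self.encoder(img) for img in single_degradation_images]
        # Fuse into the degradation fusion feature (summation is an
        # illustrative choice), then decode into the multiple degradation image.
        fused = torch.stack(third_features, dim=0).sum(dim=0)
        return self.decoder(fused)
```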
The image reconstruction model includes a reconstruction network layer, and the reconstruction network layer includes a reconstruction encoder and a reconstruction decoder.
In one embodiment, the reconstruction module 1204 is further configured to: input the multiple degradation image to the reconstruction encoder for feature extraction, to obtain an image reconstruction feature; and perform decoding processing on the image reconstruction feature by using the reconstruction decoder, to generate the predicted reconstruction image corresponding to the multiple degradation image.
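A matching sketch of the reconstruction network layer follows; a plain convolutional encoder-decoder stands in for the unspecified architecture.

```python
import torch
import torch.nn as nn

class ReconstructionLayer(nn.Module):
    """Hedged sketch of the reconstruction network layer."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Reconstruction encoder and reconstruction decoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, multiple_degradation_image: torch.Tensor) -> torch.Tensor:
        # Extract the image reconstruction feature, then decode it into
        # the predicted reconstruction image.
        feature = self.encoder(multiple_degradation_image)
        return self.decoder(feature)
```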
The loss function value includes a first loss function value and a second loss function value, the first loss function value is used to measure a similarity between the second sample image and the single degradation image corresponding to the second sample image, and the second loss function value is used to measure authenticity of the predicted reconstruction image.
In one embodiment, the calculation module 1205 is further configured to calculate the first loss function value based on a second feature corresponding to the second sample image and a third feature corresponding to the single degradation image.
In one embodiment, the calculation module 1205 is further configured to calculate the second loss function value based on a first feature corresponding to the first sample image and a fourth feature corresponding to the predicted reconstruction image.
In one embodiment, the updating module 1206 is further configured to update a model parameter of the image reconstruction model based on a sum of the first loss function value and the second loss function value.
In one embodiment, the calculation module 1205 is further configured to calculate the first loss function value based on a second feature corresponding to an ith second sample image of the at least two second sample images and a third feature corresponding to an ith single degradation image of the at least two single degradation images, where i is a positive integer.
The loss function value further includes a third loss function value and a fourth loss function value, the third loss function value is used to measure a similarity between a non-content part in the multiple degradation image and a non-content part in the first sample image, and the fourth loss function value is used to measure a similarity between the first sample image and the predicted reconstruction image.
In one embodiment, the calculation module 1205 is further configured to calculate the third loss function value based on a structural feature corresponding to the multiple degradation image and a structural feature corresponding to the first sample image.
In one embodiment, the calculation module 1205 is further configured to calculate the fourth loss function value based on a content feature and a texture feature that correspond to the first sample image and a content feature and a texture feature that correspond to the predicted reconstruction image.
In one embodiment, the updating module 1206 is further configured to update the model parameter of the image reconstruction model based on a sum of the first loss function value, the second loss function value, the third loss function value, and the fourth loss function value.
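The four loss terms and their sum can be sketched as follows; the L1 distances used here are illustrative stand-ins, since the disclosure does not specify the similarity and authenticity measures.

```python
import torch
import torch.nn.functional as F

def total_loss(second_feats, third_feats, first_feat, fourth_feat,
               structural_multi, structural_first,
               content_first, content_pred, texture_first, texture_pred):
    """Hedged sketch of the combined loss; all arguments are feature tensors."""
    # First loss: pairwise similarity between the i-th second sample image
    # feature and the i-th single degradation image feature.
    first_loss = sum(F.l1_loss(s, t) for s, t in zip(second_feats, third_feats))
    # Second loss: authenticity of the predicted reconstruction image,
    # measured against the first sample image feature.
    second_loss = F.l1_loss(fourth_feat, first_feat)
    # Third loss: similarity of the non-content (structural) parts of the
    # multiple degradation image and the first sample image.
    third_loss = F.l1_loss(structural_multi, structural_first)
    # Fourth loss: content and texture similarity between the first sample
    # image and the predicted reconstruction image.
    fourth_loss = (F.l1_loss(content_pred, content_first)
                   + F.l1_loss(texture_pred, texture_first))
    # The model parameter is updated based on the sum of the four values.
    return first_loss + second_loss + third_loss + fourth_loss
```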
The damage type includes at least one of a blur damage type, a noise damage type, and a bias damage type.
In one embodiment, the obtaining module 1201 is further configured to obtain a first image, where the first image is an image having a plurality of damage types.
In one embodiment, the reconstruction module 1204 is further configured to: perform image reconstruction processing on the first image based on a trained reconstruction network layer, to obtain a first reconstruction image, where the first reconstruction image is an image obtained by removing the plurality of damage types from the first image; and output the first reconstruction image.
The term module (and other similar terms such as submodule, unit, subunit, etc.) in this disclosure may refer to a software module, a hardware module, or a combination thereof. A software module (e.g., computer program) may be developed using a computer programming language. A hardware module may be implemented using processing circuitry and/or memory. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module.
The mass storage device 1306 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1306 and an associated computer-readable medium provide non-volatile storage for the image computer device 1300. To be specific, the mass storage device 1306 may include the computer-readable medium (not shown), such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
Without loss of generality, the computer-readable medium may include a computer storage medium and a communication medium. The computer storage medium includes volatile and non-volatile, removable and non-removable media implemented by using any method or technology for storing information such as computer-readable instructions, a data structure, a program module, or other data. The computer storage medium includes a RAM, a ROM, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or another solid-state storage technology, a CD-ROM, a digital versatile disc (DVD) or another optical storage, a tape cartridge, a tape, a disk storage, or another magnetic storage device. Certainly, a person skilled in the art may learn that the computer storage medium is not limited to the foregoing several types. The system memory 1304 and the mass storage device 1306 may be collectively referred to as the memory.
In accordance with various embodiments of this disclosure, the image computer device 1300 may also be connected, over a network such as the Internet, to a remote computer on the network for running. To be specific, the image computer device 1300 may be connected to a network 1308 by using a network interface unit 1307 connected to the system bus 1305, or may be connected to another type of network or a remote computer system (not shown) by using the network interface unit 1307.
The memory further includes at least one computer program, the at least one computer program is stored in the memory, and the central processing unit 1301 executes the at least one program to implement all or some of operations of the training method of the image reconstruction model shown in the foregoing embodiments.
Embodiments of the present disclosure further provide a computer device. The computer device includes a processor and a memory. The memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the training method of the image reconstruction model provided in the foregoing method embodiments.
Embodiments of the present disclosure further provide a computer-readable storage medium. The storage medium stores at least one computer program, and the at least one computer program is loaded and executed by a processor to implement the training method of the image reconstruction model provided in the foregoing method embodiments.
Embodiments of the present disclosure further provide a computer program product. The computer program product includes a computer program, and the computer program is stored in a computer-readable storage medium. The computer program is read and executed by a processor of a computer device from the computer-readable storage medium, to enable the computer device to execute the computer program to implement the training method of the image reconstruction model provided in the foregoing method embodiments.
In a specific implementation of the present disclosure, when the foregoing embodiments of the present disclosure are applied to a specific product or technology, user permission or consent needs to be obtained for related data, such as historical data and user data, that relates to a user identity or a user feature, and the collection, use, and processing of the related data need to comply with the laws, regulations, and standards of related countries and regions.
Number | Date | Country | Kind
--- | --- | --- | ---
202210508810.2 | May 2022 | CN | national
This application is a continuation application of PCT Patent Application No. PCT/CN2023/082436, filed on Mar. 20, 2023, which claims priority to Chinese Patent Application No. 202210508810.2, filed on May 10, 2022 and entitled “TRAINING METHOD AND APPARATUS OF IMAGE RECONSTRUCTION MODEL, MEDIUM, AND PROGRAM PRODUCT”, both of which are incorporated herein by reference in entirety.
 | Number | Date | Country
--- | --- | --- | ---
Parent | PCT/CN2023/082436 | Mar 2023 | WO
Child | 18787991 | | US