Embodiments of the disclosure relate to the field of image processing technologies, and in particular, to a viewpoint image processing method and a related device.
Multi-viewpoint image data is image or video data that is obtained from a plurality of viewing angles by using a plurality of video or image capture devices. For example, during shooting of virtual reality video content, a plurality of cameras may be used to shoot videos at different locations to obtain multi-viewpoint image data. Then, image processing is performed on the multi-viewpoint image data by a computer to obtain a virtual viewpoint image, so as to create virtual reality video experience for a user.
Due to limitations of the pixel count of an existing image sensor and of hardware costs, an existing shooting device cannot achieve both relatively high spatial resolution (resolution of a single viewpoint image) and relatively high angular resolution (a quantity of viewpoint images). Therefore, to resolve this problem, an effective way is to obtain, through shooting, multi-viewpoint image data with relatively high spatial resolution but a relatively small quantity of viewpoint images, and to synthesize virtual viewpoint images by using a viewpoint image synthesis technology, so as to reconstruct free viewpoint image data with high spatial resolution.
In a conventional technology, methods for synthesizing a virtual viewpoint image include, for example, solutions shown in
It should be noted that, the existing viewpoint image is an image actually shot at a location in space by using a video or image capture device (for example, a camera, a video camera, or an image sensor). In particular, a viewpoint image may also be referred to as a viewpoint. In addition, the virtual viewpoint image is an image obtained through virtual synthesis-based calculation without being actually shot by a video or image capture device.
In the solution shown in
In the solution shown in
In conclusion, how to improve synthesis efficiency while ensuring synthesis quality of a virtual viewpoint image is a technical problem that urgently needs to be resolved by a person skilled in the art.
Embodiments of this application disclose a viewpoint image processing method and a related device, to improve quality of a synthesized virtual viewpoint and improve synthesis efficiency.
According to a first aspect, an embodiment of this application discloses a viewpoint image processing method. The method includes:
obtaining at least two first viewpoint images, where the at least two first viewpoint images include images respectively captured at at least two viewpoint locations;
inputting the at least two first viewpoint images and location information of a to-be-synthesized second viewpoint image into a virtual viewpoint synthesis network, where the virtual viewpoint synthesis network is a network for synthesizing the second viewpoint image based on an adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images, the location information represents a viewpoint location of the second viewpoint image, the second viewpoint image is in a target area, and the target area includes an area formed by the at least two first viewpoint images; and obtaining the second viewpoint image through calculation by using the virtual viewpoint synthesis network.
The first viewpoint image may be an existing viewpoint image, and the second viewpoint image may be a virtual viewpoint image. It should be noted that, the existing viewpoint image is an image actually shot at a location in space by using a video or image capture device (for example, a camera, a video camera, or an image sensor). In one embodiment, a viewpoint image may also be referred to as a viewpoint. In addition, the virtual viewpoint image is an image obtained through virtual synthesis-based calculation without being actually shot by a video or image capture device.
Based on the problems in the conventional technology that are described in
In one embodiment, the obtaining the second viewpoint image through calculation by using the virtual viewpoint synthesis network includes:
obtaining a geometric feature matrix between the at least two first viewpoint images, where the geometric feature matrix is a matrix used to represent information about a geometric location relationship between pixels of the at least two first viewpoint images;
generating the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and the location information;
generating at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images; and
synthesizing the second viewpoint image by using the at least two to-be-processed virtual composite pixel matrices.
In one embodiment, the obtaining a geometric feature matrix between the at least two first viewpoint images includes:
extracting a feature from each of the at least two first viewpoint images to obtain at least two feature matrices;
performing a cross-correlation operation on every two of the at least two feature matrices to obtain one or more feature matrices after the operation; and
when one feature matrix after the operation is obtained, using the feature matrix after the operation as the geometric feature matrix; or when a plurality of feature matrices after the operation are obtained, obtaining the geometric feature matrix through calculation based on the plurality of feature matrices after the operation.
In this embodiment of this application, information about a geometric location relationship between every two of the plurality of existing viewpoint images is represented as one geometric feature matrix, so that the virtual viewpoint can be synthesized by using effective information, carried in the geometric feature matrix, from all the existing viewpoint images, so as to achieve a better synthesis effect.
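For ease of understanding only, the following Python (PyTorch) sketch shows one possible way to merge per-view feature matrices into a single geometric feature matrix. The per-pixel channel-wise correlation used here is a simplified stand-in for the cross-correlation operation, and the tensor sizes are arbitrary example values rather than values required by this embodiment.

```python
import torch

def geometric_feature_matrix(feature_maps):
    """feature_maps: list of per-view feature tensors, each of shape (C, H, W).

    For every pair of views, a simplified cross-correlation (per-pixel dot
    product over the channel dimension) is computed, and the pairwise results
    are summed into a single (H, W) geometric feature matrix.
    """
    geo = None
    n = len(feature_maps)
    for i in range(n):
        for j in range(i + 1, n):
            corr = (feature_maps[i] * feature_maps[j]).sum(dim=0)  # (H, W)
            geo = corr if geo is None else geo + corr
    return geo

# Example: two 16-channel feature maps extracted from two 64x64 viewpoint images.
f1, f2 = torch.randn(16, 64, 64), torch.randn(16, 64, 64)
G = geometric_feature_matrix([f1, f2])   # shape (64, 64)
```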
In one embodiment, the obtaining a geometric feature matrix between the at least two first viewpoint images includes:
extracting a pixel from each of the at least two first viewpoint images to obtain at least two pixel matrices;
composing the at least two pixel matrices into a hybrid pixel matrix; and
inputting the hybrid pixel matrix into a first preset convolutional neural network model to obtain the geometric feature matrix.
In one embodiment, the location information of the to-be-synthesized second viewpoint image is coordinate values, and the generating the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and the location information of the to-be-synthesized second viewpoint image includes:
extending the coordinate values into a location matrix whose quantities of rows and columns are the same as those of the geometric feature matrix;
composing the location matrix and the geometric feature matrix into a hybrid information matrix;
inputting the hybrid information matrix into each of at least two second preset convolutional neural network models, where the at least two second preset convolutional neural network models have a same structure and different parameters; and
determining the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on output results of the at least two second preset convolutional neural network models.
In this embodiment of this application, pixel interpolation and neighboring pixel sampling in the conventional technology are integrated by using an adaptive convolution kernel, to implicitly resolve an occlusion problem, so as to generate a higher-quality virtual viewpoint. The adaptive convolution kernel can be automatically adjusted based on location information of a to-be-synthesized viewpoint image to synthesize a corresponding virtual viewpoint according to a requirement, thereby improving flexibility of the conventional technology.
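For illustration only, the following sketch shows how the coordinate values might be extended into a location matrix, combined with the geometric feature matrix, and fed through several convolutional heads of the same structure but different parameters to produce a per-pixel adaptive kernel. The layer sizes, the kernel size k, and the softmax normalization are assumptions made for the example, not requirements of this embodiment.

```python
import torch
import torch.nn as nn

class KernelHead(nn.Module):
    """One "second preset" CNN: maps the hybrid information matrix to a
    k x k adaptive kernel for every pixel location (a minimal stand-in)."""
    def __init__(self, in_ch, k=5):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k * k, 3, padding=1),
        )

    def forward(self, hybrid):                 # hybrid: (1, in_ch, H, W)
        kernels = self.net(hybrid)             # (1, k*k, H, W)
        return torch.softmax(kernels, dim=1)   # normalize each per-pixel kernel (assumption)

# Build the hybrid information matrix: geometric feature matrix + broadcast location.
H, W = 64, 64
geo = torch.randn(1, 1, H, W)                         # geometric feature matrix
u, v = 0.25, 0.75                                     # coordinates of the viewpoint to synthesize
loc = torch.tensor([u, v]).view(1, 2, 1, 1).expand(1, 2, H, W)
hybrid = torch.cat([geo, loc], dim=1)                 # (1, 3, H, W)

heads = [KernelHead(in_ch=3) for _ in range(2)]       # same structure, different parameters
per_view_kernels = [h(hybrid) for h in heads]         # each (1, 25, H, W): one 5x5 kernel per pixel
```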
In one embodiment, the generating at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images includes:
performing convolution on the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images and a pixel matrix with the pixel as a center in a one-to-one correspondence to obtain a virtual composite pixel corresponding to a pixel location of the pixel, where a quantity of rows of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel, and a quantity of columns of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel; and
composing the obtained virtual composite pixels into the at least two virtual composite pixel matrices.
In this embodiment of this application, the preliminary virtual composite pixel matrices representing the to-be-synthesized viewpoint image are first generated based on the plurality of existing viewpoint images and the adaptive convolution kernels, and then the final virtual viewpoint image is synthesized based on the generated preliminary virtual composite pixel matrices. In this way, quality of the synthesized viewpoint image can be improved.
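A minimal sketch of the per-pixel (adaptive) convolution step follows, assuming the kernels come from a network such as the one sketched above; using torch.nn.functional.unfold to gather each pixel's neighborhood is an implementation choice made only for this example.

```python
import torch
import torch.nn.functional as F

def adaptive_convolution(image, kernels, k=5):
    """image:   (1, C, H, W) pixel matrix of one first viewpoint image
    kernels: (1, k*k, H, W), one k x k kernel per pixel
    returns: (1, C, H, W) virtual composite pixel matrix for this view."""
    B, C, H, W = image.shape
    patches = F.unfold(image, kernel_size=k, padding=k // 2)   # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)                  # each pixel's neighborhood
    out = (patches * kernels.unsqueeze(1)).sum(dim=2)          # weight and sum the neighborhood
    return out

img = torch.randn(1, 3, 64, 64)
ker = torch.softmax(torch.randn(1, 25, 64, 64), dim=1)
composite = adaptive_convolution(img, ker)                     # (1, 3, 64, 64)
```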
According to a second aspect, an embodiment of this application provides a viewpoint image processing device. The device includes a processor, a transceiver, and a memory, where the memory is configured to store a computer program and/or data, and the processor is configured to execute the computer program stored in the memory, so that the device performs the following operations:
obtaining at least two first viewpoint images, where the at least two first viewpoint images include images respectively captured at at least two viewpoint locations;
inputting the at least two first viewpoint images and location information of a to-be-synthesized second viewpoint image into a virtual viewpoint synthesis network, where the virtual viewpoint synthesis network is a network for synthesizing the second viewpoint image based on an adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images, the location information represents a viewpoint location of the second viewpoint image, the second viewpoint image is in a target area, and the target area includes an area formed by the at least two first viewpoint images; and
obtaining the second viewpoint image through calculation by using the virtual viewpoint synthesis network.
In one embodiment, the obtaining the second viewpoint image through calculation by using the virtual viewpoint synthesis network includes:
obtaining a geometric feature matrix between the at least two first viewpoint images, where the geometric feature matrix is a matrix used to represent information about a geometric location relationship between pixels of the at least two first viewpoint images;
generating the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and the location information;
generating at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images; and
synthesizing the second viewpoint image by using the at least two to-be-processed virtual composite pixel matrices.
In one embodiment, the obtaining a geometric feature matrix between the at least two first viewpoint images includes:
extracting a feature from each of the at least two first viewpoint images to obtain at least two feature matrices;
performing a cross-correlation operation on every two of the at least two feature matrices to obtain one or more feature matrices after the operation; and
when one feature matrix after the operation is obtained, using the feature matrix after the operation as the geometric feature matrix; or when a plurality of feature matrices after the operation are obtained, obtaining the geometric feature matrix through calculation based on the plurality of feature matrices after the operation.
In one embodiment, the obtaining a geometric feature matrix between the at least two first viewpoint images includes:
extracting a pixel from each of the at least two first viewpoint images to obtain at least two pixel matrices;
composing the at least two pixel matrices into a hybrid pixel matrix; and
inputting the hybrid pixel matrix into a first preset convolutional neural network model to obtain the geometric feature matrix.
In one embodiment, the location information of the to-be-synthesized second viewpoint image is coordinate values, and the generating the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and the location information of the to-be-synthesized second viewpoint image includes:
extending the coordinate values into a location matrix whose quantities of rows and columns are the same as those of the geometric feature matrix;
composing the location matrix and the geometric feature matrix into a hybrid information matrix;
inputting the hybrid information matrix into each of at least two second preset convolutional neural network models, where the at least two second preset convolutional neural network models have a same structure and different parameters; and
determining the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on output results of the at least two second preset convolutional neural network models.
In one embodiment, the generating at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images includes:
performing convolution on the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images and a pixel matrix with the pixel as a center in a one-to-one correspondence to obtain a virtual composite pixel corresponding to a pixel location of the pixel, where a quantity of rows of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel, and a quantity of columns of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel; and
composing the obtained virtual composite pixels into the at least two virtual composite pixel matrices.
According to a third aspect, an embodiment of this application provides a viewpoint image processing device. The device includes:
a first obtaining unit, configured to obtain at least two first viewpoint images, where the at least two first viewpoint images include images respectively captured at at least two viewpoint locations;
an input unit, configured to input the at least two first viewpoint images and location information of a to-be-synthesized second viewpoint image into a virtual viewpoint synthesis network, where the virtual viewpoint synthesis network is a network for synthesizing the second viewpoint image based on an adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images, the location information represents a viewpoint location of the second viewpoint image, the second viewpoint image is in a target area, and the target area includes an area formed by the at least two first viewpoint images; and
a calculation unit, configured to obtain the second viewpoint image through calculation by using the virtual viewpoint synthesis network.
In one embodiment, the calculation unit includes:
a second obtaining unit, configured to obtain a geometric feature matrix between the at least two first viewpoint images, where the geometric feature matrix is a matrix used to represent information about a geometric location relationship between pixels of the at least two first viewpoint images;
a first generation unit, configured to generate the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and the location information;
a second generation unit, configured to generate at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images; and
a synthesis unit, configured to synthesize the second viewpoint image by using the at least two to-be-processed virtual composite pixel matrices.
In one embodiment, the second obtaining unit includes:
an extraction unit, configured to extract a feature from each of the at least two first viewpoint images to obtain at least two feature matrices;
a cross-correlation operation unit, configured to perform a cross-correlation operation on every two of the at least two feature matrices to obtain one or more feature matrices after the operation; and
a calculation unit, configured to: when one feature matrix after the operation is obtained, use the feature matrix after the operation as the geometric feature matrix; or when a plurality of feature matrices after the operation are obtained, obtain the geometric feature matrix through calculation based on the plurality of feature matrices after the operation.
In one embodiment, the second obtaining unit includes:
an extraction unit, configured to extract a pixel from each of the at least two first viewpoint images to obtain at least two pixel matrices;
a first composition unit, configured to compose the at least two pixel matrices into a hybrid pixel matrix; and
a first input unit, configured to input the hybrid pixel matrix into a first preset convolutional neural network model to obtain the geometric feature matrix.
In one embodiment, the location information of the to-be-synthesized second viewpoint image is coordinate values, and the first generation unit includes: an extension unit, configured to extend the coordinate values into a location matrix whose quantities of rows and columns are the same as those of the geometric feature matrix;
a second composition unit, configured to compose the location matrix and the geometric feature matrix into a hybrid information matrix;
a second input unit, configured to input the hybrid information matrix into each of at least two second preset convolutional neural network models, where the at least two second preset convolutional neural network models have a same structure and different parameters; and
a determining unit, configured to determine the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on output results of the at least two second preset convolutional neural network models.
In one embodiment, the second generation unit includes:
a convolution unit, configured to perform convolution on the adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images and a pixel matrix with the pixel as a center in a one-to-one correspondence to obtain a virtual composite pixel corresponding to a pixel location of the pixel, where a quantity of rows of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel, and a quantity of columns of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel; and
a third composition unit, configured to compose the obtained virtual composite pixels into the at least two virtual composite pixel matrices.
According to a fourth aspect, an embodiment of this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program is executed by a processor to implement the method according to any one of the embodiments of the first aspect.
According to a fifth aspect, an embodiment of this application provides a chip. The chip includes a central processing unit, a neural network processor, and a memory, and the chip is configured to perform the method according to any one of the embodiments of the first aspect.
According to a sixth aspect, an embodiment of the disclosure provides a computer program. When the computer program is run on a computer, the computer is enabled to perform the methods described in the first aspect.
According to a seventh aspect, an embodiment of this application further provides a method for training a virtual viewpoint synthesis network. The viewpoint image processing method according to any one of the embodiments of the first aspect can be implemented by using the virtual viewpoint synthesis network. The training method includes:
obtaining a plurality of existing viewpoint images and location information of a to-be-synthesized virtual viewpoint image;
obtaining a pixel matrix of each of the plurality of existing viewpoint images;
obtaining a geometric feature matrix through calculation based on the obtained pixel matrices;
generating, based on the geometric feature matrix and the location information of the to-be-synthesized virtual viewpoint, a plurality of to-be-processed virtual composite pixel matrices whose quantity is the same as that of the plurality of existing viewpoint images;
synthesizing the virtual viewpoint image based on the plurality of to-be-processed virtual composite pixel matrices;
calculating, by using a loss function, a loss value between the synthesized virtual viewpoint image and an image that is actually shot at a location of the to-be-synthesized virtual viewpoint image, and adaptively adjusting parameters of convolutional neural network models in a representation network, a generation network, and a correction network based on the loss value; and
continually repeating the foregoing operations until a loss value between a virtual viewpoint image finally output by the entire virtual viewpoint synthesis network and an actual image at a location corresponding to the virtual viewpoint image is less than a threshold.
To sum up, based on the problems in the conventional technology that are described in
The following briefly describes the accompanying drawings used in the embodiments of this application.
The following describes the embodiments of the disclosure with reference to the accompanying drawings in the embodiments of the disclosure.
A viewpoint image processing method provided in the embodiments of this application can be applied to scenarios such as virtual viewpoint synthesis of light field video content, viewpoint image synthesis of video content shot by a plurality of image or video capture devices, and video frame interpolation.
In an application scenario, after an image or video capture device array shoots light field videos, a server obtains the videos, encodes the videos, and transmits encoded videos to a terminal such as a VR display helmet; and the VR display helmet synthesizes, based on a location that is of a to-be-synthesized viewpoint image and that is obtained by a location sensor in the helmet, the corresponding viewpoint image according to the viewpoint image processing method provided in the embodiments of this application, and then displays the synthesized viewpoint image to a user for viewing. For details thereof, refer to
Alternatively, after an image or video capture device array shoots light field videos, a server obtains the videos and also obtains location information that is of a to-be-synthesized viewpoint image and that is sent by a terminal, synthesizes the corresponding viewpoint according to the viewpoint image processing method provided in the embodiments of this application, encodes the synthesized viewpoint image, and transmits an encoded viewpoint image to the terminal. Then, the terminal decodes the encoded viewpoint image to display the synthesized viewpoint image to a user for viewing. For details thereof, refer to
In the foregoing application scenario, the virtual viewpoint image is synthesized according to this solution, so as to improve synthesis efficiency and quality of the synthesized viewpoint image.
In another application scenario, the viewpoint image processing method provided in the embodiments of this application is applied to video frame interpolation. By using the viewpoint image processing method provided in the embodiments of this application, a video with a high frame rate can be synthesized by using a video with a low frame rate. In one embodiment, a video with a low frame rate is obtained, two video frames (that is, frame images) on which video interpolation needs to be performed are specified, and then a plurality of virtual frames between the two video frames are synthesized by using the viewpoint image processing method provided in the embodiments of this application, so as to obtain a video output with a high frame rate. For details thereof, refer to
In the foregoing application scenario, compared with depth information-based interpolation, the viewpoint image processing method provided in the embodiments of this application can obtain a more natural object edge and less noise. In addition, the generated adaptive convolution kernel (which is described in detail below) in the embodiments of this application can automatically adapt based on location information of a virtual video frame, so that any frame between two video frames can be synthesized. This resolves an existing problem that only a virtual frame at a middle location between two video frames can be synthesized based on adaptive convolution, and improves video interpolation efficiency.
The following describes, from a model training side and a model application side, the method provided in this application.
A method for training a virtual viewpoint synthesis network provided in the embodiments of this application relates to computer vision processing, may be applied to data processing methods such as data training, machine learning, and deep learning, and is used for performing symbolic and formal intelligent information modeling, extraction, preprocessing, training, and the like on training data (for example, an image pixel matrix in this application), to finally obtain a trained virtual viewpoint synthesis network. In addition, the trained virtual viewpoint synthesis network may be used in the viewpoint image processing method provided in the embodiments of this application. To-be-input data (for example, pixel matrices of at least two existing viewpoint images and location information of a to-be-synthesized virtual viewpoint image in this application) is input into the trained virtual viewpoint synthesis network to obtain output data (for example, a virtual viewpoint image in this application). It should be noted that the method for training the virtual viewpoint synthesis network and the viewpoint image processing method that are provided in the embodiments of this application are embodiments generated based on a same idea, and may also be understood as two parts of a system, or two stages of an entire process, for example, a model training stage and a model application stage.
It should be noted that, the existing viewpoint image is an image actually shot at a location in space by using a video or image capture device (for example, a camera, a video camera, or an image sensor). In addition, the virtual viewpoint image is an image obtained through virtual synthesis-based calculation without being actually shot by a video or image capture device.
Because the embodiments of this application relate to applications of a large quantity of neural networks, for ease of understanding, the following first describes related terms and concepts such as “neural network” in the embodiments of this application.
(1) Neural Network
The neural network may include a neuron. The neuron may be an operation unit that uses $x_s$ and an intercept of 1 as input. Output of the operation unit may be as follows:

$$h_{W,b}(x)=f\left(W^{T}x\right)=f\left(\sum_{s=1}^{n}W_{s}x_{s}+b\right)$$

Herein, s=1, 2, . . . , n, where n is a natural number greater than 1, $W_s$ is a weight of $x_s$, and b is a bias of the neuron. f represents an activation function of the neuron, where the activation function is used to introduce a non-linear characteristic into the neural network, to convert an input signal in the neuron into an output signal. The output signal of the activation function may be used as an input of a next convolutional layer. The activation function may be a sigmoid function. The neural network is a network formed by joining many single neurons together. In one embodiment, an output of a neuron may be an input of another neuron. An input of each neuron may be connected to a local receptive field of a previous layer to extract a feature of the local receptive field. The local receptive field may be a region including several neurons.
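For ease of understanding, a minimal numeric illustration of the foregoing neuron formula follows, with a sigmoid chosen as the example activation function; the input and weight values are arbitrary.

```python
import numpy as np

def neuron(x, w, b):
    """Single neuron: f(sum_s W_s * x_s + b) with a sigmoid activation f."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid activation

x = np.array([0.5, -1.0, 2.0])               # inputs x_s
w = np.array([0.1,  0.4, 0.3])               # weights W_s
print(neuron(x, w, b=0.2))                   # a single output signal
```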
(2) Deep Neural Network
The deep neural network (DNN) is also referred to as a multi-layer neural network, and may be understood as a neural network having many hidden layers. There is no special measurement criterion for the "many" herein. Based on locations of different layers in the DNN, layers in the DNN may be divided into three types: an input layer, a hidden layer, and an output layer. Generally, the first layer is the input layer, the last layer is the output layer, and the middle layers are hidden layers. Layers are fully connected. In one embodiment, any neuron at an ith layer is necessarily connected to any neuron at an (i+1)th layer. The DNN appears to be quite complex, but an operation of each layer is not complex actually. To put it simply, each layer of the DNN is represented by the following linear relation expression: $\vec{y}=\alpha(W\vec{x}+\vec{b})$, where $\vec{x}$ is an input vector, $\vec{y}$ is an output vector, $\vec{b}$ is an offset vector, W is a weight matrix (also referred to as a coefficient), and $\alpha(\cdot)$ is an activation function. At each layer, the output vector $\vec{y}$ is obtained by performing such a simple operation on the input vector $\vec{x}$. Because the DNN includes a large quantity of layers, there are a large quantity of coefficients W and a large quantity of offset vectors $\vec{b}$. Definitions of these parameters in the DNN are as follows: The coefficient W is used as an example. It is assumed that in a three-layer DNN, a linear coefficient from the fourth neuron at the second layer to the second neuron at the third layer is defined as $W_{24}^{3}$. The superscript 3 represents the layer at which the coefficient W is located, and the subscript corresponds to an output third-layer index 2 and an input second-layer index 4. In conclusion, a coefficient from the kth neuron at the (L−1)th layer to the jth neuron at the Lth layer is defined as $W_{jk}^{L}$. It should be noted that there is no parameter W at the input layer. In the deep neural network, more hidden layers make the network more capable of describing a complex case in the real world. Theoretically, a model with more parameters has higher complexity and a larger "capacity", which indicates that the model can complete a more complex learning task. Training the deep neural network is a process of learning a weight matrix, and a final objective of the training is to obtain a weight matrix of all layers of a trained deep neural network (a weight matrix formed by the vectors W at many layers).
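A minimal sketch of the layer-by-layer relation $\vec{y}=\alpha(W\vec{x}+\vec{b})$ follows, with ReLU chosen as the example activation function and arbitrary layer sizes.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def dnn_forward(x, weights, biases):
    """Apply y = alpha(W x + b) layer by layer (alpha = ReLU here)."""
    y = x
    for W, b in zip(weights, biases):
        y = relu(W @ y + b)
    return y

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]   # one weight matrix per layer
bs = [np.zeros(8), np.zeros(3)]                                   # one offset vector per layer
print(dnn_forward(rng.standard_normal(4), Ws, bs))                # 3-dimensional output vector
```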
(3) Convolutional Neural Network
The convolutional neural network (CNN) is a deep neural network with a convolutional structure. The convolutional neural network includes a feature extractor consisting of a convolutional layer and a subsampling layer. The feature extractor may be considered as a filter. A convolution process may be considered as performing convolution by using a trainable filter and an input image or a convolution feature map. The convolutional layer is a neuron layer that performs convolution processing on an input signal in the convolutional neural network. At the convolutional layer of the convolutional neural network, one neuron may be connected to only a part of neurons at a neighboring layer. A convolutional layer generally includes several feature maps, and each feature map may include some neurons arranged in a rectangle. Neurons of a same feature map share a weight, and the shared weight herein is a convolution kernel. Sharing the weight may be understood as that a manner of extracting image information is unrelated to a location. The principle implied herein is that statistical information of one part of an image is the same as that of another part. In one embodiment, image information that is learned in one part can also be used in another part. Therefore, the same learned image information can be used for all locations in the image. At a same convolutional layer, a plurality of convolution kernels may be used to extract different image information. Usually, a larger quantity of convolution kernels indicates richer image information reflected by a convolution operation.
The convolution kernel may be initialized in a form of a matrix of a random size. In a training process of the convolutional neural network, an appropriate weight may be obtained for the convolution kernel through learning. In addition, sharing the weight is advantageous because connections between layers of the convolutional neural network are reduced, and a possibility of overfitting is reduced.
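The following small example illustrates weight sharing: one 3x3 kernel (initialized randomly, as described above, and adjusted during training) is applied at every location of an input image; the sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

image = torch.randn(1, 1, 28, 28)                      # one grayscale input image
kernel = torch.randn(1, 1, 3, 3, requires_grad=True)   # shared 3x3 convolution kernel,
                                                       # randomly initialized, learned in training
feature_map = F.conv2d(image, kernel, padding=1)       # same kernel applied at every location
print(feature_map.shape)                               # torch.Size([1, 1, 28, 28])
```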
(4) Recurrent Neural Network
A recurrent neural network (RNN) is used to process sequence data. In a conventional neural network model, from an input layer to a hidden layer and then to an output layer, the layers are fully connected, but nodes at each layer are not connected. Such a common neural network resolves many problems, but is still incapable of resolving many other problems. For example, to predict a next word in a sentence, a previous word usually needs to be used, because adjacent words in the sentence are not independent. A reason why the RNN is referred to as a recurrent neural network is that current output of a sequence is related to previous output. A particular representation form is that the network memorizes previous information and applies the previous information to calculation of the current output. In one embodiment, nodes in the hidden layer are no longer unconnected, but are connected, and input for the hidden layer includes not only output of the input layer but also output of the hidden layer at a previous moment. Theoretically, the RNN can process sequence data of any length. Training of the RNN is the same as training of a conventional CNN or DNN. An error back propagation algorithm is used, but there is a difference: If the RNN is expanded, a parameter such as W of the RNN is shared, and this is different from that in the conventional neural network described above by using an example. In addition, during use of a gradient descent algorithm, an output in each operation depends not only on a network in the current operation, but also on a network status in several previous operations. The learning algorithm is referred to as a backpropagation through time (BPTT) algorithm.
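For ease of understanding, a minimal recurrent step follows, showing how the hidden state carries information from the previous moment into the current output; the tanh non-linearity and the sizes are example choices.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new hidden state depends on the current input
    and on the hidden state from the previous moment."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(1)
W_xh, W_hh, b_h = rng.standard_normal((5, 3)), rng.standard_normal((5, 5)), np.zeros(5)
h = np.zeros(5)
for x_t in rng.standard_normal((4, 3)):          # a length-4 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)        # W_xh and W_hh are shared across all steps
print(h)
```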
A reason why the recurrent neural network is required when there is already the convolutional neural network is simple. In the convolutional neural network, there is a premise that elements are independent of each other, and input and output are also independent, such as a cat and a dog. However, many elements are interconnected in the real world. For example, stocks change over time. For another example, a person says: I like traveling, a most favorite place is Yunnan, and I will go there in the future if there is a chance. If there is a blank to be filled herein, people should know that “Yunnan” is to be filled in. This is because people can make an inference from a context, but how can a machine do this? The RNN emerges. The RNN is designed to enable a machine to have a capability to remember like human beings. Therefore, output of the RNN depends on current input information and historical memory information.
(5) Loss Function
In a process of training the deep neural network, because it is expected that an output of the deep neural network is as close as possible to a value that is actually expected to be predicted, a predicted value of a current network and a target value that is actually expected may be compared, and then a weight vector of each layer of the neural network is updated based on a difference between the predicted value and the target value (certainly, there is usually an initialization process before the first update, and in one embodiment, parameters are preconfigured for all layers of the deep neural network). For example, if the predicted value of the network is large, the weight vector is adjusted to decrease the predicted value, and adjustment is continuously performed until the deep neural network can predict the target value that is actually expected or a value that is very close to the target value. Therefore, "how to obtain, through comparison, the difference between the predicted value and the target value" needs to be predefined. This is the loss function or an objective function. The loss function and the objective function are important equations used to measure the difference between the predicted value and the target value. The loss function is used as an example. A higher output value (loss) of the loss function indicates a larger difference. Therefore, training of the deep neural network is a process of minimizing the loss as much as possible.
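A minimal example of a loss function follows, using the mean squared error as the example; a smaller output value indicates that the predicted value is closer to the target value.

```python
import numpy as np

def mse_loss(predicted, target):
    """A simple loss function: mean squared difference between prediction and target."""
    return np.mean((predicted - target) ** 2)

target = np.array([1.0, 0.0, 2.0])
print(mse_loss(np.array([0.9, 0.2, 1.8]), target))   # small loss: prediction close to target
print(mse_loss(np.array([3.0, -2.0, 5.0]), target))  # large loss: prediction far from target
```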
(6) Back Propagation Algorithm
The convolutional neural network may correct a value of a parameter in an initial super-resolution model in a training process according to an error back propagation (BP) algorithm, so that an error loss of reconstructing the super-resolution model becomes smaller. In one embodiment, an input signal is transferred forward until an error loss occurs at an output, and the parameter in the initial super-resolution model is updated based on back propagation error loss information, so that the error loss converges. The back propagation algorithm is an error-loss-centered back propagation process intended to obtain a parameter, such as a weight matrix, of an optimal super-resolution model.
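For illustration only, the following sketch uses automatic differentiation to show a single parameter being corrected by back propagation and gradient descent; the learning rate and the toy objective are arbitrary.

```python
import torch

w = torch.tensor([0.0], requires_grad=True)        # a single trainable parameter
x, target = torch.tensor([2.0]), torch.tensor([8.0])

for _ in range(50):
    loss = ((w * x - target) ** 2).mean()           # forward pass: compute the error loss
    loss.backward()                                 # back propagation: gradient of the loss w.r.t. w
    with torch.no_grad():
        w -= 0.05 * w.grad                          # gradient descent update of the parameter
        w.grad.zero_()
print(w.item())                                     # approaches 4.0, since 4 * 2 = 8
```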
(7) Pixel Value
A pixel value of an image may be a red green blue (RGB) color value, and the pixel value may be a long integer representing a color. For example, the pixel value is 256*Red+100*Green+76*Blue, where Blue represents a blue component, Green represents a green component, and Red represents a red component. In each color component, a smaller value indicates lower brightness, and a larger value indicates higher brightness. For a grayscale image, a pixel value may be a grayscale value.
The following describes a system architecture provided in the embodiments of this application.
Refer to
Then, the training data is stored in a database 730. A training device 720 obtains a target model/rule 701 (which is explained as follows: 701 herein is a model obtained through training at the training stage described above, and may be a network used for virtual viewpoint synthesis, or the like) through training based on the training data maintained in the database 730. With reference to Embodiment 1, the following describes in more detail how the training device 720 obtains the target model/rule 701 based on the training data. The target model/rule 701 can be used to implement the viewpoint image processing method provided in the embodiments of this application. In one embodiment, related preprocessing is performed on at least two existing viewpoint images and location information of a to-be-synthesized virtual viewpoint image, and preprocessed viewpoint images and location information are input into the target model/rule 701 to obtain a virtual viewpoint image. The target model/rule 701 in this embodiment of this application may be a virtual viewpoint synthesis network. It should be noted that, in actual application, the training data maintained in the database 730 may not all be captured by the data collection device 760, or may be received and obtained from another device. It should be further noted that the training device 720 may not necessarily train the target model/rule 701 completely based on the training data maintained in the database 730, or may obtain training data from a cloud or another place to perform model training. The foregoing description should not be construed as a limitation on the embodiments of this application.
It should be noted that, the existing viewpoint image is an image actually shot at a location in space by using a video or image capture device (for example, a camera, a video camera, or an image sensor). In one embodiment, a viewpoint image may also be referred to as a viewpoint.
In addition, the virtual viewpoint image is an image obtained through virtual synthesis-based calculation without being actually shot by a video or image capture device.
The target model/rule 701 obtained by the training device 720 through training may be applied to different systems or devices, for example, applied to an execution device 710 shown in
A preprocessing module 713 is configured to perform preprocessing based on the input data (for example, the plurality of existing viewpoint images) received by the I/O interface 712. In this embodiment of this application, the preprocessing module 713 may be configured to obtain pixel matrices of the plurality of existing viewpoint images, or the like.
In a process in which the execution device 710 performs preprocessing on the input data or the calculation module 711 of the execution device 710 performs related processing such as calculation, the execution device 710 may invoke data, a computer program, and the like in a data storage system 750 for corresponding processing, and may also store data, instructions, and the like obtained through corresponding processing into the data storage system 750.
Finally, the I/O interface 712 returns a processing result, for example, the virtual viewpoint image obtained in the foregoing to the client device 740, to provide the virtual viewpoint image to the user.
It should be noted that the training device 720 may generate corresponding target models/rules 701 for different objectives or different tasks based on different training data. The corresponding target models/rules 701 may be used to implement the foregoing objectives or complete the foregoing tasks, to provide a desired result for the user.
In a case shown in
It should be noted that
As shown in
As described in the foregoing basic concepts, the convolutional neural network is a deep neural network with a convolutional structure, and is a deep learning architecture. In the deep learning architecture, multi-layer learning is performed at different abstract levels according to a machine learning algorithm. As a deep learning architecture, the CNN is a feed-forward artificial neural network, and each neuron in the feed-forward artificial neural network can respond to an image input into the feed-forward artificial neural network.
As shown in
Convolutional layer/Pooling layer 820:
Convolutional Layer:
As shown in
The following describes internal working principles of the convolutional layer by using the convolutional layer 821 as an example.
The convolutional layer 821 may include a plurality of convolution operators. The convolution operator is also referred to as a kernel. During image processing, the convolution operator functions as a filter that extracts particular information from an input image matrix. The convolution operator may essentially be a weight matrix, and the weight matrix is usually predefined. In a process of performing a convolution operation on an image, the weight matrix usually processes pixels at a granularity level of one pixel (or two pixels, depending on a value of a stride) in a horizontal direction on an input image, to extract a particular feature from the image. A size of the weight matrix should be related to a size of the image. It should be noted that a depth dimension of the weight matrix is the same as a depth dimension of the input image. During a convolution operation, the weight matrix extends to the entire depth of the input image. Therefore, a convolutional output of a single depth dimension is generated through convolution with a single weight matrix. However, in most cases, a single weight matrix is not used, but a plurality of weight matrices with a same size (row quantity×column quantity×channel quantity), namely, a plurality of same-type matrices, are applied. Outputs of the weight matrices are superimposed to form the depth dimension of a convolutional image. The dimension herein may be understood as being determined based on the foregoing "plurality". Different weight matrices may be used to extract different features from the image. For example, one weight matrix is used to extract edge information of the image, another weight matrix is used to extract a particular color of the image, and a further weight matrix is used to blur unneeded noise in the image. Sizes of the plurality of weight matrices (row quantity×column quantity×channel quantity) are the same, so that sizes of feature maps extracted by using the plurality of weight matrices with the same size are also the same. Then, the plurality of extracted feature maps with the same size are combined to form an output of the convolution operation.
Weight values in these weight matrices need to be obtained through a lot of training during actual application. Each weight matrix formed by using the weight values obtained through training may be used to extract information from an input image, to enable the convolutional neural network 800 to perform correct prediction.
It should be noted that, a size of a three-dimensional matrix in this embodiment of this application is usually represented by (row quantity×column quantity×channel quantity). Meanings of rows and columns herein are the same as those of rows and columns in a two-dimensional matrix, and the channel quantity herein is a quantity of two-dimensional matrices each including rows and columns. For example, if a size of a three-dimensional matrix is 3*4*5, it indicates that the matrix has three rows, four columns, and five channels. In this case, the five channels herein indicate that the three-dimensional matrix includes five two-dimensional matrices each with three rows and four columns.
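The following small example shows these sizes in code: the kernel depth equals the input channel quantity, and the quantity of kernels determines the output channel quantity; the particular numbers are arbitrary.

```python
import torch
import torch.nn as nn

# Input: 3 channels (for example, R, G, B), 32 rows, 32 columns.
x = torch.randn(1, 3, 32, 32)

# 8 different 5x5 kernels; each kernel extends through all 3 input channels,
# so the output has 8 channels (one feature map per kernel).
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, padding=2)
print(conv(x).shape)        # torch.Size([1, 8, 32, 32])
print(conv.weight.shape)    # torch.Size([8, 3, 5, 5]): kernel depth equals input depth
```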
When the convolutional neural network 800 has a plurality of convolutional layers, an initial convolutional layer (for example, the layer 821) usually extracts more general features, where the general features may also be referred to as low-level features. As a depth of the convolutional neural network 800 increases, a deeper convolutional layer (for example, the layer 826) extracts more complex features, such as high-level semantic features. Higher-level semantic features are more applicable to a problem to be resolved.
Pooling Layer:
Because a quantity of training parameters usually needs to be reduced, a pooling layer usually needs to be periodically introduced after a convolutional layer. In one embodiment, for the layers 821 to 826 in the layer 820 shown in
Neural Network Layer 830:
After processing is performed by the convolutional layer/pooling layer 820, the convolutional neural network 800 still cannot output required output information. As described above, at the convolutional layer/pooling layer 820, only a feature is extracted, and parameters resulting from an input image are reduced. However, to generate final output information (required class information or other related information), the convolutional neural network 800 needs to use the neural network layer 830 to generate an output of one required class or outputs of a group of required classes. Therefore, the neural network layer 830 may include a plurality of hidden layers (831, 832, . . . , and 83n shown in
At the neural network layer 830, the plurality of hidden layers are followed by the output layer 840, namely, the last layer of the entire convolutional neural network 800. The output layer 840 has a loss function similar to a categorical cross entropy, and the loss function is configured to calculate a prediction error. Once forward propagation (for example, propagation in a direction from 810 to 840 in
It should be noted that the convolutional neural network 800 shown in
The following describes a hardware structure of a chip provided in an embodiment of this application.
The neural network processor 90 may be any processor suitable for large-scale exclusive OR operation processing, for example, a neural-network processing unit (NPU), a tensor processing unit (TPU), or a graphics processing unit (GPU). The NPU is used as an example. The NPU may be mounted, as a coprocessor, onto a host CPU, and the host CPU allocates a task to the NPU. A core part of the NPU is an operation circuit 903. The operation circuit 903 is controlled by a controller 904 to extract matrix data from memories (901 and 902) and perform multiplication and addition.
In some embodiments, the operation circuit 903 includes a plurality of processing units (process engine, PE) inside. In some embodiments, the operation circuit 903 is a two-dimensional systolic array. The operation circuit 903 may be a one-dimensional systolic array or another electronic circuit that can perform mathematical operations such as multiplication and addition. In some embodiments, the operation circuit 903 is a general-purpose matrix processor.
For example, it is assumed that there are an input matrix A, a weight matrix B, and an output matrix C. The operation circuit 903 obtains weight data of the matrix B from the weight memory 902, and buffers the weight data on each PE in the operation circuit 903. The operation circuit 903 obtains input data of the matrix A from the input memory 901, performs a matrix operation based on the input data of the matrix A and the weight data of the matrix B, to obtain a partial result or a final result of the matrix, and stores the partial result or the final result into an accumulator 908.
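In plain code, the operation performed by the operation circuit amounts to the following accumulation of partial products (a NumPy sketch; the tiling across PEs and the buffering details are omitted):

```python
import numpy as np

A = np.random.rand(4, 3)          # input matrix A (from the input memory)
B = np.random.rand(3, 5)          # weight matrix B (from the weight memory)

C = np.zeros((4, 5))              # accumulator for partial results
for k in range(A.shape[1]):       # accumulate partial products, as the accumulator 908 would
    C += np.outer(A[:, k], B[k, :])

assert np.allclose(C, A @ B)      # same result as a direct matrix multiplication
```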
A unified memory 906 is configured to store input data and output data. The weight data is directly transferred to the weight memory 902 by using a direct memory access controller (DMAC) 905. The input data is also transferred to the unified memory 906 by using the DMAC.
A bus interface unit (BIU) 910 is used for interaction between the DMAC and an instruction fetch buffer 909. The bus interface unit 910 is further used by the instruction fetch buffer 909 to obtain an instruction from an external memory, and is further used by the direct memory access controller 905 to obtain original data of the input matrix A or the weight matrix B from the external memory.
The DMAC is mainly configured to transfer input data in an external memory DDR to the unified memory 906, or transfer the weight data to the weight memory 902, or transfer the input data to the input memory 901.
A vector calculation unit 907 includes a plurality of operation processing units, and if required, performs further processing such as vector multiplication, vector addition, an exponential operation, a logarithmic operation, or value comparison on an output of the operation circuit 903. The vector calculation unit 907 is mainly configured for calculation at a non-convolutional layer or a fully connected layer (FC) of the neural network, and may perform calculation in pooling, normalization, and the like. For example, the vector calculation unit 907 may apply a non-linear function to the output of the operation circuit 903, for example, to a vector of an accumulated value, so as to generate an activation value. In some embodiments, the vector calculation unit 907 generates a normalized value, a combined value, or both.
In some embodiments, the vector calculation unit 907 stores a processed vector into the unified memory 906. In some embodiments, a vector processed by the vector calculation unit 907 can be used as an activation input to the operation circuit 903, for example, for use at a subsequent layer in the neural network. As shown in
The instruction fetch buffer 909 connected to the controller 904 is configured to store instructions used by the controller 904.
The unified memory 906, the input memory 901, the weight memory 902, and the instruction fetch buffer 909 are all on-chip memories. The external memory is independent of the hardware architecture of the NPU.
Operations at various layers in the convolutional neural network shown in
The following describes the method for training the virtual viewpoint synthesis network. The method includes, but is not limited to, the following operations.
Operation 1001: Obtain a plurality of existing viewpoint images and location information of a to-be-synthesized virtual viewpoint image.
In one embodiment, the plurality of existing viewpoint images are images used as training features. For a description of the images used as training features, refer to the description of the training data in the descriptions corresponding to
Operation 1002: Obtain a pixel matrix of each of the plurality of existing viewpoint images.
In one embodiment, the pixel matrix may be a three-dimensional pixel matrix including pixel information of three channels R, G, and B. Alternatively, if an image is a grayscale image, an obtained pixel matrix may be a two-dimensional pixel matrix.
Operation 1003: Obtain a geometric feature matrix through calculation based on the plurality of pixel matrices.
In one embodiment, the plurality of obtained pixel matrices are input into the representation network, and a geometric feature matrix representing information about a geometric location relationship between the plurality of existing viewpoint images is output. The representation network includes a convolutional neural network model for feature extraction and a feature calculation model. First, a feature matrix corresponding to each existing viewpoint is extracted based on the input pixel matrices by using the convolutional neural network model for feature extraction. Each extracted feature matrix may include one or more of the following feature information of a corresponding viewpoint image: a color feature, a texture feature, a shape feature, and a spatial relationship feature. Then, a cross-correlation operation is performed on every two of the plurality of feature matrices by using the calculation model, to obtain a plurality of matrices after the cross-correlation operation, and then the plurality of matrices after the cross-correlation operation are added up to obtain the geometric feature matrix.
Alternatively, the obtaining a geometric feature matrix through calculation based on the plurality of pixel matrices may be implemented in the following manner.
The plurality of obtained pixel matrices are composed into a three-dimensional hybrid pixel matrix according to a preset order (a particular composition manner is described in detail below and is not described in detail temporarily herein). Then, the hybrid pixel matrix is input into the representation network to output the geometric feature matrix. The representation network may include a convolutional neural network model used to calculate and output the geometric feature matrix based on the hybrid pixel matrix.
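For illustration only, the following sketch composes two pixel matrices into a hybrid pixel matrix and passes it through a small placeholder convolutional neural network standing in for the representation network; the layer sizes and the single-channel output are assumptions made for the example.

```python
import torch
import torch.nn as nn

# Two RGB viewpoint images as pixel matrices, stacked in a preset order along
# the channel dimension to form the hybrid pixel matrix.
view_a = torch.randn(1, 3, 64, 64)
view_b = torch.randn(1, 3, 64, 64)
hybrid_pixels = torch.cat([view_a, view_b], dim=1)        # (1, 6, 64, 64)

# Placeholder CNN mapping the hybrid pixel matrix to a geometric feature matrix.
representation_net = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
geometric_feature = representation_net(hybrid_pixels)     # (1, 1, 64, 64)
```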
Operation 1004: Generate, based on the geometric feature matrix and the location information of the to-be-synthesized virtual viewpoint, a plurality of to-be-processed virtual composite pixel matrices whose quantity is the same as that of the plurality of existing viewpoint images.
In one embodiment, the location information of the to-be-synthesized virtual viewpoint image (for example, coordinate values of two-dimensional coordinates or three-dimensional coordinates) is extended into a matrix with same rows and columns as the geometric feature matrix, the matrix is connected before or after the geometric feature matrix to form a hybrid information matrix, and then the hybrid information matrix is input into the generation network to obtain the plurality of to-be-processed virtual composite pixel matrices. The generation network includes a plurality of adaptive convolution kernel generation models (the models may be convolutional neural network models) whose quantity is the same as that of the plurality of existing viewpoint images and a convolution calculation model. The hybrid information matrix is input into each of the plurality of adaptive convolution kernel generation models to generate an adaptive convolution kernel corresponding to each pixel in the pixel matrices of the plurality of existing viewpoint images. Then, a convolution calculation module performs convolution on the adaptive convolution kernel corresponding to each pixel and a pixel block (the pixel block is a pixel matrix and has a same size as a convolution kernel corresponding to the pixel block) with the pixel corresponding to the adaptive convolution kernel as a center, to obtain a virtual composite pixel corresponding to a pixel location of the pixel. Then, all virtual composite pixels corresponding to each image in the plurality of existing viewpoint images are composed into one virtual composite pixel matrix according to a pixel order of the image, to obtain the plurality of to-be-processed virtual composite pixel matrices. Each virtual composite pixel matrix represents one to-be-processed virtual viewpoint image.
Operation 1005: Synthesize the virtual viewpoint image based on the plurality of to-be-processed virtual composite pixel matrices.
In one embodiment, the plurality of to-be-processed virtual composite pixel matrices are composed into a hybrid virtual composite pixel matrix, and then the hybrid virtual composite pixel matrix is input into the correction network to obtain the virtual viewpoint image. The correction network includes a correction convolutional neural network, and the neural network is used to synthesize the final virtual viewpoint image based on the input hybrid virtual composite pixel matrix and output the final virtual viewpoint image.
Operation 1006: Calculate, by using a loss function, a loss value between the synthesized virtual viewpoint image and an image that is actually shot at a location of the to-be-synthesized virtual viewpoint image, and adaptively adjust parameters of convolutional neural network models in the representation network, the generation network, and the correction network based on the loss value.
Operation 1007: Continually repeat operations of operation 1001 to operation 1006 until a loss value between a virtual viewpoint image finally output by the entire virtual viewpoint synthesis network and an actual image at a location corresponding to the virtual viewpoint image is less than a threshold, where that the loss value is less than the threshold indicates that the virtual viewpoint synthesis network has been successfully trained.
In some embodiments, in an entire training process, operation 1002, operation 1003, and the obtaining a plurality of existing viewpoint images in operation 1001 may be performed only once. Then, different location information is input to obtain virtual viewpoint images at the different locations. The location information is information about locations in an area defined by a polygon formed by the plurality of existing viewpoint images, or information about locations on a straight line between two existing viewpoint images. In addition, the virtual viewpoint images generated at these locations are in the area formed by the plurality of existing viewpoint images; in other words, content represented by the synthesized virtual viewpoint images is not beyond the range of content represented by the plurality of existing viewpoint images. The obtained virtual viewpoint images are compared with actual images at the corresponding locations, and parameters of the convolutional neural networks in the entire virtual viewpoint synthesis network are gradually corrected until the loss value between the finally output virtual viewpoint image and the actual image at the corresponding location is less than the threshold.
The method 1000 may be performed by the training device 720 shown in
In one embodiment, the method 1000 may be performed by a CPU, may be jointly performed by a CPU and a GPU, or may be jointly performed by a CPU and another processor suitable for neural network calculation. Selection of a processor is determined based on an actual case. This is not limited in this application.
Operation 1201: Obtain a preset quantity of first viewpoint images.
In one embodiment, the first viewpoint image is an existing viewpoint image.
In one embodiment, if synthesis of a virtual viewpoint image is completed on a server side (which includes a cloud server, and may be the server or the cloud side in the description of
In one embodiment, the preset quantity may be any integer greater than or equal to 2; in other words, at least two existing viewpoint images are obtained.
In one embodiment, the preset quantity of existing viewpoint images are images captured by the preset quantity of video or image capture devices at a same moment.
In one embodiment, the preset quantity of spatial locations may be referred to as existing viewpoint locations, or in other words, locations of the capture devices in space during shooting of the existing viewpoint images are referred to as existing viewpoint locations. Each of the preset quantity of video or image capture devices is at one existing viewpoint location.
In one embodiment, the preset quantity of existing viewpoint locations may be on a same plane.
In one embodiment, the preset quantity of existing viewpoint locations may be arranged in a matrix form on a same plane. For example, if there are four existing viewpoint locations, the four existing viewpoint locations may be arranged in a form of a rectangle, and each existing viewpoint location is used as a vertex of the rectangle.
In one embodiment, the preset quantity of obtained first viewpoint images may be determined and obtained based on location information that is of a to-be-synthesized virtual viewpoint image and that is sent by a terminal. In one embodiment, the terminal collects the location information of the to-be-synthesized virtual viewpoint image, and sends the location information to a server; and the server learns, through analysis based on the location information, that the virtual viewpoint image can be synthesized based on the preset quantity of first viewpoint images. Therefore, the server sends shooting instructions to the corresponding preset quantity of capture devices; and the preset quantity of capture devices shoot corresponding images according to the instructions, and send the corresponding images to the server.
In one embodiment, the to-be-synthesized virtual viewpoint image may be an image that is virtually synthesized based on the preset quantity of existing viewpoint images through calculation.
In one embodiment, if the terminal is a VR display helmet, the location information of the virtual viewpoint image may be obtained by using a location sensor in the VR display helmet device. In one embodiment, when a user starts the VR helmet, the helmet automatically sets a coordinate origin and establishes a default coordinate system. The helmet may be moved up and down as the head of the user moves. Each time the helmet is moved to a location, the location sensor in the helmet obtains information (coordinate information) about the corresponding location as location information of a to-be-synthesized virtual viewpoint image.
In one embodiment, if the terminal is a computer or a touchscreen device, the location information of the to-be-synthesized virtual viewpoint image may be determined in response to a drag operation or a click operation on a mouse, a stylus, or the like, or the location information of the to-be-synthesized virtual viewpoint image may be determined in response to a movement direction of a mouse, a stylus, or the like.
In one embodiment, a location of the to-be-synthesized virtual viewpoint image may be information about any location in a closed range formed by the preset quantity of existing viewpoint image locations. For example, assuming that the preset quantity of existing viewpoint images are four viewpoint images and existing viewpoint locations corresponding to the four viewpoint images are successively connected through a line to form a polygon (for example, a rectangle), the location of the to-be-synthesized virtual viewpoint image may be a location in the polygon. In addition, the virtual viewpoint image generated at the location is in an area formed by the preset quantity of first viewpoint images, or in other words, content represented by the synthesized virtual viewpoint image is not beyond a range of content represented by the preset quantity of first viewpoint images.
Operation 1202: Obtain a geometric feature matrix between the preset quantity of existing viewpoint images.
In one embodiment, the geometric feature matrix may be used to represent information about a geometric location relationship between the preset quantity of existing viewpoint images. In one embodiment, the geometric feature matrix may be used to represent information about a geometric location relationship between pixels of the preset quantity of existing viewpoint images.
In one embodiment, the information about the geometric location relationship may include information about a location offset relationship between pixels of any two of the preset quantity of existing viewpoint images, or may include a weight of a location offset (or a value of the location offset) between pixels of any two of the preset quantity of existing viewpoint images. For ease of understanding, an example is used for description: Assuming that there are two viewpoint images A and B, there is a pixel a in an overlapping part between the two images, and the viewpoint image A is used as a reference, a weight of a location offset of the pixel a in the viewpoint image B relative to the pixel a in the viewpoint image A is 5. This may indicate that a value of the location offset of the pixel a in the viewpoint image B relative to the pixel a in the viewpoint image A is five unit distances.
In one embodiment, the information about the geometric location relationship may include direction information of a location offset between pixels of any two of the preset quantity of existing viewpoint images. For ease of understanding, an example is used for description: Assuming that there are two viewpoint images A and B, there is a pixel a in an overlapping part between the two images, and the viewpoint image A is used as a reference, direction information of a location offset of the pixel a in the viewpoint image B relative to the pixel a in the viewpoint image A may include, for example, offsetting to the left, offsetting to the right, offsetting upward, offsetting downward, offsetting to the lower left, offsetting to the upper right, offsetting to the upper left, or offsetting to the lower right.
It should be noted that the foregoing two examples are merely used for description. Information about a geometric location relationship is determined based on a particular case. This is not limited in this solution.
The following describes three implementations of obtaining a geometric feature matrix between the preset quantity of existing viewpoint images.
Implementation 1:
The obtaining a geometric feature matrix between the preset quantity of existing viewpoint images includes: extracting a feature from each of the preset quantity of existing viewpoint images to obtain the preset quantity of feature matrices, where the feature includes one or more of a color feature, a texture feature, a shape feature, and a spatial relationship feature of each of the preset quantity of existing viewpoint images; performing a cross-correlation operation on every two of the preset quantity of feature matrices to obtain one or more feature matrices after the operation; and when one feature matrix after the operation is obtained, using the feature matrix after the operation as the geometric feature matrix; or when a plurality of feature matrices after the operation are obtained, obtaining the geometric feature matrix through calculation based on the plurality of feature matrices after the operation.
In one embodiment, the image features corresponding to the preset quantity of existing viewpoint images may be extracted by using a trained feature extraction model. In one embodiment, the preset quantity of existing viewpoint images are input into the feature extraction model to obtain feature matrices corresponding to the viewpoint images. Each of the preset quantity of feature matrices may include one or more of pixel information, a color feature, a texture feature, a shape feature, and a spatial relationship feature of a corresponding image. The feature matrices are obtained after performing a series of processing such as down-sampling and convolution on original images by using the feature extraction model.
Then, a cross-correlation operation is performed on every two of the preset quantity of obtained feature matrices to obtain one or more feature matrices after the cross-correlation operation that have a same size. In one embodiment, when the preset quantity is 2, only one feature matrix after the cross-correlation operation is obtained after the cross-correlation operation is performed on the extracted feature matrices. In this case, the feature matrix after the cross-correlation operation is the geometric feature matrix. When the preset quantity is greater than 2, a plurality of feature matrices after the cross-correlation operation may be obtained after the cross-correlation operation is performed on the extracted feature matrices, and then the plurality of feature matrices after the cross-correlation operation are added up to obtain the geometric feature matrix.
In one embodiment, the feature extraction model may be a pre-trained convolutional neural network (CNN) used to extract the features of the images.
In one embodiment, a feature matrix that corresponds to each viewpoint image and that is obtained by using the feature extraction model may be a three-dimensional matrix. Assuming that a size of an input image is W*H*C, a size of an obtained feature matrix may be (W/4)*(H/4)*C0 (the size of the matrix obtained herein is merely given as an example; alternatively, the obtained matrix may be a matrix of another size, and this is not limited herein). C0 is a size of a third dimension, and may be referred to as a channel quantity. Each channel represents a feature of the image. For example, assuming that extracted features are pixel information, a color feature, a texture feature, a shape feature, and a spatial relationship feature of the image, a channel quantity C0 of an obtained feature matrix of the image is 5. The five channels respectively represent the pixel information, the color feature, the texture feature, the shape feature, and the spatial relationship feature of the image. Certainly, the extracted features may alternatively be other image features, and the description is provided herein by merely using the example.
In one embodiment, the input image may refer to a pixel matrix representing the image. The pixel matrix is a three-dimensional pixel matrix, and may represent pixels of three color channels R, G, and B. In this case, C is 3. W and H are determined based on a size of an actual image, and this is not limited herein.
In one embodiment, if the input image is a grayscale image, the size of the input image may be W*H. A pixel matrix representing the image may be a two-dimensional pixel matrix, and may represent pixels of the grayscale image.
It should be noted that, in the two cases in which the input image is a grayscale image and the input image is not a grayscale image, parameters of the feature extraction model may be different, and are determined based on training results.
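As a minimal sketch of such a feature extraction model (the use of PyTorch, the layer counts, and the widths are assumptions, not the model actually used in this application), two stride-2 convolutions reduce an input image to a (W/4)*(H/4)*C0 feature matrix consistent with the sizes described above; note that PyTorch tensors use a (batch, channel, height, width) layout rather than the W*H*C notation used here:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Sketch of a feature-extraction CNN: two stride-2 convolutions reduce
    a W*H*C input image to a (W/4)*(H/4)*C0 feature matrix."""
    def __init__(self, in_channels=3, c0=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, c0, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, image):       # image: (N, C, H, W)
        return self.net(image)      # features: (N, C0, H/4, W/4)

# Usage: one feature matrix per existing viewpoint image.
extractor = FeatureExtractor(in_channels=3, c0=64)
views = torch.rand(4, 3, 256, 256)      # four existing viewpoint images
features = extractor(views)             # shape (4, 64, 64, 64)
```

Whether the input is a three-channel color image or a single-channel grayscale image only changes in_channels; as noted above, the trained parameters would differ in the two cases.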
In one embodiment, sizes of the plurality of feature matrices after the cross-correlation operation may be (W/4)*(H/4)*C1, and C1=n*n. During selection of n, a maximum offset between pixels in two images that are in the preset quantity of input images and whose shooting locations are farthest from each other needs to be considered.
For example, referring to
In
In one embodiment, because the sizes of the plurality of feature matrices after the cross-correlation operation may be (W/4)*(H/4)*C1, a size of the geometric feature matrix obtained by adding up the plurality of feature matrices after the cross-correlation operation is also (W/4)*(H/4)*C1.
In one embodiment, for ease of understanding of the process of performing a cross-correlation operation on every two of the feature matrices, the following provides a description by using an example. It is assumed that two matrices on which a cross-correlation operation is performed are a matrix A and a matrix B. A subscript [i][j][k] in the two matrices represents a subscript of an element in an ith row, a jth column, and a kth channel. It is assumed that sizes of the matrix A and the matrix B are m1*m2*m3. This indicates that the two matrices each have m1 rows, m2 columns, and m3 channels. In addition, it is assumed that the offset n is 3. In this case, m3 elements whose subscripts are [i][j][1], [i][j][2], [i][j][3], . . . , and [i][j][m3] are selected from the matrix A to form a first vector. 3*3*m3 elements with the following subscripts are selected from the matrix B to form a first matrix: [i−1][j−1][1], [i−1][j−1][2], [i−1][j−1][3], . . . , and [i−1][j−1][m3]; [i−1][j][1], [i−1][j][2], [i−1][j][3], . . . , and [i−1][j][m3]; [i−1][j+1][1], [i−1][j+1][2], [i−1][j+1][3], . . . , and [i−1][j+1][m3]; [i][j−1][1], [i][j−1][2], [i][j−1][3], . . . , and [i][j−1][m3]; [i][j][1], [i][j][2], [i][j][3], . . . , and [i][j][m3]; [i][j+1][1], [i][j+1][2], [i][j+1][3], . . . , and [i][j+1][m3]; [i+1][j−1][1], [i+1][j−1][2], [i+1][j−1][3], . . . , and [i+1][j−1][m3]; [i+1][j][1], [i+1][j][2], [i+1][j][3], . . . , and [i+1][j][m3]; and [i+1][j+1][1], [i+1][j+1][2], [i+1][j+1][3], . . . , and [i+1][j+1][m3]. In other words, the first matrix is a 3*3*m3 matrix that is spatially centered on the element in the ith row and the jth column of the matrix B. Then, a cross-correlation operation is performed on the first vector and the first matrix to obtain a vector including 3*3 elements, where i is greater than or equal to 1 and less than or equal to m1, and j is greater than or equal to 1 and less than or equal to m2. Through calculation, m1*m2 vectors whose sizes are 3*3 are finally obtained. Then, the m1*m2 vectors are composed, according to an order corresponding to original elements of the matrix A, into a feature matrix whose size is m1*m2*9 and that is obtained after the cross-correlation operation.
It should be noted that, when an element whose subscript is [i][j][1] is at an edge of the matrix and the first matrix including 3*3*m3 elements therefore cannot be obtained from the matrix B, each element that cannot be obtained may be supplemented by 0, or may be supplemented by a corresponding element at the outermost edge of the matrix B, so that these elements can be composed into the first matrix including 3*3*m3 elements.
Certainly, the process of performing a cross-correlation operation is described herein by merely using the example. The foregoing parameters may be determined according to a particular case. This is not limited in this solution.
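A minimal NumPy sketch of the pairwise cross-correlation just described, with offset n = 3 and zero padding at the edges (the helper names are assumptions; the alternative of replicating the outermost edge elements of the matrix B is not shown). Summing the pairwise results yields the geometric feature matrix when more than two viewpoint images are used:

```python
import numpy as np
from itertools import combinations

def cross_correlate(A, B, n=3):
    """Local cross-correlation of two (m1, m2, m3) feature matrices: for each
    location (i, j), the dot product between the m3-channel vector of A and
    each of the n*n neighbouring vectors of B (zero-padded at the edges),
    giving an (m1, m2, n*n) matrix."""
    m1, m2, m3 = A.shape
    r = n // 2
    B_pad = np.pad(B, ((r, r), (r, r), (0, 0)))        # zero padding of spatial dims
    out = np.zeros((m1, m2, n * n), dtype=A.dtype)
    for i in range(m1):
        for j in range(m2):
            block = B_pad[i:i + n, j:j + n, :]          # n*n*m3 block centred on (i, j)
            out[i, j] = (block * A[i, j]).sum(axis=2).reshape(-1)
    return out

def geometric_feature_matrix(feature_matrices, n=3):
    """Add up the cross-correlation matrices of every two viewpoint features."""
    pairs = combinations(feature_matrices, 2)
    return sum(cross_correlate(A, B, n) for A, B in pairs)
```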
Implementation 2:
The obtaining a geometric feature matrix between the preset quantity of existing viewpoint images includes: extracting a pixel from each of the preset quantity of existing viewpoint images to obtain at least two pixel matrices; composing the obtained at least two pixel matrices into a hybrid pixel matrix; and inputting the hybrid pixel matrix into a first preset convolutional neural network model to obtain the geometric feature matrix.
In one embodiment, the first preset convolutional neural network model is a trained machine learning model for obtaining the geometric feature matrix through calculation based on the hybrid pixel matrix.
In one embodiment, in this embodiment of this application, if the preset quantity of existing viewpoint images are color images, a pixel matrix corresponding to three color channels R, G, and B of each of the preset quantity of viewpoint images may be obtained first by using a pixel reading function or program, and then the preset quantity of obtained pixel matrices are composed into a three-dimensional hybrid pixel matrix according to an order.
In one embodiment, the composing the preset quantity of obtained pixel matrices into a three-dimensional hybrid pixel matrix according to an order includes: connecting the preset quantity of obtained pixel matrices in a head-to-tail manner to form a three-dimensional hybrid pixel matrix whose channel quantity is three times the preset quantity. For ease of understanding, an example is used below for description.
For example, if there are four viewpoint images A, B, C, and D, and one three-dimensional pixel matrix including pixels of three channels R, G, and B (a first channel, a second channel, and a third channel in the three-dimensional matrix respectively represent pixel values of the channels R, G, and B) may be obtained for each viewpoint image, four three-dimensional pixel matrices may be obtained. It is assumed that the four three-dimensional pixel matrices are a matrix 1, a matrix 2, a matrix 3, and a matrix 4. In this case, according to an order of the matrix 1, the matrix 2, the matrix 3, and the matrix 4, the matrix 2 may be added after a third channel of the matrix 1 to obtain a matrix with six channels, the matrix 3 may be added after a third channel of the matrix 2 to obtain a matrix with nine channels, and then the matrix 4 may be added after a third channel of the matrix 3 to obtain a matrix with 12 channels. The matrix with 12 channels is a finally obtained three-dimensional hybrid pixel matrix. Alternatively, the four three-dimensional pixel matrices may be composed into the hybrid pixel matrix according to an order of the matrix 1, the matrix 3, the matrix 2, and the matrix 4, or may be composed into the hybrid pixel matrix according to an order of the matrix 2, the matrix 1, the matrix 3, and the matrix 4; or the like. A particular order of the four three-dimensional pixel matrices may be determined according to an actual case, and this is not limited in this solution.
It should be noted that, for the hybrid pixel matrices obtained according to the foregoing different orders, structures and parameters of the corresponding first convolutional neural network model may be different.
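For example, composing four W*H*3 pixel matrices into a W*H*12 hybrid pixel matrix in one fixed order might look like the following minimal NumPy sketch (the order chosen here is one of the possible orders mentioned above, and the matrix values are placeholders; as just noted, the first preset convolutional neural network model must be trained with whichever order is actually used):

```python
import numpy as np

# Four RGB pixel matrices of size W*H*3 (placeholder values).
W, H = 256, 256
matrix_1, matrix_2, matrix_3, matrix_4 = (np.random.rand(W, H, 3) for _ in range(4))

# Connect the matrices head-to-tail along the channel axis:
# channels 0-2 come from matrix 1, channels 3-5 from matrix 2, and so on.
hybrid_pixel_matrix = np.concatenate(
    [matrix_1, matrix_2, matrix_3, matrix_4], axis=2)   # shape (W, H, 12)
```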
In one embodiment, in this embodiment of this application, if the preset quantity of existing viewpoint images are grayscale images, a grayscale value matrix of each of the preset quantity of viewpoint images may be obtained first by using a grayscale value reading function or program. The grayscale value matrix is a pixel matrix with one channel. Then, the preset quantity of obtained grayscale value matrices are composed into a three-dimensional hybrid pixel matrix according to an order.
In one embodiment, the composing the preset quantity of obtained grayscale value matrices into a three-dimensional hybrid pixel matrix according to an order includes: connecting the preset quantity of obtained grayscale value matrices in a head-to-tail manner to form a three-dimensional hybrid pixel matrix whose channel quantity is the preset quantity. For ease of understanding, an example is used below for description.
For example, if there are four viewpoint images A, B, C, and D, and one grayscale value matrix may be obtained for each viewpoint image, four grayscale value matrices may be obtained. It is assumed that the four grayscale matrices are a matrix 11, a matrix 21, a matrix 31, and a matrix 41. In this case, according to an order of the matrix 11, the matrix 21, the matrix 31, and the matrix 41, the matrix 21 may be added after the matrix 11 to obtain a matrix with two channels, the matrix 31 may be added after the matrix 21 to obtain a matrix with three channels, and then the matrix 41 is added after the matrix 31 to obtain a matrix with four channels. The matrix with four channels is a finally obtained three-dimensional hybrid pixel matrix. Alternatively, the four grayscale matrices may be composed into the hybrid pixel matrix according to an order of the matrix 11, the matrix 31, the matrix 21, and the matrix 41, or may be composed into the hybrid pixel matrix according to an order of the matrix 21, the matrix 11, the matrix 31, and the matrix 41; or the like. A particular order of the four grayscale matrices may be determined according to an actual case, and this is not limited in this solution.
It should be noted that, for the hybrid pixel matrices obtained according to the foregoing different orders, structures and parameters of the corresponding first convolutional neural network model may be different.
Implementation 3:
The obtaining a geometric feature matrix between the preset quantity of existing viewpoint images includes: extracting a feature from each of the preset quantity of existing viewpoint images to obtain the preset quantity of feature matrices, where the feature includes one or more of a color feature, a texture feature, a shape feature, and a spatial relationship feature of each of the preset quantity of existing viewpoint images; composing the preset quantity of feature matrices into a hybrid feature matrix; and inputting the hybrid feature matrix into a preset convolutional neural network model to obtain the geometric feature matrix.
In this embodiment of this application, the feature matrix is used to replace the pixel matrix in Implementation 2. For an implementation thereof, refer to the descriptions in Implementation 2. Details are not described herein again.
It should be noted that, a structure and a parameter of the preset convolutional neural network model in Implementation 3 may be different from those of the first preset convolutional neural network model in Implementation 2, and are determined based on an actual training status.
Operation 1203: Generate an adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images based on the geometric feature matrix and location information of a to-be-synthesized second viewpoint image.
In one embodiment, the to-be-synthesized second viewpoint image is a to-be-synthesized virtual viewpoint image.
In one embodiment, if synthesis of a virtual viewpoint image is completed on a server side, the location information of the to-be-synthesized virtual viewpoint image may be collected by a terminal and then sent to the server by the terminal. If synthesis of a virtual viewpoint image is completed at a terminal, the location information of the to-be-synthesized virtual viewpoint image may be directly collected by the terminal, and then used for synthesis of the virtual viewpoint image.
For a description of the location information of the to-be-synthesized virtual viewpoint image, refer to the corresponding description of operation 1201. Details are not described herein again.
In one embodiment, after the geometric feature matrix and the location information of the to-be-synthesized virtual viewpoint image are obtained, the two pieces of information are input into a pre-trained convolutional neural network to obtain the adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images.
In one embodiment, the obtained adaptive convolution kernel corresponding to each pixel may be a matrix whose size is n*n, and n may be the maximum offset n in
The following describes an embodiment of the generating an adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images based on the geometric feature matrix and location information of a to-be-synthesized second viewpoint image.
The location information of the to-be-synthesized virtual viewpoint image is two-dimensional coordinates or three-dimensional coordinates, and the determining an adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images based on the geometric feature matrix and location information of a to-be-synthesized virtual viewpoint image includes: extending the location information of the to-be-synthesized virtual viewpoint image into a location matrix whose quantities of rows and columns are the same as those of the geometric feature matrix; composing the location matrix and the geometric feature matrix into a hybrid information matrix; inputting the hybrid information matrix into each of the preset quantity of second preset convolutional neural network models, where the preset quantity of second preset convolutional neural network models have a same structure and different parameters; and obtaining the adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images based on output results of the preset quantity of second preset convolution neural network models.
For ease of understanding of this embodiment of this application, an example is used below for description.
Assuming that a size of the geometric feature matrix is (W/4)*(H/4)*C1, and coordinates of a location of the to-be-synthesized virtual viewpoint image are (x0, y0), the coordinates are extended into a three-dimensional matrix based on the coordinate values. In one embodiment, x0 is extended into (W/4)*(H/4) x0, and the (W/4)*(H/4) x0 are on a same channel plane; y0 is extended into (W/4)*(H/4) y0, and the (W/4)*(H/4) y0 are on a same channel plane; and x0 and y0 of the two channels may be combined together to form a location matrix whose size is (W/4)*(H/4)*2. Then, the location matrix is added to the geometric feature matrix to form a three-dimensional hybrid information matrix whose size is (W/4)*(H/4)*(C1+2).
In one embodiment, the location information of the to-be-synthesized virtual viewpoint image may be three-dimensional coordinates. For example, the location information of the to-be-synthesized virtual viewpoint image may be (x0, y0, z0). Assuming that a size of the geometric feature matrix is (W/4)*(H/4)*C1, the coordinates are extended into a three-dimensional matrix based on the coordinate values. In one embodiment, x0 is extended into (W/4)*(H/4) x0, and (W/4)*(H/4) x0 are on a same channel plane; y0 is extended into (W/4)*(H/4) y0, and (W/4)*(H/4) y0 are on a same channel plane; z0 is extended into (W/4)*(H/4) z0, and (W/4)*(H/4) z0 are on a same channel plane; and x0, y0, and z0 of the three channels are combined together to form a location matrix whose size is (W/4)*(H/4)*3. Then, the location matrix is added to the geometric feature matrix to form a three-dimensional hybrid information matrix whose size is (W/4)*(H/4)*(C1+3).
In one embodiment, the location matrix may be added at any location of the geometric feature matrix. For example, the location matrix may be added before, behind, or in the middle of the geometric feature matrix. However, considering a meaning of convolution processing, the location matrix is usually added before or behind the geometric feature matrix. It should be noted that, parameters of the second preset convolutional neural network model differ depending on whether the location matrix is added before or behind the geometric feature matrix, and particular parameters thereof are determined based on training results.
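A minimal NumPy sketch of this extension step, in which the location matrix is appended behind the geometric feature matrix along the channel axis (the helper name, the choice of appending behind rather than before, and the example sizes are assumptions); the same function handles two-dimensional or three-dimensional coordinates:

```python
import numpy as np

def hybrid_information_matrix(geometric_feature_matrix, coords):
    """Extend the virtual-viewpoint coordinates into a location matrix with the
    same rows and columns as the geometric feature matrix, then append it
    behind the geometric feature matrix along the channel axis."""
    rows, cols, _ = geometric_feature_matrix.shape
    planes = [np.full((rows, cols, 1), c, dtype=geometric_feature_matrix.dtype)
              for c in coords]                         # one constant plane per coordinate
    location_matrix = np.concatenate(planes, axis=2)   # (rows, cols, 2) or (rows, cols, 3)
    return np.concatenate([geometric_feature_matrix, location_matrix], axis=2)

# Example: a (W/4, H/4, C1) geometric feature matrix and 2-D coordinates (x0, y0).
geo = np.random.rand(64, 64, 9)
hybrid = hybrid_information_matrix(geo, (0.5, 0.25))   # shape (64, 64, 11)
```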
Assuming that the preset quantity of existing viewpoint images are four viewpoint images A, B, C, and D, the preset quantity of second preset convolutional neural network models are four second preset convolutional neural network models. The four second preset convolutional neural network models are a model A, a model B, a model C, and a model D. The model A is pre-trained and is used to obtain, based on the hybrid information matrix through calculation, an adaptive convolution kernel corresponding to each pixel of the viewpoint image A, the model B is pre-trained and is used to obtain, based on the hybrid information matrix through calculation, an adaptive convolution kernel corresponding to each pixel of the viewpoint image B, the model C is pre-trained and is used to obtain, based on the hybrid information matrix through calculation, an adaptive convolution kernel corresponding to each pixel of the viewpoint image C, and the model D is pre-trained and is used to obtain, based on the hybrid information matrix through calculation, an adaptive convolution kernel corresponding to each pixel of the viewpoint image D. Therefore, after the hybrid information matrix is obtained, the hybrid information matrix is input into each of the model A, the model B, the model C, and the model D, and the adaptive convolution kernel corresponding to each pixel of the viewpoint image A, the adaptive convolution kernel corresponding to each pixel of the viewpoint image B, the adaptive convolution kernel corresponding to each pixel of the viewpoint image C, and the adaptive convolution kernel corresponding to each pixel of the viewpoint image D are respectively output from the four models correspondingly.
In one embodiment, the model A, the model B, the model C, and the model D have a same convolutional neural network structure but have different parameters. Particular parameters thereof are determined based on training results.
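A minimal PyTorch sketch of such a group of second preset convolutional neural network models: each model maps the hybrid information matrix to an n*n adaptive convolution kernel for every pixel of one existing viewpoint image, and the models share a structure while holding independent parameters. The particular layers, the bilinear upsampling back to full resolution, and the softmax normalization of the kernel weights are assumptions:

```python
import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Sketch of one 'second preset convolutional neural network model': maps the
    hybrid information matrix to an n*n adaptive convolution kernel per pixel of
    one existing viewpoint image."""
    def __init__(self, in_channels, n=3, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=False),
            nn.Conv2d(64, n * n, kernel_size=3, padding=1),
        )

    def forward(self, hybrid_info):          # hybrid_info: (N, C1+2, H/4, W/4)
        kernels = self.net(hybrid_info)      # (N, n*n, H, W): one n*n kernel per pixel
        return torch.softmax(kernels, dim=1) # normalized kernel weights (assumption)

# Same structure, independent parameters: one generator per existing viewpoint image.
generators = nn.ModuleList([KernelGenerator(in_channels=11) for _ in range(4)])
```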
Operation 1204: Generate the preset quantity of to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the preset quantity of existing viewpoint images.
In one embodiment, assuming that there are four existing viewpoint images A, B, C, and D, convolution is performed on an adaptive convolution kernel corresponding to each viewpoint image and pixels of the viewpoint image, to obtain a to-be-processed virtual composite pixel matrix. In one embodiment, convolution is performed on an adaptive convolution kernel corresponding to the viewpoint image A and pixels of the viewpoint image A, to obtain a to-be-processed virtual composite pixel matrix A; convolution is performed on an adaptive convolution kernel corresponding to the viewpoint image B and pixels of the viewpoint image B, to obtain a to-be-processed virtual composite pixel matrix B; convolution is performed on an adaptive convolution kernel corresponding to the viewpoint image C and pixels of the viewpoint image C, to obtain a to-be-processed virtual composite pixel matrix C; and convolution is performed on an adaptive convolution kernel corresponding to the viewpoint image D and pixels of the viewpoint image D, to obtain a to-be-processed virtual composite pixel matrix D.
The following describes an embodiment of the generating the preset quantity of to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the preset quantity of existing viewpoint images.
The generating the preset quantity of to-be-processed virtual composite pixel matrices based on the adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images and the pixels of the preset quantity of existing viewpoint images includes: performing convolution on the adaptive convolution kernel corresponding to each pixel of the preset quantity of existing viewpoint images and a pixel matrix with the pixel as a center in a one-to-one correspondence to obtain a virtual composite pixel corresponding to a pixel location of the pixel, where a quantity of rows of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel, and a quantity of columns of the pixel matrix is the same as that of the adaptive convolution kernel corresponding to the pixel; and composing the virtual composite pixels corresponding to all the pixels into the preset quantity of virtual composite pixel matrices.
For ease of understanding of this embodiment of this application, an example is used below for description.
It is assumed that the preset quantity of existing viewpoint images are four viewpoint images A, B, C, and D, and that sizes of pixel matrices of the four viewpoint images are all W*H*3, or in other words, there are W*H*3 pixels in a pixel matrix of each viewpoint image. Each of the W*H*3 pixels corresponds to one adaptive convolution kernel. Herein, 3 indicates that the pixel matrix has three channels, and the three channels are three color channels: R, G, and B.
In this embodiment of this application, convolution is performed on an adaptive convolution kernel corresponding to each pixel in the pixel matrices of the four viewpoint images and a pixel matrix that uses the pixel as a center and whose size is the same as that of the adaptive convolution kernel corresponding to the pixel, to obtain a virtual composite pixel corresponding to a location of each pixel in the pixel matrices of the four viewpoint images. Virtual composite pixels corresponding to locations of all pixels of each viewpoint image may be composed into one virtual composite pixel matrix. In this case, four virtual composite pixel matrices may be formed based on the four viewpoint images. The sizes of the four virtual composite pixel matrices are the same as those of the existing viewpoint images, or in other words, the sizes of the four virtual composite pixel matrices are W*H*3. Each virtual composite pixel matrix represents one to-be-processed virtual viewpoint image. In this case, the four virtual composite pixel matrices represent four to-be-processed virtual viewpoint images. The to-be-processed virtual viewpoint images are preliminarily obtained virtual viewpoint images, and need to be further processed to obtain a final virtual viewpoint image that needs to be synthesized.
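A minimal NumPy sketch of this per-pixel adaptive convolution: each pixel's n*n adaptive convolution kernel is convolved with the n*n pixel block centered on that pixel to produce the corresponding virtual composite pixel. Applying the same kernel to each of the R, G, and B channels and zero-padding at the image borders are assumptions:

```python
import numpy as np

def adaptive_convolution(image, kernels):
    """image: (H, W, 3) pixel matrix of one existing viewpoint image.
    kernels: (H, W, n, n) adaptive convolution kernel for every pixel.
    Returns the (H, W, 3) to-be-processed virtual composite pixel matrix."""
    H, W, C = image.shape
    n = kernels.shape[-1]
    r = n // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)))    # zero padding at the borders
    out = np.zeros((H, W, C), dtype=float)
    for i in range(H):
        for j in range(W):
            block = padded[i:i + n, j:j + n, :]          # n*n pixel block centred on (i, j)
            out[i, j] = np.einsum('uv,uvc->c', kernels[i, j], block)
    return out
```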
For ease of understanding of operation 1203 and operation 1204, refer to
Operation 1205: Synthesize the second viewpoint image based on the preset quantity of to-be-processed virtual composite pixel matrices.
In one embodiment, a convolutional neural network model is pre-trained to synthesize the second viewpoint image based on the preset quantity of to-be-processed virtual composite pixel matrices. In one embodiment, the preset quantity of to-be-processed virtual composite pixel matrices are first composed into a hybrid virtual composite pixel matrix, and then the hybrid virtual composite pixel matrix is input into the trained model to obtain the final virtual viewpoint image corresponding to the location information of the to-be-synthesized virtual viewpoint image.
In one embodiment, in this embodiment of this application, assuming that the preset quantity is 4, the four obtained to-be-processed virtual composite pixel matrices may be composed into a composite matrix whose size is W*H*12, and then the composite matrix is input into the trained model to output the virtual viewpoint image corresponding to the location information of the to-be-synthesized virtual viewpoint image.
In one embodiment, assuming that the four virtual composite pixel matrices are a matrix A, a matrix B, a matrix C, and a matrix D, during composing of the four virtual composite pixel matrices into a composite matrix whose size is W*H*12, the four virtual composite pixel matrices may be composed into the composite matrix whose size is W*H*12 according to an order of the matrix A, the matrix B, the matrix C, and the matrix D, where each matrix is arranged according to an order of channels R, G, and B. Alternatively, the four virtual composite pixel matrices may be composed into the composite matrix whose size is W*H*12 according to an order of the matrix B, the matrix A, the matrix C, and the matrix D, where each matrix is arranged according to an order of channels R, G, and B. Alternatively, the four virtual composite pixel matrices may be composed into the composite matrix whose size is W*H*12 according to an order of the matrix A, the matrix B, the matrix C, and the matrix D, where each matrix is arranged according to an order of channels G, R, and B; or the like. An order in which the four virtual composite pixel matrices are composed into the composite matrix is described herein by merely using the examples. Another order may alternatively be used, and any order in which the four virtual composite pixel matrices can be composed into the composite matrix whose size is W*H*12 falls within a protection range.
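A minimal PyTorch sketch of this final step (the network depth and widths are assumptions): the four W*H*3 virtual composite pixel matrices are concatenated along the channel axis into a 12-channel composite matrix and passed through a small correction convolutional neural network that outputs the 3-channel virtual viewpoint image:

```python
import torch
import torch.nn as nn

class CorrectionNetwork(nn.Module):
    """Sketch of the correction CNN: a 12-channel hybrid virtual composite pixel
    matrix in, a 3-channel virtual viewpoint image out."""
    def __init__(self, in_channels=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, hybrid):        # hybrid: (N, 12, H, W)
        return self.net(hybrid)       # synthesized image: (N, 3, H, W)

# Compose the four to-be-processed matrices (N, 3, H, W each) along channels.
a, b, c, d = (torch.rand(1, 3, 256, 256) for _ in range(4))
hybrid = torch.cat([a, b, c, d], dim=1)        # (1, 12, 256, 256)
virtual_view = CorrectionNetwork()(hybrid)
```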
In conclusion, an entire system is an end-to-end deep neural network, and all operations (convolution, cross-correlation, adaptive convolution, and the like) in the network are differentiable. Therefore, an optimization algorithm based on gradient descent, such as an Adam algorithm, may be used for optimization. An objective function used during training is a cost function, such as MAE, that can be used to measure a pixel error. For a group of existing viewpoint images, a representation network in the embodiments of the disclosure needs to be run only once, and then a virtual viewpoint image may be generated based on a cross cost volume generated by the representation network and a location of the virtual viewpoint image that needs to be synthesized.
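A minimal PyTorch sketch of this gradient-descent optimization, using the Adam optimizer and an L1 (MAE) pixel loss and stopping once the loss falls below a threshold as in operation 1007. The end-to-end model, the data loader that yields existing viewpoint images, virtual viewpoint locations, and actually shot images, and the threshold and learning rate values are assumptions supplied by the caller:

```python
import torch
import torch.nn as nn

def train_synthesis_network(model, training_loader, loss_threshold=1e-3, lr=1e-4):
    """Train the end-to-end virtual viewpoint synthesis network with Adam and an
    MAE (L1) pixel loss until the loss drops below the threshold."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()                    # MAE measures the pixel error
    while True:
        for existing_views, location, ground_truth in training_loader:
            synthesized = model(existing_views, location)
            loss = criterion(synthesized, ground_truth)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:   # training succeeds (operation 1007)
                return model
```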
In one embodiment, the method in
Embodiment 1 can be understood as a training stage of the virtual viewpoint synthesis network (a stage performed by the training device 720 shown in
In the foregoing, the viewpoint image processing method provided in the embodiments of this application is described mainly from a perspective of a viewpoint image processing device (that is, a server or a terminal that performs the method described in
In the embodiments of this application, the viewpoint image processing device may be divided into functional modules based on the foregoing method examples. For example, functional modules may be obtained through division based on corresponding functions, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module. It should be noted that division into the modules in the embodiments of this application is an example, and is merely a logical function division. During actual implementation, another division manner may be used.
When the functional modules are obtained through division in correspondence to the functions,
In one embodiment, the viewpoint image processing device 1500 may further include a storage unit, configured to store a computer program or data. In a possible manner, the processing unit 1502 may invoke the computer program or data in the storage unit, so that the viewpoint image processing device 1500 obtains at least two first viewpoint images, where the at least two first viewpoint images include images respectively captured at at least two viewpoint locations; obtains a geometric feature matrix between the at least two first viewpoint images, where the geometric feature matrix is a matrix used to represent information about a geometric location relationship between pixels of the at least two first viewpoint images; generates an adaptive convolution kernel corresponding to each pixel of the at least two first viewpoint images based on the geometric feature matrix and location information of a to-be-synthesized second viewpoint image, where the location information represents a viewpoint location of the second viewpoint image, the second viewpoint image is in a target area, and the target area includes an area formed by the at least two first viewpoint images; generates at least two to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the at least two first viewpoint images; and synthesizes the second viewpoint image by using the at least two to-be-processed virtual composite pixel matrices.
The memory 1601 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1601 may store a program. When the program stored in the memory 1601 is executed by the processor 1602, the processor 1602 and the communications interface 1603 are configured to perform the operations of the method for training a virtual viewpoint synthesis network in the embodiments of this application.
The processor 1602 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits, and is configured to execute a related program, to implement the functions that need to be performed by the units in the device for training the virtual viewpoint synthesis network in this embodiment of this application, or perform the method for training the virtual viewpoint synthesis network in the method embodiment of this application.
The processor 1602 may alternatively be an integrated circuit chip and has a signal processing capability. In an implementation process, the operations of the method for training the virtual viewpoint synthesis network in this application may be completed by using a hardware integrated logic circuit in the processor 1602 or instructions in a form of software. The processor 1602 may alternatively be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, the operations, and logic block diagrams that are disclosed in the embodiments of this application may be implemented or performed. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Operations of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by using a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the art, for example, a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1601. The processor 1602 reads information in the memory 1601, and implements, by using hardware of the processor 1602, a function that needs to be implemented by a unit included in the device for training a virtual viewpoint synthesis network in the embodiments of this application, or performs the method for training a virtual viewpoint synthesis network in the method embodiment of this application.
The communications interface 1603 uses a transceiver apparatus, for example, but not limited to, a transceiver, to implement communication between the device 1600 and another device or a communications network. For example, training data (for example, the existing viewpoint images described in Embodiment 1 of this application) may be obtained through the communications interface 1603.
The bus 1604 may include a path for information transfer between various components (for example, the memory 1601, the processor 1602, and the communications interface 1603) of the device 1600.
The memory 1701 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1701 may store a program. When the program stored in the memory 1701 is executed by the processor 1702, the processor 1702 and the communications interface 1703 are configured to perform the operations of the viewpoint image processing method in the embodiments of this application.
The processor 1702 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits. The processor is configured to execute a related program, to implement functions that need to be implemented by the units in the viewpoint image processing device in the embodiments of this application, or perform the viewpoint image processing method in the method embodiment of this application.
The processor 1702 may alternatively be an integrated circuit chip and has a signal processing capability. In an implementation process, the operations of the viewpoint image processing method in this application may be completed by using a hardware integrated logic circuit in the processor 1702 or instructions in a form of software. The processor 1702 may alternatively be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, the operations, and logic block diagrams that are disclosed in the embodiments of this application may be implemented or performed. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Operations of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by using a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the art, for example, a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 1701. The processor 1702 reads information in the memory 1701, and completes, in combination with hardware of the processor 1702, functions that need to be performed by the units included in the viewpoint image processing device in this embodiment of this application, or performs the viewpoint image processing method in the method embodiments of this application.
The communications interface 1703 uses a transceiver apparatus, for example, but not limited to, a transceiver, to implement communication between the device 1700 and another device or a communications network. For example, training data (for example, the X image in Embodiment 2 of this application) may be obtained by using the communications interface 1703.
The bus 1704 may include a path for information transfer between various components (for example, the memory 1701, the processor 1702, and the communications interface 1703) of the device 1700.
It should be understood that, the transceiver unit 1501 in the viewpoint image processing device 1500 is equivalent to the communications interface 1703 in the viewpoint image processing device 1700, and the processing unit 1502 in the viewpoint image processing device 1500 may be equivalent to the processor 1702. In addition, a virtual device is combined with a physical device herein.
It should be noted that although only the memory, the processor, and the communications interface of each of the devices 1600 and 1700 shown in
It can be understood that the device 1600 is equivalent to the training device 720 in
In conclusion, based on the problems in the conventional technology that are described in
In conclusion, the foregoing descriptions are merely example embodiments of the disclosure, but are not intended to limit the protection scope of the embodiments of the disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of the embodiments of the disclosure shall fall within the protection scope of the embodiments of the disclosure.
Foreign application priority data — Number: 201910901219.1; Date: Sep 2019; Country: CN; Kind: national.
This application is a continuation of International Application No. PCT/CN2020/095157, filed on Jun. 9, 2020, which claims priority to Chinese Patent Application No. 201910901219.1, filed on Sep. 23, 2019. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
Related application data — Parent: PCT/CN2020/095157, Jun 2020, US; Child: 17701355, US.