METHOD AND APPARATUS OF EMBEDDING IMAGE IN VIDEO, AND METHOD AND APPARATUS OF ACQUIRING PLANE PREDICTION MODEL

Information

  • Patent Application
  • 20230368324
  • Publication Number
    20230368324
  • Date Filed
    May 08, 2021
  • Date Published
    November 16, 2023
Abstract
Provided are a method and apparatus of embedding an image in a video, and a method and apparatus of acquiring a plane prediction model, relating to the field of image processing. The method includes inputting a video frame image of a video into a plane prediction model to obtain a predicted plane mask of the video frame image, the plane prediction model being obtained by training a deep learning model using training images with labels having plane detection frames and plane masks; and embedding the image to be embedded into the predicted plane mask of the video frame image.
Description
TECHNICAL FIELD

This disclosure relates to the field of image processing, and particularly, to a method and an apparatus of embedding an image in a video, and a method and an apparatus of acquiring a plane prediction model.


BACKGROUND

Placing an advertisement in a video is one of the effective means of publicity.


Inserting an advertisement video in a video is to choose a moment in the original video and insert the prepared advertisement video into the original video at that moment. When the advertisement video is inserted and played, the user cannot watch the original video at all, which degrades the user's viewing experience.


Posting an advertisement image in a video is to post the advertisement image in a corner region of each video frame image, so that when the user watches the original video, an advertisement image pops up in a corner of the video playing interface. The user can watch the original video while the advertisement image is displayed, but the popped-up advertisement image may block key content of the original video, and the advertisement image and the video are fused unnaturally.


Embedding an advertisement image in a video is to embed the advertisement image into a certain position in a video frame image and fuse the advertisement image into the video. In some related art, the video is detected, and if targets such as specific objects or already existing advertisements are found therein, these targets are replaced with the advertisement image. In other related art, an advertisement placement position is labelled in one video frame image; for the other video frame images, the advertisement placement position is tracked using feature point matching, and the advertisement is placed at the tracked position.


The inventors have found that, in the related art of embedding an advertisement image in a video, there are many restrictions on finding the embedding position of the advertisement image; as a result, a suitable embedding position often cannot be found in the video. For example, it is difficult to find a specific replaceable target in the video, or it is impossible to track a pre-labelled advertisement placement position in the video, so that the advertisement image can hardly be embedded into the video.


SUMMARY

According to some embodiments of the present disclosure, there is provided a method of embedding an image in a video, comprising:

  • inputting a video frame image of a video into a plane prediction model, and acquiring a predicted plane mask of the video frame image, wherein the plane prediction model is obtained by training a deep learning model using training images with labels of a plane detection frame and a plane mask; and
  • embedding the image to be embedded into the predicted plane mask of the video frame image.


In some embodiments, the plane prediction model is obtained by training the deep learning model using the training images with the labels of the plane detection frame and the plane mask as well as labeling information of 4 key points in the plane mask; the acquiring step comprises: after inputting the video frame image of the video into the plane prediction model, acquiring the predicted plane mask of the video frame image as well as the 4 key points in the plane mask; and the embedding the image to be embedded into the predicted plane mask of the video frame image comprises: aligning 4 vertexes of the image to be embedded with the 4 key points in the predicted plane mask of the video frame image, and embedding the image to be embedded into a position region corresponding to the 4 key points in the predicted plane mask of the video frame image.


In some embodiments, the labelling information of the 4 key points in the plane mask in each training image is obtained by:

  • converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system;
  • determining a boundary line of the plane mask in the plane coordinate system;
  • determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask; and
  • converting 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.


In some embodiments, the converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system comprises:

  • converting the plane mask of the each training image from the pixel coordinate system to a world coordinate system; and
  • converting the plane mask of the each training image from the world coordinate system to the plane coordinate system.


In some embodiments, the determining a boundary line of the plane mask in the plane coordinate system comprises:

  • performing edge detection on the plane mask in the plane coordinate system;
  • performing Hough line detection on the plane mask in the plane coordinate system based on a detected edge of the plane mask;
  • determining a probability that each detected line is the boundary line of the plane mask; and
  • determining one boundary line of the plane mask in the plane coordinate system from the detected lines based on the probability.


In some embodiments, the determining a probability that each detected line is the boundary line of the plane mask comprises:


determining the probability that each detected line is the boundary line of the plane mask according to difference information of symmetrical regions on both sides of the each detected line, wherein the greater the difference between the symmetrical regions on the both sides of a line, the greater the probability that the line is the boundary line of the plane mask.


In some embodiments, the determining one boundary line of the plane mask in the plane coordinate system from the detected lines comprises:

  • choosing a pair of lines having a perpendicular relation or a parallel relation from the detected lines;
  • under the condition that the pair of lines is found, determining a line with a greatest probability in the pair of lines with a greatest sum of probabilities as the one boundary line of the plane mask in the plane coordinate system; and
  • under the condition that the pair of lines is not found, determining a line with a greatest probability as the one boundary line of the plane mask in the plane coordinate system.


In some embodiments, the determining a boundary line of the plane mask in the plane coordinate system further comprises at least one of:

  • performing median filtering on the plane mask in the plane coordinate system before the edge detection; or
  • merging the detected lines based on a slope of each line after the Hough line detection.


In some embodiments, the determining an inscribed rectangle of the plane mask in the plane coordinate system comprises: determining an inscribed rectangle of the plane mask in the plane coordinate system, wherein the inscribed rectangle is parallel to the boundary line and comprises a maximum inscribed square.


In some embodiments, the embedding the image to be embedded into the predicted plane mask of the video frame image comprises:

  • determining a transformation matrix from the image to be embedded to the predicted plane mask of the video frame image according to a mapping relation between the 4 vertexes of the image to be embedded and the 4 key points in the plane mask of the predicted video frame image; and
  • transforming each foreground point of the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image based on the transformation matrix.


In some embodiments, the deep learning model uses a loss function that is determined based on the 4 key points in the labelling information and the predicted 4 key points after the alignment operation is performed,

  • wherein performing the alignment operation on the predicted 4 key points comprises:
  • determining a transformation ratio based on the 4 key points in the labelling information and the predicted 4 key points;
  • performing size transformation on the predicted 4 key points according to the transformation ratio;
  • determining first position transformation information based on the 4 key points in the labelling information;
  • determining second position transformation information based on the predicted 4 key points; and
  • respectively adding the first position transformation information to the predicted 4 key points after the size transformation and subtracting the second position transformation information, to finish the alignment operation on the predicted 4 key points.


In some embodiments, the deep learning model comprises region-based convolutional neural networks.


In some embodiments, the image to be embedded comprises an enterprise identification image and a product image.


According to some embodiments of the present disclosure, there is provided a method of acquiring a plane prediction model, comprising:

  • labelling a plane detection frame, a plane mask in each training image and 4 key points in the plane mask;
  • training a deep learning model using the training images with labels of the plane detection frame and the plane mask as well as labelling information of the 4 key points in the plane mask; and
  • determining the trained deep learning model as the plane prediction model.


In some embodiments, the labelling 4 key points in the plane mask in each training image comprises:

  • converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system;
  • determining a boundary line of the plane mask in the plane coordinate system;
  • determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask; and
  • converting 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.


According to some embodiments of the present disclosure, there is provided an apparatus of embedding an image in a video, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to perform the method of embedding an image in a video according to any of the embodiments based on instructions stored in the memory.


According to some embodiments of the present disclosure, there is provided an apparatus of acquiring a plane prediction model, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to perform the method of acquiring a plane prediction model according to any of the embodiments based on instructions stored in the memory.


According to some embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of embedding an image in a video according to any of the embodiments or the method of acquiring a plane prediction model according to any of the embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings that need to be used in the description of the embodiments or related art will be briefly described below. The present disclosure can be more clearly understood from the following detailed description, which proceeds with reference to the accompanying drawings.


It is apparent that the drawings in the following description are merely some embodiments of the present disclosure, and for one of ordinary skill in the art, other drawings can be derived from these drawings without making creative efforts.



FIG. 1 illustrates a flow diagram of a method of acquiring a plane prediction model according to some embodiments of the present disclosure.



FIG. 2 illustrates a flow diagram of a method of acquiring a plane prediction model according to other embodiments of the present disclosure.



FIG. 3 illustrates a schematic diagram of a deep learning model according to some embodiments of the present disclosure.



FIG. 4 illustrates a flow diagram of labelling 4 key points in a plane mask in a training image according to some embodiments of the present disclosure.



FIG. 5 illustrates a schematic diagram of three coordinate systems according to some embodiments of the present disclosure.



FIG. 6 illustrates a flow diagram of a method of embedding an image in a video according to some embodiments of the present disclosure.



FIG. 7 illustrates a flow diagram of a method of embedding an image in a video according to other embodiments of the present disclosure.



FIG. 8 illustrates a schematic diagram of an apparatus of embedding an image in a video according to some embodiments of the present disclosure.



FIG. 9 illustrates a schematic diagram of an apparatus of acquiring a plane prediction model according to some embodiments of the present disclosure.





DETAILED DESCRIPTION

The technical solutions in the embodiments of the present disclosure will be clearly and completely described below in conjunction with the drawings in the embodiments of the present disclosure.


Unless specified otherwise, words such as “first”, “second”, etc. in this disclosure are used for distinguishing different objects, and are not used for indicating the meaning of size or time sequence, etc.


In the embodiments of the present disclosure, by means of a plane prediction model, a plane mask widely existing in each video frame image is automatically found, and an image to be embedded is embedded into the plane mask, which not only allows the image to be automatically and naturally fused into a video, but also allows the image to be fused into a wider range of videos. In addition, key points in the plane mask in each video frame image can also be automatically found, and the image to be embedded is embedded into a position region corresponding to the key points, thereby improving the fusion effect of the image and the video.



FIG. 1 illustrates a flow diagram of a method of acquiring a plane prediction model according to some embodiments of the present disclosure. The plane prediction model is capable of predicting a plane mask in an image.


As shown in FIG. 1, the method of this embodiment comprises steps 110 to 130.


In step 110, a plane detection frame and a plane mask in a training image are labelled.


The plane detection frame and the plane mask in the training images can be labelled manually, or a ready-made dataset of training images already labelled with the plane detection frame and the plane mask can be acquired, for example, the PlaneRCNN dataset, which provides not only training images with labels of the plane detection frame and the plane mask, but also camera parameters related to the training images and a rotation/translation matrix from a camera coordinate system to a world coordinate system.


In step 120, a deep learning model is trained using the training image with the labels of the plane detection frame and the plane mask, so that the deep learning model has learning abilities of the plane detection frame and the plane mask of the image.


The deep learning model comprises region-based convolutional neural networks (RCNN), for example, MaskRCNN network, which is a type of RCNN. The deep learning model such as the MaskRCNN network comprises a branch of plane detection frame regression and a branch of plane mask regression of the image. The branch of the plane detection frame regression comprises the plane detection frame regression and can also comprise semantic category regression. Training images with the label of the plane detection frame train the branch of the plane detection frame regression, and training images with the label of the plane mask train the branch of the plane mask regression.


In a training process, a total loss is determined according to a loss between the label of the labelled plane detection frame and the model-predicted plane detection frame and a loss between the label of the labelled plane mask and the model-predicted plane mask, of each training image, parameters of the deep learning model are updated according to the total loss, and the training process is iteratively executed until a training termination condition is met, for example, a preset number of iterations is reached, or the total loss is less than a certain value, and the like.
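Purely as an illustration of this training loop, a minimal PyTorch-style sketch is given below; the model interface, the loss-dictionary keys loss_box and loss_mask, and the optimizer settings are hypothetical placeholders rather than the concrete configuration of this disclosure.

```python
import torch

def train_plane_model(model, data_loader, num_epochs=10, lr=1e-4):
    """Sketch of the training process described above: the total loss is the
    sum of the plane-detection-frame loss and the plane-mask loss, and the
    model parameters are updated until a termination condition is met."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(num_epochs):              # termination: preset number of iterations
        for images, targets in data_loader:      # targets hold labelled frames and masks
            losses = model(images, targets)      # assumed to return a dict of branch losses
            total_loss = losses["loss_box"] + losses["loss_mask"]  # placeholder key names
            optimizer.zero_grad()
            total_loss.backward()
            optimizer.step()
    return model
```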


In step 130, the trained deep learning model is determined as the plane prediction model. The plane prediction model can predict the plane detection frame of the image and the plane mask in the plane detection frame.



FIG. 2 illustrates a flow diagram of a method of acquiring a plane prediction model according to other embodiments of the present disclosure. The plane prediction model can predict not only the plane mask in the image but also 4 key points in the plane mask.


As shown in FIG. 2, the method of this embodiment comprises steps 210 to 230.


In step 210, the plane detection frame and the plane mask in the training image as well as 4 key points in the plane mask are labelled.


The 4 key points in the plane mask can be labelled, for example, in the middle of the plane mask. A method of labelling the 4 key points in the plane mask will be specifically described in a subsequent embodiment of FIG. 4.


In step 220, the deep learning model is trained using the training images with labels of the plane detection frame and the plane mask as well as labelling information of the 4 key points in the plane mask, so that the deep learning model has learning abilities of the plane detection frame, the plane mask of the image and the 4 key points thereof.


The deep learning model comprises RCNN, such as MaskRCNN network, which is one type of RCNN. FIG. 3 illustrates a schematic diagram of a deep learning model. As shown in FIG. 3, the deep learning model such as the MaskRCNN network comprises a branch of plane detection frame regression, a branch of plane mask regression, and a branch of key point regression of the image. The branch of the plane detection frame regression comprises the plane detection frame regression and can also comprise semantic category regression. Training images with the label of the plane detection frame train the branch of the plane detection frame regression, training images with the label of the plane mask train the branch of the plane mask regression, and training images with the labelling information of the 4 key points train the branch of the key point regression. The MaskRCNN network obtains a proposal region from the original image by using a method of RoIAlign (region of interest align).
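For illustration, the following hedged sketch shows one way a key-point regression branch could be attached to RoIAligned proposal features using torchvision's roi_align; the channel count, pooling size, and head layout are assumptions, not the exact MaskRCNN configuration described here.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class KeypointBranch(nn.Module):
    """Toy key-point regression branch: maps RoIAligned proposal features
    to 4 (x, y) key points per proposal."""
    def __init__(self, in_channels=256, pool_size=14):
        super().__init__()
        self.pool_size = pool_size
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * pool_size * pool_size, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 4 * 2),              # 4 key points, (x, y) each
        )

    def forward(self, feature_map, proposal_boxes):
        # proposal_boxes: list of per-image (L, 4) boxes in (x1, y1, x2, y2) format
        rois = roi_align(feature_map, proposal_boxes,
                         output_size=(self.pool_size, self.pool_size),
                         spatial_scale=1.0)
        return self.head(rois).view(-1, 4, 2)    # (num_proposals, 4, 2) key points
```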


Since the image embedding position on a plane has a movable property, as long as the image embedding position is on the plane and the image and the boundary lines of the plane are parallel to each other, a result predicted by the deep learning model is considered to be correct. Therefore, the loss function adopted by the deep learning model is a loss function after the key points are aligned, i.e., a loss function that is determined based on the 4 key points in the labelling information and the predicted 4 key points after the alignment operation is performed. For example, the MaskRCNN network adopts a Smooth L1 loss after the key points are aligned.


Let the key point label labelled on the current plane be gt ∈ R^{N×4×2}, and a key point coordinate predicted by the network be pre ∈ R^{N×4×2}, where, in the dimension space R, N denotes the number of planes, 4 denotes the 4 key points, and 2 denotes an abscissa and an ordinate on the plane. If a coordinate of an aligned key point is pre″ and the loss of the network key point branch is loss_k, loss_k is calculated as follows in steps (1) to (6).

  • (1) Based on the 4 key points in the labelling information and the predicted 4 key points, determine a transformation ratio r:
    r = (max(gt) − min(gt)) / (max(pre) − min(pre)),
    where max denotes taking a maximum value, and min denotes taking a minimum value.
  • (2) According to the transformation ratio, perform size transformation on the predicted 4 key points, and set the predicted key points after the size transformation to be pre′:
    pre′ = (pre − min(pre)) · r + min(pre)
  • (3) Based on the 4 key points in the labelling information, determine first position transformation information gt_c:
    gt_c = (max(gt) − min(gt)) / 2 + min(gt)
  • (4) Based on the predicted 4 key points after the size transformation, determine second position transformation information pre′_c:
    pre′_c = (max(pre′) − min(pre′)) / 2 + min(pre′)
  • (5) Respectively add the first position transformation information to the predicted 4 key points after the size transformation and subtract the second position transformation information, to finish the alignment operation on the predicted 4 key points, where the aligned key points are set as pre″:
    pre″ = pre′ + gt_c − pre′_c
  • (6) The loss loss_k of the network key point branch is:
    loss_k = (1 / (4N)) · Σ_{1≤j≤N} Σ_{i∈{1,2,3,4}} smooth_L1(gt_{j,i} − pre″_{j,i})


The alignment operation on the 4 key points is performed, so that a quadrilateral region formed by the 4 key points is in the middle of the plane mask.
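The alignment of steps (1) to (6) and the Smooth L1 key-point loss can be sketched as follows in PyTorch; taking max/min per plane over the 4 key points, and computing pre′_c from the size-transformed points, are assumptions made in this reconstruction.

```python
import torch
import torch.nn.functional as F

def keypoint_loss(gt, pre):
    """gt, pre: (N, 4, 2) labelled / predicted key points per plane.
    Implements the alignment of steps (1)-(6) followed by Smooth L1."""
    # per-plane max/min over the 4 points, kept broadcastable: (N, 1, 2)
    max_gt, min_gt = gt.amax(dim=1, keepdim=True), gt.amin(dim=1, keepdim=True)
    max_pre, min_pre = pre.amax(dim=1, keepdim=True), pre.amin(dim=1, keepdim=True)

    r = (max_gt - min_gt) / (max_pre - min_pre)          # (1) transformation ratio
    pre1 = (pre - min_pre) * r + min_pre                 # (2) size transformation
    gt_c = (max_gt - min_gt) / 2 + min_gt                # (3) first position information
    max_p1, min_p1 = pre1.amax(dim=1, keepdim=True), pre1.amin(dim=1, keepdim=True)
    pre1_c = (max_p1 - min_p1) / 2 + min_p1              # (4) second position information
    pre2 = pre1 + gt_c - pre1_c                          # (5) aligned key points

    # (6) Smooth L1 between aligned predictions and labels, averaged over all coordinates
    return F.smooth_l1_loss(pre2, gt, reduction="mean")
```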


In the training process, a total loss is determined according to a loss between the label of the labelled plane detection frame and the model-predicted plane detection frame, a loss between the label of the labelled plane mask and the model-predicted plane mask, and a loss between the labelled 4 key points and the predicted 4 key points after the alignment operation, of each training image, parameters of the deep learning model are updated according to the total loss, and the training process is iteratively executed until a training termination condition is met, for example, a preset number of iterations is reached, or the total loss is less than a certain value, and the like.


In step 230, the trained deep learning model is determined as the plane prediction model. The plane prediction model can predict the plane detection frame of the image, the plane mask in the plane detection frame, and the 4 key points in the plane mask.



FIG. 4 illustrates a flow diagram of labelling 4 key points in a plane mask in a training image according to some embodiments of the present disclosure.


As shown in FIG. 4, the method of this embodiment comprises the following steps.


In step 410, one training image containing a plane is acquired.


Many images contain planes, which are, for example, but not limited to, a desktop, a wall surface, various surfaces of a cabinet, a floor, etc. In FIG. 4, a side surface of one cabinet is shown.


In step 420, a plane mask of the training image in the pixel coordinate system is acquired.


As described above, the plane mask of the training image in the pixel coordinate system can be acquired by means of labelling, or the training image and its plane mask in the pixel coordinate system can be acquired through the ready-made PlaneRCNN dataset.


In step 430, the plane mask of the training image is converted from the pixel coordinate system to the plane coordinate system, which comprises (1-2):


(1) According to the camera parameters related to the training image, and the rotation/translation matrix from the camera coordinate system to the world coordinate system, convert the plane mask of the training image from the pixel coordinate system to the world coordinate system.


A coordinate in the pixel coordinate system is a coordinate on an image obtained after a camera shoots a scene; the pixel coordinate system is a two-dimensional coordinate system.


A coordinate of a foreground point in the plane mask of the training image in the pixel coordinate system is set as S_img = {s_img^1, …, s_img^N}, and a coordinate of a foreground point in the plane mask of the training image in the world coordinate system is set as S_world = {s_world^1, …, s_world^N}, where N denotes the number of the foreground points.
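As a rough sketch only, the pixel-to-world conversion can be written under standard pinhole-camera assumptions (intrinsic matrix K, rotation R and translation t from the camera coordinate system to the world coordinate system, and a per-pixel depth); the exact parameterization provided by the PlaneRCNN dataset may differ.

```python
import numpy as np

def pixel_to_world(points_uv, depth, K, R, t):
    """Back-project mask foreground pixels (u, v) with depth into the world
    coordinate system: X_cam = depth * K^-1 [u, v, 1]^T, X_world = R X_cam + t.
    points_uv: (N, 2) pixel coordinates; depth: (N,) depth values."""
    ones = np.ones((points_uv.shape[0], 1))
    pix_h = np.hstack([points_uv, ones])                  # (N, 3) homogeneous pixels
    cam = (np.linalg.inv(K) @ pix_h.T) * depth            # (3, N) camera coordinates
    world = R @ cam + t.reshape(3, 1)                     # (3, N) world coordinates
    return world.T                                        # (N, 3)
```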


(2) Convert the plane mask of the training image from the world coordinate system to the plane coordinate system.


A coordinate in the plane coordinate system is equivalent to a coordinate on an image obtained when the camera shoots the plane from directly in front of it; in the plane coordinate system, the depth value of each foreground point on the plane is the same. The plane coordinate system is a two-dimensional coordinate system.



FIG. 5 illustrates schematic diagrams of three coordinate systems. They are a pixel coordinate system, a world coordinate system and a plane coordinate system in this order from left to right.


A coordinate of a foreground point in the plane mask in the plane coordinate system is set as S_plane = {s_plane^1, …, s_plane^N}.




Two points A = (x1, y1, z1) ∈ S_world and B = (x2, y2, z2) ∈ S_world in the plane mask are found in the world coordinate system, and then one point C = (x3, y3, z3) on the instance is found in the world coordinate system, so that AC ⊥ AB. The plane coordinate system is constructed by taking A as an origin, AB as an x-axis and AC as a y-axis.


A coordinate of the point C is calculated.


It is given that a normal of the instance is n = (a, b, c), and an offset is d, A = (x1, y1, z1), B = (x2, y2, z2), and since AC ⊥ AB and the normal of the plane where the point C is located is n, the following relation is obtained:












a·x3 + b·y3 + c·z3 = d

(x3 − x1)·(x2 − x1) + (y3 − y1)·(y2 − y1) + (z3 − z1)·(z2 − z1) = 0










If the vector AC is in parallel with the x-axis, (x3, y3, z3) = (x1, y1, z1 + 1); otherwise, if the vector AC is not in parallel with the x-axis, x3 = 0 and

y3 = [c·x1·(x2 − x1) + c·y1·(y2 − y1) + c·z1·(z2 − z1) − d·(z2 − z1)] / [c·(y2 − y1) − b·(z2 − z1)]

z3 = (d − a·x3 − b·y3) / c





Therefore, the points A = (x1, y1, z1), B = (x2, y2, z2) and C = (x3, y3, z3) in the world coordinate system are obtained.
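For illustration, the closed-form computation of the point C derived above can be sketched as follows; the degenerate-case test on the denominator is an assumption about how the special case is detected in practice.

```python
import numpy as np

def find_point_c(A, B, normal, d):
    """Find C on the plane (normal . X = d) with AC perpendicular to AB,
    following the formulas above (x3 fixed to 0 in the general case)."""
    a, b, c = normal
    x1, y1, z1 = A
    x2, y2, z2 = B
    denom = c * (y2 - y1) - b * (z2 - z1)
    if abs(denom) < 1e-9:                       # degenerate case noted in the text
        return np.array([x1, y1, z1 + 1.0])
    x3 = 0.0
    y3 = (c * x1 * (x2 - x1) + c * y1 * (y2 - y1) + c * z1 * (z2 - z1)
          - d * (z2 - z1)) / denom
    z3 = (d - a * x3 - b * y3) / c
    return np.array([x3, y3, z3])
```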


Since A is the origin in the plane coordinate system and AC ⊥ AB, the coordinates of the points A, B, C in the plane coordinate system respectively are:

A = (0, 0, 0), B = (‖AB‖, 0, 0), C = (0, ‖AC‖, 0).






According to the coordinates of the three points in the world coordinate system and the plane coordinate system, a transformation matrix M between the world coordinate system and the plane coordinate system is obtained, and according to the transformation matrix M, s_plane^i can be calculated by:

s_plane^i = M · s_world^i

Therefore, S_plane = {s_plane^1, …, s_plane^N}, that is, the coordinate of the foreground point of the plane mask in the plane coordinate system, is obtained.


Each plane has its own plane coordinate system, and in the plane coordinate system, an inscribed rectangle of the plane mask (such as a maximum inscribed square) can be found more easily, and its 4 vertexes are taken as the 4 key points.
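A minimal sketch of mapping world-coordinate foreground points into the plane coordinate system spanned by AB and AC is shown below; projecting onto the orthonormalized basis plays the role of the transformation matrix M described above, although the explicit matrix form is not reproduced.

```python
import numpy as np

def world_to_plane(S_world, A, B, C):
    """Map world points onto the plane coordinate system with origin A,
    x-axis along AB and y-axis along AC (AC is perpendicular to AB)."""
    x_axis = (B - A) / np.linalg.norm(B - A)
    y_axis = (C - A) / np.linalg.norm(C - A)
    rel = S_world - A                              # (N, 3) vectors from the origin A
    u = rel @ x_axis                               # plane x coordinates
    v = rel @ y_axis                               # plane y coordinates
    return np.stack([u, v], axis=1)                # (N, 2) plane coordinates
```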


In step 440, median filtering on the plane mask in the plane coordinate system is performed, denoted as: mask = MedianFilter(mask), where the mask on a right side of = is the plane mask before the filtering, and the mask on a left side of = is the plane mask after the filtering.


The median filtering is a nonlinear smoothing technique, in which a gray value of each pixel is set as a median of gray values of all pixels within a certain neighborhood window of the point.


In step 450, edge detection on the plane mask is performed in the plane coordinate system, denoted as: edges = Edge(mask).


For the edge detection technique, reference can be made to the related art.


In step 460, based on a detected edge of the plane mask, Hough line detection is performed on the plane mask in the plane coordinate system, denoted as: lines = HoughLineDetect(edges). For the Hough line detection method, reference can be made to the related art.


Further, lines keep_lines in which the number of pixels is greater than a set threshold voteThresh are screened out from the detected lines: keep_lines = {line_j | line_j(pixel) ≥ voteThresh, line_j ∈ lines}, wherein line_j(pixel) denotes the number of pixels contained in a j-th line detected on the plane mask.
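Steps 440 to 460 map naturally onto standard OpenCV primitives; a rough sketch is given below, in which the kernel size, Canny thresholds, and Hough parameters are placeholder values, and the vote threshold of HoughLinesP approximates the voteThresh screening described above.

```python
import cv2
import numpy as np

def detect_candidate_lines(mask, vote_thresh=50):
    """mask: uint8 plane mask (0/255) in the plane coordinate system."""
    mask = cv2.medianBlur(mask, 5)                       # step 440: median filtering
    edges = cv2.Canny(mask, 50, 150)                     # step 450: edge detection
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=vote_thresh,       # step 460: Hough line detection;
                            minLineLength=vote_thresh,   # lines with too few pixels (votes)
                            maxLineGap=5)                # are screened out
    return [] if lines is None else [l[0] for l in lines]  # each line as (x1, y1, x2, y2)
```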


In step 470, based on a slope of the line, the detected lines are merged, denoted as: merge_lines = MergeLine(keep_lines), wherein MergeLine() denotes that lines with similar slopes are merged into one line.


In step 480, a probability that the detected line is a boundary line of the plane mask is determined, and a pair of lines having a perpendicular relation or a parallel relation is chosen from the detected lines, denoted as: choose_lines = ChooseLine(merge_lines).


According to difference information of symmetrical regions on both sides of the line, the probability that the detected line is the boundary line of the plane mask is determined, denoted as:

score_k = (1/N) · Σ 1_{valueThresh}( |region1(line_k) − region2(line_k)| ), line_k ∈ merge_lines

where region1(line_k) and region2(line_k) respectively denote a symmetric region with a fixed width on both sides of the line line_k, 1_{valueThresh}(·) counts the pixel pairs whose difference is not less than the set threshold valueThresh, and N is the number of pixels in the region. The greater the difference between the symmetric regions on the both sides of the line, the greater the probability that the line is the boundary line of the plane mask.
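A sketch of the symmetric-region score is given below: for each candidate line, fixed-width strips are sampled on both sides and the fraction of pixel pairs whose difference reaches valueThresh is counted; sampling the strips by shifting along the line normal is an assumption about how region1 and region2 are formed.

```python
import numpy as np

def boundary_score(mask, line, width=10, value_thresh=0.5):
    """mask: float mask in the plane coordinate system; line: (x1, y1, x2, y2).
    Score is the fraction of sampled pixel pairs that differ across the line."""
    x1, y1, x2, y2 = line
    direction = np.array([x2 - x1, y2 - y1], dtype=float)
    direction /= (np.linalg.norm(direction) + 1e-9)
    normal = np.array([-direction[1], direction[0]])       # unit normal to the line
    ts = np.linspace(0.0, 1.0, 100)
    base = np.stack([x1 + ts * (x2 - x1), y1 + ts * (y2 - y1)], axis=1)
    h, w = mask.shape
    diffs = []
    for off in range(1, width + 1):                        # symmetric strips on both sides
        p1 = np.round(base + off * normal).astype(int)
        p2 = np.round(base - off * normal).astype(int)
        valid = ((p1[:, 0] >= 0) & (p1[:, 0] < w) & (p1[:, 1] >= 0) & (p1[:, 1] < h) &
                 (p2[:, 0] >= 0) & (p2[:, 0] < w) & (p2[:, 1] >= 0) & (p2[:, 1] < h))
        if valid.any():
            d = np.abs(mask[p1[valid, 1], p1[valid, 0]] - mask[p2[valid, 1], p2[valid, 0]])
            diffs.append(d)
    if not diffs:
        return 0.0
    diffs = np.concatenate(diffs)
    return float((diffs >= value_thresh).mean())            # score_k in [0, 1]
```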


In step 490, under the condition that the pair of lines is found, a line with a greatest probability in a pair of lines with a greatest sum of probabilities is determined as one boundary line of the plane mask in the plane coordinate system; and under the condition that the pair of lines is not found, a line with a greatest probability is determined as one boundary line of the plane mask in the plane coordinate system, thereby obtaining the boundary line of the plane mask in the plane coordinate system, denoted as BestLine = getBestLine(choose_lines). Based on the boundary line, an inscribed rectangle of the plane mask is determined in the plane coordinate system.
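A sketch of the pair-then-best selection of step 490 follows; testing perpendicular and parallel relations on line angles with a small tolerance is an assumed implementation detail.

```python
import numpy as np
from itertools import combinations

def choose_boundary_line(lines, scores, angle_tol_deg=5.0):
    """lines: list of (x1, y1, x2, y2); scores: per-line boundary probabilities."""
    def angle(l):
        return np.degrees(np.arctan2(l[3] - l[1], l[2] - l[0])) % 180.0

    best_pair, best_sum = None, -1.0
    for i, j in combinations(range(len(lines)), 2):
        diff = abs(angle(lines[i]) - angle(lines[j])) % 180.0
        diff = min(diff, 180.0 - diff)
        parallel = diff < angle_tol_deg
        perpendicular = abs(diff - 90.0) < angle_tol_deg
        if (parallel or perpendicular) and scores[i] + scores[j] > best_sum:
            best_pair, best_sum = (i, j), scores[i] + scores[j]

    if best_pair is not None:                       # pair found: best line within best pair
        i, j = best_pair
        return lines[i] if scores[i] >= scores[j] else lines[j]
    return lines[int(np.argmax(scores))]            # otherwise: globally best line
```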


In the plane coordinate system, an inscribed rectangle of the plane mask that is parallel to the boundary line is determined, wherein the inscribed rectangle is, for example, a maximum inscribed square, denoted as square = MaxInscribedSquare(mask), wherein square_edge_i // BestLine, and the four vertexes of the maximum inscribed square are P_plane = {p_plane^1, p_plane^2, p_plane^3, p_plane^4}.
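One possible brute-force sketch of finding a maximum inscribed square aligned with the boundary line is shown below: rotate the mask so that BestLine becomes horizontal, run the classic dynamic-programming largest-square-of-ones search, and rotate the four vertexes back; this is an illustrative strategy, not necessarily the MaxInscribedSquare procedure used in practice.

```python
import cv2
import numpy as np

def max_inscribed_square(mask, best_line):
    """mask: uint8 plane mask; best_line: (x1, y1, x2, y2) boundary line."""
    x1, y1, x2, y2 = best_line
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    h, w = mask.shape
    center = (w / 2.0, h / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)        # make the boundary line horizontal
    rotated = cv2.warpAffine(mask, rot, (w, h), flags=cv2.INTER_NEAREST)

    # DP: dp[i, j] = side of the largest all-foreground square whose bottom-right is (i, j)
    fg = (rotated > 0).astype(np.int32)
    dp = np.zeros_like(fg)
    best, best_ij = 0, (0, 0)
    for i in range(h):
        for j in range(w):
            if fg[i, j]:
                dp[i, j] = 1 if i == 0 or j == 0 else 1 + min(dp[i-1, j], dp[i, j-1], dp[i-1, j-1])
                if dp[i, j] > best:
                    best, best_ij = dp[i, j], (i, j)

    i, j = best_ij                                            # square corners in rotated coords
    corners = np.array([[j - best + 1, i - best + 1], [j, i - best + 1],
                        [j, i], [j - best + 1, i]], dtype=np.float32)
    inv = cv2.invertAffineTransform(rot)                      # map the vertexes back
    return cv2.transform(corners[None], inv)[0]               # (4, 2) vertexes P_plane
```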




In step 4100, 4 vertices of the inscribed rectangle of the plane mask are converted from the plane coordinate system to the pixel coordinate system.


As described above, the coordinate S_img = {s_img^1, …, s_img^N} of the foreground point in the plane mask of the training image in the pixel coordinate system and the coordinate S_plane = {s_plane^1, …, s_plane^N} of the foreground point of the plane mask in the plane coordinate system are known, thereby obtaining the transformation matrix T between the pixel coordinate system and the plane coordinate system, that is, s_plane^i = T · s_img^i.


Based on the above determined 4 vertexes p_plane^i of the inscribed rectangle in the plane mask in the plane coordinate system, the coordinates of the 4 key points in the plane mask of the training image in the pixel coordinate system are determined, denoted as: p_img^i = T^(−1) · p_plane^i.




The 4 key points inscribed in the plane mask in the training image are automatically found and used as the training data to train the model, so that the model can predict the 4 key points inscribed in the plane mask in the video frame image, so as to embed the image into the proper position in the video frame image, thereby further improving the fusion effect of the image and the video.



FIG. 6 illustrates a flow diagram of a method of embedding an image in a video according to some embodiments of the present disclosure.


As shown in FIG. 6, the method of this embodiment comprises steps 610 to 620.


In step 610, a video frame image of a video is input into a plane prediction model, and a predicted plane mask of the video frame image is acquired.


The plane prediction model is obtained by training a deep learning model using training images with labels of a plane detection frame and a plane mask, and reference is made to the foregoing embodiments for details.


In step 620, the image to be embedded is embedded into the predicted plane mask of the video frame image.


For example, the image to be embedded is embedded into a position region in the plane mask that is parallel to a boundary line of the plane mask.


Examples of the images to be embedded include, but are not limited to, business identification images, product images, character images, and advertisement images.


The plane masks widely existing in video frame images are automatically found, and the image to be embedded is embedded into the plane mask, which not only allows the image to be automatically and naturally fused into the video, but also allows the image to be fused into a wider range of videos.



FIG. 7 is a flow diagram illustrating a method of embedding an image in a video according to other embodiments of the present disclosure.


As shown in FIG. 7, the method of this embodiment comprises: steps 710 to 720.


In step 710, a video frame image of a video is input into a plane prediction model, and a predicted plane mask of the video frame image as well as 4 key points in the plane mask are acquired.


The plane prediction model is obtained by training a deep learning model using training images with labels of a plane detection frame and a plane mask as well as labelling information of 4 key points in the plane mask, and reference is made to the foregoing embodiments for details.


In step 720, 4 vertices of the image to be embedded are mapped to the 4 key points in the predicted plane mask of the video frame image, and the image to be embedded is embedded into a position region corresponding to the 4 key points in the predicted plane mask of the video frame image.


Specifically, according to a mapping relation between the 4 vertices (whose coordinates are (0, 0), (w, 0), (0, h), (w, h)) of the image I_ad (whose resolution is w × h) to be embedded and the 4 key points pre″ in the predicted plane mask of the video frame image I_rgb, a transformation matrix M ∈ R^{3×3} from the image I_ad to be embedded to the predicted plane mask of the video frame image I_rgb is determined; and based on the transformation matrix, each foreground point of the image to be embedded is transformed into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image, that is, for each pixel p_rgb ∈ R^{1×2} of the position region formed by the 4 key points on I_rgb, a pixel p_ad ∈ R^{1×2} corresponding to p_rgb is found on I_ad by [p_ad, 1]^T = M[p_rgb, 1]^T, and finally, the pixel value of p_ad is assigned to p_rgb.
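A sketch of step 720 using OpenCV follows: the perspective transform from the four corners of the image to be embedded to the four predicted key points is estimated, the image is warped onto the frame canvas, and the result is composited through a warped mask (the backward mapping inside cv2.warpPerspective stands in for the per-pixel loop described above).

```python
import cv2
import numpy as np

def embed_image(frame, ad_image, keypoints):
    """frame: H x W x 3 video frame; ad_image: h x w x 3 image to be embedded;
    keypoints: (4, 2) predicted key points ordered like the ad-image corners."""
    h, w = ad_image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])      # 4 vertexes of the image to embed
    dst = np.float32(keypoints)                              # 4 key points in the plane mask
    M = cv2.getPerspectiveTransform(src, dst)                # 3 x 3 transformation matrix

    H, W = frame.shape[:2]
    warped = cv2.warpPerspective(ad_image, M, (W, H))        # image placed on the frame canvas
    region = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M, (W, H))
    out = frame.copy()
    out[region > 0] = warped[region > 0]                     # assign embedded pixels to the region
    return out
```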


The plane mask widely existing in each video frame image and the 4 key points in the plane mask are automatically found, and the image to be embedded is embedded into the position region corresponding to the 4 key points in the plane mask, which not only allows the image to be automatically, naturally, and widely fused into the video, but also improves the fusion effect of the image and the video.



FIG. 8 illustrates a schematic diagram of an apparatus of embedding an image in a video according to some embodiments of the present disclosure.


As shown in FIG. 8, the apparatus 800 of embedding an image in a video of this embodiment comprises: a memory 810 and a processor 820 coupled to the memory 810, wherein the processor 820 is configured to perform the method of embedding an image in a video in any of the embodiments described above based on instructions stored in the memory 810.


The memory 810 can include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory has thereon stored, for example, an operating system, an application, a boot loader, and other programs.


The apparatus 800 can also include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, as well as the memory 810 and the processor 820, can be connected, for example, through a bus 860. The input/output interface 830 provides a connection interface for an input/output device such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for a variety of networking devices. The storage interface 850 provides a connection interface for an external storage device such as a SD card and a USB flash disk.



FIG. 9 illustrates a schematic diagram of an apparatus of acquiring a plane prediction model according to some embodiments of the present disclosure.


As shown in FIG. 9, the apparatus of acquiring a plane prediction model 900 of this embodiment comprises: a memory 910 and a processor 920 coupled to the memory 910, wherein the processor 920 is configured to perform the method of acquiring a plane prediction model in any of the embodiments described above based on instructions stored in the memory 910.


The memory 910 can include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory has thereon stored, for example, an operating system, an application program, a boot loader, and other programs.


The apparatus 900 can also comprise an input/output interface 930, a network interface 940, a storage interface 950, and the like. These interfaces 930, 940, 950, as well as the memory 910 and the processor 920 can be connected, for example, through a bus 960. The input/output interface 930 provides a connection interface for an input/output device such as a display, a mouse, a keyboard, and a touch screen. The network interface 940 provides a connection interface for a variety of networking devices. The storage interface 950 provides a connection interface for an external storage device such as a SD card and a USB flash disk.


The apparatus 800 of embedding an image in a video can be different from or the same as the apparatus 900 of acquiring a plane prediction model. For example, the apparatus 800 of embedding an image in a video and the apparatus 900 of acquiring a plane prediction model can be disposed on one computer, or disposed on two computers.


According to some embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of embedding an image in a video or the method of acquiring a plane prediction model.


It should be appreciated by those skilled in the art that, the embodiments of the present disclosure can be provided as a method, system, or computer program product. Accordingly, the present disclosure can take a form of an entire hardware embodiment, an entire software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure can take a form of a computer program product implemented on one or more non-transitory computer-readable storage media (including, but not limited to, disk memories, CD-ROMs, optical memories, etc.) having computer program code embodied therein.


The present disclosure is described with reference to flow diagrams and/or block diagrams of the method, apparatus (system), and computer program product according to the embodiments of the present disclosure. It will be understood that each flow and/or block in the flow diagrams and/or block diagrams, and a combination of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing devices to produce a machine, such that the instructions, which are executed by the processor of the computer or other programmable data processing devices, create means for implementing the function specified in one or more flows of the flow diagrams and/or one or more blocks of the block diagrams.


These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing devices to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in one or more flows of the flow diagrams and/or one or more blocks of the block diagrams.


These computer program instructions can also be loaded onto a computer or other programmable data processing devices, such that a series of operation steps are performed on the computer or other programmable devices to produce a computer-implemented process, and therefore, the instructions executed on the computer or other programmable devices provide steps for implementing the function specified in one or more flows of the flow diagrams and/or one or more blocks of the block diagrams.


The above description is only the preferred embodiments of the present disclosure and not used for limiting the present disclosure, and any modifications, equivalents, improvements and the like that are within the spirit and scope of the present disclosure should be included in the protection scope of the present disclosure.

Claims
  • 1. A method of embedding an image in a video, comprising: inputting a video frame image of a video into a plane prediction model, and acquiring a predicted plane mask of the video frame image, wherein the plane prediction model is obtained by training a deep learning model using training images with labels of a plane detection frame and a plane mask; and embedding the image to be embedded into the predicted plane mask of the video frame image.
  • 2. The method according to claim 1, wherein the plane prediction model is obtained by training the deep learning model using the training images with the labels of the plane detection frame and the plane mask as well as labeling information of 4 key points in the plane mask; the acquiring step comprises: after inputting the video frame image of the video into the plane prediction model, acquiring the predicted plane mask of the video frame image as well as the 4 key points in the plane mask; and the embedding the image to be embedded into the predicted plane mask of the video frame image comprises: aligning 4 vertexes of the image to be embedded with the 4 key points in the predicted plane mask of the video frame image, and embedding the image to be embedded into a position region corresponding to the 4 key points in the predicted plane mask of the video frame image.
  • 3. The method according to claim 2, wherein the labelling information of the 4 key points in the plane mask in each training image is obtained by: converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system; determining a boundary line of the plane mask in the plane coordinate system; determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask; and converting 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.
  • 4. The method according to claim 3, wherein the converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system comprises: converting the plane mask of the each training image from the pixel coordinate system to a world coordinate system; and converting the plane mask of the each training image from the world coordinate system to the plane coordinate system.
  • 5. The method according to claim 3, wherein the determining a boundary line of the plane mask in the plane coordinate system comprises: performing edge detection on the plane mask in the plane coordinate system; performing Hough line detection on the plane mask in the plane coordinate system based on a detected edge of the plane mask; determining a probability that each detected line is the boundary line of the plane mask; and determining one boundary line of the plane mask in the plane coordinate system from the detected lines based on the probability.
  • 6. The method according to claim 5, wherein the determining a probability that each detected line is the boundary line of the plane mask comprises: determining the probability that each detected line is the boundary line of the plane mask according to difference information of symmetrical regions on both sides of the each detected line, wherein the greater the difference between the symmetrical regions on the both sides of a line, the greater the probability that the line is the boundary line of the plane mask.
  • 7. The method according to claim 5, wherein the determining one boundary line of the plane mask in the plane coordinate system from the detected lines comprises: choosing a pair of lines having a perpendicular relation or a parallel relation from the detected lines; under the condition that the pair of lines is found, determining a line with a greatest probability in the pair of lines with a greatest sum of probabilities as the one boundary line of the plane mask in the plane coordinate system; and under the condition that the pair of lines is not found, determining a line with a greatest probability as the one boundary line of the plane mask in the plane coordinate system.
  • 8. The method according to claim 5, wherein the determining a boundary line of the plane mask in the plane coordinate system further comprises at least one of: performing median filtering on the plane mask in the plane coordinate system before the edge detection; or merging the detected lines based on a slope of each line after the Hough line detection.
  • 9. The method according to claim 3, wherein the determining an inscribed rectangle of the plane mask in the plane coordinate system comprises: determining an inscribed rectangle of the plane mask in the plane coordinate system, wherein the inscribed rectangle is parallel to the boundary line and comprises a maximum inscribed square.
  • 10. The method according to claim 2, wherein the embedding the image to be embedded into the predicted plane mask of the video frame image comprises: determining a transformation matrix from the image to be embedded to the predicted plane mask of the video frame image according to a mapping relation between the 4 vertexes of the image to be embedded and the 4 key points in the plane mask of the predicted video frame image; and transforming each foreground point of the image to be embedded into the position region corresponding to the 4 key points in the predicted plane mask of the video frame image based on the transformation matrix.
  • 11. The method according to claim 2, wherein the deep learning model uses a loss function that is determined based on the 4 key points in the labelling information and the predicted 4 key points after the alignment operation is performed, wherein performing the alignment operation on the predicted 4 key points comprises: determining a transformation ratio based on the 4 key points in the labelling information and the predicted 4 key points; performing size transformation on the predicted 4 key points according to the transformation ratio; determining first position transformation information based on the 4 key points in the labelling information; determining second position transformation information based on the predicted 4 key points; and respectively adding the first position transformation information to the predicted 4 key points after the size transformation and subtracting the second position transformation information, to finish the alignment operation on the predicted 4 key points.
  • 12. The method according to claim 1, wherein the deep learning model comprises region-based convolutional neural networks; or the image to be embedded comprises an enterprise identification image and a product image.
  • 13. A method of acquiring a plane prediction model, comprising: labelling a plane detection frame, a plane mask in each training image and 4 key points in the plane mask; training a deep learning model using the training images with labels of the plane detection frame and the plane mask as well as labelling information of the 4 key points in the plane mask; and determining the trained deep learning model as the plane prediction model.
  • 14. The method according to claim 13, wherein the labelling 4 key points in the plane mask in each training image comprises: converting the plane mask of the each training image from a pixel coordinate system to a plane coordinate system; determining a boundary line of the plane mask in the plane coordinate system; determining an inscribed rectangle of the plane mask in the plane coordinate system based on the boundary line of the plane mask; and converting 4 vertices of the inscribed rectangle of the plane mask from the plane coordinate system to the pixel coordinate system.
  • 15. An apparatus of embedding an image in a video, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to perform the method of embedding an image in a video according to claim 1 based on instructions stored in the memory.
  • 16. An apparatus of acquiring a plane prediction model, comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to perform the method of acquiring a plane prediction model according to claim 13 based on instructions stored in the memory.
  • 17. A non-transitory computer-readable storage medium comprising a stored computer program which implements the method of embedding an image in a video according to claim 1 when executed by a processor.
  • 18. A non-transitory computer-readable storage medium comprising a stored computer program which implements the method of acquiring a plane prediction model according to claim 13 when executed by a processor.
Priority Claims (1)
Number Date Country Kind
202011004707.1 Sep 2020 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a National Stage Application of International Patent Application No. PCT/CN2021/092267, filed on May 8, 2021, which is based on and claims the priority of the Chinese patent application No. 202011004707.1 filed on Sep. 22, 2020, the disclosures of both of which are hereby incorporated as a whole into the present application.

PCT Information
Filing Document Filing Date Country Kind
PCT/CN2021/092267 5/8/2021 WO