The present application is a National Phase of International Application PCT/CN2019/109483, filed on Sep. 30, 2019, which claims priority to Chinese Patent Application No. 201811549643.6, filed on Dec. 18, 2018 and entitled “Panoramic Video Anti-Shake Method and Portable Terminal”, the contents of which are herein incorporated by reference in their entireties.
The present disclosure relates to the field of panoramic video, and particularly to a panoramic video anti-shake method and a portable terminal.
At present, panoramic videos are usually shot with a handheld panoramic shooting device. When shooting while moving, jitter appears in the panoramic video because of unstable hands. When a panoramic view angle is acquired, the focus of the original lens is often lost due to movement or shaking of the camera, which degrades the viewing experience of the panoramic video. One current solution is to mount the panoramic shooting device on a pan-tilt to stabilize the captured pictures. However, a pan-tilt is relatively expensive and generally bulky, and it does not completely eliminate picture jitter when shooting video with a handheld panoramic shooting device.
When a viewer watches a panoramic video and only wants to view it from angles of view in the vertical and horizontal directions, the angle of view must change only in the vertical and horizontal directions while the video remains stable. It is therefore necessary to develop a panoramic video anti-shake method that retains only the motion states in the vertical and horizontal directions.
The purpose of the present disclosure is to provide a panoramic video anti-shake method, a computer-readable storage medium and a portable terminal, intended to solve the problem of losing the original lens focus when the panoramic video perspective is acquired. The method can generate smooth and stable videos in the vertical and horizontal directions while retaining the original shooting angle of the camera.
In the first aspect, the present disclosure provides a panoramic video anti-shake method, which includes:
acquiring a world coordinate of any one reference point in a world coordinate system in real time, and simultaneously acquiring a camera coordinate corresponding to the reference point in a portable terminal and an angular velocity value of a gyroscope in the portable terminal in a current state;
smoothing a motion of the camera by using an extended Kalman filter;
decomposing the smoothed motion, synthesizing a motion of a virtual lens in an ePTZ mode, and calculating a rotation quantity of the virtual lens;
re-projecting an original video according to the rotation quantity of the virtual lens and a rotation matrix by which the camera coordinate is transformed to the world coordinate, to generate a stable video.
In the second aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implements the steps of the above-mentioned panoramic video anti-shake method.
In the third aspect, the present disclosure provides a portable terminal, which includes:
one or more processors;
a memory; and
one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and the processor, when executing the computer programs, implements the steps of the above-mentioned panoramic video anti-shake method.
In the present disclosure, by decomposing the motion of the camera and synthesizing the motion of the virtual lens in the ePTZ mode in which only the vertical and horizontal motions are retained, the focus of the lens can keep changing in the vertical and/or horizontal direction during playback of the panoramic video, while changes of the original camera motion in other directions are filtered out. Therefore, this method can maintain smooth motion of the rendering lens, generate a stable video and retain the original shooting angle of the camera, and it is strongly robust to large-noise scenes and most sports scenes.
In order to make the objectives, technical solution, and advantages of the present disclosure clearer, the present disclosure will be described in detail with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described here are only used for explaining the present disclosure, rather than limiting the present disclosure.
In order to illustrate the technical solution of the present disclosure, specific embodiments are used for description below.
Referring to the accompanying drawing, a panoramic video anti-shake method provided in the embodiment I of the present disclosure includes the following steps.
S101: a world coordinate of any one reference point in a world coordinate system is acquired in real time, and a camera coordinate corresponding to the reference point in a portable terminal and an angular velocity value of a gyroscope in the portable terminal in a current state are simultaneously acquired.
In the embodiment I of the present disclosure, S101 can specifically be as follows.
The world coordinate of the reference point is Pw, and the camera coordinate is Pc; they specifically satisfy the relation:

Pw = Rw2c·Pc (1)

In formula (1),

Rw2c = [ r11 r12 r13
         r21 r22 r23
         r31 r32 r33 ]

is the rotation matrix by which the camera coordinate is transformed to the world coordinate; the elements r11 to r33 are the elements of the rotation matrix Rw2c, which satisfies Rw2c·Rw2cT = I, where I is the unit matrix.
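The transform of formula (1) can be sketched as follows. This is an illustrative Python/NumPy snippet, not part of the disclosure; the function name and example values are hypothetical:

```python
import numpy as np

def rotate_to_world(R_w2c: np.ndarray, P_c: np.ndarray) -> np.ndarray:
    """Transform a camera-frame coordinate P_c to the world frame, formula (1)."""
    # A rotation matrix is orthonormal: R @ R.T equals the unit matrix I.
    assert np.allclose(R_w2c @ R_w2c.T, np.eye(3), atol=1e-6)
    return R_w2c @ P_c

# Example: a 90-degree rotation about the z-axis maps the x-axis to the y-axis.
R_w2c = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
P_w = rotate_to_world(R_w2c, np.array([1.0, 0.0, 0.0]))  # -> [0., 1., 0.]
```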
The step of acquiring the angular velocity value of the gyroscope in the portable terminal in real time specifically includes: an angular velocity sensor is used to read the three-axis angular velocity value wk.
S102: an extended Kalman filter is utilized to smooth a motion of the camera.
The extended Kalman filter algorithm linearizes the nonlinear system and then performs Kalman filtering. The Kalman filter is a high-efficiency recursive filter which can estimate the state of a dynamic system from a series of incomplete and noisy measurements.
In the embodiment I of the present disclosure, S102 can specifically be as follows.
The extended Kalman filter algorithm is utilized to establish a state model and an observation model of a motion state of the camera; specifically:
the state model is:

w̃k = w̃k−1 + nk
q̃k = Φ(w̃k−1)·q̃k−1 + mk (2)

the observation model is:

wk = w̃k + vk
qk = q̃k + uk (3)

where nk and mk are the process noises and vk and uk are the observation noises.
In the formulas (2) and (3), k is the time, wk is the obtained angular velocity, and qk is the obtained observation vector of the rotation quantity; w̃k and q̃k are the state values of the angular velocity and the rotation quantity, and w̃k−1 and q̃k−1 are the state values of the angular velocity and the rotation quantity at time k−1; qk is the quaternion representation of Rw2c−1; wk is the angular velocity value of the gyroscope, wk = lg(qk+1·qk−1), i.e., the logarithmic map of the relative rotation between consecutive frames; Φ(w̃k−1) is the state transition matrix at the time k−1, with Φ(w̃k−1) = exp([w̃k−1]×); q̃k is the quaternion representation of the estimated smoothed motion of the camera, and q̃k is the state value estimated from the value of q̃k−1 at the previous time.
The specific process of updating the prediction includes: at time k, q̃k−1 estimated at the previous time and the observation value qk at the current time are utilized to update the estimate of the state variable q̃k, to obtain an estimated value at the current time. The predicted value q̃k is the rotation quantity of the virtual lens at time k.
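The predict/update cycle described above can be sketched as follows. This is a simplified Python/NumPy illustration, not the disclosure's implementation: the gyroscope propagates the state through the exponential map, and a small constant gain stands in for the full extended Kalman gain computation; all names are illustrative:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_from_gyro(w, dt):
    """Delta rotation accumulated from angular velocity w over dt
    (the exponential map used by the state transition)."""
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = w / np.linalg.norm(w)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def predict_update(q_prev, w_gyro, q_obs, dt, gain=0.1):
    """One predict/update cycle: propagate the previous smoothed state by the
    gyroscope, then pull it toward the current observation with a small gain
    (a constant-gain stand-in for the extended Kalman update)."""
    q_pred = quat_mul(quat_from_gyro(w_gyro, dt), q_prev)  # prediction step
    q_new = (1.0 - gain) * q_pred + gain * q_obs           # linearized update
    return q_new / np.linalg.norm(q_new)                   # keep a unit quaternion
```

Iterating `predict_update` over the per-frame observations qk would yield a smoothed orientation sequence q̃k.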
S103: the smoothed motion is decomposed, a motion of the virtual lens is synthesized in an ePTZ mode, and the rotation quantity of the virtual lens is calculated.
In the embodiment I of the present disclosure, S103 can specifically be as follows.
The coordinate of the reference point in the virtual lens is Pv, which specifically satisfies:

Pv = Rv·Pw (4)

In formula (4), Rv is the rotation quantity of the virtual lens;

the ePTZ mode is a mode in which only the motions in the vertical and horizontal directions are retained; for the virtual lens in the synthesized ePTZ mode, the rotation quantity is set as Rv;
the step of decomposing the smoothed motion, synthesizing the motion of the virtual lens in the ePTZ mode and calculating the rotation quantity of the virtual lens specifically includes:
a focus direction x of the original lens is given, and the smoothed viewpoint direction is x̃ = qk ⊗ x;
Rv is assembled from the basis vectors as Rv = [e1, e2, e3], where e1 = x̃/‖x̃‖,
e3 = e1 × eup, e2 = e3 × e1; ⊗ represents the vector rotation in the quaternion space, and eup is the upward direction of the virtual lens, which can be set as [0,0,1]T.
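The assembly of the virtual-lens rotation from the smoothed focus direction can be sketched as follows (Python/NumPy, illustrative names, not part of the disclosure; an explicit normalization of e3 is added here, and the focus direction is assumed not to be parallel to eup):

```python
import numpy as np

def virtual_rotation(x_smoothed, e_up=np.array([0.0, 0.0, 1.0])):
    """Assemble Rv = [e1, e2, e3] from the smoothed focus direction:
    e1 = normalized focus, e3 = e1 x e_up, e2 = e3 x e1."""
    e1 = x_smoothed / np.linalg.norm(x_smoothed)
    e3 = np.cross(e1, e_up)
    e3 = e3 / np.linalg.norm(e3)  # normalization added; degenerate if e1 is parallel to e_up
    e2 = np.cross(e3, e1)
    return np.column_stack([e1, e2, e3])

Rv = virtual_rotation(np.array([1.0, 0.2, 0.0]))
# The assembled basis has orthonormal columns, i.e. a valid rotation matrix.
assert np.allclose(Rv @ Rv.T, np.eye(3), atol=1e-9)
```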
It should be noted that the Rodrigues formula can be utilized to obtain the rotation matrix R corresponding to a rotation of the unit vector by an angle θ. Specifically, the quaternion is set as q = (θ, x, y, z)T, with n = (x, y, z)T being the unit rotation axis; the calculation formula of the rotation matrix R is then:

R = cos θ·I + (1 − cos θ)·n·nT + sin θ·[n]×

where [n]× is the skew-symmetric cross-product matrix of n.
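A minimal sketch of this Rodrigues axis-angle-to-matrix conversion (Python/NumPy, illustrative, not part of the disclosure):

```python
import numpy as np

def axis_angle_to_matrix(theta, axis):
    """Rodrigues formula: R = cos(t)*I + (1 - cos(t))*n*n^T + sin(t)*[n]x."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])  # [n]x, the cross-product matrix
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * K)

# A 90-degree rotation about the z-axis maps the x-axis to the y-axis.
R = axis_angle_to_matrix(np.pi / 2.0, [0.0, 0.0, 1.0])
```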
S104: the original video is re-projected according to the rotation quantity of the virtual lens and the rotation matrix by which the camera coordinate is transformed to the world coordinate, to generate a stable video.
In the embodiment I of the present disclosure, S104 can specifically be as follows.
A corresponding relationship between a pixel in the original video frame and a pixel in the output video frame is calculated; interpolation resampling is then performed on the original video frame according to this corresponding relationship to generate the output video frame, and finally the stable video is generated.
The pixel in the original video frame is set as Ps, and the corresponding pixel in the output video frame is Pd; the corresponding relationship is then: Ps = Dc(Kc·Rw2c−1·Rv−1·Kv−1·Dv−1(Pd)), where Kc and Dc are the projection and distortion mappings of the original lens, and Kv and Dv are those of the virtual lens.
Then the step of performing the interpolation resampling on the original video frame Is according to the corresponding relationship to generate the output video frame Id is specifically:

Id(Pd) = Σi wi·Is(Psi) (5)

In formula (5), wi is the interpolation weight, and Psi ∈ U(Ps, δ) is a neighborhood coordinate of Ps.
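The weighted-sum resampling of formula (5) can be sketched with bilinear weights over the four nearest neighbors (Python/NumPy, illustrative, not part of the disclosure; the neighborhood U(Ps, δ) is taken here as the one-pixel bilinear neighborhood):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Resample img at fractional coordinate (x, y): the output value is a
    weighted sum over the four neighborhood pixels, with weights summing to 1
    (the form of formula (5))."""
    h, w = img.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] +
            fx * (1 - fy) * img[y0, x1] +
            (1 - fx) * fy * img[y1, x0] +
            fx * fy * img[y1, x1])

frame = np.array([[0.0, 10.0],
                  [20.0, 30.0]])
value = bilinear_sample(frame, 0.5, 0.5)  # -> 15.0 (all four weights are 0.25)
```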
In the embodiment II of the present disclosure, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps of the panoramic video anti-shake method provided in the embodiment I of the present disclosure.
The computer-readable storage medium can be a non-transitory computer-readable storage medium.
In the embodiment of the present disclosure, by decomposing the motion of the camera and synthesizing the motion of the virtual lens in the ePTZ mode in which only the vertical and horizontal motions are retained, the focus of the lens can keep changing in the vertical direction and/or the horizontal direction while the user is watching a panoramic video. Changes of the motion of the original lens in other directions are filtered out, so that only the vertical and horizontal motions of the panoramic video are retained. Accordingly, this method can maintain the smooth motion of the rendering lens, generate a stable video, and retain the original shooting perspective of the camera, and it is strongly robust to large-noise scenes and most sports scenes.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above-mentioned embodiments can be completed by a program instructing relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium can include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, etc.
The above are merely the preferred embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement and improvement made within the spirit and principle of the present disclosure shall be regarded as the protection scope of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
201811549643.6 | Dec 2018 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2019/109483 | 9/30/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/125132 | 6/25/2020 | WO | A |
Number | Date | Country | |
---|---|---|---|
20220053132 A1 | Feb 2022 | US |