This application claims priority to Chinese Patent Application No. 202111315996.1 filed on Nov. 8, 2021, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to the technical field of artificial intelligence, further to the technical fields of deep learning and computer vision, and in particular to a video stitching method and apparatus, an electronic device, and a storage medium.
As people pay more and more attention to entertainment and leisure, video media such as movies and short videos have become closely linked to the lives of the general public, and the demand for video creation keeps growing. In the process of shooting and creating such video content, it is often necessary to shoot multiple shots and then edit and splice them together.
The present disclosure provides a video stitching method and apparatus, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides a video stitching method, and the video stitching method includes: inserting an intermediate frame between a last image frame of a first video and a first image frame of a second video; sequentially selecting L image frames in order from back to front from the first video and L image frames in order from front to back from the second video separately, where L is a natural number greater than 1; and stitching together the first video and the second video to form a target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video.
In a second aspect, the present disclosure provides a video stitching apparatus, and the video stitching apparatus includes: a frame insertion module, a selection module, and a stitching module.
The frame insertion module is configured to insert an intermediate frame between a last image frame of a first video and a first image frame of a second video.
The selection module is configured to sequentially select L image frames in order from back to front from the first video and L image frames in order from front to back from the second video separately, where L is a natural number greater than 1.
The stitching module is configured to stitch together the first video and the second video to form a target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video.
In a third aspect, an embodiment of the present disclosure provides an electronic device, and the electronic device includes: one or more processors; and a memory, which is configured to store one or more programs. The one or more programs are executed by the one or more processors to cause the one or more processors to perform the video stitching method of any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing a computer program. The program, when executed by a processor, implements the video stitching method of any embodiment of the present disclosure.
In a fifth aspect, a computer program product is provided. The computer program product is configured to, when executed by a computer device, implement the video stitching method of any embodiment of the present disclosure.
The technical solution of the present disclosure solves the technical problem in the related art that video stitching is achieved through manual use of Photoshop (PS), which requires a large amount of labor and is slow, time-consuming, and costly. According to the technical solution of the present disclosure, a smooth transition between videos can be realized, and the difficulty of video stitching can be greatly reduced. At the same time, the speed of stitching can be increased, and the cost can be reduced.
It is to be understood that the content described in this part is neither intended to identify key or important features of embodiments of the present disclosure nor intended to limit the scope of the present disclosure. Other features of the present disclosure are apparent from the description provided hereinafter.
The drawings are intended to provide a better understanding of the solution and not to limit the present disclosure. In the drawings:
Exemplary embodiments of the present disclosure, including details of embodiments of the present disclosure, are described hereinafter in conjunction with drawings to facilitate understanding. The exemplary embodiments are only illustrative. Therefore, it is to be appreciated by those of ordinary skill in the art that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Similarly, description of well-known functions and constructions is omitted hereinafter for clarity and conciseness.
In S101, an intermediate frame is inserted between a last image frame of a first video and a first image frame of a second video.
In this step, the electronic device may insert the intermediate frame between the last image frame of the first video and the first image frame of the second video. Specifically, the electronic device may input the last image frame of the first video and the first image frame of the second video into a pre-constructed image model, and the image model outputs an image frame as the intermediate frame between the last image frame of the first video and the first image frame of the second video.
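For illustration, a minimal Python sketch of this frame-insertion step is given below. The disclosure only specifies that a pre-constructed image model outputs the intermediate frame; here a simple 50/50 blend via OpenCV stands in for that model, and it is assumed that the two frames share the same resolution and channel count.

```python
import cv2
import numpy as np


def make_intermediate_frame(last_frame_a: np.ndarray, first_frame_b: np.ndarray) -> np.ndarray:
    """Produce one intermediate frame between the last frame of the first video
    and the first frame of the second video.

    A 50/50 blend is used only as a stand-in for the pre-constructed image
    model; a learned frame-interpolation network could be substituted here.
    Both inputs are assumed to have the same height, width, and channels.
    """
    return cv2.addWeighted(last_frame_a, 0.5, first_frame_b, 0.5, 0.0)
```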
In S102, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately. L is a natural number greater than 1.
In this step, the electronic device may sequentially select L image frames in order from back to front from the first video and L image frames in order from front to back from the second video separately. L is a natural number greater than 1. For example, assuming that the first video is a video A and the second video is a video B, when the value of L is 5, the electronic device may select five image frames in order from back to front from the video A, and the five image frames are A_N−4, A_N−3, A_N−2, A_N−1, and A_N respectively. At the same time, five image frames may also be selected in order from front to back from the video B, and the five image frames are B_1, B_2, B_3, B_4, and B_5 respectively.
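A minimal sketch of this selection step, assuming the clips are short enough to decode fully into memory and using the hypothetical file names video_a.mp4 and video_b.mp4, may look as follows.

```python
import cv2


def read_frames(path: str) -> list:
    """Decode every frame of a video file into a list (adequate for short clips)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames


L = 5                                   # as in the example above
frames_a = read_frames("video_a.mp4")   # hypothetical file names
frames_b = read_frames("video_b.mp4")
tail_a = frames_a[-L:]                  # A_N-4 ... A_N, taken from back to front
head_b = frames_b[:L]                   # B_1 ... B_5, taken from front to back
```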
In S103, the first video and the second video are stitched together to form a target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video.
In this step, the electronic device may stitch together the first video and the second video to form the target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video. Specifically, the electronic device may first insert L−2 image frames between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, where the L−2 image frames inserted between the first video and the intermediate frame are configured as candidate transition frames between the first video and the intermediate frame. At the same time, the electronic device may insert L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively, where the L−2 image frames inserted between the second video and the intermediate frame are configured as candidate transition frames between the second video and the intermediate frame. Then, the electronic device may stitch together the first video and the second video to form the target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame.
In the existing video stitching technology, after multiple shots are stitched together, the composite video often has a jump at the splice. Especially for shots containing a person, even if the camera does not move, a smooth transition between two shots becomes impossible because of slight shaking of the person's own posture. The existing video stitching method is often carried out through manual use of PS, which requires a large amount of labor and is slow, time-consuming, and costly. According to the video stitching method provided by this embodiment of the present disclosure, a smooth transition between videos can be realized, and the difficulty of video stitching can be greatly reduced. At the same time, the speed of stitching can be increased, and the cost can be reduced.
According to the video stitching method provided by this embodiment of the present disclosure, first, the intermediate frame is inserted between the last image frame of the first video and the first image frame of the second video. Then, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately, where L is a natural number greater than 1. Finally, the first video and the second video are stitched together to form the target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video. That is, in the present disclosure, the first video and the second video can be stitched together smoothly based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, so that an obvious jump at the splice is avoided. In contrast, the existing video stitching method is often carried out through manual use of PS, which requires a large amount of labor and is slow, time-consuming, and costly. By adopting the technical means of stitching videos based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, the present disclosure solves the technical problem in the related art that video stitching through manual use of PS requires a large amount of labor and is slow, time-consuming, and costly. According to the technical solution of the present disclosure, a smooth transition between videos can be realized, and the difficulty of video stitching can be greatly reduced. At the same time, the speed of stitching can be increased, and the cost can be reduced. In addition, the technical solution of this embodiment of the present disclosure is simple and convenient to implement, easy to popularize, and applicable to a wide range of scenarios.
In S201, an intermediate frame is inserted between a last image frame of a first video and a first image frame of a second video.
In S202, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately. L is a natural number greater than 1.
In S203, L−2 image frames, as candidate transition frames between the first video and the intermediate frame, are inserted between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively.
In this step, an electronic device may insert L−2 image frames between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, where the L−2 image frames inserted between the first video and the intermediate frame are configured as candidate transition frames between the first video and the intermediate frame. For example, assuming that the first video is a video A and the second video is a video B, when the value of L is 5, the electronic device may select five image frames in order from back to front from the video A, and the five image frames are A_N−4, A_N−3, A_N−2, A_N−1, and A_N respectively. At the same time, five image frames may also be selected in order from front to back from the video B, and the five image frames are B_1, B_2, B_3, B_4, and B_5 respectively. In this step, the electronic device may insert three image frames between the intermediate frame and A_N−1, between the intermediate frame and A_N−2, and between the intermediate frame and A_N−3 separately. Each group of three image frames is regarded as candidate transition frames between the first video and the intermediate frame.
In S204, L−2 image frames, as candidate transition frames between the second video and the intermediate frame, are inserted between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively.
In this step, the electronic device may insert L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively, where the L−2 image frames inserted between the second video and the intermediate frame are configured as candidate transition frames between the second video and the intermediate frame. For example, assuming that the first video is a video A and the second video is a video B, when the value of L is 5, the electronic device may select five image frames in order from back to front from the video A, and the five image frames are A_N−4, A_N−3, A_N−2, A_N−1, and A_N respectively. At the same time, five image frames may also be selected in order from front to back from the video B, and the five image frames are B_1, B_2, B_3, B_4, and B_5 respectively. In this step, the electronic device may insert three image frames between the intermediate frame and B_2, between the intermediate frame and B_3, and between the intermediate frame and B_4 separately. Each group of three image frames is regarded as candidate transition frames between the second video and the intermediate frame.
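A sketch of S203 and S204, under the same assumptions as the earlier sketches, is shown below. A linear cross-fade again stands in for the frame-insertion model, and ordering each group of candidates from the original video frame toward the intermediate frame is an assumption made for illustration; stand-in frames are created so the snippet runs on its own.

```python
import numpy as np


def interpolate(frame_x: np.ndarray, frame_y: np.ndarray, k: int) -> list:
    """Insert k frames between frame_x and frame_y, ordered from frame_x toward
    frame_y; a linear cross-fade stands in for the frame-insertion model."""
    out = []
    for i in range(1, k + 1):
        t = i / (k + 1)
        blended = (1.0 - t) * frame_x.astype(np.float32) + t * frame_y.astype(np.float32)
        out.append(blended.astype(np.uint8))
    return out


# Stand-in inputs so the sketch runs on its own; in practice tail_a, head_b,
# and the intermediate frame come from the previous steps.
L, H, W = 5, 720, 1280
tail_a = [np.full((H, W, 3), 40 + 10 * i, dtype=np.uint8) for i in range(L)]    # A_N-4 ... A_N
head_b = [np.full((H, W, 3), 200 - 10 * i, dtype=np.uint8) for i in range(L)]   # B_1 ... B_5
intermediate = interpolate(tail_a[-1], head_b[0], 1)[0]

# S203: L-2 candidates between each of A_N-1, A_N-2, A_N-3 and the intermediate frame.
candidates_a = {i: interpolate(tail_a[-i], intermediate, L - 2)    # i-th last frame of video A
                for i in range(2, L)}                              # i = 2 ... L-1
# S204: L-2 candidates between each of B_2, B_3, B_4 and the intermediate frame.
candidates_b = {j: interpolate(head_b[j - 1], intermediate, L - 2)  # j-th frame of video B
                for j in range(2, L)}                               # j = 2 ... L-1
```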
In S205, the first video and the second video are stitched together to form a target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame.
In this step, the electronic device may stitch together the first video and the second video to form the target video according to the L-th last image frame of the first video, the intermediate frame, the L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame. Specifically, the electronic device may first select one image frame among the L−2 image frames between each image frame of (L−1)-th last to second last image frames in the first video and the intermediate frame separately, as a target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video. Then, the electronic device may select one image frame among the L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame separately, as a target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video. Then, the electronic device may stitch together the first video and the second video to form the target video according to the L-th last image frame of the first video, the intermediate frame, the L-th image frame of the second video, the target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video, and the target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video. Preferably, the electronic device may select a first image frame to an (L−2)-th image frame separately among the respective L−2 image frames between each image frame of (L−1)-th last to second last image frames in the first video and the intermediate frame, where each of the selected first image frame to the (L−2)-th image frame is configured as the target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video. At the same time, the electronic device may also select an (L−2)-th image frame to a first image frame separately among the respective L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, where each of the selected (L−2)-th image frame to the first image frame is configured as the target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video.
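The selection and assembly just described can be sketched as follows, continuing from the previous snippet (candidates_a, candidates_b, intermediate, L). It encodes one reading of S205: candidates 1 through L−2 are taken on the first-video side, candidates L−2 down to 1 on the second-video side, the L-th last frame of the first video and the L-th frame of the second video are kept unchanged, and the intermediate frame is assumed to take the place of the last frame of the first video and the first frame of the second video, since the disclosure does not state explicitly how those two frames are handled.

```python
def assemble_target_video(frames_a, frames_b, intermediate, candidates_a, candidates_b, L):
    """Stitch the two videos into one target video around the splice."""
    target = list(frames_a[:len(frames_a) - (L - 1)])    # keep up to the L-th last frame of video A
    # First-video side: positions (L-1)-th last ... second last get candidates 1 ... L-2.
    for step, i in enumerate(range(L - 1, 1, -1), start=1):
        target.append(candidates_a[i][step - 1])
    target.append(intermediate)                           # assumed to replace A_N and B_1
    # Second-video side: positions 2 ... (L-1) get candidates L-2 ... 1.
    for step, j in enumerate(range(2, L), start=1):
        target.append(candidates_b[j][L - 2 - step])
    target.extend(frames_b[L - 1:])                       # keep from the L-th frame of video B onward
    return target


# Reusing the stand-in data from the previous sketch:
# target = assemble_target_video(tail_a, head_b, intermediate, candidates_a, candidates_b, L=5)
```

With L = 5, the splice region of the target video thus becomes A_N−4, three target transition frames, the intermediate frame, three target transition frames, and B_5, which is what produces the gradual, jump-free transition described above.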
According to the video stitching method provided by this embodiment of the present disclosure, first, the intermediate frame is inserted between the last image frame of the first video and the first image frame of the second video. Then, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately, where L is a natural number greater than 1. Finally, the first video and the second video are stitched together to form the target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video. That is, in the present disclosure, the first video and the second video can be stitched together smoothly based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, so that an obvious jump at the splice is avoided. In contrast, the existing video stitching method is often carried out through manual use of PS, which requires a large amount of labor and is slow, time-consuming, and costly. By adopting the technical means of stitching videos based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, the present disclosure solves the technical problem in the related art that video stitching through manual use of PS requires a large amount of labor and is slow, time-consuming, and costly. According to the technical solution of the present disclosure, a smooth transition between videos can be realized, and the difficulty of video stitching can be greatly reduced. At the same time, the speed of stitching can be increased, and the cost can be reduced. In addition, the technical solution of this embodiment of the present disclosure is simple and convenient to implement, easy to popularize, and applicable to a wide range of scenarios.
In S401, an intermediate frame is inserted between a last image frame of a first video and a first image frame of a second video.
In S402, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately. L is a natural number greater than 1.
In S403, M image frames are inserted between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, as candidate transition frames between the first video and the intermediate frame.
In this step, an electronic device may insert M image frames between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, where the M image frames inserted between the first video and the intermediate frame are configured as candidate transition frames between the first video and the intermediate frame. For example, the electronic device may insert nine image frames between the intermediate frame and an image frame N−1, between the intermediate frame and an image frame N−2, and between the intermediate frame and an image frame N−3 of the first video separately. Each group of nine image frames is regarded as candidate transition frames between the first video and the intermediate frame. Meanwhile, nine image frames are inserted between the intermediate frame and an image frame 2, between the intermediate frame and an image frame 3, and between the intermediate frame and an image frame 4 of the second video separately. Each group of nine image frames is regarded as candidate transition frames between the second video and the intermediate frame.
In S404, M image frames are inserted between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively, where the M image frames inserted between the second video and the intermediate frame are configured as candidate transition frames between the second video and the intermediate frame, and M is a natural number greater than 1.
In S405, the first video and the second video are stitched together to form a target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame.
In this step, the electronic device may stitch together the first video and the second video to form the target video according to the L-th last image frame of the first video, the intermediate frame, the L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame. Specifically, the present disclosure may perform non-linear sampling based on non-linear functions, such as a sigmoid or cosine function, to make the transition curve smoother.
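A minimal sketch of such non-linear sampling is shown below. The disclosure names sigmoid and cosine functions but does not fix a particular mapping, so the cosine easing used here, and the way its output is turned into candidate indices, are assumptions made for illustration.

```python
import math


def cosine_sample_indices(num_positions: int, m: int) -> list:
    """Choose one candidate index (0-based, among m candidates) for each
    transition position using a cosine ease-in-out curve, so the transition
    changes gently near the original frames and near the intermediate frame."""
    indices = []
    for p in range(1, num_positions + 1):
        t = p / (num_positions + 1)                   # linear progress in (0, 1)
        eased = (1.0 - math.cos(math.pi * t)) / 2.0   # cosine easing, still in (0, 1)
        indices.append(min(m - 1, round(eased * (m - 1))))
    return indices


# With L = 5 there are L - 2 = 3 transition positions per side and, say, M = 9 candidates:
print(cosine_sample_indices(3, 9))   # [1, 4, 7]; a purely linear ramp would give [2, 4, 6]
```

Sampling the candidate indices along such a curve, rather than at a constant step, is what makes the transition smoother than a purely linear ramp near both ends of the splice.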
According to the video stitching method provided by this embodiment of the present disclosure, first, the intermediate frame is inserted between the last image frame of the first video and the first image frame of the second video. Then, L image frames are sequentially selected in order from back to front from the first video and L image frames are sequentially selected in order from front to back from the second video separately, where L is a natural number greater than 1. Finally, the first video and the second video are stitched together to form the target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video. That is, in the present disclosure, the first video and the second video can be stitched together smoothly based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, so that an obvious jump at the splice is avoided. In contrast, the existing video stitching method is often carried out through manual use of PS, which requires a large amount of labor and is slow, time-consuming, and costly. By adopting the technical means of stitching videos based on the intermediate frame, the L image frames in the first video, and the L image frames in the second video, the present disclosure solves the technical problem in the related art that video stitching through manual use of PS requires a large amount of labor and is slow, time-consuming, and costly. According to the technical solution of the present disclosure, a smooth transition between videos can be realized, and the difficulty of video stitching can be greatly reduced. At the same time, the speed of stitching can be increased, and the cost can be reduced. In addition, the technical solution of this embodiment of the present disclosure is simple and convenient to implement, easy to popularize, and applicable to a wide range of scenarios.
The frame insertion module 601 is configured to insert an intermediate frame between a last image frame of a first video and a first image frame of a second video.
The selection module 602 is configured to sequentially select L image frames in order from back to front from the first video and L image frames in order from front to back from the second video separately. L is a natural number greater than 1.
The stitching module 603 is configured to stitch together the first video and the second video to form a target video according to the intermediate frame, the L image frames in the first video, and the L image frames in the second video.
Further, the stitching module 603 is specifically configured to insert L−2 image frames between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, where the L−2 image frames inserted between the first video and the intermediate frame are configured as candidate transition frames between the first video and the intermediate frame; insert L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively, where the L−2 image frames inserted between the second video and the intermediate frame are configured as candidate transition frames between the second video and the intermediate frame; and stitch together the first video and the second video to form a target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame.
Further, the stitching module 603 is specifically configured to select one image frame among L−2 image frames between each image frame of (L−1)-th last to second last image frames in the first video and the intermediate frame separately, as a target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video; select one image frame among L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame separately, as a target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video; and stitch together the first video and the second video to form a target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video, and the target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video.
Further, the stitching module 603 is specifically configured to select a first image frame to an (L−2)-th image frame separately among respective L−2 image frames between each image frame of (L−1)-th last to second last image frames in the first video and the intermediate frame, where each of the selected first image frame to the (L−2)-th image frame is configured as the target transition frame corresponding to the respective image frame of (L−1)-th last to second last image frames in the first video.
Further, the stitching module 603 is specifically configured to select an (L−2)-th image frame to a first image frame separately among respective L−2 image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, where each of the selected (L−2)-th image frame to the first image frame is configured as the target transition frame corresponding to the respective image frame of second to (L−1)-th image frames in the second video.
Further, the stitching module 603 is specifically configured to insert M image frames between each image frame of second last to (L−1)-th last image frames in the first video and the intermediate frame, respectively, where the M image frames inserted between the first video and the intermediate frame are configured as candidate transition frames between the first video and the intermediate frame; insert M image frames between each image frame of second to (L−1)-th image frames in the second video and the intermediate frame, respectively, where the M image frames inserted between the second video and the intermediate frame are configured as candidate transition frames between the second video and the intermediate frame, and M is a natural number greater than 1; and stitch together the first video and the second video to form a target video according to an L-th last image frame of the first video, the intermediate frame, an L-th image frame of the second video, the candidate transition frames between the first video and the intermediate frame, and the candidate transition frames between the second video and the intermediate frame.
The video stitching apparatus described above can execute the method provided by any embodiment of the present disclosure and has functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in the embodiment, reference may be made to the video stitching method provided by any embodiment of the present disclosure.
Acquisition, storage, and application of a user's personal information involved in the solution of the present disclosure conform to relevant laws and regulations and do not violate the public policy doctrine.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
As shown in FIG. 7, the device 700 includes a computing unit 701. The computing unit 701 may perform various types of appropriate operations and processing based on a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random-access memory (RAM) 703. Various programs and data required for the operation of the device 700 may also be stored in the RAM 703. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Multiple components in the device 700 are connected to the I/O interface 705. The multiple components include an input unit 706 such as a keyboard or a mouse, an output unit 707 such as various types of displays or speakers, the storage unit 708 such as a magnetic disk or an optical disc, and a communication unit 709 such as a network card, a modem or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 701 may be various general-purpose and/or special-purpose processing components having processing and computing capabilities. Examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a special-purpose artificial intelligence (AI) computing chip, a computing unit executing machine learning models and algorithms, a digital signal processor (DSP), and any appropriate processor, controller and microcontroller. The computing unit 701 executes various methods and processing described above, such as the video stitching method. For example, in some embodiments, the video stitching method may be implemented as computer software programs tangibly contained in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of computer programs may be loaded and/or installed on the device 700 via the ROM 702 and/or the communication unit 709. When the computer programs are loaded to the RAM 703 and executed by the computing unit 701, one or more steps of the above video stitching method may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured, in any other suitable manner (for example, using firmware), to execute the video stitching method.
Herein various embodiments of the systems and techniques described in the preceding may be performed in digital electronic circuitry, integrated circuitry, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system on chip (SOC), a complex programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof. The various embodiments may include implementations in one or more computer programs. The one or more computer programs may be executable and/or interpretable on a programmable system including at least one programmable processor. The programmable processor may be a special-purpose or general-purpose programmable processor for receiving data and instructions from a memory system, at least one input apparatus and at least one output apparatus and for transmitting the data and instructions to the memory system, the at least one input apparatus and the at least one output apparatus.
Program codes for the implementation of the methods of the present disclosure may be written in one programming language or any combination of multiple programming languages. The program codes may be provided for the processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to enable functions/operations specified in flowcharts and/or block diagrams to be implemented when the program codes are executed by the processor or controller. The program codes may be executed entirely on a machine or may be executed partly on a machine. As a stand-alone software package, the program codes may be executed partly on a machine and partly on a remote machine or may be executed entirely on a remote machine or a server.
In the context of the present disclosure, the machine-readable medium may be a tangible medium that may include or store a program that is used by or used in conjunction with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination thereof. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
In order that interaction with a user is provided, the systems and techniques described herein may be implemented on a computer. The computer has a display apparatus (for example, a cathode-ray tube (CRT) or a liquid-crystal display (LCD) monitor) for displaying information to the user and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other types of apparatuses may also be used for providing interaction with the user. For example, feedback provided for the user may be sensory feedback in any form (for example, visual feedback, auditory feedback, or haptic feedback). Moreover, input from the user may be received in any form (including acoustic input, voice input, or haptic input).
The systems and techniques described herein may be implemented in a computing system including a back-end component (for example, a data server), a computing system including a middleware component (for example, an application server), a computing system including a front-end component (for example, a client computer having a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system including any combination of such back-end, middleware or front-end components. Components of a system may be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of the communication network include a local area network (LAN), a wide area network (WAN), and the Internet.
The computing system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The relationship between the client and the server arises by virtue of computer programs running on respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It is to be understood that various forms of the preceding flows may be used with steps reordered, added, or removed. For example, the steps described in the present disclosure may be executed in parallel, in sequence or in a different order as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved. The execution sequence of these steps is not limited herein.
The scope of the present disclosure is not limited to the preceding embodiments. It is to be understood by those skilled in the art that various modifications, combinations, subcombinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent substitution, improvement and the like made within the spirit and principle of the present disclosure falls within the scope of the present disclosure.