Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source

Information

  • Patent Grant
  • Patent Number
    11,445,227
  • Date Filed
    Wednesday, February 6, 2019
  • Date Issued
    Tuesday, September 13, 2022
Abstract
A method, a programmed computer system (for example, a network-based hardware device), and a machine readable medium containing a software program for modifying a high definition video data stream in real time, with no visible delays, to add content on a frame by frame basis, thus simultaneously compositing multiple different customized linear views, for purposes such as creating and broadcasting targeted advertising in real time. The method employs conventional video processing technology in novel and inventive ways to achieve the desired objective by passing data selected by the program back and forth between a GPU and a CPU of a computer. The method is also usable with data streams of lower than high definition resolution where real time processing is desired, and yields better results than conventional methods. In such applications, all processing may be done by the CPU of a sufficiently powerful general-purpose computer.
Description
FIELD OF THE INVENTION

The present invention, in some embodiments thereof, relates to processing of high definition video data, and more particularly, but not exclusively, to apparatus and methods for modifying one or more high definition video data streams on a frame-by-frame basis to add content thereto by extracting color mapping information from high-resolution frames and performing frame modification at the level of the extracted color maps, to simultaneously create multiple different synchronized linear views from the same original video stream source(s), while keeping the source(s) intact for future use. The invention also relates to non-volatile machine-readable media containing software for implementing the other aspects of the invention.


The invention has various applications, but will be described in the specific contexts of systems and methods for adding additional content, for example, targeted advertising, to high definition video data streams on a frame-by-frame basis in real time, i.e., within the presentation duration of the individual frames.


For purposes of the discussion herein, image resolution, including video frame resolution, is considered high when it is at least HD resolution of 1920 pixels×1080 lines (2.1 megapixels, aspect ratio 16:9).


BACKGROUND OF THE INVENTION

Computerized video creation and editing, and live video transmission, are well-established and mature technologies in many respects. Commercially available applications and related support, such as Videolicious®, available from Videolicious, Inc. of New York City, and OBS Studio, an open source system, permit live, real time creation of video streams from multiple sources. In that respect, such live creation is an alternative to post-production editing followed by delayed transmission. Other applications that permit creating composite video streams, even using smart phones as the input source, also exist.


Graphical processing methods and video editing programs capable of segmenting frames into their constituent features are also known in the art. Such frame processing methods rely on calculations of relations between pixels, which are performed by the central processing unit (CPU) and a graphics processing unit (GPU) of an appropriately programmed computer. Segmentation by color is one such approach. See, for example, Zitnick et al., "High-quality Video View Interpolation Using a Layered Representation," an application for viewer viewpoint control of live video streams developed by the Interactive Visual Media Group, Microsoft Research, of Redmond, Wash.
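By way of non-limiting illustration, segmentation of a frame into color-based layers can be sketched as grouping pixels by quantized color. The helper below is a simplified sketch, not the method of the cited work or of the present invention; the function name and the quantization step are illustrative assumptions.

```python
import numpy as np

def segment_by_color(frame, levels=4):
    """Split an RGB frame into boolean masks ("layers"), one per
    quantized color. frame: (H, W, 3) uint8 array."""
    # Quantize each channel into `levels` buckets so near-identical
    # colors fall into the same segment.
    step = 256 // levels
    quantized = (frame // step).astype(np.int32)
    # A single integer key per pixel identifies its color bucket.
    keys = (quantized[..., 0] * levels * levels
            + quantized[..., 1] * levels
            + quantized[..., 2])
    layers = {}
    for key in np.unique(keys):
        layers[int(key)] = keys == key  # boolean mask for this segment
    return layers

# A 2x2 frame with two distinct colors yields two layers whose
# masks together cover every pixel exactly once.
frame = np.array([[[255, 0, 0], [255, 0, 0]],
                  [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
layers = segment_by_color(frame)
```

In a real system this per-pixel bucketing is the kind of uniform, data-parallel work a GPU performs efficiently, which is consistent with the division of labor described later in this document.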


However, known methods require substantial computational time, which increases exponentially with frame size, making these methods unsuitable for real-time frame processing of high-resolution video streams, in particular on low-power electronic devices.


Also known are video-editing programs capable of pixel modification of a preselected area in a frame. Such frame processing methods apply selected frame processes sequentially, applying each consecutive modification to the result of its predecessor. However, this process requires manual editing and compositing of each frame in the video using the program, which is a long and expensive post-production professional process.


Also known are streaming servers which re-stream video while performing the same frame modifications to all outgoing streams, as well as television editing tables and other specially designed broadcasting equipment which perform the same modifications.


Also known are TV set-top boxes and internet-protocol-based players which can overlay graphics for the viewer to create a unified video view.


There are, however, no prior art solutions known to the inventors hereof that provide for real-time high-resolution frame content modification based on frame segmentation or any other strategy to produce multiple differently customized videos synchronously, from the same input video stream(s), at the broadcasting source. The present invention seeks to meet this need and represents a significant improvement in computer functionality for video stream editing applications.


SUMMARY OF THE INVENTION

According to an aspect of some embodiments of the invention, there is provided a method of modifying high resolution video data in real time using a computer including a GPU. The method involves receiving the video data stream by the computer; extracting color data for the individual pixels of the frames of the video data stream to create color layers; cascading the color layers into a unified color map of features in the frame; assembling pixel modification instructions from stored data according to predefined instructions; creating one or more modified sets of video data streams according to the pixel modification instructions, output resolution requirements, and desired file formats, while the original source remains intact; and making the modified video data available for access by one or more intended recipients.


According to some embodiments, the video data is in the form of a video stream provided by any source and in any native format, including a digital camera, a recorded video file, a streaming video file obtained over the internet, and a commercial video program obtained directly off the air.


According to some embodiments, the received video data stream is decoded from its native format, if necessary, to the internal processing format employed by the computer.


According to some embodiments, the pixel modification instructions include modifications of color of the original data. According to some embodiments, the pixel modification instructions include additional graphical data to be composited with the original graphical data.


According to some embodiments, the additional graphical data can be video data from any source and in any native format which has been decoded, from its native format, if necessary, to the internal processing format employed by the computer performing the method. According to some embodiments, additional graphical data may include images, definitions of 3D models, and definitions of text to be rendered by the GPU of the computer.


According to some embodiments the pixel modification instructions transmitted to the GPU include instructions for adding (compositing) graphical elements. According to some embodiments, the pixel modification instructions include storyboard script created prior to processing of the video stream and read during processing, and/or commands sent to the system from a human user during processing. According to some embodiments, the pixel modification instructions are generated automatically by a computer algorithm external to the computer performing the method.


According to some embodiments, part of a frame, or a whole frame, as color layers, is transferred from the GPU to the CPU of the computer for processing by an external computer program, wherein the external program is one of a computer vision algorithm and a business-intelligence algorithm. According to some embodiments, the color layers which will be transferred to the CPU for subsequent processing by the external computer program undergo specific processing by the GPU prior to transfer, to prepare them for external logic application.


According to some embodiments of the invention, the method further involves executing a self-calibrating routine by accessing succeeding video frames to reduce sensitivity to detection of irrelevant changes in the video data.


According to some embodiments, the results of external processing are used to automatically generate pixel modification instructions. According to some embodiments, the pixel modification instructions for the color layers, including additional graphical elements, are transmitted to the GPU based on stored data for each intended recipient of each modified video stream.


According to some embodiments, the pixel modification instructions for the color layers include data provided by external systems for each intended recipient of each modified video stream. According to some embodiments, the one or more modified video data streams are created by merging the originally segmented color layers and the added graphical components into a series of single-layer color frames with uniform resolution, one per discrete set of instructions, rendering each single-layer color frame in one or more resolutions, as required by intended recipients, to create final frames, encoding, per resolution, each final frame to one or more codecs to create video components, and multiplexing, per video component, video and audio components into one or more different container files.


According to an aspect of the invention, each frame is modified without introducing visible delays to the modified output streams even for high definition video data streams, the multiple differently modified video data streams are created simultaneously and made available to multiple users, and all created modified video data streams which are based on the same original video data stream are synchronized with respect to the original video's timeline.


According to some embodiments, the incoming video data stream is characterized by lower than high definition resolution, and all the processing steps are performed by the CPU and active memory of a computer without utilization of a GPU.


According to an aspect of some embodiments of the invention, there is provided a computer system having one or more central processing units, one or more graphics processing units, one or more network or other communication interface units, and a shared memory including mass storage capability, in which there is installed a program for performing a method as described above. According to some embodiments, the graphics processing functions are performed by a GPU or by a CPU operating in a parallel processing mode.


According to a further aspect of some embodiments of the invention, there is provided a computer readable medium containing a computer program operable to perform the method as described above.





BRIEF DESCRIPTION OF THE DRAWINGS

Some embodiments of the invention are described below, by way of example only, with reference to the accompanying drawings. It is, however, to be recognized that the particulars shown are by way of example only and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.


In the drawings:



FIG. 1 is a block diagram of a computer system providing the functionality of the present invention according to some embodiments thereof;



FIG. 2 is an internal process work flow diagram for the system illustrated in FIG. 1 according to some embodiments of the invention;



FIG. 3 is a flow diagram of synchronization of views creation according to some embodiments of the invention.





DETAILED DESCRIPTION OF AT LEAST ONE EMBODIMENT OF THE INVENTION

Introductory Overview:


The present invention, in some embodiments thereof, relates to processing of high definition video data and, more particularly, but not exclusively, to apparatus and methods for modifying one or more high definition video data streams on a frame-by-frame basis to add content thereto by extracting color mapping information from high-resolution frames and performing frame modification at the level of the extracted color maps, to simultaneously create multiple different synchronized linear views from the same original video stream source(s), while keeping the source(s) intact for future use. The invention also relates to non-volatile machine-readable media containing software for implementing the other aspects of the invention.


The invention has a variety of applications but will be described by way of example in the specific contexts of systems and methods for adding additional content, for example, targeted advertising to high definition video data streams on a frame-by-frame basis in real time, i.e., within the presentation duration of the individual frames.


The method according to the invention can also be used for improved real-time modification of video data on a frame by frame basis where high definition as defined above is not needed.


In simplest terms, the present invention uses an application software program to modify the individual pixels in a video data stream within the individual frames based on feature segmentation. According to an aspect of the invention, the feature upon which the segmentation is based is pixel color. According to some embodiments, the segmentation is performed by a GPU under control of the programming and the CPU of a conventional microprocessor-controlled computer.


More particularly, the present invention addresses the problem that processing of increasingly higher resolution images requires an exponential increase of computing power, so it is impossible to perform the described graphic modifications in real-time on UHD frames on a CPU alone, such that multiple different synchronized views are created simultaneously, based on the same source with different added graphical elements. While it is possible to perform graphic modifications in real time on HD frames, even this requires a very powerful CPU not typically found in most computers.


According to an aspect of the invention, the different modifications are done simultaneously to create multiple different views. According to some embodiments of the invention, the different modified video views can be immediately broadcast to different viewers. According to some embodiments of the invention, the different modified video views can be saved as files. According to an aspect of the invention, the original video source(s) data remains intact and can be reused.


According to an aspect of the invention, there is provided a programming model for a general-purpose computer which significantly improves its functionality in real-time processing of a video stream and frame-by-frame modification of the video stream according to selectable criteria.


According to some embodiments, the criteria comprise the identity of the intended recipient of the modified video stream.


According to some embodiments, the modification to the video data comprises targeted advertising. Such targeted advertising can be based on collected demographic information about the recipient and/or derived from a computer readable medium or information stored in the host computer.


According to some embodiments, the modification instructions can be derived from what is known in the art as a "storyboard script", i.e., a program written prior to processing and read during processing, and from commands; these instructions include the definition of the added graphical content.


According to some embodiments, the content of the advertising is determined by a human user, as part of the initial configuration of the system or in real-time during processing and broadcasting.


According to other embodiments, the added content may be determined by an external system or organization, which makes such determinations by processing available data about the intended recipient. The content of the advertising can also be determined by non-human, business intelligence and artificial-intelligence systems, which analyze available recipient data and make decisions based on their algorithms.


According to some embodiments, the programming operates the computer to automatically create color-based segmentation of a high-resolution video stream in real time, whereby the video stream can be modified according to one or more predetermined criteria to add desired modifications to each frame at the level of the color-based segments, within timeframe limits of a real-time video stream.


According to some embodiments, the program includes a plurality of definitions of frame modifications, a list of mathematical functions which implement the aforementioned modifications, rules for optimization of the mathematical functions, and rules for application of the mathematical functions to assemble the respective frame modifications. The rules for assembly of the mathematical functions are configured to assemble all relevant mathematical functions, in a cascading manner, into a single composite process.


As used herein, "cascading manner into a single composite process" means that the mathematical functions are arranged so the input of each consecutive function is the output of its respective predecessor, so the assembly of mathematical functions becomes a single calculation which is easily optimized by a common routine performed by a GPU, as opposed to a list of discrete functions to be applied one after another.
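A minimal sketch of this idea follows; the `cascade` helper and the two example modifications are hypothetical illustrations, not the patented GPU implementation:

```python
import numpy as np

def cascade(*functions):
    """Compose per-pixel functions so each one consumes the output
    of its predecessor, yielding one callable that can be applied
    in a single pass instead of N discrete passes."""
    def composite(pixels):
        for fn in functions:
            pixels = fn(pixels)
        return pixels
    return composite

# Two example modifications: a brightness lift and a red-channel boost.
brighten = lambda px: np.clip(px.astype(np.int16) + 20, 0, 255).astype(np.uint8)
boost_red = lambda px: np.concatenate(
    [np.clip(px[..., :1].astype(np.int16) + 30, 0, 255).astype(np.uint8),
     px[..., 1:]], axis=-1)

process = cascade(brighten, boost_red)
frame = np.zeros((2, 2, 3), dtype=np.uint8)
out = process(frame)  # every pixel becomes (50, 20, 20)
```

On a GPU the analogous optimization is fusing the chained operations into one shader or kernel, so each pixel is read and written once rather than once per modification.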


According to a further aspect of the invention, there is provided a non-volatile machine-readable medium, containing programming instructions for operating a computer to modify a video data stream in real time to insert selected content therein on a frame by frame basis. According to some embodiments, the programming instructions implement the image processing functions described above.


According to a further aspect of the invention, there is provided a method of modifying a video data stream in real time to add selected content thereto on a frame by frame basis. According to some embodiments, the method implements the image processing functions described above.


According to yet a further aspect of the invention, there is provided a method of synchronizing the creation of different views which are based on the same source, such that while the views differ in file format, resolution, and/or graphic modifications, they do not differ in timeline with respect to the original source.


Description of an Illustrative Embodiment

Referring to FIG. 1, there is shown a block diagram of a preferred system, generally denoted at 10, for implementing some embodiments of the invention. The system includes one or more input sources 12 for one or more high resolution (HD or higher) video files, a microprocessor-controlled computer system generally designated at 14, and an output interface, generally indicated at 16. Input sources 12 can be a digital camera, a recorded video file, a streaming video file obtained over the internet or a commercial video program obtained directly off-the-air.


Computer system 14 may be of conventional architecture and comprised of conventional components but is programmed according to the invention to provide a technical solution resulting in enhanced functionality as described herein not achievable by previously known methods and systems.


One particularly advantageous application of the invention is the processing of HD or higher resolution video files, in real time, to embed into a succession of frames thereof, additional information, e.g., an advertising message, that can be targeted specifically toward individual recipients or groups of recipients sharing common interests or other characteristics. Accordingly, output interface 16 provides multiple synchronized targeted output video streams, by way of example, four being shown at 16a-16d.


Structurally, computer system 14 may be of any type suitable for high-speed video processing. In some exemplary embodiments, such a system may include an incoming video input interface 18, such as network interface circuitry or a video card, which receives the incoming video signal from source 12, a graphics processing unit (GPU) 20, a central processing unit (CPU) 22, a shared high-speed memory 24 which will typically include mass storage capability, and an outgoing video output interface unit 26 which provides the multiple outgoing video streams 16a, 16b, etc.


The video stream may be derived from any known or hereafter developed source, for example, a server, a digital camera, mobile hand-held devices (smartphones, phablets, tablets), off-the-air television broadcasts, streaming video provided over the internet, or video programming stored on a user's personal computer.


The modified output video stream may be directed to any of the kinds of destinations described above as sources of the original video data, as will be understood by those skilled in the art.



FIG. 2 illustrates an internal process flow diagram for a system such as shown in FIG. 1 according to some embodiments of the invention. The process begins at 30 with input of a high resolution (HD) video data stream, sometimes referred to as a high definition video file or camera stream, for example, from any of the possible sources mentioned above. At 32, the video file is transcoded from its native format to the internal processing format employed by the system, for example, "raw" video such as YUV420 or 32-bit RGBA.
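For illustration only, a single-pixel conversion from YUV to 32-bit RGBA can be sketched as below using the common BT.601 full-range coefficients. The patent does not specify which conversion convention is used, so the coefficients and the function name are assumptions:

```python
def yuv_to_rgba(y, u, v):
    """Convert one YUV pixel (BT.601 full-range convention, one of
    several in common use) to an RGBA tuple, alpha fixed at 255."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b), 255

# Neutral gray: U and V at their midpoint leave Y unchanged.
print(yuv_to_rgba(128, 128, 128))  # (128, 128, 128, 255)
```

In production such conversion would be done for whole planes at once (YUV420 stores U and V at quarter resolution), typically on the GPU; the scalar version above is only meant to show the arithmetic.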


At 34, GPU 20 is operated to layer each graphical frame into separate preliminary layers according to color. The GPU optionally executes a self-feedback routine by accessing succeeding video frame(s) to tune GPU layer sensitivity. This helps eliminate false detections (light changes, stream quality, etc.); thus, it is preferably employed if the incoming video is a raw stream from a camera (as opposed to video which was post-processed with color calibration and smoothing). In such raw data streams, light artifacts may be present. A weighted average of several consecutive frames enables elimination of such artifacts and of the calculation errors they would otherwise cause.


The program can provide the option to be turned on/off manually by the user, or the setting can be deduced automatically from the selection of input (on for a live camera and off for video from a file). By calculating a weighted average per pixel of colors in the current frame and colors in previous frames, unwanted artifacts such as specks of dust disappear from view.
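The per-pixel weighted average described above can be sketched as follows; the history length and the weighting scheme are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def temporal_smooth(frames, weights=None):
    """Weighted average per pixel across consecutive frames, which
    attenuates transient artifacts (specks of dust, light flicker).
    frames: list of (H, W, 3) uint8 arrays, most recent last."""
    if weights is None:
        # Favor the most recent frame; older frames contribute less.
        weights = np.linspace(0.5, 1.0, num=len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights /= weights.sum()
    stack = np.stack([f.astype(np.float64) for f in frames])
    averaged = np.tensordot(weights, stack, axes=1)
    return np.clip(np.rint(averaged), 0, 255).astype(np.uint8)

# A one-frame white speck is strongly attenuated by its clean neighbors.
clean = np.zeros((2, 2, 3), dtype=np.uint8)
speck = clean.copy()
speck[0, 0] = 255
smoothed = temporal_smooth([clean, speck, clean])
```

With the default weights above, the single-frame speck drops from 255 to roughly a third of its value, while stable pixels are unchanged, matching the artifact-suppression behavior the text describes.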


At 36, two processes proceed in parallel. At 36a, GPU 20 is operated to perform pixel color modifications and, at 36b, if an external computer program requires it, the GPU creates a duplicate of the original video stream, performs initial preprocessing on the duplicate, and sends the result to the CPU.


An important feature of some embodiments of the invention is the manner in which the color layer data is provided (between 36b and 40b) from GPU 20 to CPU 22 such that the data is optimized for analysis.


Referring still to FIG. 2, at 38, GPU 20 renders additional graphical elements according to instructions generated by CPU 22 for one or more of the preliminary color layers; the instructions are created by the CPU from a predefined program or from data received from an external computer program. The data added to the individual frames is provided by a data storage source 40a containing a database of pre-defined graphical components and user-related instructions for creating the desired recipient-specific views, or by an external computer program which combines user intelligence with a database of graphical components 40b.


At 42, the programming operates GPU 20 to merge the color layers of the composite frame (comprised of the original frame and the added graphical components, as described at 38) into a single-layer color frame with uniform resolution. The merging is done with respect to the optional transparency of each layer. The act of merging several graphical layers into one, also called "flattening", is known to users of graphical editing software that enables layering, such as Photoshop® and GIMP. The result of the merge done by this invention matches the expectations of those users.
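The flattening step can be illustrated with the standard alpha "over" operator; the helper below is a sketch of that well-known operator, not the patented GPU merge routine:

```python
import numpy as np

def flatten(layers):
    """Merge (color, alpha) layers bottom-to-top into one frame, as
    graphical editors do when flattening a layered image.
    layers: list of ((H, W, 3) float color in 0..1, (H, W, 1) alpha)."""
    base = np.zeros_like(layers[0][0])
    for color, alpha in layers:
        # Standard "over" operator: the upper layer covers the lower
        # in proportion to its alpha.
        base = color * alpha + base * (1.0 - alpha)
    return base

h = w = 2
red = np.zeros((h, w, 3)); red[..., 0] = 1.0
blue = np.zeros((h, w, 3)); blue[..., 2] = 1.0
opaque = np.ones((h, w, 1))
half = np.full((h, w, 1), 0.5)
# Opaque red base with a 50% transparent blue layer on top.
result = flatten([(red, opaque), (blue, half)])  # each pixel (0.5, 0, 0.5)
```

Because each layer's contribution depends only on the running result, the whole merge is again a cascade of per-pixel operations of the kind a GPU executes in a single pass.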


Then, at 44, GPU 20 can apply an additional frame color palette modification to create a unified frame by modifying the color of the frame created at 42. As a result, even though the graphics components come from different sources, i.e., the main video stream and the composed elements, such as other video streams and added graphics (e.g., CGI), they undergo the same specific modifications, and the outcome appears to the observer to have come from a single source.


In computers having sufficiently powerful CPUs, and enough active memory, instead of utilizing the GPU to perform parallel video stream processing, the program can be designed such that the CPU operates in a continuous loop over all the pixels within the frame. This option may be useful where real-time video stream processing is desired, but the incoming video stream has less than HD resolution.


Finally, at 46, GPU 20 renders multiple outputs of new video component streams with respect to intended recipient device resolution. The practice of outputting several resolution streams per video is known in the art. Each of the video component streams is encapsulated into a different file container (three of which are indicated at 46a, 46b, and 46c), which are then made available to intended recipients 48a, 48b, and 48c.


An important feature of some embodiments of the invention is the manner in which different views of the same source video stream are made available synchronously. The software created according to the present invention that provides the real-time synchronization of video streams (whether original or preprocessed) is described in connection with FIG. 3 below.



FIG. 3 is a program flow diagram illustrating implementation of the invention according to some embodiments, to synchronize video stream creation, such that any number of video streams which are created from the same source by application of different pixel modifications will display the same base frame at the same time, regardless of performed modifications or processing start time. In considering the following description, it is to be understood that the code defining the program may vary greatly without departing from the concepts of the invention, and that numerous variations of such code will be apparent to those skilled in the art.


Turning now specifically to FIG. 3, at 60, the high-resolution video data is received from an input device such as described above and is demultiplexed into video and audio components. It is to be understood that multiple video streams can be demultiplexed in parallel.


At 70 and 80, two processes proceed in parallel. At 70, the video component stream is received by GPU 20, such that a part of the GPU's internal memory and processing power is dedicated to applying a discrete set of instructions to a video component (three such parts are indicated at 20a, 20b, and 20c). It is to be understood that when multiple dedicated parts are configured to process the same input source, graphical frame data is made available to all of them at the same time.


At 80, audio component stream or streams are passed through audio filters which combine multiple audio component streams into one. It is to be understood that when only a single audio component stream is passed, it is passed without combination.


At 90, the video stream component is decoded from its codec to raw format and is passed as a new texture (i.e., color data layers derived from each video frame) for application of various pixel modifications, including addition of graphical components, at 100. Although the preferred embodiment of this invention utilizes GPU 20 to perform decoding, it is understood by those skilled in the art that the decoding can also be done by CPU 22.


At 110, the final frame is rendered at one or more resolutions and each frame, by resolution, is encoded at 120. Although the preferred embodiment of this invention utilizes GPU 20 to perform encoding, it is understood by those skilled in the art that the encoding can also be done by CPU 22.


Finally, at 130, video streams and audio streams are multiplexed into one or more file containers (three of which are indicated at 140a, 140b, and 140c), such that each file container contains an audio stream and a video stream which was modified according to initial configuration of the system in relation to this output. It is to be understood that when multiple outputs, which differ in file formats, resolution, and/or pixel modifications, are based on same source inputs, then the outputs are synchronized.
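The synchronization property described above, whereby every differently modified output displays the same base frame at the same time, can be illustrated by propagating the source presentation timestamp unchanged to every output. The class and function names below are hypothetical, offered only as a sketch of the principle:

```python
from dataclasses import dataclass

@dataclass
class OutputFrame:
    pts: float       # presentation timestamp inherited from the source
    payload: bytes   # encoded frame data (placeholder here)

def stamp_outputs(source_pts, modified_payloads):
    """Give every differently modified output frame the SAME source
    timestamp, so all views present the same base frame at the same
    time regardless of which pixel modifications were applied."""
    return [OutputFrame(pts=source_pts, payload=p)
            for p in modified_payloads]

outputs = stamp_outputs(1.25, [b"view-a", b"view-b", b"view-c"])
# All three views carry pts == 1.25 and therefore stay in lockstep
# with the original video's timeline.
```

Because the timestamp comes from the demultiplexed source rather than from each output's own processing clock, differences in modification cost or processing start time do not shift any view relative to the others.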


According to the present invention, these technical capabilities are used in a unique way to achieve the enhanced functionality of synchronously creating multiple modified HD video files in real time, i.e., without noticeable delay.


General Interpretational Comments:


Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains.


It is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the above description and/or illustrated in the drawings. The invention is capable of other embodiments or of being practiced or carried out in various ways.


The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. The term “consisting of” means “including and limited to”.


The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.


As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise.


The described methods and the required programming can be executed on any computer which incorporates a suitable central processing unit (CPU) to execute the program; a GPU, in those embodiments in which it is utilized (for high-resolution video streams), to execute the complex frame processing; means for acquiring high-resolution input video streams; and active memory and mass storage for holding a collection of predefined features.


Implementation of the method and/or system of embodiments of the invention can involve performing or completing some tasks such as selection of intended recipients, manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.


For example, hardware for performing selected tasks according to embodiments of the invention could be implemented on a general-purpose computer suitably programmed as described herein, or as one or more special-purpose chips or circuits, i.e., one or more ASICs.


Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and broad scope of the appended claims.

Claims
  • 1. A method of automatically modifying high resolution video data in real time using a computer including a GPU, the method comprising: receiving a video data stream by the computer, wherein the video data stream includes multiple frames; extracting color data for the individual pixels of the frames of the video data stream to create color layers; cascading the color layers into a unified color map of features in the frame; assembling instructions to modify pixels of the frame at a level of the unified color map of features from stored data according to predefined instructions, wherein the instructions at least include additional graphical data to be composited with original graphical data of the video data stream; creating multiple modified sets of video data streams according to the instructions, output resolution requirements and desired file formats, while original source of the video data stream remains intact, wherein the modified sets of video data are different from each other and the creating includes: merging the color layers and the additional graphical data into single-layer color frames with uniform resolution, wherein each single-layer color frame is based on a discrete set of the instructions, rendering each single-layer color frame based on the output resolution requirements to create at least one final frame, encoding, based on the resolution, each final frame to at least one codec to create video components, and multiplexing the video components into the desired file formats; and making the modified set of video data streams available for access by one or more intended recipients.
  • 2. The method according to claim 1, wherein the video stream is provided by any source and in any native format, including a digital camera, a recorded video file, a streaming video file obtained over a network, and a commercial video program obtained directly off air.
  • 3. The method according to claim 2 further including decoding the received video data stream from its native format to an internal processing format employed by the computer.
  • 4. The method according to claim 1, wherein the instructions include modifications of color of the original graphical data of the video data stream.
  • 5. The method according to claim 1, wherein the additional graphical data includes other video data from any source and in any native format, and has been decoded, from its native format to an internal processing format employed by the computer by which the method is being performed.
  • 6. The method according to claim 1, wherein the additional graphical data includes images, definitions of 3D models to be rendered by the GPU, and definitions of text to be rendered by the GPU.
  • 7. The method according to claim 1, wherein the instructions transmitted to the GPU include instructions for adding graphical elements.
  • 8. The method according to claim 1, wherein the instructions include storyboard script created prior to processing of the video data stream and read during the processing, and/or commands sent from a user during the processing.
  • 9. The method according to claim 1, wherein the instructions are generated automatically by a computer algorithm external to the computer performing the method.
  • 10. The method according to claim 1, wherein part of frame or a whole frame, as color layers, is transferred from the GPU to a CPU of the computer for processing by computer vision algorithms and/or a business-intelligence algorithm.
  • 11. The method according to claim 10, wherein the color layers which will be transferred to the CPU for subsequent processing undergo processing by the GPU prior to transfer, to prepare them for external logic application.
  • 12. The method according to claim 10, wherein results of the processing are used to automatically generate the instructions.
  • 13. The method according to claim 1, further including executing a self-calibrating routine by accessing succeeding video frames to reduce sensitivity to detection of irrelevant changes in the video data.
  • 14. The method according to claim 1, wherein the instructions for the color layers, including additional graphical elements, are transmitted to the GPU based on stored data for each intended recipient of each modified video stream.
  • 15. The method according to claim 14, wherein the instructions for the color layers include data provided by external systems for each intended recipient of each modified video stream.
  • 16. The method according to claim 1, wherein each frame is modified without introducing visible delays.
  • 17. The method according to claim 1, wherein the multiple modified video data streams are created simultaneously and made available to multiple users.
  • 18. The method according to claim 17, wherein modified video data streams which are based on the original graphical data of the video data stream are synchronized with respect to the video's timeline.
  • 19. The method according to claim 1, wherein the video data stream is characterized by lower than high definition, and the steps of claim 1 are performed by a CPU and active memory of the computer without utilization of the GPU.
  • 20. A computer system for automatically modifying high resolution video data in real time, comprising: one or more central processing units and/or one or more graphics processing units; one or more network or other communication interface units; a shared memory including mass storage capability; wherein the memory contains a program that configures the computer system to: receive a video data stream, wherein the video data stream includes multiple frames; extract color data for individual pixels of the frames of the video data stream to create color layers; cascade the color layers into a unified color map of features in the frame; assemble instructions to modify pixels of the frame at a level of the unified color map of features from stored data according to predefined instructions, wherein the instructions include additional graphical data to be composited with original graphical data of the video data stream; create multiple modified sets of video data streams according to the instructions, output resolution requirements and desired file formats, while original source of the video data stream remains intact, wherein the modified sets of video data are different from each other and the creation includes: merging the color layers and the additional graphical data of the video data stream into single-layer color frames with uniform resolution, wherein each single-layer color frame is based on a discrete set of the instructions, rendering each single-layer color frame based on the output resolution requirements to create at least one final frame, encoding, based on the resolution, each final frame to at least one codec to create video components, and multiplexing the video components into the desired file formats; and make the modified sets of video data streams available for access by one or more intended recipients.
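The per-frame flow recited in the claims (extract color layers, cascade them into a unified color map, composite per-recipient graphical data, render each modified frame at the output resolution, leaving the source intact) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the patented implementation: the function names, the alpha-blend compositing, and the nearest-neighbour rendering are assumptions introduced for the example, and the encoding and multiplexing steps are omitted.

```python
import numpy as np

def extract_color_layers(frame):
    """Split an H x W x C frame into per-channel color layers (copies)."""
    return [frame[:, :, c].astype(np.float32) for c in range(frame.shape[2])]

def cascade_layers(layers):
    """Cascade the color layers back into a single unified color map."""
    return np.stack(layers, axis=-1)

def composite(color_map, overlay, alpha):
    """Blend additional graphical data into the unified color map."""
    return (1.0 - alpha) * color_map + alpha * overlay

def render(frame, out_h, out_w):
    """Nearest-neighbour resample to the requested output resolution."""
    h, w = frame.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return frame[rows][:, cols]

def make_views(source_frame, recipient_overlays, out_size):
    """Produce one modified linear view per recipient; the source stays intact."""
    color_map = cascade_layers(extract_color_layers(source_frame))
    views = []
    for overlay, alpha in recipient_overlays:
        merged = composite(color_map, overlay, alpha)   # one discrete instruction set
        views.append(render(merged, *out_size).astype(np.uint8))
    return views
```

In a real system the per-recipient `(overlay, alpha)` pairs would be generated from the stored instructions, the compositing would run on the GPU, and each rendered frame would then be passed to a codec and multiplexed into the desired container format.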
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a U.S. National Stage Application of International Application No. PCT/IL2019/050141 filed Feb. 6, 2019, which claims priority from U.S. Patent Application No. 62/683,654 filed Jun. 12, 2018. The entirety of each of the above-listed applications is incorporated herein by reference.

PCT Information
Filing Document Filing Date Country Kind
PCT/IL2019/050141 2/6/2019 WO
Publishing Document Publishing Date Country Kind
WO2019/239396 12/19/2019 WO A
US Referenced Citations (243)
Number Name Date Kind
6189064 Macinnis et al. Feb 2001 B1
6380945 Macinnis et al. Apr 2002 B1
6501480 Macinnis et al. Dec 2002 B1
6529935 Macinnis et al. Mar 2003 B1
6538656 Macinnis et al. Mar 2003 B1
6570579 Macinnis et al. May 2003 B1
6573905 Macinnis et al. Jun 2003 B1
6608630 Macinnis et al. Aug 2003 B1
6630945 Macinnis et al. Oct 2003 B1
6636222 Macinnis et al. Oct 2003 B1
6661422 Macinnis et al. Dec 2003 B1
6661427 Macinnis et al. Dec 2003 B1
6700588 Macinnis et al. Mar 2004 B1
6721837 Macinnis et al. Apr 2004 B2
6731295 Macinnis et al. May 2004 B1
6738072 Macinnis et al. May 2004 B1
6744472 Macinnis et al. Jun 2004 B1
6762762 Macinnis et al. Jul 2004 B2
6768774 Macinnis et al. Jul 2004 B1
6771196 Macinnis et al. Aug 2004 B2
6781601 Macinnis et al. Aug 2004 B2
6798420 Macinnis et al. Sep 2004 B1
6819330 Macinnis et al. Nov 2004 B2
6853385 Macinnis et al. Feb 2005 B1
6870538 Macinnis et al. Mar 2005 B2
6879330 Macinnis et al. Apr 2005 B2
6927783 Macinnis et al. Aug 2005 B1
6944746 Macinnis et al. Sep 2005 B2
6963613 Macinnis et al. Nov 2005 B2
6975324 Valmiki et al. Dec 2005 B1
7002602 Macinnis et al. Feb 2006 B2
7007031 Macinnis et al. Feb 2006 B2
7015928 Macinnis et al. Mar 2006 B2
7034897 Macinnis et al. Apr 2006 B2
7057622 Macinnis et al. Jul 2006 B2
7071944 Macinnis et al. Jul 2006 B2
7095341 Macinnis et al. Aug 2006 B2
7096245 Macinnis et al. Aug 2006 B2
7098930 Macinnis et al. Aug 2006 B2
7110006 Macinnis et al. Sep 2006 B2
7184058 Macinnis et al. Feb 2007 B2
7209992 Macinnis et al. Apr 2007 B2
7227582 Macinnis et al. Jun 2007 B2
7256790 Macinnis et al. Aug 2007 B2
7277099 Macinnis et al. Oct 2007 B2
7302503 Macinnis et al. Nov 2007 B2
7310104 Macinnis et al. Dec 2007 B2
7365752 Macinnis et al. Apr 2008 B2
7425678 Macinnis et al. Sep 2008 B2
7427713 Macinnis et al. Sep 2008 B2
7446774 Macinnis et al. Nov 2008 B1
7476804 Macinnis et al. Jan 2009 B2
7485803 Macinnis et al. Feb 2009 B2
7495169 Macinnis et al. Feb 2009 B2
7498512 Macinnis et al. Mar 2009 B2
7504581 Macinnis et al. Mar 2009 B2
7530027 Macinnis et al. May 2009 B2
7538783 Macinnis et al. May 2009 B2
7545438 Macinnis et al. Jun 2009 B2
7554553 Macinnis et al. Jun 2009 B2
7554562 Macinnis et al. Jun 2009 B2
7565462 Macinnis et al. Jul 2009 B2
7592541 Macinnis et al. Sep 2009 B2
7598962 Macinnis et al. Oct 2009 B2
7608779 Macinnis et al. Oct 2009 B2
7659900 Macinnis et al. Feb 2010 B2
7667135 Macinnis et al. Feb 2010 B2
7667710 Macinnis et al. Feb 2010 B2
7667715 Macinnis et al. Feb 2010 B2
7718891 Macinnis et al. May 2010 B2
7746354 Macinnis et al. Jun 2010 B2
7772489 Macinnis et al. Aug 2010 B2
7781675 Macinnis et al. Aug 2010 B2
7795532 Macinnis et al. Sep 2010 B2
7848430 Macinnis et al. Dec 2010 B2
7880084 Macinnis et al. Feb 2011 B2
7881385 Macinnis et al. Feb 2011 B2
7911483 Macinnis et al. Mar 2011 B1
7912220 Macinnis et al. Mar 2011 B2
7916795 Macinnis et al. Mar 2011 B2
7920151 Macinnis et al. Apr 2011 B2
7920624 Macinnis et al. Apr 2011 B2
7966613 Macinnis et al. Jun 2011 B2
7982740 Macinnis et al. Jul 2011 B2
7991049 Macinnis et al. Aug 2011 B2
8005147 Macinnis et al. Aug 2011 B2
8022966 Macinnis et al. Sep 2011 B2
8024389 Macinnis et al. Sep 2011 B2
8078981 Macinnis et al. Dec 2011 B2
8164601 Macinnis et al. Apr 2012 B2
8165155 Macinnis et al. Apr 2012 B2
8171500 Macinnis et al. May 2012 B2
8189678 Macinnis et al. May 2012 B2
8199154 Macinnis et al. Jun 2012 B2
8229002 Macinnis et al. Jul 2012 B2
8237052 Macinnis et al. Aug 2012 B2
8284844 Macinnis et al. Oct 2012 B2
8390635 Macinnis et al. Mar 2013 B2
8395046 Macinnis et al. Mar 2013 B2
8401084 Macinnis et al. Mar 2013 B2
8493415 Macinnis et al. Jul 2013 B2
8659608 Macinnis et al. Feb 2014 B2
8698838 Masterson Apr 2014 B1
8802978 Macinnis et al. Aug 2014 B2
8848792 Macinnis et al. Sep 2014 B2
8850078 Macinnis et al. Sep 2014 B2
8863134 Macinnis et al. Oct 2014 B2
8913667 Macinnis et al. Dec 2014 B2
8942295 Macinnis et al. Jan 2015 B2
8989263 Macinnis et al. Mar 2015 B2
9053752 Masterson Jun 2015 B1
9077997 Macinnis et al. Jul 2015 B2
9104424 Macinnis et al. Aug 2015 B2
9111369 Macinnis et al. Aug 2015 B2
9307236 Macinnis et al. Apr 2016 B2
9329871 Macinnis et al. May 2016 B2
9417883 Macinnis et al. Aug 2016 B2
9575665 Macinnis et al. Feb 2017 B2
9668011 Macinnis et al. May 2017 B2
20020093517 Macinnis et al. Jul 2002 A1
20020106018 Macinnis et al. Aug 2002 A1
20020145613 Macinnis et al. Oct 2002 A1
20030117406 Macinnis et al. Jun 2003 A1
20030158987 Macinnis et al. Aug 2003 A1
20030189571 Macinnis et al. Sep 2003 A1
20030184457 Macinnis et al. Oct 2003 A1
20030185298 Macinnis et al. Oct 2003 A1
20030185305 Macinnis et al. Oct 2003 A1
20030185306 Macinnis et al. Oct 2003 A1
20030187824 Macinnis et al. Oct 2003 A1
20030187895 Macinnis et al. Oct 2003 A1
20030188127 Macinnis et al. Oct 2003 A1
20030189982 Macinnis et al. Oct 2003 A1
20030206174 Macinnis et al. Nov 2003 A1
20030235251 Macinnis et al. Dec 2003 A1
20040017398 Macinnis et al. Jan 2004 A1
20040028141 Macinnis et al. Feb 2004 A1
20040047194 Macinnis et al. Mar 2004 A1
20040056864 Macinnis et al. Mar 2004 A1
20040056874 Macinnis et al. Mar 2004 A1
20040130558 Macinnis et al. Jul 2004 A1
20040150652 Macinnis et al. Aug 2004 A1
20040169660 Macinnis et al. Sep 2004 A1
20040177190 Macinnis et al. Sep 2004 A1
20040177191 Macinnis et al. Sep 2004 A1
20040208245 Macinnis et al. Oct 2004 A1
20040212730 Macinnis et al. Oct 2004 A1
20040212734 Macinnis et al. Oct 2004 A1
20040246257 Macinnis et al. Dec 2004 A1
20050007264 Macinnis et al. Jan 2005 A1
20050012759 Macinnis et al. Jan 2005 A1
20050024369 Macinnis et al. Feb 2005 A1
20050044175 Macinnis et al. Feb 2005 A1
20050122335 Macinnis et al. Jun 2005 A1
20050123057 Macinnis et al. Jun 2005 A1
20050122341 Macinnis et al. Jul 2005 A1
20050160251 Macinnis et al. Jul 2005 A1
20050168480 Macinnis et al. Aug 2005 A1
20050177845 Macinnis et al. Aug 2005 A1
20050231526 Macinnis et al. Oct 2005 A1
20060002427 Macinnis et al. Jan 2006 A1
20060056517 Macinnis et al. Mar 2006 A1
20060193383 Macinnis et al. Aug 2006 A1
20060197771 Macinnis et al. Sep 2006 A1
20060262129 Macinnis et al. Nov 2006 A1
20060268012 Macinnis et al. Nov 2006 A1
20060280374 Macinnis et al. Dec 2006 A1
20060290708 Macinnis et al. Dec 2006 A1
20070005795 Gonzalez Jan 2007 A1
20070030276 Macinnis et al. Feb 2007 A1
20070089114 Macinnis et al. Apr 2007 A1
20070103489 Macinnis et al. May 2007 A1
20070120874 Macinnis et al. May 2007 A1
20070210679 Macinnis et al. Sep 2007 A1
20070210680 Macinnis et al. Sep 2007 A1
20070210681 Macinnis et al. Sep 2007 A1
20070210683 Macinnis et al. Sep 2007 A1
20070210686 Macinnis et al. Sep 2007 A1
20070217518 Macinnis et al. Sep 2007 A1
20070221393 Macinnis et al. Sep 2007 A1
20070285440 Macinnis et al. Dec 2007 A1
20080036339 Macinnis et al. Feb 2008 A1
20080036340 Macinnis et al. Feb 2008 A1
20080067903 Macinnis et al. Mar 2008 A1
20080067904 Macinnis et al. Mar 2008 A1
20080074012 Macinnis et al. Mar 2008 A1
20080074849 Macinnis et al. Mar 2008 A1
20080077953 Fernandez Mar 2008 A1
20080079340 Macinnis et al. Apr 2008 A1
20080094416 Macinnis et al. Apr 2008 A1
20080094506 Macinnis et al. Apr 2008 A1
20080133786 Macinnis et al. Jun 2008 A1
20080174217 Macinnis et al. Jul 2008 A1
20090066724 Macinnis et al. Mar 2009 A1
20090128572 Macinnis et al. May 2009 A1
20090134750 Macinnis et al. May 2009 A1
20090150923 Macinnis et al. Jun 2009 A9
20090282172 Macinnis et al. Nov 2009 A1
20090295815 Macinnis et al. Dec 2009 A1
20090295834 Macinnis et al. Dec 2009 A1
20100103195 Macinnis et al. Apr 2010 A1
20100110106 Macinnis et al. May 2010 A1
20100171761 Macinnis et al. Jul 2010 A1
20100171762 Macinnis et al. Jul 2010 A1
20100219726 Macinnis et al. Sep 2010 A1
20100238994 Cakareski Sep 2010 A1
20100264788 Macinnis et al. Oct 2010 A1
20110063415 Gefen Mar 2011 A1
20110084580 Macinnis et al. Apr 2011 A1
20110122941 Macinnis et al. May 2011 A1
20110173612 Macinnis et al. Jul 2011 A1
20110188580 Macinnis et al. Aug 2011 A1
20110193868 Macinnis et al. Aug 2011 A1
20110234618 Macinnis et al. Sep 2011 A1
20110273476 Macinnis et al. Nov 2011 A1
20110280307 Macinnis et al. Nov 2011 A1
20110292074 Macinnis et al. Dec 2011 A1
20110292082 Macinnis et al. Dec 2011 A1
20120020412 Macinnis et al. Jan 2012 A1
20120087593 Macinnis et al. Apr 2012 A1
20120093215 Macinnis et al. Apr 2012 A1
20120191879 Macinnis et al. Jul 2012 A1
20120267991 Macinnis et al. Oct 2012 A1
20120268655 Macinnis et al. Oct 2012 A1
20120328000 Macinnis et al. Dec 2012 A1
20130022105 Macinnis et al. Jan 2013 A1
20130195175 Macinnis et al. Aug 2013 A1
20140078155 Macinnis et al. Mar 2014 A1
20140207644 Lutnick et al. Jul 2014 A1
20150317085 Macinnis et al. Nov 2015 A1
20160364490 Maugans, III Dec 2016 A1
20160364736 Maugans, III Dec 2016 A1
20160364762 Maugans, III Dec 2016 A1
20160364767 Maugans, III Dec 2016 A1
20180239472 Shaw et al. May 2018 A1
20180176607 Shaw et al. Jun 2018 A1
20180184133 Shaw et al. Jun 2018 A1
20180184138 Shaw et al. Jun 2018 A1
20180199082 Shaw et al. Jul 2018 A1
20180220165 Shaw et al. Aug 2018 A1
20190110096 Shaw et al. Apr 2019 A1
20190370831 Maugans, III Dec 2019 A1
20200410515 Maugans, III Dec 2020 A1
Foreign Referenced Citations (1)
Number Date Country
WO 2008039371 Apr 2008 WO
Non-Patent Literature Citations (4)
Entry
Zitnick et al. “High-quality video view interpolation using a layered representation”, ACM Transactions on Graphics (TOG), vol. 23, No. 3 (2004); retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.398.6681&rep=rep1&type=pdf.
International Search Report dated May 23, 2019 issued in International Application No. PCT/IL2019/050141.
Written Opinion dated May 23, 2019 issued in International Application No. PCT/IL2019/050141.
Supplementary European Search Report dated Apr. 19, 2021 issued in European Application No. EP 19 82 0352.
Related Publications (1)
Number Date Country
20210258621 A1 Aug 2021 US
Provisional Applications (1)
Number Date Country
62683654 Jun 2018 US