This disclosure is generally related to image and video compositing. More specifically, the disclosure is directed to a system for inserting a person into an image or sequence of images and sharing the result on a social network.
Compositing of multiple video sources along with graphics has been a computationally intensive and labor-intensive process reserved for professional applications. Simple consumer applications exist, but may be limited to overlaying one image on top of another. There is a need to be able to place a captured person or graphic object onto and within a photographic, video, or game clip.
Various implementations of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the desired attributes described herein. In this regard, embodiments of the present disclosure may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Without limiting the scope of the appended claims, some prominent features are described herein.
An apparatus for adding image information into at least one image frame of a video stream is provided. The apparatus comprises a storage circuit for storing depth information about first and second objects in the at least one image frame. The apparatus also comprises a processing circuit configured to add a third object into a first planar position. The third object is added at an image depth level of the at least one image frame based on selecting whether the first or second object is a background object. The processing circuit is further configured to maintain the third object at the image depth level in a subsequent image frame of the video stream. The image depth level is consistent with the selection of the first or second object as the background object. The processing circuit is further configured to move the third object from the first planar position to a second planar position in a subsequent image frame of the video stream. The second planar position is based at least in part on the movement of an object associated with a target point.
A method for adding image information into at least one image frame of a video stream is also provided. The method comprises storing depth information about first and second objects in the at least one image frame. The method further comprises adding a third object into a first planar position. The third object is added at an image depth level of the at least one image frame based on selecting whether the first or second object is a background object. The method further comprises maintaining the third object at the image depth level in a subsequent image frame of the video stream. The image depth level is consistent with the selection of the first or second object as the background object. The method further comprises moving the third object from the first planar position to a second planar position in a subsequent image frame of the video stream. The second planar position is based at least in part on movement of an object associated with a target point.
An apparatus for adding image information into at least one image frame of a video stream is also provided. The apparatus comprises a means for storing depth information about first and second objects in the at least one image frame. The apparatus further comprises a means for adding a third object into a first planar position. The third object is added at an image depth level of the at least one image frame based on selecting whether the first or second object is a background object. The apparatus further comprises a means for maintaining the third object at the image depth level in a subsequent image frame of the video stream. The image depth level is consistent with the selection of the first or second object as the background object. The apparatus further comprises a means for moving the third object from the first planar position to a second planar position in a subsequent image frame of the video stream. The second planar position is based at least in part on movement of an object associated with a target point.
Various aspects of the novel systems, apparatuses, and methods are described more fully hereinafter with reference to the accompanying drawings. The teachings of the disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects and embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure. The scope of the disclosure is intended to cover any aspect of the novel systems, apparatuses, and methods disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect disclosed herein may be embodied by one or more elements of a claim.
Although particular embodiments are described herein, many variations and permutations of these embodiments fall within the scope of the disclosure. Although some benefits and advantages of the embodiments are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks, and protocols, some of which are illustrated by way of example in the figures and in the following description of the embodiments. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
According to one embodiment, the depth-based compositing system 100 comprises a content source 110 coupled to the processing circuit 130. The content source 110 is configured to provide the processing circuit 130 with an image(s) or video(s). In one embodiment, the content source 110 provides the one or more image frames that will be the medium into which an image(s) or video(s) from an object source 120 will be inserted. The image(s) or video(s) from the content source 110 will be referred to herein as “Image frame”. For example, the content source 110 is configured to provide one or more video clips from a variety of sources, such as broadcast, movie, photographic, computer animation, or a video game. The video clips may be of a variety of formats, including two-dimensional (2D), stereoscopic, and 2D+depth video. An image frame from a video game or a computer animation may have a rich source of depth content associated with it. A Z-buffer may be used in the computer graphics process to facilitate hidden surface removal and other advanced rendering techniques. A Z-buffer generally refers to a memory buffer for computer graphics that identifies surfaces that may be hidden from the viewer when projected onto a 2D display. The processing circuit 130 may be configured to use the depth-layer data in the computer graphics process's Z-buffer directly for depth-based compositing by the depth-based compositing system 100. Some games may be rendered in a layered framework rather than a full 3D environment. In this context, the processing circuit 130 may be configured to effectively construct the depth-layers by examining the layers on which individual game objects are rendered.
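For illustration only, the following Python sketch shows the basic Z-buffer behavior described above: at each pixel, the nearer surface is kept so that farther surfaces are hidden. The function and array names are assumptions for this sketch and are not part of the disclosed system.

```python
# A minimal sketch (illustrative, not the disclosed implementation) of a
# per-pixel depth test: the surface with the smaller depth value wins.
import numpy as np

def z_composite(color_a, depth_a, color_b, depth_b):
    """Return merged color and depth, keeping whichever surface is nearer per pixel."""
    nearer_a = depth_a <= depth_b                        # True where surface A is in front
    color = np.where(nearer_a[..., None], color_a, color_b)
    depth = np.minimum(depth_a, depth_b)
    return color, depth

# Tiny example: a 2x2 image with two overlapping surfaces.
ca = np.full((2, 2, 3), 255, dtype=np.uint8)             # white surface
cb = np.zeros((2, 2, 3), dtype=np.uint8)                 # black surface
da = np.array([[0.2, 0.9], [0.2, 0.9]])                  # depth of surface A
db = np.array([[0.5, 0.5], [0.5, 0.5]])                  # depth of surface B
merged, merged_depth = z_composite(ca, da, cb, db)       # white where A is nearer
```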
According to one embodiment, the depth-based compositing system 100 further comprises the object source 120 that is coupled to the processing circuit 130. The object source 120 is configured to provide the processing circuit 130 with an image(s) or video(s). The object source 120 may provide the object image that will be inserted into the image frame. Image(s) or video(s) from the object source 120 will be referred to herein as “Object Image”. In one embodiment of the present invention, the object source 120 is further configured to provide graphic objects. The graphic objects may be inserted into the image frame in the same way that the object image may be inserted. Examples of graphic objects include titles, captions, clothing, accessories, vehicles, etc. Graphic objects may also be selected from a library or be user generated. According to another embodiment, the object source 120 is further configured to use a 2D webcam capture technique to capture the object image to be composited into depth-layers. The objective is to leverage 2D webcams in PCs, tablets, smartphones, game consoles, and an increasing number of smart televisions (TVs). In another embodiment, a high-quality webcam is used. The high-quality webcam is capable of capturing content at resolutions of 4K or higher at 30 fps. This allows the webcam to be robust in the lower light conditions typical of a consumer workspace while maintaining a low level of sensor noise. The webcam may be integrated into the object source 120 (such as within the bezel of a PC notebook, or the forward-facing camera of a smartphone) or be a separate system component that is plugged into the system (such as an external universal serial bus (USB) webcam or a discrete accessory). The webcam may be stationary during acquisition of the object image to facilitate accurate extraction of the background. However, the background subtraction circuit 240 may also be robust enough to extract the background with relative motion between the background and the person in the object image. For example, the user acquires video while walking with a phone so that the object image is in constant motion.
The processing circuit 130 may be configured to control operations of the depth-based compositing system 100. For example, the processing circuit 130 is configured to create a final image(s) or video(s) by inserting the object image provided by the object source 120 into the image frame provided by the content source 110. The final image(s) or video(s) created by the processing circuit 130 will be referred to as “Final image”. In an embodiment, the processing circuit 130 is configured to execute instruction codes (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the processing circuit 130, perform depth-based compositing as described herein. The processing circuit 130 may be implemented with any combination of processing circuits, general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that may perform calculations or other manipulations of information. In an example, the processing circuit 130 runs locally on a personal device, such as a PC, tablet, or smartphone, or as a cloud-based application that is controlled from a personal device.
According to one embodiment, the depth-based compositing system 100 further comprises a control input circuit 150. The control input circuit 150 is coupled to the processing circuit 130. The control input circuit 150 may be configured to receive input from a user and to send a corresponding signal to the processing circuit 130. The control input circuit 150 provides a way for the user to control how the depth-based compositing is performed. For example, the user may provide input with a pointing device on a PC, with a finger movement on a touchscreen device, or with a hand or finger gesture on a device equipped with gesture detection. In one embodiment, the control input circuit 150 is configured to allow the user to control positioning of the object image spatially in the image frame when the processing circuit 130 performs depth-based compositing. In an alternative or additional embodiment, a non-user (e.g., a program or other intelligent source) may provide input to the control input circuit 150.
The control input circuit 150 may further be configured to control the depth of the object image. In one embodiment, the control input circuit 150 is configured to receive a signal from a device (not shown in
The control input circuit 150 may also be configured to control the size and orientation of the object image relative to objects in the image frame. The user provides an input to the control input circuit 150 to control the size, for example, via a slider or a pinching gesture (e.g., moving two fingers closer together to reduce the size or further apart to increase the size) on a touchscreen device or a gesture-detection-equipped device. When the object image includes video, editing may be done in real-time, at a reduced frame rate, or on a paused frame. The image frame and/or object image may or may not include audio. If audio is included, the processing circuit 130 may mix the audio from the image frame with the audio from the object image. The processing circuit 130 may also dub the final image during the editing process.
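As a hedged illustration of one way the pinching gesture mentioned above could be mapped to a size control, the sketch below scales the object image by the ratio of the current finger separation to the previous separation. The function names, touch-point format, and clamp range are assumptions, not part of the disclosure.

```python
# Illustrative mapping from a two-finger pinch gesture to an object-image scale factor.
import math

def finger_distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def update_scale(current_scale, prev_points, curr_points, min_scale=0.1, max_scale=10.0):
    """Scale the object image by the ratio of finger separations, clamped to a sane range."""
    ratio = finger_distance(*curr_points) / max(finger_distance(*prev_points), 1e-6)
    return min(max(current_scale * ratio, min_scale), max_scale)

# Example: fingers move apart from 100 px to 150 px, so the scale grows by 1.5x.
scale = update_scale(1.0, ((0, 0), (100, 0)), ((0, 0), (150, 0)))
```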
According to one embodiment, the depth-based compositing system 100 further comprises the storage circuit 160. The storage circuit 160 may be configured to store the image frame from the content source 110 or the object image from the object source 120, user inputs from the control input circuit 150, data retrieved throughout the depth-based compositing within the processing circuit 130, and/or the final image created by the processing circuit 130. The storage circuit 160 may store data for very short periods of time, such as in a buffer, or for extended periods of time, such as on a hard drive. In one embodiment, the storage circuit 160 comprises both read-only memory (ROM) and random access memory (RAM) and provides instructions and data to the processing circuit 130 or the control input circuit 150. A portion of the storage circuit 160 may also include non-volatile random access memory (NVRAM). The storage circuit 160 may be coupled to the processing circuit 130 via a bus system. The bus system may be configured to couple each component of the depth-based compositing system 100 to each other component in order to provide information transfer.
According to one embodiment, the depth-based compositing system 100 further comprises an output medium 140. The output medium 140 is coupled to the processing circuit 130. The processing circuit 130 provides the output medium 140 with the final image. In one embodiment, the output medium 140 records, tags, and shares the final image to a network, social media, user's remote devices, etc. For example, the output medium 140 may be a computer terminal, a web server, a display unit, a memory storage, a wearable device, and/or a remote device.
According to one embodiment, the processing circuit 130 further comprises the depth extraction circuit 210 and the depth-layering circuit 220. The depth-layering circuit 220 is coupled to the depth extraction circuit 210, the metadata extraction circuit 260, and the motion tracking circuit 230. The depth extraction circuit 210 may receive the image frame from the content source 110. In one embodiment, the depth extraction circuit 210 and the depth-layering circuit 220 extract and separate the image frame into multiple depth-layers so that a compositing/editing circuit 250 may insert the object image into an insert layer that is located within the multiple depth-layers. The compositing/editing circuit 250 may then combine the insert layer with the other multiple depth-layers to generate the final image. Depth extraction generally refers to the process of creating a depth value for one or more pixels in an image. Depth layering, on the other hand, generally refers to the process of separating an image into a number of depth layers based on the depth values of pixels. Generally, a depth layer will contain pixels with a range of depth values.
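The following is a minimal sketch, for illustration, of the depth-layering step described above: pixels are grouped into layers by depth range, and each layer is returned as an RGBA image whose alpha channel marks the pixels that belong to it. The boundary values, layer count, and function name are assumptions rather than the disclosed implementation.

```python
# Illustrative depth layering: split an image into RGBA layers by depth range.
import numpy as np

def split_into_depth_layers(image: np.ndarray, depth: np.ndarray, boundaries):
    """Split an HxWx3 image into RGBA layers using depth boundaries ordered near to far."""
    edges = [-np.inf] + list(boundaries) + [np.inf]
    layers = []
    for near, far in zip(edges[:-1], edges[1:]):
        mask = (depth >= near) & (depth < far)            # pixels belonging to this layer
        rgba = np.dstack([image, (mask * 255).astype(np.uint8)])
        layers.append(rgba)
    return layers                                         # ordered nearest to farthest

# Example: three layers with (assumed) boundaries at normalized depths 0.33 and 0.66.
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
d = np.random.rand(4, 4)                                  # smaller value = nearer
front, middle, back = split_into_depth_layers(img, d, [0.33, 0.66])
```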
According to one embodiment, the processing circuit 130 further comprises a background subtraction circuit 240. The background subtraction circuit 240 receives the object image from the object source 120 and removes the background of the object image. The background may be removed so that just the object may be inserted into the image frame. The background subtraction circuit 240 may be configured to remove the background using depth-based techniques described in U.S. Pat. Pub. No. US20120069007 A1, which is herein incorporated by reference in its entirety. For example, the background subtraction circuit 240 refines an initial depth map estimate by detecting and tracking an observer's face, and models the position of the torso and body to generate a refined depth model. Once the depth model is determined, the background subtraction circuit 240 selects a threshold to determine which depth range represents foreground objects and which depth range represents background objects. The depth threshold may be set to ensure the depth map encompasses the detected face in the foreground region. In an alternative embodiment, other background removal techniques may be used to remove the background, such as those described in U.S. Pat. No. 7,720,283 to Sun, which is herein incorporated by reference in its entirety.
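As a sketch of the depth-threshold selection described above, assuming a depth map where smaller values are nearer and a face bounding box supplied by a separate detector, the foreground can be taken as every pixel at or nearer than the detected face, plus a margin to cover the modeled torso and body. The names and margin value here are illustrative assumptions.

```python
# Illustrative depth-threshold foreground selection anchored on a detected face region.
import numpy as np

def foreground_mask(depth_map: np.ndarray, face_box, body_margin: float = 0.15) -> np.ndarray:
    """Return True for pixels at or nearer than the detected face (smaller depth = nearer)."""
    top, left, bottom, right = face_box
    face_depth = float(np.median(depth_map[top:bottom, left:right]))
    threshold = face_depth + body_margin                  # allow the torso to sit slightly behind
    return depth_map <= threshold

# Example: zero out background pixels of a small synthetic frame.
depth = np.random.rand(6, 8)
frame = np.random.randint(0, 256, (6, 8, 3), dtype=np.uint8)
mask = foreground_mask(depth, face_box=(1, 2, 4, 6))      # (top, left, bottom, right)
person_only = frame * mask[..., None]
```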
According to one embodiment, the processing circuit 130 further comprises the motion tracking circuit 230. The motion tracking circuit 230 receives the layers from the depth-layering circuit 220 and a control signal from the control input circuit 150. In one embodiment, the motion tracking circuit 230 is configured to determine how to smoothly move the object image in relation to the motion of other objects in the image frame. In order to do so, the object image is displaced from one frame to the next frame by an amount that is substantially commensurate with the movement of other nearby objects of the image frame.
According to one embodiment, the processing circuit 130 further comprises the compositing/editing circuit 250. The compositing/editing circuit 250 is configured to insert the object image into the image frame. In one embodiment, the object image is inserted into the image frame by first considering the alpha matte for the object image provided by the threshold depth map. The term ‘alpha’ generally refers to the transparency (or conversely, the opacity) of an image. An alpha matte generally refers to an image layer indicating the alpha value for each image pixel to the processing circuit 130. Image composition techniques are used to insert the object image with the alpha matte into the image frame. The object image is overlaid on top of the image frame such that pixels of the object image obscure any existing pixels in the image frame, unless the object image pixel is transparent (as is the case when the depth map has reached its threshold). In this case, the pixel from the existing image frame is retained. This reduces the number of frames for which insertion positions need to be identified to just a few key frames, or only the starting position. The image frame may already have the insertion positions marked by metadata or may include metadata for motion tracking provided by the metadata extraction circuit 260. Alternatively or additionally, the motion tracking circuit 230 may mark the image frame to signify the location. The marking of the object image may be inserted by placing a small block, which the processing circuit 130 may recognize, in the image frame. Such a marker is easily detected by an editing process and also survives high levels of video compression. In order to achieve a more pleasing final image, the compositing/editing circuit 250 uses edge blending, color matching, and brightness matching techniques to provide the final image with a similar look as the image frame, according to one or more embodiments. The processing circuit 130 may be configured to use the depth-layers in a 2D+depth-layer format to insert the object image (not shown in
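The following is a minimal compositing sketch, assuming a straight (non-premultiplied) alpha matte in the 0–255 range: the object image replaces the image frame wherever the matte is opaque, and the existing frame pixel is retained wherever the matte is transparent. The function name and conventions are assumptions for illustration.

```python
# Illustrative "over" compositing of an object image with an alpha matte onto an image frame.
import numpy as np

def composite_over(object_rgb: np.ndarray, alpha: np.ndarray, frame_rgb: np.ndarray) -> np.ndarray:
    """Blend object over frame: out = alpha * object + (1 - alpha) * frame."""
    a = alpha[..., None].astype(np.float32) / 255.0       # HxW alpha matte scaled to [0, 1]
    out = a * object_rgb.astype(np.float32) + (1.0 - a) * frame_rgb.astype(np.float32)
    return out.astype(np.uint8)

# Example: a fully opaque left column and a fully transparent right column.
obj = np.full((2, 2, 3), 200, dtype=np.uint8)
frm = np.full((2, 2, 3), 50, dtype=np.uint8)
matte = np.array([[255, 0], [255, 0]], dtype=np.uint8)
final = composite_over(obj, matte, frm)                   # left column from obj, right from frm
```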
According to another embodiment, the processing circuit 130 includes audio with the image frame and the object image. If both the image frame and object image include audio, then the processing circuit 130 mixes the audio sources to provide a combined output. The processing circuit 130 may also share the location information from the person in the object image with the audio mixer so that the processing circuit 130 may pan the person's voice to follow the position of the person. For greater accuracy, the processing circuit 130 may use a face detection process to provide additional information on the approximate location of the person's mouth. In a stereo mix, for example, the processing circuit 130 positions the person from left to right. In a surround sound or object based mix, in an alternative or additional example, the processing circuit 130 shares planar and depth location information of the person (or graphic object) of the object image with the audio mixer to improve the sound localization.
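As one hedged example of how the mixer might pan the person's voice to follow the horizontal position described above, the sketch below applies a constant-power pan law to a mono signal; the normalized x-position input (0 = far left, 1 = far right) is an assumed format for the shared location information.

```python
# Illustrative constant-power stereo panning driven by the person's on-screen position.
import numpy as np

def pan_voice(mono: np.ndarray, x_norm: float) -> np.ndarray:
    """Return an Nx2 stereo signal with the mono voice panned to x_norm in [0, 1]."""
    theta = np.clip(x_norm, 0.0, 1.0) * (np.pi / 2.0)     # 0 -> hard left, pi/2 -> hard right
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=-1)

# Example: a short synthetic "voice" panned slightly right of center.
t = np.linspace(0, 1, 8000)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
stereo = pan_voice(voice, x_norm=0.7)
```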
One or more functions described in correlation with
According to one embodiment, the processing circuit 130 further comprises a recording circuit 270. The recording circuit 270 may receive the final image from the processing circuit 130 and store the final image. One purpose of the recording circuit 270 is to allow the network to retrieve the final image at any time so that the final image may be tagged by the tagging circuit 280 and/or shared or posted on social media by a sharing circuit 290.
According to one embodiment, the processing circuit 130 further comprises the tagging circuit 280. The tagging circuit 280 receives the stored final image from the recording circuit 270 and tags the final image with metadata that describes characteristics of the inserted object image and the image frame. For example, this tagging helps correlate the final image with characteristics of the social media so that the final image is more relevant to the users, the profiles, the viewers, and/or the purpose of the social media. This metadata may be demographic information related to the inserted person, such as age group, sex, and physical location; information related to an inserted object or objects, such as brand identity, type, and category; or information related to the image frame, such as the type of content or the name of the program or video game from which the clip was extracted.
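Purely as an illustration of the kind of record the tagging circuit 280 might attach, the following shows a hypothetical metadata structure; every field name and value here is an assumption, not content prescribed by the disclosure.

```python
# Hypothetical metadata record of the kind described above (all fields illustrative).
import json

final_image_tags = {
    "inserted_person": {"age_group": "18-24", "sex": "F", "location": "example city"},
    "inserted_object": {"type": "bicycle", "category": "personal vehicle", "brand": "ExampleBrand"},
    "image_frame": {"content_type": "video game clip", "title": "Example Racing Game"},
}
print(json.dumps(final_image_tags, indent=2))
```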
According to one embodiment, the processing circuit 130 further comprises the sharing circuit 290. The sharing circuit 290 receives the stored final image with the tagged metadata from the tagging circuit 280. The sharing circuit 290 shares the final image over a network(s) (not shown in
In this example, the depth-layers 320, 330, and 340 are described or positioned as a back layer 320, a middle layer 330, and a front layer 340. The back layer 320 contains a mountain terrain, the middle layer 330 contains trees, and the front layer 340 contains a car. As described in
According to another embodiment, in a video sequence, the above controls manipulate the object image 410 as the image frame 310 is played back on screen. User actions may be recorded simultaneously with the playback. This allows the user to easily “animate” the inserted object image 410 within the video sequence.
The depth-based compositing system 100 may further be configured to allow the user to select a foreground/background mode for scene objects in the image frame 310. For example, a scene object selected as foreground will appear to lie in front of the object image 410, and a scene object selected as background will appear to lie behind the object image 410. This prevents the object image 410 from appearing to intersect a scene object that spans a range of depth values.
At step 1010, the user selects the target point 910 of
At step 1020, the processing circuit 130 estimates the bounding cube 920 of
At step 1030, the processing circuit 130 propagates the target point 910 to the next frame in the image frame 310. For example, the processing circuit 130 may use a motion estimation algorithm to locate the target point 910 in a future frame of the image frame 310.
At step 1040, the processing circuit 130 locates a new target point 910 and performs a search around the new target point 910 to obtain a new bounding cube 920 for the scene object and to determine whether a match is found. Once the target point 910 is selected by the user, the processing circuit 130 tracks the bounding cube 920 positioned around the object inside the bounding cube 920 (e.g., the car). The processing circuit 130 uses the bounding cube 920 to validate that the tracked target point 910 has correctly propagated from a first position (e.g., position 1) to a second position (e.g., position 2) using an image motion tracking technique. If the bounding cube 920 generated at position 2 does not match the bounding cube 920 at position 1, then the motion tracking technique may have failed, or the object may have moved out of frame or to a depth layer that is not visible. If a match is found, the processing circuit 130 performs step 1020 again.
The rendering of the object image 410 is based on the foreground/background selection of the scene object in the image frame 310 as well as the depth of the object image 410. If a match is not found, then the object inside the bounding cube 920 to which the inserted object 410 is connected may have moved out of frame or to a depth layer that is not visible. At step 1050, the processing circuit 130 automatically deselects the inserted object 410 or removes the inserted object 410 from the image frame, and the inserted object 410 is no longer connected to the object inside the bounding cube 920. At step 1060, the method ends.
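The sketch below illustrates one way the propagation-and-validation loop of steps 1010–1050 could be realized. It uses a simple block-matching search in place of a specific motion-estimation algorithm and a match-cost threshold in place of the bounding-cube comparison; all function names and thresholds are assumptions, as the disclosure does not prescribe a particular technique.

```python
# Illustrative target-point tracking loop: propagate the point, validate the match,
# and deselect the inserted object when tracking is lost (step 1050).
import numpy as np

def block_match(prev_frame, next_frame, point, block=8, search=6):
    """Find the best-matching position of a small block around `point` in the next frame.

    Assumes the target point is not within `block` pixels of the frame border.
    """
    y, x = point
    h, w = prev_frame.shape[:2]
    ref = prev_frame[y:y + block, x:x + block].astype(np.float32)
    best, best_cost = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny and ny + block <= h and 0 <= nx and nx + block <= w:
                cand = next_frame[ny:ny + block, nx:nx + block].astype(np.float32)
                cost = np.abs(ref - cand).sum()           # sum of absolute differences
                if cost < best_cost:
                    best, best_cost = (ny, nx), cost
    return best, best_cost

def track_insert(frames, target_point, match_threshold=5000.0):
    """Yield the propagated target point per frame, or None once tracking is lost."""
    point = target_point
    for prev_frame, next_frame in zip(frames[:-1], frames[1:]):
        point, cost = block_match(prev_frame, next_frame, point)
        if point is None or cost > match_threshold:
            yield None                                    # deselect the inserted object
            return
        yield point                                       # match found: keep the object attached
```

In practice the match threshold would depend on block size and image content; the point of the sketch is only the overall propagate-validate-deselect flow.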
At step 1101, the method begins. At step 1110, the user selects foreground (“FG”) or background (“BG”) for the scene object.
At step 1120, the processing circuit 130 determines whether the object image 410 intersects the bounding cube 920 of the scene object. If the object image 410 does not intersect the bounding cube 920, then at step 1130, the processing circuit 130 will use Draw Mode 0. Draw Mode 0 is the default Draw Mode and is used when the object image 410 does not intersect the bounding cube 920 of the scene object. The object image 410 is then drawn as if its depth is closer than that of the image frame.
At step 1120, if the object image 410 intersects the bounding cube 920, then at step 1140, the processing circuit 130 determines whether the user selected FG or BG. If the user selected BG, then at step 1150, the processing circuit 130 will use Draw Mode 1. Draw Mode 1 is used if the object image 410 intersects the bounding cube 920 of the scene object and the user has specified that the scene object will be in the background. The processing circuit 130 then determines an intersection region, which comprises the points of the object image 410 that lie within the bounding cube 920 and the points of the scene object that lie within the bounding cube 920. Within the intersection region, the object image 410 will appear in the composited drawing regardless of the specified depth of the scene object because the scene object will be in the background.
At step 1140, if the processing circuit 130 determines that the user selected FG, then at step 1160, the processing circuit 130 will use Draw Mode 2. Draw Mode 2 is used if the object image 410 intersects the bounding cube 920 of the scene object and the user specified the scene object as foreground. The processing circuit 130 then determines the intersection region defined in step 1150. Within the intersection region, the scene object will appear in the composited drawing regardless of the specified depth of the object image 410 because the scene object will be in the foreground. At step 1170, the method ends.
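The Draw Mode decision of steps 1120–1160 reduces to a small selection function, and the sketch below mirrors that flow for illustration. The function name, argument names, and integer return values are assumptions, not terminology from the disclosure.

```python
# Illustrative Draw Mode selection following the decision flow described above.
def select_draw_mode(object_image_intersects_cube: bool, scene_object_is_foreground: bool) -> int:
    """Return 0, 1, or 2 according to the intersection test and the FG/BG selection."""
    if not object_image_intersects_cube:
        return 0   # Draw Mode 0: no intersection; object image drawn in front of the frame
    if scene_object_is_foreground:
        return 2   # Draw Mode 2: scene object drawn over the object image in the intersection region
    return 1       # Draw Mode 1: object image drawn over the background scene object

# Examples covering the three branches.
assert select_draw_mode(False, False) == 0
assert select_draw_mode(True, False) == 1
assert select_draw_mode(True, True) == 2
```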
According to another embodiment, the depth-based compositing system 100 includes descriptive metadata that is associated with the shared result. The depth-based compositing system 100 may deliver this metadata with the image frame 310, store it on a server with the source, or deliver it to a third party. One possible application is to provide information for targeted advertising. Given that feature extraction is part of the background removal process, demographic information such as age group, sex, and ethnicity may be derived from an analysis of the captured person. This information might also be available from one of their social networking accounts. Many devices support location services, so the location of the captured person may also be made available. The depth-based compositing system 100 may include scripted content that describes the content, such as identifying it as a children's sing-along video. The depth-based compositing system 100 may also identify the image frame 310 as coming from a sports event and provide the names of the competing teams along with the type of sport. In another example, if an object image 410 is inserted, the depth-based compositing system 100 provides information associated with the object image 410, such as the type of object, a particular brand, or a category for the object. In particular, this may be a bicycle that fits in the personal vehicle category. An advertiser may also provide graphic representations of their products so that consumers may create their own product placement videos. The social network or networks where the final result is shared may store the metadata, which may be used to determine the most effective advertising channels.
In the disclosure herein, information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Various modifications to the implementations described in this disclosure and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the principles and the novel features disclosed herein. The word “exemplary” is used exclusively herein to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.
Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
The various operations of methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software component(s), circuits, and/or module(s). Generally, any operations illustrated in the Figures may be performed by corresponding functional means capable of performing the operations.
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer readable medium may comprise non-transitory computer readable medium (e.g., tangible media). In addition, in some aspects computer readable medium may comprise transitory computer readable medium (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein may be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station may obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
While the foregoing is directed to aspects of the present disclosure, other and further aspects of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 62/099,949, entitled “SYSTEM AND METHOD FOR INSERTING OBJECTS INTO AN IMAGE OR SEQUENCE OF IMAGES,” filed Jan. 5, 2015, the entirety of which is hereby incorporated by reference.