This invention relates to systems and methods for creating animated videos. More particularly, it relates to one or more systems that allow their users to manually move about a plurality of characters, before one or more prop scenes, in front of a physical (or digital) background scene. Using a novel device holder/stand with a specially angled mirror, the device is able to record, using computer vision, the REAL TIME movements and interactions between movable characters and/or physical or digital prop objects.
Various subcomponents of this system and method may be covered by patents. Applicants do not claim to have invented “green screen” technologies or smart device (phone or tablet) recording per se. The closest found “art” to this concept concerns Microsoft's U.S. Pat. No. 8,325,192 and Motion Games' U.S. Pat. No. 8,614,668. This invention distinguishes over both, however. Microsoft, for instance, uses an image capture device in communication with the processor and arranged to capture images of animation components. By contrast, this invention captures a real time video stream of the physical objects that are moving in a predefined space in front of our stand that has a mirror system. That stand-with-mirror system changes the field of vision for the camera of a smart device so that the latter can be placed at an angle while still seeing what is in front of its camera, in a somewhat hands-free augmented reality. Compared to the prior art found to date, this invention uses the physical position and speed of movement of our characters in the real world to create digital interactions that are important for creating animated cartoons that match the details inputted by a team of experienced cartoon producers. The relative positions of the physical characters affect the facial and body movements of our characters. They will also affect the sounds and digital environments that surround the digital characters being represented on the digital screen (as an alias for the real world, physical characters).
This invention addresses a system for allowing an inexperienced user to create high quality, animated cartoons through the specially held camera component of his/her Smart Device. It provides a setup that allows manipulation of more than one character, prop object or scene at the same time while creating and recording the video, hands-free.
Further features, objectives and advantages of this invention will be clearer when reviewing the following detailed description made with reference to the accompanying drawings in which:
The invention enables an inexperienced user to create a high quality animated cartoon and eventually a cartoon channel in a short period of time using a first preferred setup (
The user of this system would be able to create an animated cartoon by placing multiple physical characters and either physical or digital prop objects in a physical scene (we call it, a “studio”), much as a regular movie is normally shot. The smart device D will be placed, face (or main display) up, on its holder H or stand that has a mirror M built into its frontmost plane. That mirror M will be used to transfer the physical scene(s) from the studio to the camera of the smart Device while the Device rests on the stand. In other words, mirror M works to change the view angle of the Device's camera. Having the smart Device on this holder/stand frees the user's hands to manipulate/move the plurality of characters C1, C2, C3 or prop objects P1, P2, P3, P4 in the scene for said system user to build (or otherwise create) his/her own animated story.
The camera for smart Device D will be taking a real time video stream of the “studio”. Then, through computer vision (i.e., machine-learning-based image recognition), the system will identify movable characters C and/or prop objects P in the given scene. In this first embodiment, all of these characters, physical objects and scene backgrounds would need to be scanned beforehand and saved in the software database stored on the device D so that when the camera, through computer vision, recognizes a previously scanned object, it will populate the digital scene on the smart Device with the recognized object.
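The scan-then-recognize step above can be sketched as a lookup against a database of pre-scanned objects. This is a minimal illustration only; the object IDs, feature vectors and the `recognize` helper below are hypothetical stand-ins for whatever computer-vision features the actual system would extract:

```python
import math

# Hypothetical pre-scanned database: each entry maps an object ID to a
# feature vector produced offline when the character/prop/scene was scanned.
SCANNED_OBJECTS = {
    "C1": [0.9, 0.1, 0.3],   # character 1
    "P1": [0.2, 0.8, 0.5],   # prop object 1 (e.g., a piano)
    "S3": [0.4, 0.4, 0.9],   # background scene 3
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features, threshold=0.5):
    """Return the ID of the closest pre-scanned object, or None when
    nothing in the database is close enough (object was never scanned)."""
    best_id, best_dist = None, float("inf")
    for obj_id, ref in SCANNED_OBJECTS.items():
        d = euclidean(features, ref)
        if d < best_dist:
            best_id, best_dist = obj_id, d
    return best_id if best_dist <= threshold else None
```

Only recognized (previously scanned) objects would populate the digital scene; anything unrecognized is simply ignored by the renderer.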
Once the Characters and Prop objects are recognized digitally, a user would be able to move/manipulate the characters and/or prop objects about, in the digital scene, by moving the very physical characters and prop objects in front of the camera. The user can also switch scenes by removing a first physical background scene S3 behind the characters and prop objects and replacing it with another one such as scenes S1, S2, S4 or S5. The change of the physical scene would also change the digital scene on the smart Device. Scenes can also be changed digitally without having to change the physical background scene.
The position of the physical items (both characters C and prop objects P) in the scene would also affect the way these physical items will interact with one another. For example, if you have a physical piano in the scene and you place a character next to that piano, nothing happens. But the moment you place that same character behind the piano, the character will start playing the piano on the device's display screen. Note, if you place that same character in front of the piano, no interaction happens. The physical location of the objects—relative to the camera—determines the type of interaction, if any, that happens among the different characters and prop objects in the scene. The prop objects P can also be purely digital, meaning that a character C would have the same interaction with a digital prop P as if the prop P also existed in the physical scene.
The innovation of this first embodiment is akin to what a computer mouse does when manipulating objects on a digital screen. But in this case, it is a tool that will allow the system's user to manipulate the characters and potentially prop objects in a physical setting for creating a high quality, animated cartoon video in a short period of time.
In the next, or second, generation of this invention, the System tracks the relative distance, speed, direction and acceleration of the characters and/or prop objects positioned in front of the digital camera of the system's purposefully angled Device (or tablet), creating special effects for these digitally controlled characters and/or prop objects for an even more sophisticated Animated Cartoon.
This second generation System allows a user to create special effects and even greater interactivity between digital characters and/or prop objects using the digital camera on the smart Device to monitor the relative distance, speed, direction and acceleration of physical items (characters and/or prop objects) positioned in front of the Device's camera.
This next generation invention consists of:
The user of this second system will be able to create special effects or a high level of character interactivity in a cartoon by placing one OR MORE objects in front of the digital camera of the smart Device (phone or tablet). These physical items will need to be mapped to their digital counterparts in the cartoon-making software used by this system. That software will then calculate the relative distances of the physical items and their acceleration through the viewing range of the digital camera. The calculated relative distance, orientation, acceleration and speed of the physical items will determine the interactivity of these digital characters and the effects that are created within the digital environment. The artificial intelligence implemented by this more advanced system will then help a cartoon creator automate and facilitate the creation of cartoon characters that may more closely resemble real world characters, or that would normally require a cartoon studio to hire a crew of artists, designers and animators to achieve a similar, or the same, level of interactivity and liveliness.
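The speed and acceleration calculation described above can be sketched from the per-frame positions the camera tracker produces. This is a minimal finite-difference sketch, assuming the tracker yields one 2-D position per frame at a fixed frame interval `dt`; the `kinematics` helper is hypothetical:

```python
def kinematics(positions, dt):
    """Given successive (x, y) positions of one tracked physical item
    (one per video frame) and the frame interval dt in seconds, return
    the per-frame speeds and accelerations by finite differences."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist / dt)
    accels = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accels
```

For example, an item tracked at (0, 0), (1, 0), (3, 0) over three frames one second apart is speeding up, and that acceleration is what would drive the digital effect.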
The digital camera will be: (a) taking a video stream of the items that are visible to the camera; and (b) analyzing the physical items to relay their relative positions to the digital characters and prop objects.
The physical items of this second system will also be able to transfer features or behavior to the accessories and digital props that may be tied to the digital characters. Also, the physical objects' relative position, speed, and acceleration would affect the way the digital characters or objects interact with: (a) other digital characters, (b) prop objects or even (c) background scenes in the digital world.
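The transfer of physical behavior to digital effects can be sketched as a simple mapping from tracked speed to an effect parameter, e.g. an engine sound whose pitch rises with how fast the user moves the character. The function name, sound names and `max_speed` normalization below are illustrative assumptions:

```python
def effect_for_motion(speed, max_speed=5.0):
    """Map a physical item's tracked speed to a digital effect:
    a stationary item idles; a moving item drives an engine sound whose
    pitch scales up to 2x at max_speed and saturates beyond it."""
    level = min(speed / max_speed, 1.0)   # normalize speed into [0, 1]
    if level == 0:
        return {"sound": "idle", "pitch": 1.0}
    return {"sound": "engine", "pitch": 1.0 + level}
```

The same pattern would apply to rotation (steering and screeching sounds) or proximity (characters turning to face a speaker), with a different effect table per behavior.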
Referring now to the accompanying drawings,
With a smart device D (or tablet) positioned on a specially shaped (and angled) novel holder/stand H, the device D resting, main camera side down, on the rest support RS of that holder H, an animation can be made and recorded from the relative movements of the characters and/or prop objects about the base B. Their relative movements, as viewed via an angled mirror M, will translate to animated actions, sounds and the like of corresponding display characters DC1, DC2, DC3 and/or display prop objects DP1, DP2, DP3 and DP4 as seen LIVE, in real time, on the display screen of device D.
In the next generation of systems per this invention, per
Towards the angled front of holder H, there is situated a mirror M for receiving the action of movements occurring on the base of a System and transferring those movements, real time, to the camera of the device D. The holder H with its adjustable mirror system should be able to handle different types, sizes and/or models of smart devices (phones OR tablets). The holder's primary purpose is to change a device's camera view angle so the user can view a scene while the device is facing the ground, the floor or a table, keeping the user/creator's hands free to effectively animate a cartoon story by manipulating the physical characters in front of the device. The holder should also accommodate a smart device camera flash so that the device's flash can be turned on to improve the tracking of physical objects in front of the camera scene, or when the video recording environment is darker than preferred.
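The view-angle change the mirror M performs follows the standard plane-mirror reflection formula, v' = v − 2(v·n)n. As a sketch (assuming, for illustration, a mirror tilted 45 degrees): a camera looking straight down at the stand ends up seeing the horizontal studio scene in front of it:

```python
def reflect(v, n):
    """Reflect direction vector v across a plane mirror with unit
    normal n, per v' = v - 2 (v . n) n."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * dot * ni for vi, ni in zip(v, n))

# Device rests face-up, so its rear camera looks straight down: (0, 0, -1).
# A mirror tilted 45 degrees toward the scene has unit normal (0, -s, s),
# with s = 1/sqrt(2).
s = 2 ** -0.5
view = reflect((0.0, 0.0, -1.0), (0.0, -s, s))
# The reflected view direction is horizontal: (0, -1, 0), i.e. the camera
# now "sees" the characters on the base in front of the stand.
```

An adjustable mirror angle simply changes the normal `n`, which is how one holder can serve differently sized devices.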
For optimally situating the one or more background scenes (such as S3 in
A “tween” boy wants to create his own animated cartoon video or movie about a Formula 1® racing car. The boy purchases a System set that includes a background of the racing track, a racing car and a racing character. The boy downloads our mobile application and then places his own smart device on the holder/stand. Next, he builds a studio “setup” that includes placing the racing track background scene in an area in front of the holder/stand, and his racing character in front of the background scene facing the Device's camera on the holder/stand.
The boy starts recording the video/movie scene by pressing the record button on the downloaded Application. The racer and the racing track will show up on his digital screen. The boy then makes his racer say some words about his excitement for the race by clicking on the racer and selecting a talk icon that makes the racer talk in the boy's own voice.
The boy then physically moves his racer forward towards the camera. Next, he slides the racing car (prop) into the scene and it shows up on the Device's digital screen. When the boy next places his racer character IN the prop car, the car ON the digital screen starts moving forward. This all happens while the boy records his own animation video on the Device. After the boy stops recording, his own video is ready for publishing and sharing on YouTube® or any other social network. Using the System, it would have taken the boy roughly 2 minutes to create a 30 second, animated cartoon video.
A “tween” girl wants to create an animated cartoon video or movie about learning algebra in the classroom. The girl buys several of our physical characters for use with her tablet. The girl downloads our Intelligent App from one of the App stores. She then places the physical characters in front of the tablet, which is on its holder/stand from our System. The way she moves the physical characters in front of that tablet relative to one another will affect their movement IN the digital scene.
The girl makes one of the characters the teacher. The moment she starts recording her video, she makes the digital teacher character talk. As the teacher talks in the cartoon video, the other characters automatically start looking at the teacher, mirroring how eye movement would occur among real people in a real world setting.
A boy wants to create a cartoon video of characters racing one another in race cars. He places his smartphone on its stand and purchases a couple of our physical characters. The boy downloads our Intelligent App from the App store. The boy places the physical characters in front of the stand and he places the digital representations of those characters in digital cars. As he moves the physical characters towards the camera, the speed with which he moves the physical characters affects the sounds that the digital race cars make while he is recording the cartoon video. The way the boy rotates the physical characters causes the digital race cars to steer and make screeching sounds in the recorded cartoon video.
This application is a perfection of: U.S. Provisional Ser. No. 62/679,683 filed on Jun. 1, 2018, and U.S. Provisional Ser. No. 62/758,187 filed on Nov. 9, 2018, both disclosures of which are fully incorporated herein.