The entire contents of Taiwan Patent Application No. 099118563, filed on Jun. 8, 2010, from which this application claims priority, are incorporated herein by reference.
1. Field of the Invention
The present invention generally relates to an interacting method and a simulation system, and more particularly to a method for interacting with a video and a game simulation system.
2. Description of Related Art
Watching sports games on a television or a computer has become a popular form of entertainment. In current sportscasts, however, viewers can usually only passively accept the video content provided unilaterally by the broadcaster, and no way of interacting with the video content is offered. As a result, viewers may feel a sense of emptiness after finishing a game video.
In order to broaden the applications of a video, various video decomposition technologies have gradually been developed to perform image processing adaptively. The concept of Bayesian matting is commonly used to decompose a video frame into a background scene and at least one foreground object. However, the user must indicate the foreground regions in each frame so that the foreground objects can be separated correctly. Specifically, human assistance is required to outline the foreground objects and then generate a trimap, which is a map indicating the foreground (black), background (white), and unknown (gray) regions of each image. Finally, the trimap data are fed into a formula to produce the decomposed foreground objects. Generating a trimap is time consuming and requires human assistance, especially when this complicated process must be repeated for the whole video.
Therefore, how to provide a customized, easily produced, and interactive game video for the viewer, and thus greater enjoyment in game watching, is the object to be achieved by the present invention.
In view of the foregoing, it is an object of an embodiment of the present invention to provide a method for interacting with video content, which allows the video content to be operated interactively so that a video viewer obtains greater enjoyment in game watching.
It is another object of an embodiment of the present invention to provide a method for interacting with video content that is capable of automatically decomposing a video into a background scene and at least one foreground object, thereby simplifying the image processing.
To achieve the above objects, the present invention provides a method for interacting with a video by means of a motion detector, comprising the following steps: first, a video-content-decomposition procedure is executed to decompose the video into a background scene and at least one foreground object. Then, at least one event database is classified according to the state of the foreground object. Finally, suitable foreground objects are selected from the event database according to a motion detected by the motion detector, and the selected foreground objects are rendered on the background scene sequentially according to the detected motion.
It is a further object of an embodiment of the present invention to provide a game simulation system which analyzes the current condition of various contestants while they compete, thereby simulating the result of a competition between the contestants.
To achieve the above objects, the present invention provides a game simulation system comprising a database and a processor unit. The processor unit is configured to decompose at least one game video of at least one contestant into a plurality of foreground objects and to classify the movement categories of the contestant, which are stored in the database according to the movements of the foreground objects. The result of a competition between one contestant and another can therefore be simulated according to their movement categories stored in the database.
Firstly, please refer to
In one embodiment of the present invention, the background and the foreground objects of the video 15 are transmitted separately. A tennis game video is taken as an example hereinafter, as shown in
In the prior art, human assistance is required to outline the foreground objects in order to separate the background from the foreground. In view of this defect, in the present invention some threshold values indicating the pixel difference are pre-determined to distinguish between background and foreground automatically. Please refer to
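Purely for illustration, the following Python sketch shows one way such threshold-based separation could look, assuming a pre-computed background image and a hypothetical threshold value (it is not the exact decomposition procedure of the invention):

```python
import cv2
import numpy as np

# Hypothetical threshold on the per-pixel difference; the actual values
# would be pre-determined for the particular video.
PIXEL_DIFF_THRESHOLD = 30

def extract_foreground(frame, background, threshold=PIXEL_DIFF_THRESHOLD):
    """Return a binary foreground mask and the cut-out foreground pixels.

    `frame` and `background` are BGR images of the same size; pixels whose
    difference from the background exceeds the threshold are treated as
    foreground, so no manually drawn trimap is needed.
    """
    diff = cv2.absdiff(frame, background)          # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)  # single-channel difference
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                 # suppress isolated noise
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    return mask, foreground

# Example usage: estimate the background as the per-pixel median of sampled
# frames, then decompose every frame of the clip.
# frames = [...]                                   # list of BGR frames
# background = np.median(np.stack(frames), axis=0).astype(np.uint8)
# masks = [extract_foreground(f, background)[0] for f in frames]
```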
Subsequently, please refer to
After the complete event database is built, the foreground objects 153 that conform to the game scenario can be selected from the event database and displayed on the background scene 151 in sequence according to a motion detected by the motion detector, wherein the detected motion may comprise the motion of the user or the motion of the remote controller 19 operated by user 17. In one embodiment, because the decomposed foreground objects 153 vary in size, a normalizing procedure must be performed on the event database to adjust the position and size of each foreground object 153 on the background scene. For details of the database normalization technique, please refer to Taiwan Patent Nos. 98111457 and 98118748, and U.S. patent application Ser. Nos. 12/458,042 and 12/556,214, incorporated herein by reference; no further details are given here.
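The cited applications describe the actual normalization technique; purely as a simplified stand-in, a sketch of scaling a decomposed sprite and pasting it at a chosen anchor point on the background scene (with the scale and anchor as hypothetical parameters) might look as follows:

```python
import cv2

def paste_normalized(background, sprite, mask, anchor_xy, target_height):
    """Scale a foreground sprite to `target_height` pixels and paste it onto
    a copy of the background so that its bottom centre sits at `anchor_xy`
    (assumed to lie far enough inside the frame for the sprite to fit).

    This is only a simplified stand-in for the database normalization
    described in the cited applications.
    """
    scale = target_height / sprite.shape[0]
    size = (max(1, int(sprite.shape[1] * scale)), target_height)   # (width, height)
    sprite_s = cv2.resize(sprite, size, interpolation=cv2.INTER_LINEAR)
    mask_s = cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)

    out = background.copy()
    x, y = anchor_xy                       # desired bottom centre on the court
    h, w = sprite_s.shape[:2]
    x0, y0 = x - w // 2, y - h             # top-left corner of the pasted sprite
    region = out[y0:y0 + h, x0:x0 + w]
    region[mask_s > 0] = sprite_s[mask_s > 0]
    return out
```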
In order to interact with the video 15 realistically, the present invention further analyzes and records the moving direction and speed of each foreground object 153. Taking a tennis game video as an example, the hitting properties of a player are derived from statistics on the game video. Therefore, the forehand hitting strength, backhand hitting strength, or hitting degree of a player is analyzed and gathered from the hitting sound volume or the motion speed of the ball in the video to generate a strength chart. As shown in
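As a loose sketch of how such statistics might be gathered, the snippet below estimates a per-hit strength from the ball's frame-to-frame displacement and averages it per stroke side; the data structures (`track`, `hits`, `side`) are assumptions made for illustration, not the analysis actually used in the invention:

```python
import numpy as np

def hit_strength_from_ball(track, fps):
    """Estimate hit strength from ball positions right after a hit.

    `track` is a list of (x, y) ball positions (in pixels) in consecutive
    frames following the hit; the peak speed in pixels per second serves as
    a simple proxy for hitting strength.
    """
    pts = np.asarray(track, dtype=float)
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) * fps
    return float(speeds.max()) if len(speeds) else 0.0

def build_strength_chart(hits, fps=30):
    """`hits` is a list of dicts such as {"side": "forehand", "track": [...]}.

    Returns the mean strength per stroke side, i.e. a rudimentary strength
    chart; hit sound volume could be used as an alternative measurement.
    """
    chart = {}
    for hit in hits:
        chart.setdefault(hit["side"], []).append(hit_strength_from_ball(hit["track"], fps))
    return {side: float(np.mean(vals)) for side, vals in chart.items()}
```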
It is worth mentioning that, in addition to hitting strengths and directions, properties of the player (foreground object 153) in the video 15 such as the moving path, hitting motion, or hitting behavior can be analyzed from the game video content. When interacting with the video, the adaptive foreground objects 153 are selected for display according to the previously analyzed properties of the foreground objects 153. Therefore, in addition to a vivid visual effect, the action of the avatar may be adjusted slightly according to the behavior of the real player, which further improves realism. Moreover, after the sport properties of various players have been analyzed, two players can be controlled to compete with each other, and the victory or defeat of the players can be forecast according to their properties.
Please refer to
In one preferred embodiment, the moving directions of the foreground objects 153 can be simplified in advance. For example, the 360 degrees of possible moving directions are segmented into small buckets of 30 degrees each, which limits the moving directions to 12 and therefore reduces time and system resources. Furthermore, the motions of successive foreground objects 153 within a single sequent frame clip are smooth and coherent, but the motions of foreground objects 153 across different sequent frame clips are not. Therefore, the fewer the sequent frame clips that are close to path AB, the better; in other words, the more foreground objects 153 there are between relay point P and beginning position A, the better. In this case, the least number of sequent frame clips is connected to form the moving path from beginning position A to destination position B, which improves the motion smoothness of the foreground objects 153.
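One possible way to implement the 30-degree bucketing, together with a greedy preference for as few clips as possible along path AB, is sketched below; the clip data structure and the scoring rule are assumptions made for illustration only:

```python
import math

def direction_bucket(dx, dy, bucket_deg=30):
    """Quantize a moving direction into one of 360 / bucket_deg buckets
    (12 buckets of 30 degrees each by default)."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return int(angle // bucket_deg)

def choose_clips(clips, start, goal, max_clips=3, tolerance=10.0):
    """Greedy sketch: from `clips` (each a dict with 'start' and 'end'
    positions), repeatedly pick the clip that begins nearest the current
    position and ends closest to the goal, so that as few clips as possible
    cover the path from A to B."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    path, pos = [], start
    for _ in range(max_clips):
        best = min(clips, key=lambda c: dist(c["start"], pos) + dist(c["end"], goal))
        path.append(best)
        pos = best["end"]
        if dist(pos, goal) < tolerance:    # close enough to the destination (pixels)
            break
    return path
```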
In the present invention, the adaptive foreground objects 153 are selected and displayed on the background scene 151 according to the detected motion, such as the motion of the user or the motion of the remote controller 19 operated by user 17. When user 17 changes the current state of motion, the corresponding foreground objects 153 are selected from the adaptive event database for display. Specifically, when user 17 serves, the adaptive foreground objects 153 are selected from the serving database 41 for display; when user 17 hits, the adaptive foreground objects 153 are selected from the hit database 45; and so on. Owing to the human behavior model and the corresponding event databases, the frames can be rendered smoothly when the player changes states.
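Assuming the event databases are simply keyed collections of frame clips and the detected motion has already been classified into a state, this state-driven lookup might be sketched as follows (the attribute names are hypothetical):

```python
# Hypothetical event databases keyed by player state; each value is a list of
# pre-decomposed sequent frame clips of foreground objects.
event_databases = {
    "serve":   [],   # e.g. the serving database 41
    "standby": [],
    "hit":     [],   # e.g. the hit database 45
    "move":    [],   # e.g. the moving database 47
}

def select_clip(state, detected_motion, databases=event_databases):
    """Pick the clip from the database of the current state whose recorded
    motion attributes best match the detected motion, here assumed to be a
    dict such as {"direction_bucket": 3, "strength": 0.7}."""
    candidates = databases.get(state, [])
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda clip: abs(clip["strength"] - detected_motion["strength"])
        + abs(clip["direction_bucket"] - detected_motion["direction_bucket"]),
    )
```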
In one specific embodiment, each foreground object 153 has a texture feature and a shape feature, which indicate the color and shape information of the foreground object 153, respectively. For a suitable connection, the next foreground object 153 in the successive clip should be visually similar to the current clip according to the texture and shape features. However, when the avatar in the frame changes its state, e.g., from the standby state to the hit state, the selected foreground objects 153 may not be similar enough to the current foreground objects 153, and the rendering result will look odd if the two frame clips are cascaded directly. To make the transition smooth, the present invention proposes inserting at least one smooth frame between the two cascading frame clips. The transition frames are calculated from the current clip and the selected clip with consideration of the smoothness of shape, color, and motion. Please refer to
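The invention computes the transition frames with shape, color, and motion smoothness in mind; a far simpler stand-in, a linear cross-fade between the last frame of the current clip and the first frame of the selected clip, at least illustrates where the smooth frames are inserted:

```python
import numpy as np

def frames_too_different(frame_a, frame_b, threshold=25.0):
    """Decide whether smoothing is needed, e.g. when the mean absolute pixel
    difference exceeds a default threshold value (hypothetical here)."""
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))
    return float(diff.mean()) > threshold

def transition_frames(frame_a, frame_b, n=4):
    """Generate `n` intermediate frames that fade from `frame_a` (last frame
    of the current clip) to `frame_b` (first frame of the selected clip).
    A plain cross-fade is used only as an illustration."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)                    # interpolation weight in (0, 1)
        frames.append(((1.0 - t) * a + t * b).astype(np.uint8))
    return frames
```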
A game system includes not only the rendering of the foreground objects 153 but also the rendering of a vivid background scene 151. 3D scenes can be rendered from a 2D image after a user manually labels the 3D structure of the image. The present invention further provides an image producing technique, which combines 3D rendering to make the background scene 151 vivid from various viewing angles, to render the virtual 2D background scene 151 from the 3D structure at any viewing angle. Please refer to
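For a predominantly planar background such as a tennis court, one common way to re-render a labeled 2D image from a different viewing angle is a perspective warp; the sketch below (with hypothetical corner correspondences taken from the labeled structure) illustrates that kind of view change and is not the exact rendering technique of the invention:

```python
import cv2
import numpy as np

def rerender_court(background, src_corners, dst_corners, out_size):
    """Warp a labeled planar background (e.g. the court plane) so that its
    four labeled corners `src_corners` map to `dst_corners` in the new view.

    `src_corners` and `dst_corners` are lists of four (x, y) points; the
    correspondence would come from the manually labeled 3D structure.
    `out_size` is the (width, height) of the rendered view.
    """
    H, _ = cv2.findHomography(np.float32(src_corners), np.float32(dst_corners))
    return cv2.warpPerspective(background, H, out_size)
```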
Finally, please refer to
First, a video-content-decomposition procedure is executed at the transmitting end 11 to decompose the video into a background scene 151 and a plurality of foreground objects 153, which are transmitted to the receiving end 13 in sequence in step S801. During transmission, the states of the decomposed foreground objects 153 are determined at the transmitting end 11, and the receiving end 13 stores them into the corresponding event databases according to the states of the foreground objects 153 in step S803. Then, in step S805, the transmitting end 11 generates the strength chart by gathering the hitting strengths and directions of the foreground objects 153 according to the hitting sound volume or the motion speed of the ball in the video 15. In one specific embodiment, the video-content-decomposition procedure may instead be executed by the receiving end 13 after the whole video 15 has been transmitted from the transmitting end 11; in that case, the states of the decomposed foreground objects 153 may be determined and classified by the receiving end 13, and the strength chart may be generated by the receiving end 13 by analyzing each foreground object 153.
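In the same illustrative spirit as the earlier sketches, step S803 amounts to routing each decomposed foreground object into the event database that matches its determined state; a minimal, hypothetical version of that bookkeeping is:

```python
def classify_into_databases(foreground_objects, databases=None):
    """Store each decomposed foreground object (assumed to be a dict carrying
    a 'state' key such as "serve", "standby", "hit", or "move") in the event
    database for that state, as in step S803."""
    databases = {} if databases is None else databases
    for obj in foreground_objects:
        databases.setdefault(obj["state"], []).append(obj)
    return databases

# Example with made-up objects:
# dbs = classify_into_databases([{"state": "serve", "clip": 0},
#                                {"state": "hit", "clip": 1}])
# dbs["serve"] and dbs["hit"] then play the roles of the serving database 41
# and the hit database 45.
```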
In step S807, when the whole video 15 has been received and the databases and related information have been built, the user interacts with the video 15 according to the motion detected by the motion detector; for example, the user operates the remote controller 19 to interact with the video 15 and starts to play the game in step S809. At the beginning of the game, a foreground object 153 is selected from the serving database 41 and displayed on the background scene 151 (the tennis court). Then, in step S811, adaptive foreground objects 153 are selected from the event databases according to the detected motion, such as the motion of the user or the operating instructions given by waving or pressing the remote controller 19. For example, when user 17 wants to hit the tennis ball back, the specific foreground object 153 corresponding to the hitting degree is found in the hit database 45 and displayed, wherein the moving direction and speed of the tennis ball are controlled according to the waving strength of user 17 or the strength chart.
Because the decomposed foreground objects 153 vary in size, the selected foreground object 153 must undergo a normalizing procedure to adjust its position and size on the background scene 151 in step S813 before the normalized foreground object 153 is displayed in step S815.
Moreover, in step S817, it is determined whether the current foreground object 153 changes into the moving state according to the detected motion, such as the motion of the user or the operation of the remote controller 19. If not, the adaptive foreground objects 153 are still selected from the event databases according to the detected motion or the operating instructions of the remote controller 19. If the state of the current foreground object 153 is to change into the moving state, a moving path closing procedure is executed in step S819 to find the sequent frame clips of foreground objects 153 that are close to the moving path. Specifically, the beginning position where the foreground object 153 is currently located is detected, and the destination position to which the foreground object 153 is to move is forecast. Then, at least one sequent frame clip that is close to the path from the beginning position to the destination position is found in the moving database 47, and the found sequent frame clips of the foreground objects 153 are displayed sequentially.
In step S821, it is determined whether the difference between the currently displayed foreground object 153 and the next one is too large, e.g., more than a default threshold value. If not, step S811 is processed again. If the difference between the two foreground objects 153 is too large, a smoothing procedure is executed in step S823 to interpolate at least one smooth frame before the next displayed frame clip of the foreground object.
Finally, in step S825, it is determined whether the user 17 wants to end the game. If so, this round of the interactive game is finished; if not, step S811 is processed again.
Note that the determination of the difference and the related process (steps S821-S823) may, but need not, be executed before the moving path closing procedure (steps S817-S819).
According to the above embodiments, the method for interacting with video content provided in the present invention decomposes the video content and displays the adaptive foreground objects in sequence according to the video scenario, which makes it possible to interact with the video content and gives the video viewer greater enjoyment in game watching. Furthermore, the present invention utilizes threshold values to distinguish between background and foreground automatically, which avoids outlining each foreground object with human assistance, thereby simplifying the image processing.
In view of the foregoing, the present invention further provides a game simulation system to forecast victory or defeat. Please refer to
Similarly, the processor unit may decompose at least one game video of another contestant into a plurality of foreground objects 153 and classify the movement categories of that contestant for storage in the database. Moreover, the game may comprise a one-to-one competition (such as tennis, table tennis, pugilism, or taekwondo), a many-to-many competition (such as doubles tennis, basketball, or soccer), or a one-to-many competition (such as baseball).
Taking a combat competition such as pugilism or taekwondo as an example, the foreground objects 153 comprise the contestants who fight each other, and the movement of the foreground objects 153 comprises the moving directions, moving speeds, or moving distances of the contestants. Moreover, the processor unit gathers these moving directions, moving speeds, or moving distances of the foreground objects 153 to generate the corresponding strength charts of the contestants, thereby determining the attack strength of each contestant. The above movement categories comprise a punching state, a kicking state, a defense state, and a moving state, which are stored in the corresponding event databases.
Taking taekwondo as an example, the processor unit decomposes one (or more) taekwondo competition videos of a contestant A into a plurality of foreground objects, and generates the movement categories (punching state, kicking state, defense state, and moving state) of contestant A according to the movement of the foreground objects (the moving directions, moving speeds, or moving distances of contestant A). All the states are stored in the database, and each state is stored in the corresponding event database. For example, the processor unit analyzes the moving directions, moving speeds, or moving distances of the fists of contestant A, obtains a plurality of punching states, and stores them in a punching database. Naturally, the database further comprises a kicking database, a defense database, and a moving database. Similarly, the processor unit can also analyze one (or more) taekwondo competition videos of another contestant B and generate the various movement categories. Then, the processor unit simulates the result of a competition between contestants A and B according to their movement categories (punching state, kicking state, defense state, and moving state).
Taking the tennis game as an example again, the processor unit decomposes one (or more) tennis game videos of the contestants (players) A and B into a plurality of foreground objects, and generates the movement categories (serving state, standby state, hit state, and moving state) of contestants A and B according to the movement of the foreground objects (the moving directions, moving speeds, or moving distances of contestants A and B, and of the tennis ball), respectively. Similarly, the various states are stored in the corresponding event databases. The processor unit further generates the strength charts of contestants A and B according to the movement of the foreground objects, and determines the serving or hitting strength according to the movement categories of contestants A and B, thereby simulating the result of a competition between them.
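As a loose illustration of how a processor unit might turn two contestants' gathered strength statistics into a simulated outcome, the following Monte Carlo sketch compares randomly drawn strengths per movement category; the data format and scoring rule are entirely hypothetical and are not the simulation procedure of the invention:

```python
import random

def simulate_match(chart_a, chart_b, points=100, seed=0):
    """`chart_a` and `chart_b` map movement categories (e.g. "serve", "hit",
    "punch", "kick") to (mean, std) strength statistics gathered from the
    game videos. Each exchange is won by the contestant who draws the higher
    strength value; the simulated point totals are returned.
    """
    rng = random.Random(seed)
    score = {"A": 0, "B": 0}
    categories = sorted(set(chart_a) & set(chart_b))
    for _ in range(points):
        cat = rng.choice(categories)       # movement category used in this exchange
        a = rng.gauss(*chart_a[cat])
        b = rng.gauss(*chart_b[cat])
        score["A" if a >= b else "B"] += 1
    return score

# Example with made-up statistics for contestants A and B:
# result = simulate_match({"serve": (8.0, 1.5), "hit": (6.5, 1.0)},
#                         {"serve": (7.0, 1.2), "hit": (7.5, 0.9)})
```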
For a conventional game, many game development engineers are required to build or render the players and scenes of the game, which consumes a great deal of time and cost. In contrast, the method for interacting with a video and the game simulation system disclosed in the present invention can generate a game from any video. That is, after a video is displayed, a game can be generated from the video content according to the method provided in the present invention, and the viewer can play the newest game at once without spending a great deal of time or cost on building 3D models or rendering the stadium. This brand-new game generation method is one that current game manufacturers are far from being able to match. The present invention not only saves considerable cost but also provides greater enjoyment.
Although specific embodiments have been illustrated and described, it will be appreciated by those skilled in the art that various modifications may be made without departing from the scope of the present invention, which is intended to be limited solely by the appended claims.
Number | Date | Country | Kind |
---|---|---|---
099118563 | Jun 2010 | TW | national |