Claims
- 1. A method for dynamic embedding of objects into context suitable locations in digital media items to form versions of said digital media items, said method comprising the steps of:
finding context suitable locations in said digital media item; dynamically selecting a subset of said objects for at least one of said locations; and embedding one object of said subset of objects in said at least one location, thereby to form a version of said digital media item.
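The following is a minimal, purely illustrative sketch of the method of claim 1; the type names (Location, EmbeddableObject) and the caller-supplied select/embed functions are assumptions introduced here, not part of the specification.

```python
# A minimal sketch, assuming hypothetical Location/EmbeddableObject types and
# caller-supplied select/embed functions; illustrative only.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Location:
    start_frame: int                      # location in time (0 for a still image)
    end_frame: int
    region: Tuple[int, int, int, int]     # geometric location: x, y, width, height

@dataclass
class EmbeddableObject:
    name: str
    asset: object                         # e.g. an image, a 3-D model, or a text string

def embed_dynamic(media, locations, candidates, select, embed):
    """Form one version of the media item: pick an object per location and embed it."""
    version = media
    for loc in locations:
        chosen = select(loc, candidates)       # dynamic, context-aware selection
        version = embed(version, loc, chosen)  # composite the chosen object in place
    return version
```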
- 2. A method according to claim 1, wherein said object comprises a visual representation of at least one of the following:
a product; a logo; the name of a product; the name of a company; a text related to a product; a text related to a company; an object related to a company; and an object related to a product.
- 3. A method according to claim 1, wherein said digital media item is video.
- 4. A method according to claim 1, wherein said digital media item is a still image.
- 5. A method according to claim 3, wherein said location includes a location in time and a geometric location, said location in time being a subset of the duration of said video.
- 6. A method according to claim 1, wherein at least one of said locations comprises an existing object within said digital media item, and wherein said object being embedded replaces said existing object.
- 7. A method according to claim 1, wherein at least one of said objects is a void object designed to be used when no other object is selected for said location.
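A tiny illustration of the void/null object of claims 7, 26 and 43, assuming a hypothetical neutral filler asset:

```python
# Hedged sketch: the filler asset name is a placeholder assumption.
NULL_OBJECT = {"name": "null", "asset": "neutral_patch.png"}

def select_or_null(select, location, candidates):
    """Fall back to the void/null object when no other object is selected."""
    chosen = select(location, candidates)
    return chosen if chosen is not None else NULL_OBJECT
```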
- 8. A method according to claim 1, wherein said object selection is additionally for representing information.
- 9. A method according to claim 8, wherein said information comprises forensic information.
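One way the selection pattern of claims 8-9 could carry forensic information is to treat the choice made at each location as a digit of an identifier; the base-n packing below is an assumption made for illustration, not the patent's encoding.

```python
def select_for_forensics(locations, candidates, forensic_id: int):
    """Choose one candidate per location so the sequence of choices encodes an ID."""
    n = len(candidates)
    choices, value = [], forensic_id
    for loc in locations:
        index = value % n            # next base-n digit of the identifier
        value //= n
        choices.append((loc, candidates[index]))
    return choices

def recover_forensic_id(observed_indices, n_candidates: int) -> int:
    """Read the base-n digits back out of a distributed copy."""
    value = 0
    for index in reversed(observed_indices):
        value = value * n_candidates + index
    return value
```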
- 10. A method according to claim 1, wherein said digital media item is any one of a group comprising: video, a still image, audio, and written media.
- 11. A method according to claim 1, wherein said selection is based on information about at least one of the following:
viewers of said digital media item; preferences of viewers of said digital media item; preferences of advertisers; demographics of the viewers of said digital media item; subject-matter of said digital media item; atmosphere induced by said digital media item; content of said objects to be embedded; interests of viewers of said digital media item; and products being the subject of said objects to be embedded.
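A sketch of the viewer- and advertiser-aware selection of claims 11-12; the profile fields, scoring weights, and bid lookup are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AdObject:
    name: str
    category: str                         # e.g. "beverage", "footwear"

@dataclass
class ViewerProfile:
    demographics: Dict[str, str] = field(default_factory=dict)
    interests: List[str] = field(default_factory=list)

def score(obj: AdObject, expected_category: str, viewer: ViewerProfile, bid: float) -> float:
    """Weighted mix of contextual fit, viewer interest, and advertiser preference."""
    context_fit = 1.0 if obj.category == expected_category else 0.0
    interest = 1.0 if obj.category in viewer.interests else 0.0
    return 2.0 * context_fit + 1.0 * interest + 0.1 * bid

def select_best(candidates, expected_category, viewer, bids):
    return max(candidates,
               key=lambda o: score(o, expected_category, viewer, bids.get(o.name, 0.0)))
```

In practice the profiles and bids could be stored in a database, as claim 12 recites.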
- 12. A method according to claim 11, wherein at least some of said information is stored in a database.
- 13. A method according to claim 1, further comprising preparing alternative versions of said digital media item by making alternative object selections for each version.
- 14. A method according to claim 1, wherein said digital media item is arranged in layers, and wherein at least some of said objects are embedded in separate layers.
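Claims 13-14 can be pictured as rendering the base content once and compositing per-version object layers over it; the composite helper below is hypothetical.

```python
def build_versions(base_layer, object_layer_sets, composite):
    """base_layer: the media rendered once; object_layer_sets: one list of object
    layers per desired version; composite: hypothetical layer-stacking helper."""
    return [composite(base_layer, layers) for layers in object_layer_sets]
```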
- 15. A method according to claim 1, wherein said digital media item is at least partially generated by computer based on a scene representation, wherein at least some of said objects to be embedded are also based on a scene representation, and wherein at least some of said generating is done after the embedding of said objects.
- 16. A method according to claim 15, wherein said scene representation comprises at least one of the following:
a three dimensional scene representation; an object based representation; an object based representation which further comprises interaction between objects; and an object based representation which further comprises physical interaction between objects.
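An assumed object-based scene representation for claims 15-16: the embeddable object is spliced into the scene graph before rendering, so lighting, occlusion, and interaction come from the renderer itself. The SceneNode fields and placeholder convention are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneNode:
    name: str
    mesh: object = None                               # 3-D geometry, if any
    transform: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    children: List["SceneNode"] = field(default_factory=list)

def embed_in_scene(scene_root: SceneNode, placeholder_name: str, product_node: SceneNode) -> bool:
    """Replace a placeholder node with the object's own scene representation."""
    for i, child in enumerate(scene_root.children):
        if child.name == placeholder_name:
            product_node.transform = child.transform  # keep the placeholder's pose
            scene_root.children[i] = product_node
            return True
        if embed_in_scene(child, placeholder_name, product_node):
            return True
    return False
# Frame generation (rendering) is then performed on the modified scene.
```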
- 17. A method according to claim 1 wherein said method further comprises the steps of:
analyzing said digital media item; locating at least one replaceable object in said digital media item; selecting at least one of said replaceable objects; and embedding said objects to be embedded by replacing at least some of said replaceable objects with said objects to be embedded.
- 18. A method according to claim 17, wherein said analyzing comprises analyzing at least one of the following properties:
lighting; shading; texture; object orientation and location; relative object location; object movement; frame panning; frame zooming; frame rotation; refraction; transparency; focus; and reflection, and wherein the step of embedding said objects to be embedded by replacing said replaceable objects is done in a manner optimizing the retention of at least some of said properties, thereby to enhance the realism of the produced digital media item.
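A crude stand-in for the property-retention idea of claim 18: match the embedded patch's per-channel brightness statistics to the region it replaces. Real analysis of lighting, motion, focus, and so on would be far richer; this only illustrates the principle and assumes 8-bit, 3-channel arrays.

```python
import numpy as np

def match_region_statistics(new_patch: np.ndarray, original_region: np.ndarray) -> np.ndarray:
    """Scale/shift each colour channel of the new patch to the original region's mean and std."""
    out = new_patch.astype(np.float32)
    for c in range(out.shape[2]):
        src_mean, src_std = out[..., c].mean(), out[..., c].std() + 1e-6
        dst_mean, dst_std = original_region[..., c].mean(), original_region[..., c].std()
        out[..., c] = (out[..., c] - src_mean) / src_std * dst_std + dst_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```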
- 19. A method according to claim 17, wherein at least one of said objects to be embedded is represented by a three dimensional model.
- 20. A system for dynamic embedding of at least one embeddable object in digital media, wherein said embeddable object is to be embedded in a manner designed to be perceived as an integral part of said digital media and thereby to form different versions of said digital media, said system comprising:
a locator operable to find at least one location in said digital media contextwise suitable for objects of a group to which said embeddable object belongs; a selector operable to dynamically select an object from said group; and an embedding mechanism operable to embed said selected object in said contextwise suitable location.
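The system of claim 20 can be read as three cooperating components; the abstract interfaces below are an assumed decomposition, not code from the specification.

```python
from abc import ABC, abstractmethod
from typing import List

class Locator(ABC):
    @abstractmethod
    def find_locations(self, media) -> List[object]: ...

class Selector(ABC):
    @abstractmethod
    def select(self, location, candidates): ...

class EmbeddingMechanism(ABC):
    @abstractmethod
    def embed(self, media, location, obj): ...

class DynamicEmbeddingSystem:
    def __init__(self, locator: Locator, selector: Selector, embedder: EmbeddingMechanism):
        self.locator, self.selector, self.embedder = locator, selector, embedder

    def produce_version(self, media, candidates):
        version = media
        for loc in self.locator.find_locations(media):
            version = self.embedder.embed(version, loc, self.selector.select(loc, candidates))
        return version
```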
- 21. A system according to claim 20, wherein said embeddable object comprises a visual representation of at least one of the following:
a product; a logo; the name of a product; the name of a company; a text related to a product; a text related to a company; an object related to a company; and an object related to a product.
- 22. A system according to claim 20, wherein said digital media is video.
- 23. A system according to claim 20, wherein said digital media is a still image.
- 24. A system according to claim 22, wherein said location includes a location in time and a geometric location, said location in time being a subset of the duration of said video content.
- 25. A system according to claim 20, wherein at least one of said locations comprises an existing object in said digital media, and wherein said embeddable object, when embedded in said location, replaces said existing object.
- 26. A system according to claim 20, wherein at least one of said embeddable objects is a null object designed to be used when no other object is selected for said location.
- 27. A system according to claim 20, wherein said selector is further operable to make said selection to represent information.
- 28. A system according to claim 27, wherein said information comprises forensic information.
- 29. A system according to claim 20, wherein the selection done by said selector is based on information about at least one of the following:
viewers of said digital media; preferences of viewers of said digital media; preferences of advertisers; demographics of viewers of said digital media; subject-matter of said digital media; content of said digital media; atmosphere induced by said digital media; content of said embeddable objects; interests of viewers of said digital media; and products which are the subjects of said embeddable objects.
- 30. A system according to claim 29, further comprising a database operable to store at least some of said information.
- 31. A system according to claim 20, operable to prepare several versions of at least part of said digital media by embedding different ones of said embeddable objects.
- 32. A system according to claim 20, wherein said digital media comprises layers, and wherein at least some of said embeddable objects are embedded in separate layers.
- 33. A system according to claim 20, wherein said digital media is at least partially generated by computer based on a scene representation, wherein at least some of said embeddable objects are also based on a scene representation, wherein said digital media is created by generating said scene representation, and wherein at least some of said generating is done after the embedding of said embeddable objects by said embedding mechanism.
- 34. A system according to claim 33, wherein said scene representation comprises at least one of the following:
a three dimensional scene representation; an object based representation; an object based representation which further comprises interaction between objects; and an object based representation which further comprises physical interaction between objects.
- 35. A system according to claim 20, further comprising an analyzer for analyzing said digital media, wherein said locator is operable to locate at least one replaceable object in said digital media based on information provided by said analyzer, wherein said selector is operable to select at least one located replaceable object in said digital media, and wherein said embedding mechanism is operable to embed said embeddable object by replacing at least some of said replaceable objects with said embeddable object.
- 36. A system according to claim 35, wherein said analyzer is operable to analyze for at least one of the following properties:
lighting; shading; texture; object orientation and location; relative object location; object movement; frame panning; frame zooming; frame rotation; refraction; transparency; focus; and reflection, and wherein the embedding by said embedding mechanism is done in a manner optimizing the retention of at least some of said properties, thereby to enhance realism of the embedding.
- 37. A system according to claim 35, wherein at least one of said embeddable objects is represented by a three dimensional model.
- 38. A method for dynamic embedding of at least one embeddable object into verbal digital media content, wherein said embeddable object is embedded in a manner designed to be perceived as an integral part of said verbal content, said method comprising the steps of:
finding at least one location in said verbal content which is contextwise associable with a type of object; dynamically selecting an embeddable object being of said type; and embedding said selected embeddable object in said contextwise associable location.
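For textual verbal content (claims 38-41), one illustrative approach is to swap a generic, contextwise associable term for a dynamically selected name; the candidate table and brand names below are invented for the example.

```python
import re

CANDIDATES = {"soft drink": ["BrandA Cola", "BrandB Soda"]}   # hypothetical names

def embed_in_text(text: str, select) -> str:
    """Replace the first occurrence of each generic term with a selected object."""
    for generic, brands in CANDIDATES.items():
        chosen = select(generic, brands)          # dynamic selection per placement
        text = re.sub(rf"\b{re.escape(generic)}\b", chosen, text, count=1)
    return text

# Example: embed_in_text("He ordered a soft drink.", lambda g, b: b[0])
#          -> "He ordered a BrandA Cola."
```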
- 39. A method according to claim 38, wherein said embeddable object comprises a verbal representation of at least one of the following:
the name of a product; the name of a company; a text related to a product; and a text related to a company.
- 40. A method according to claim 38, wherein said verbal content is audio content.
- 41. A method according to claim 38, wherein said verbal content is textual content.
- 42. A method according to claim 38, wherein at least one of said locations comprises an existing object within said verbal content, and wherein said embeddable object, when embedded in said location, replaces said existing object.
- 43. A method according to claim 38, wherein at least one of said embeddable objects is a null object designed to be used when no other embeddable object is selected for said location.
- 44. A method according to claim 38, wherein said selecting additionally serves to represent information.
- 45. A method according to claim 44, wherein said information comprises forensic information.
- 46. A method according to claim 38, wherein said selection is based on information about at least one of the following:
users of said verbal content; preferences of the users of said verbal content; preferences of advertisers; demographics of users of said verbal content; subject-matter of said verbal content; atmosphere induced by said verbal content; content of said embeddable objects; interests of users of said verbal content; and products represented by said embeddable objects.
- 47. A method according to claim 46, wherein at least some of said information is stored in a database.
- 48. A method according to claim 38, wherein several versions of at least part of said verbal content are prepared by embedding different objects therein.
- 49. A method according to claim 38, wherein said verbal content comprises layers, and wherein at least some of said objects are embedded in separate layers.
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to and claims priority from U.S. Provisional Patent Application No. 60/308,816, filed Aug. 1, 2001, the contents of which are hereby incorporated by reference.
Provisional Applications (1)

| Number   | Date     | Country |
| -------- | -------- | ------- |
| 60308816 | Aug 2001 | US      |