In data-searching systems preceding the Web, and on the Web since its inception, search engines have employed a variety of tools to aid in organizing and presenting advertisements in tandem with search results and other online content, such as digital images and streaming video. These tools are also leveraged to optimize the revenue received by the search engine, where optimizing revenue may be facilitated by selecting advertisements that are relevant to a user and by placing the selected advertisements in a noticeable location. In addition, companies that advertise strive to develop advertisements that are attention-capturing, frequently selected by the search engines for display, and, upon being displayed, readily perceived by users of those search engines. If these three objectives are achieved, a company is likely to be successful in selling a particular item or a particular service. For instance, an eye-catching advertisement placed in a top-center banner position on a web page will likely receive more attention from a user and, thus, likely generate more revenue for the search engine and the company, as opposed to a bland advertisement positioned in a lower portion of the web page. That is, because the advertisement is noticed by the user, the likelihood that the user will take action (e.g., visit a website of the advertiser) based on the advertisement is increased.
However, when presenting advertisements by employing the conventional techniques above, the number of advertisements that could potentially be displayed in a particular web page is unduly limited. That is, search engines have not leveraged all available portions of the web page and have been ineffective in optimizing advertising revenue from the companies that advertise. For instance, large regions of a display area within the web page may be occupied by digital videos or other animated graphics. However, because digital videos display moving objects and other active visual stimuli, search engines as well as companies that advertise are reluctant to place advertisements on top of the videos based on fears that the advertisement will not be noticed, or worse, will create an unwelcome distraction.
Accordingly, employing a process to track movement of objects within digital videos and to use the tracked movement to develop and place advertisements within the digital videos, such that the advertisements would appear to visually interact with the objects, would increase the number of opportunities to place an advertisement within a particular web page and would increase the likelihood that the user would notice the placed advertisements, thereby increasing the likelihood that the user will take action based on the advertisements.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Embodiments of the present invention generally relate to computer-readable media and computerized methods for identifying and tracking an object within video content of a media file (e.g., digital video) such that an awareness of characteristics of the video content is developed. This awareness can then be used for manipulating how and when an advertisement is overlaid on the video content. For instance, the advertisement may be manipulated to interact with the identified object.
The step of developing the awareness of video-content characteristics is carried out by an offline authoring process. This offline authoring process is implemented to identify an object within the video content with which an advertisement will visually interact. Next, the identified object is tracked. Tracking may include the steps of targeting a patch within the object appearing in the video content of the media file and tracking the movement of the patch over a sequence of frames within the media file. As more fully discussed below, a “patch” generally refers to a prominent set of pixels within the object that exhibits an identifiable texture (e.g., an eye of a person or animal). Based on the tracked movement of the patch, locations of the patch within the sequence of frames are written to a trajectory. In an exemplary embodiment, a trajectory includes a list of patch locations, configured as X and Y coordinates, that are each associated with a particular frame in the sequence of frames.
Next, the step of manipulating how and when an advertisement is overlaid on the video content is performed by the online rendering process. Initially, the online rendering process is carried out upon initiating play of the media file. Accordingly, several steps are typically performed before the online rendering process is invoked, such as receiving a plurality of advertisements that are each designed with consideration of the trajectory and choosing one of the received advertisements for rendering based on a selection scheme (e.g., revenue optimizing, rotational, and the like). Upon choosing an advertisement and receiving an indication (e.g., user-initiated selection of a representation of the media file on a web page) to invoke the online rendering process, the online rendering process conducts the following procedures: generating an ad-overlay that accommodates a container to hold the video advertisement; positioning the container within the ad-overlay according to the trajectory; and inserting the chosen advertisement into the container. Accordingly, the ad-overlay is rendered on top of the video content when playing the media file such that the advertisement appears to visually interact with the object or other video content.
The present invention is described in detail below with reference to the attached drawing figures, wherein:
The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies.
Accordingly, in one embodiment, the present invention relates to computer-executable instructions, embodied on one or more computer-readable media, that perform a method for dynamically placing an advertisement on top of video content in a media file, based on movement of an object therein. Initially, the method involves performing an offline authoring process for generating a trajectory. Typically, the offline authoring process includes the steps of targeting a patch within the object appearing in the video content of the media file, tracking the movement of the patch over a sequence of frames within the media file, and, based on the tracked movement of the patch, writing locations of the patch within the sequence of frames to the trajectory. As used herein, the term “patch” is not meant to be limiting but may encompass any segment of the object that can be consistently identified within a predefined sequence of frames within the media file. For instance, the term patch may refer to a prominent set of pixels (e.g., eyes) within the object (e.g., bear) that exhibits an identifiable texture. See
Next, the method involves performing an online rendering process upon initiating play of the media file. Typically, the online rendering process includes the steps of automatically selecting the advertisement and, while the media file is playing, dynamically placing the selected advertisement on top of the video content as a function of the locations within the trajectory. Accordingly, the advertisement and media file are rendered in a synchronized manner such that the advertisement appears to visually interact with the object, or at least some portion of the video content.
In another embodiment, aspects of the present invention involve a computerized method, implemented at one or more processing units, for utilizing an awareness of video content within a media file to select and to place a video advertisement therein. In particular, the method includes abstracting one or more coordinate locations of an object appearing in the video content of the media file. As used herein, the term “object” is not meant to be limiting, but may encompass an expansive scope of items, elements, lines, points, figures, or other aspects of the video content being presented upon playing the media file. In some embodiments, the object represents a most impressive figure or item within the video content. In one exemplary instance, with reference to
Next, the computerized method continues with, at least temporarily, storing coordinate locations of the object on a trajectory. In one embodiment, the coordinate locations are stored in association with a sequence of frames comprising the media file. The trajectory is utilized to generate an ad-overlay that accommodates a container that holds the video advertisement. Typically, the container is positioned within the ad-overlay according to the coordinate locations stored in the trajectory. For instance, the container may be placed on top of the coordinate locations. By way of example, as discussed with reference to
Stated generally, the computerized method includes the steps of receiving the video advertisement, inserting the video advertisement into the container, and rendering the ad-overlay on top of the video content when playing the media file. As such, embodiments of the present invention provide for selection and presentation of advertisements, or the video advertisement. As utilized herein, the term “advertisement” or the phrase “video advertisement” is not meant to be limiting. For instance, advertisements could relate to a promotional communication between a seller offering goods or services to a prospective purchaser of such goods or services. In addition, the advertisement could contain any type or amount of data that is capable of being communicated for the purpose of generating interest in, or sale of, goods or services, such as text, animation, executable information, video, audio, and other various forms. By way of example, the advertisement may be configured as a digital image that is published within an advertisement space allocated within a UI display. In the instance described above, the UI display is rendered by a web browser or other application running on a client device. In an exemplary embodiment of video advertisements, the video advertisement may be specifically designed to visually interact with the object within the video content of the media file. The design of the video advertisement may be performed by an administrator associated with the web browser, a third-party advertising company, or any other entity capable of generating video content. Further, the design of the video advertisement may be based on the trajectory, the timestamps associated with locations of the object, a theme of the media file, an identity of the object, or any other useful criteria. Thus, the video advertisement may be developed in such a way as to visually interact with the video content when played.
In yet another embodiment, the present invention encompasses a computer system for abstracting information from the video content of a media file and for placing the advertisement within the video content to visually interact therewith. Typically, the abstracted information allows for developing the visually interactive advertisement, as discussed immediately above. In an exemplary embodiment, and as shown in
Having briefly described an overview of embodiments of the present invention and some of the features therein, an exemplary operating environment suitable for implementing the present invention is described below.
Referring to the drawings in general, and initially to
The invention may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Embodiments of the present invention may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Embodiments of the invention may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With continued reference to
Computing device 100 typically includes a variety of computer-readable media. By way of example, and not limitation, computer-readable media may comprise Random Access Memory (RAM); Read Only Memory (ROM); Electronically Erasable Programmable Read Only Memory (EEPROM); flash memory or other memory technologies; CDROM, digital versatile disks (DVDs), or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; or any other medium that can be used to encode desired information and be accessed by computing device 100.
Memory 112 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, nonremovable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 100 includes one or more processors that read data from various entities such as memory 112 or I/O components 120. Presentation component(s) 116 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc. I/O ports 118 allow computing device 100 to be logically coupled to other devices including I/O components 120, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
The system architecture for implementing the method of utilizing awareness of video content within a media file to select and place an advertisement will now be discussed with reference to
Typically, each of the first processing unit 210 and the second processing unit 220 includes, or is linked to, some form of a computing unit (e.g., central processing unit, microprocessor, etc.) to support operations of the component(s) running thereon. As utilized herein, the phrase “computing unit” generally refers to a dedicated computing device with processing power and storage memory, which supports operating software that underlies the execution of software, applications, and computer programs thereon. In one instance, the computing unit is configured with tangible hardware elements, or machines, that are integral, or operably coupled, to the first processing unit 210 and the second processing unit 220 in order to enable each device to perform communication-related processes and other operations (e.g., executing an offline authoring process 215 or an online rendering process 225). In another instance, the computing unit may encompass a processor (not shown) coupled to the computer-readable medium accommodated by each of the first processing unit 210 and the second processing unit 220.
Generally, the computer-readable medium includes physical memory that stores, at least temporarily, a plurality of computer software components that are executable by the processor. As utilized herein, the term “processor” is not meant to be limiting and may encompass any elements of the computing unit that act in a computational capacity. In such capacity, the processor may be configured as a tangible article that processes instructions. In an exemplary embodiment, processing may involve fetching, decoding/interpreting, executing, and writing back instructions.
Also, beyond processing instructions, the processor may transfer information to and from other resources that are integral to, or disposed on, the first processing unit 210 and the second processing unit 220. Generally, resources refer to software components or hardware mechanisms that enable the first processing unit 210 and the second processing unit 220 to perform a particular function. By way of example only, a resource accommodated by the first processing unit 210 includes a component to conduct the offline authoring process 215, while a resource accommodated by the second processing unit 220 includes a component to conduct the online rendering process 225.
In embodiments, the second processing unit 220 may be integral to a computer that has a monitor to serve as the display device 250. In these embodiments, the computer may include an input device (not shown). Generally, the input device is provided to receive input(s) affecting, among other things, a media file 205, such as invoking the play of its video content 290, or altering properties of the video content being surfaced at a graphical user interface (GUI) 260 display. Illustrative input devices include a mouse, joystick, key pad, microphone, I/O components 120 of
In embodiments, the display device 250 is configured to render and/or present the GUI 260 thereon. The display device 250, which is operably coupled to an output of the second processing unit 220, may be configured as any presentation component that is capable of presenting information to a user, such as a digital monitor, electronic display panel, touch-screen, analog set-top box, plasma screen, Braille pad, and the like. In one exemplary embodiment, the display device 250 is configured to present rich content, such as the advertisement 270 embedded within video content 290 and/or digital images. In another exemplary embodiment, the display device 250 is capable of rendering other forms of media (e.g., audio signals).
The data store 230 is generally configured to store information associated with the advertisement 270 and the media file 205 that may be selected for concurrent presentation. In various embodiments, such information may include, without limitation, the advertisement 270, the media file 205, a description file 255 to be passed to an ad-designing entity 240, a group of advertisements (being a compilation of advertisements developed specifically for presentation in tandem with the media file 205) associated with the media file 205, and a trajectory 265. In addition, the data store 230 may be configured to be searchable for suitable access to the stored advertisement 270 and the stored media file(s) 205. For instance, the data store 230 may be searchable for one or more of the advertisements within the group that are targeted toward interests of a user, relevant to the video content 290, and/or associated with the media file 205.
It will be understood and appreciated by those of ordinary skill in the art that the information stored in the data store 230 may be configurable and may include any information relevant to the storage of, access to, and retrieval of the advertisement 270 for placement within the video content 290 of the media file 205 and for rendering the integrated advertisement 270 and media file 205 on the GUI 260. The content and volume of such information are not intended to limit the scope of embodiments of the present invention in any way. Further, though illustrated as a single, independent component, the data store 230 may, in fact, be a plurality of databases, for instance, a database cluster, portions of which may reside on the first processing unit 210, the second processing unit 220, another external computing device (not shown), and/or any combination thereof.
This distributed computing environment 200 is but one example of a suitable environment that may be implemented to carry out aspects of the present invention and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the illustrated distributed computing environment 200 be interpreted as having any dependency or requirement relating to any one or combination of the devices 210, 220, and 250, the data store 230, and the components for carrying out the processes 215 and 225 as illustrated. In some embodiments, the components may be implemented as stand-alone devices. In other embodiments, one or more of the components may be integrated directly into the processing units 210 and 220. It will be appreciated and understood that the components for implementing the processes 215 and 225 are exemplary in nature and in number and should not be construed as limiting.
Accordingly, any number of components and devices may be employed to achieve the desired functionality within the scope of embodiments of the present invention. Although the various components and devices of
Further, the devices 210, 220, and 250, and the data store 230, of the exemplary system architecture may be interconnected by any method known in the relevant field. For instance, they may be operably coupled via a distributed computing environment that includes multiple computing devices coupled with one another via one or more networks (not shown). In embodiments, the networks may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. Accordingly, the network is not further described herein.
In operation, the components are designed to perform the offline authoring process 215 and the online rendering process 225. In embodiments, the offline authoring process 215 includes a plurality of discrete steps that may include the following: targeting a patch within an object appearing in the video content 290 of the media file 205; tracking movement of the patch over a sequence of frames within the media file 205; based on the tracked movement of the patch, abstracting coordinate locations of the patch within the video content 290; and writing locations of the patch within the sequence of frames to the trajectory 265.
Accordingly, various aspects of embodiments of the present invention involve abstracting information from the media file 205. By way of clarification, as used herein, the phrase "media file" is not meant to be construed as limiting, but may encompass any general structure for time-based multimedia, such as video and audio. Further, the media file 205 may be configured with any known file format (e.g., container formats, MP4, and 3GP) that facilitates interchange, management, editing, and presentation of the video content 290. The presentation may be local, via a network, or via another streaming delivery mechanism. For instance, the media file may be a digital video that is configured to play upon receiving a user-initiated selection (during an online computing session) directed thereto. Also, upon implementing the offline authoring process 215, the media file 205 may be accessed at a variety of storage locations. For instance, these storage locations may include local storage on the first processing unit 210, storage in the possession of a user (e.g., internal folders, CD memory, external flash drives, etc.), online space accommodated by remote web servers responsible for managing media, a networking site, or a public database for hosting a media collection.
Upon retrieving the media file 205, the offline authoring process 215 abstracts information from the media file 205 to generate a trajectory 265 and/or a description file 255. The “trajectory” 265 essentially serves as a vehicle to store the abstracted information in a logical format. By way of example, if the abstracted information includes locations of an object within the video content 290 of the media file 205, the trajectory may assume a form of an XML file that stores the locations as metadata. The trajectory 265 may be distinct from the media file 205, or may comprise data appended to the media file 205 such that media file 205 includes the abstracted information, yet the video content 290 remains unaltered.
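As an illustrative sketch of such a trajectory, assuming XML as the storage format (the element and attribute names below are hypothetical, not a defined schema), the abstracted locations could be serialized and read back as follows:

```python
import xml.etree.ElementTree as ET

# Hypothetical trajectory entries: (frame index, x, y) for the tracked object.
locations = [(0, 120, 80), (15, 140, 95), (30, 165, 110)]

# Serialize the locations as XML metadata, kept separate from the video
# content itself so the content remains unaltered.
root = ET.Element("trajectory")
for frame, x, y in locations:
    ET.SubElement(root, "location", frame=str(frame), x=str(x), y=str(y))
xml_text = ET.tostring(root, encoding="unicode")

# Reading the XML back recovers the same list of locations.
parsed = [(int(e.get("frame")), int(e.get("x")), int(e.get("y")))
          for e in ET.fromstring(xml_text)]
```

In this form the trajectory can travel alongside the media file, or be appended to it, without touching the encoded video frames.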
Although several configurations of the trajectory 265 have been described, it should be understood and appreciated by those of ordinary skill in the art that other types of suitable formats that can persist information abstracted from the media file 205 may be used, and that embodiments of the present invention are not limited to those types and formats of trajectories described herein. For instance, the trajectory 265 may include timestamps associated with each of the locations of the object abstracted from the media file 205, where the timestamps are potentially utilized for developing the advertisement 270 and for starting and stopping play of the advertisement 270 in a manner that synchronizes its presentation with that of the media file 205. Consequently, in this instance, the trajectory 265 persists a robust set of information for accurately describing a location and timing of the object's existence at the location within the media file 205.
One embodiment of abstracting information from the media file 205 is shown in
Initially, the sequence of frames 301, 302, 303, 311, and 312 is analyzed to find the object 320 within the video content 290. In embodiments, analyzing involves selecting key frames, shown at reference numerals 301, 302, and 303, and labeling them as such. Then locations 341, 343, and 345 of the object 320 within the key frames 301, 302, and 303, respectively, are manually ascertained. These locations 341, 343, and 345 may be retained in a listing of locations within the trajectory 265 and may be associated with their respective key frames 301, 302, and 303. As illustrated in
Next, a mechanism is applied to automatically interpolate movement of the object 320 on intermediate frames, shown at reference numerals 311 and 312, that are in-between the key frames 301, 302, and 303. In embodiments, the mechanism may comprise a video or vision computing algorithm (e.g., various research algorithms used to understand the video content 290 and recognize the object 320 therein) to review the locations 341, 343, and 345 of the object 320 in the key frames 301, 302, and 303 and to interpolate predicted locations 342 and 344 for the intermediate frames 311 and 312, respectively. Interpolation may be carried out by deducing a difference in location of the object 320 from one frame to the next, and identifying the predicted locations 342 and 344 within the difference, thereby linking the locations 341, 343, and 345 into a continuous movement pattern of the object 320. Accordingly, a semiautomatic procedure is conducted for accurately pulling locations 341, 342, 343, 344, and 345 from the video content 290. Advantageously, this semiautomatic procedure is scalable to accommodate abstracting accurate locations from large media files because it is not necessary to manually recognize and record a location of the object therein for each frame of a selected sequence of frames in which the advertisements will be placed.
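The interpolation step described above can be sketched as a simple linear deduction between manually labeled key-frame locations; a minimal illustration in Python, where the function name and data shapes are assumptions:

```python
def interpolate_locations(keyframes):
    """Linearly interpolate object locations for frames between key frames.

    keyframes: list of (frame_index, x, y) tuples, manually ascertained and
    sorted by frame index. Returns a dict mapping every frame index in the
    covered range to a predicted (x, y) location. This sketches only the
    interpolation; a vision algorithm would refine the predictions.
    """
    locations = {}
    for (f0, x0, y0), (f1, x1, y1) in zip(keyframes, keyframes[1:]):
        for f in range(f0, f1 + 1):
            t = (f - f0) / (f1 - f0)  # fraction of the way to the next key frame
            locations[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return locations

# Two key frames with one intermediate frame between them, as in the
# alternating arrangement described above.
locs = interpolate_locations([(0, 100.0, 50.0), (2, 110.0, 60.0)])
# locs[1] is the predicted location for the intermediate frame: (105.0, 55.0)
```

Because only the key frames are labeled by hand, the per-frame cost of building the trajectory stays flat as the media file grows, which is the scalability advantage noted above.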
Although only one intermediate frame is illustrated as existing in-between a preceding and a succeeding key frame, it should be understood and appreciated by those of ordinary skill in the art that other types of suitable orders and arrangements of manually analyzed and automatically analyzed frames may be used, and that embodiments of the present invention are not limited to those alternating key and intermediate frames as described herein. For instance, if the object 320 is difficult to automatically track or is taking a sporadic path of movement, there may be five intermediate frames in-between successive key frames. But, if the object 320 is easily tracked or is taking a predictable path of movement, there may be twenty intermediate frames in-between successive key frames.
In an exemplary embodiment, an additional algorithm is executed for automatically tuning the predicted locations generated by interpolation. The tuning process may involve automatically locating the object 320 using known characteristics of the object 320, such as shape, color, size, and predicted location at a particular frame. Further, known characteristics may include an identifiable texture associated with a patch on the object 320, as discussed more fully below with reference to
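One hedged illustration of such a tuning step, using plain sum-of-squared-differences template matching over a small search window around the interpolated prediction (the function name, pixel representation, and search radius are assumptions):

```python
def refine_location(frame, template, predicted, radius=3):
    """Refine a predicted patch location by template matching.

    frame: 2D list of grayscale pixel values; template: smaller 2D list
    representing the patch's identifiable texture; predicted: (row, col)
    of the template's top-left corner from interpolation. Searches a small
    window around the prediction and returns the position minimizing the
    sum of squared differences.
    """
    th, tw = len(template), len(template[0])
    best, best_cost = predicted, float("inf")
    pr, pc = predicted
    for r in range(max(0, pr - radius), min(len(frame) - th, pr + radius) + 1):
        for c in range(max(0, pc - radius), min(len(frame[0]) - tw, pc + radius) + 1):
            cost = sum((frame[r + i][c + j] - template[i][j]) ** 2
                       for i in range(th) for j in range(tw))
            if cost < best_cost:
                best, best_cost = (r, c), cost
    return best

# Tiny worked example: a bright 2x2 patch at row 4, col 5 of an 8x8 frame.
frame = [[0] * 8 for _ in range(8)]
for r in (4, 5):
    for c in (5, 6):
        frame[r][c] = 9
best = refine_location(frame, [[9, 9], [9, 9]], predicted=(3, 4))
# best == (4, 5): the prediction is pulled onto the actual patch
```

A production tracker would use a richer similarity measure and real image data, but the principle of snapping an interpolated guess onto the patch's known texture is the same.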
In both the tuning process and the manual process for identifying a location of the object 320 in a particular frame, characteristics of the object may be used. For instance, a shape or color of the object 320 may be known and applied to locate the object among other objects within the video content 290. In an exemplary embodiment, a patch may be used to assist in locating the object 320. The patch will now be discussed with reference to
As shown in
Further, the window of pixels 420 can be used to manually or automatically identify a vector 425 established by the window, or set, of pixels 420 that are designated as the patch 410. Typically, attributes of the identified vector 425 are maintained in the trajectory 265. These attributes may include a radial direction and origin. In operation, the attributes in the trajectory 265 are employed to render an advertisement at positions within the video content 290 that consistently intersect the identified vector 425.
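A minimal sketch of using the vector's retained attributes, an origin and a radial direction, to compute anchor points that consistently intersect the identified vector; the helper name and degree-based units are assumptions:

```python
import math

def point_on_vector(origin, angle_deg, distance):
    """Return a point at the given distance along a vector defined by an
    origin (x, y) and a radial direction in degrees, the two attributes
    described as being maintained in the trajectory. An advertisement
    anchored at such points will consistently intersect the vector."""
    rad = math.radians(angle_deg)
    return (origin[0] + distance * math.cos(rad),
            origin[1] + distance * math.sin(rad))

# Anchor an ad 50 pixels along a vector pointing straight right (0 degrees)
# from an origin at (100, 200).
x, y = point_on_vector((100.0, 200.0), 0.0, 50.0)
# (x, y) == (150.0, 200.0)
```

Varying the distance while keeping the origin and direction fixed keeps the rendered advertisement aligned with the feature that defines the vector, such as the line of the eyes in the example below.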
Often the vector 425 is based on a feature of the object 405 that naturally provides a linear subspace. For instance, as illustrated in
Returning to
Upon developing the description file 255, it may be passed to the ad-designing entity 240. In embodiments, the ad-designing entity 240 uses some or all information carried by the description file 255 to create the advertisement 270. By way of example, the creation of the advertisement 270 may be based on a concept of a bear in a stream, as illustrated in
As shown in
Further, the online rendering process 225 carries out a plurality of steps for placing the advertisement 270 on top of the video content 290. Initially, the trigger for implementing the online rendering process 225 involves a user selection of a representation of the media file 205. This user selection may involve a user-initiated click action directed toward a uniform resource locator (URL) linked to the media file 205. Or, the user selection may involve launching a web browser that is configured to present the media file 205. In yet other embodiments, the user selection involves receiving an indication that a user-initiated selection occurred with respect to a visual representation of the advertisement 270.
Incident to invoking the online rendering process 225, a variety of steps are performed to manage presentation of the advertisement 270 by incorporating or inserting the advertisement within the video content 290. Typically, some or all aspects of incorporating the advertisement 270 are performed on a real-time basis as the video content 290 is streaming to the second processing unit 220. The advertisement 270 incorporated into the streaming video content 290 is represented by reference numeral 275, which is being delivered to the display device 250 for rendering.
The variety of steps performed by the online rendering process 225 include one or more of the following, in no particular order: selecting the advertisement 270; generating an ad-overlay that accommodates a container to hold the advertisement 270, where the container is positioned within the ad-overlay according to the trajectory 265; inserting the advertisement 270 into the container; and rendering the ad-overlay on top of the video content 290 when playing the media file 205. A particular embodiment of performing these steps is depicted at
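The steps above can be sketched in simplified form, assuming a dict-based trajectory and a dict-based ad-overlay (neither representation is defined by this description):

```python
def build_ad_overlay(trajectory, ad_id, size=(80, 40)):
    """Build a per-frame ad-overlay: for each frame in the trajectory, a
    container positioned at the recorded patch location with the chosen
    advertisement inserted into it. The overlay is then rendered on top
    of the video content during playback.

    trajectory: dict mapping frame index -> (x, y) patch location.
    """
    overlay = {}
    for frame, (x, y) in trajectory.items():
        overlay[frame] = {
            # Container positioned within the ad-overlay per the trajectory.
            "container": {"x": x, "y": y, "w": size[0], "h": size[1]},
            # The selected advertisement inserted into the container.
            "ad": ad_id,
        }
    return overlay

overlay = build_ad_overlay({0: (10, 20), 1: (12, 22)}, ad_id="flag-ad")
# overlay[1]["container"] sits at (12, 22), following the object frame by frame
```

At playback time, the renderer would look up the current frame index in the overlay and draw the container's contents at the stored position, giving the appearance of visual interaction with the object.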
Further, a trajectory associated with the object 320 allows for creation and placement of the advertisement 510 such that it visually interacts with the video content. In one embodiment, the trajectory provides an advertisement designer with a concept of the path of the object 320, allowing the advertisement designer to animate the advertisement 510 in a meaningful way. As illustrated, the flag (advertisement 510) is blowing in a direction as if it were attached to the football (object 320) as the football travels through the air. In another embodiment, the trajectory allows the online rendering process to dynamically place the advertisement 510 on top of the video content by rendering the advertisement 510 at positions within the video content that substantially correspond to the locations of the object 320, or patch, written to the trajectory. Accordingly, the flag may be placed, based on the X and Y coordinate locations of the football, along its entire path.
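One way to render the advertisement at positions that "substantially correspond" to the recorded locations of the object is to interpolate between trajectory points for frames where no location was written. Linear interpolation is an assumption here, not a method stated in the specification; the function name is likewise illustrative.

```python
def position_at(trajectory, frame):
    """Estimate the (x, y) location of the tracked object at `frame` by
    linearly interpolating between recorded trajectory points.
    `trajectory` is a list of (frame, x, y) tuples."""
    pts = sorted(trajectory)
    # Clamp to the recorded range at either end of the path.
    if frame <= pts[0][0]:
        return (pts[0][1], pts[0][2])
    if frame >= pts[-1][0]:
        return (pts[-1][1], pts[-1][2])
    for (f0, x0, y0), (f1, x1, y1) in zip(pts, pts[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Football tracked at frames 0 and 10; render the flag at frame 5.
print(position_at([(0, 100, 50), (10, 200, 90)], 5))  # → (150.0, 70.0)
```

A denser trajectory would make the interpolation error smaller at the cost of more locations written during the offline tracking pass.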
Further, other aspects of the video content may be used to place the advertisement 510. For instance, an interesting map that records locations of significant objects embedded within the video content may be applied. As used herein, the phrase “interesting map” refers to information gathered from the sequence of frames that may be employed when positioning the advertisement 510 (flag) on top of the object 320 (football). For instance, the interesting map may include information about another object 520 (receiver) within the video content. In operation, the position of the advertisement 510 may be adjusted by an offset 550 so that it does not obscure the object 520 when placed. As such, the interesting map builds freedom into the placement of the advertisement 510 about the locations in the trajectory. This freedom provides the ability to rotate the advertisement 510, or translate it laterally or vertically, to avoid blocking any significant object (e.g., the object 520) or other critical aspects of the video content.
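The offset adjustment described above can be sketched as a simple search over candidate translations, keeping the first one whose bounding box clears every significant object recorded in the interesting map. The bounding-box representation, the candidate-offset list, and both function names are assumptions for illustration, not details from the specification.

```python
def overlaps(a, b):
    """Do two axis-aligned boxes (x, y, w, h) intersect?"""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_with_offset(ad_box, significant_boxes, candidate_offsets):
    """Shift the advertisement away from its trajectory position by the
    first candidate offset that avoids every significant object."""
    x, y, w, h = ad_box
    for dx, dy in candidate_offsets:
        candidate = (x + dx, y + dy, w, h)
        if not any(overlaps(candidate, b) for b in significant_boxes):
            return candidate
    return ad_box  # no offset clears: fall back to the trajectory position

# The flag at the football's position would cover the receiver, so it is
# translated vertically by the second candidate offset.
flag = (120, 80, 40, 20)
receiver = (130, 85, 30, 60)
print(place_with_offset(flag, [receiver], [(0, 0), (0, -40), (40, 0)]))
# → (120, 40, 40, 20)
```

Ordering the candidate offsets from smallest to largest keeps the advertisement as close as possible to the tracked object while still honoring the interesting map.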
Returning to
Turning now to
The exemplary flow diagram 700 commences with targeting a patch within an object appearing in video content of the media file, as indicated at block 710. As described with reference to
Turning now to
The present invention has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present invention pertains without departing from its scope.
From the foregoing, it will be seen that this invention is one well adapted to attain all the ends and objects set forth above, together with other advantages, which are obvious and inherent to the system and method. It will be understood that certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations. This is contemplated by and is within the scope of the claims.
This nonprovisional patent application claims the benefit of U.S. Provisional Application No. 61/247,375, filed Sep. 30, 2009, entitled “VIDEO CONTENT-AWARE ADVERTISEMENT PLACEMENT.”
U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
6424370 | Courtney | Jul 2002 | B1 |
6724915 | Toklu et al. | Apr 2004 | B1 |
6774908 | Bates | Aug 2004 | B2 |
7116342 | Dengler et al. | Oct 2006 | B2 |
7158666 | Deshpande et al. | Jan 2007 | B2 |
7248300 | Ono | Jul 2007 | B1 |
7456874 | Ono | Nov 2008 | B1 |
7979877 | Huber et al. | Jul 2011 | B2 |
20020023094 | Takenori et al. | Feb 2002 | A1 |
20020112249 | Hendricks et al. | Aug 2002 | A1 |
20030158780 | Isobe et al. | Aug 2003 | A1 |
20040068547 | Kang | Apr 2004 | A1 |
20040100556 | Stromme | May 2004 | A1 |
20040116183 | Prindle | Jun 2004 | A1 |
20060078047 | Shu et al. | Apr 2006 | A1 |
20060126719 | Wilensky | Jun 2006 | A1 |
20080046920 | Bill | Feb 2008 | A1 |
20080195468 | Malik | Aug 2008 | A1 |
20080295129 | Laut | Nov 2008 | A1 |
20090022473 | Cope et al. | Jan 2009 | A1 |
20090171787 | Mei | Jul 2009 | A1 |
20090175538 | Bronstein | Jul 2009 | A1 |
20090227378 | Rom et al. | Sep 2009 | A1 |
20090307722 | Gross | Dec 2009 | A1 |
20100070996 | Liao et al. | Mar 2010 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
2002-032590 | Jan 2002 | JP |
2003-242410 | Aug 2003 | JP |
2004-516589 | Jun 2004 | JP |
2008-077173 | Apr 2008 | JP |
2008146492 | Jun 2008 | JP |
2009-094980 | Apr 2009 | JP |
20090044221 | May 2009 | KR |
454158 | Sep 2001 | TW |
Other Publications

Entry |
---|
PCT International Search Report, International Application No. PCT/US2010/047198 mailed Apr. 21, 2011, 8 pages. |
Xu, Changsheng, et al., “Sports Video Analysis: Semantics Extraction, Editorial Content Creation and Adaptation”, Published Date: Apr. 2009, National Lab of Pattern Recognition, Beijing, China, http://www.academypublisher.com/jmm/vol04/no02/jmm04026979.pdf. |
Han, Fred, “Google Certifies Tumri's Ad Platform for Context-Aware Ads”, Published Date: Jan. 6, 2009 http://www.reuters.com/article/pressRelease/idUS131693+06-Jan-2009+BW20090106. |
Brunheroto, J., et al., “Issues in Data Embedding and Synchronization for Digital Television”, Published Date: 2000, IBM Research, Hawthorne, NY, http://ieeexplore.ieee.org//stamp/stamp.jsp?tp=&arnumber=00870990. |
V4x Interactive Media Platform—Retrieved Date: Aug. 11, 2009, http://www.v4x.com/docs/brochure.pdf. |
“Third Office Action and Search Report Issued in Chinese Patent Application No. 201080044011.X”, Mailed Date: May 9, 2014, 14 Pages. |
“Office Action Issued in Taiwan Patent Application No. 99127767”, Mailed Date: Feb. 26, 2015, 14 Pages. |
European Search Report dated Apr. 16, 2015 in Application No. 10820993.3, 5 pages. |
Prior Publication Data

Number | Date | Country |
---|---|---|
20110078623 A1 | Mar 2011 | US |
Provisional Application Priority Data

Number | Date | Country |
---|---|---|
61247375 | Sep 2009 | US |