The disclosed embodiments relate generally to web-based editing, composition, and annotation of digital videos.
Conventional web-based systems permitting the storage and display of digital videos typically allow video playback and provide some rudimentary tools for supplementing or altering the original video. These tools generally oblige a user to manually specify aspects such as the time ranges during which a particular condition applies. Conventional systems also lack mechanisms for selecting video clips from a larger video and compositing the videos or video clips into a single compilation.
The present invention includes web-based systems and methods for editing digital videos. A graphical editing interface allows designating one or more videos to assemble into a video compilation. The graphical editing interface further allows specifying the portion of a constituent video of the video compilation that will be displayed when the video compilation is played. The graphical editing interface additionally allows the association of annotations—specifying, for example, slides, people, and highlights—with portions of the video. The associated annotations alter the appearance of the video compilation when it is played, such as by displaying slides, or text associated with the annotations, along with the video at times associated with the annotations. The associated annotations also enhance the interactivity of the video compilation, for example by allowing playback to begin at points of interest, such as portions of the video for which there is an associated annotation. The associated annotations can be created by selection of annotation tools of the graphical editing interface, where at least one of the annotation tools is created responsive to a user providing information associated with the tool.
The features and advantages described in this summary and the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims presented herein.
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
A client 130 executes a browser 132, and connects to the front end server 124 via a network 105, which is typically the Internet, but may also be any network, including but not limited to a LAN, a MAN, a WAN, a mobile, wired or wireless network, a private network, or a virtual private network. While only a single client 130 and browser 132 are shown, it is understood that very large numbers (e.g., millions) of clients are supported and can be in communication with the video hosting server 108 at any time. The client 130 may include a variety of different computing devices. Examples of client devices 130 are personal computers, personal digital assistants, cellular phones, smart phones, and laptop computers. As will be obvious to one of ordinary skill in the art, the present invention is not limited to the devices listed above.
In some embodiments, the browser 132 includes an embedded video player 134 such as, for example, the Flash™ player from Adobe Systems, Inc. or any other player adapted for the video file formats used in the video hosting server 108. A user can access a video from the video hosting server 108 by browsing a catalog of videos, conducting searches on keywords, reviewing play lists from other users or the system administrator (e.g., collections of videos forming channels), or viewing videos associated with a particular user group (e.g., communities).
Video server 126 receives uploaded media content from content providers and allows content to be viewed by client 130. Content may be uploaded to video server 126 via the Internet from a personal computer, through a cellular network from a telephone or PDA, or by other means for transferring data over network 105 known to those of ordinary skill in the art. Content may be downloaded from video server 126 in a similar manner; in one embodiment media content is provided as a file download to a client 130; in an alternative embodiment, media content is streamed to client 130. The means by which media content is received by video server 126 need not match the means by which it is delivered to client 130. For example, a content provider may upload a video via a browser on a personal computer, whereas client 130 may view that video as a stream sent to a PDA. Note also that video server 126 may itself serve as the content provider. Communications between the client 130 and video hosting server 108, or between the other distinct units of the system, may be encrypted or otherwise encoded.
Users of clients 130 can also search for videos based on keywords, tags or other metadata. These requests are received as queries by the front end server 124 and provided to the video server 126, which is responsible for searching the video database 128 for videos that satisfy the user queries. The video server 126 supports searching on any fielded data for a video, including its title, description, tags, author, category and so forth.
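To make the notion of searching on fielded data concrete, the following TypeScript sketch shows one possible matching routine; the record shape and function names are purely illustrative assumptions, not part of any particular embodiment.

```typescript
// Hypothetical shape of a video record's searchable fields.
interface VideoFields {
  videoId: string;
  title: string;
  description: string;
  tags: string[];
  author: string;
  category: string;
}

// Return the videos whose fielded metadata contains every query keyword.
// A production system would use an inverted index; a linear scan suffices
// to illustrate searching on "any fielded data for a video".
function searchVideos(videos: VideoFields[], query: string): VideoFields[] {
  const keywords = query.toLowerCase().split(/\s+/).filter(Boolean);
  return videos.filter((v) => {
    const haystack = [v.title, v.description, v.author, v.category, ...v.tags]
      .join(" ")
      .toLowerCase();
    return keywords.every((kw) => haystack.includes(kw));
  });
}
```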
Users of the clients 130 and browser 132 can upload content to the video hosting server 108 via network 105. The uploaded content can include, for example, video, audio or a combination of video and audio. The uploaded content is processed and stored in the video database 128. This processing can include format conversion (transcoding), compression, metadata tagging, and other data processing. An uploaded content file is associated with the uploading user, and so the user's account record is updated in the user database 140 as needed.
For purposes of convenience and the description of one embodiment, the uploaded content will be referred to as “videos”, “video files”, or “video items”, but no limitation on the types of content that can be uploaded is intended by this terminology. Each uploaded video is assigned a video identifier when it is processed.
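As a purely illustrative sketch of the processing described above, the following TypeScript fragment records the bookkeeping that accompanies an upload; the names and the choice of target format are assumptions, and the transcoding and compression steps themselves are stubbed out.

```typescript
// Hypothetical bookkeeping for one uploaded file.
interface IngestedVideo {
  videoId: string;    // identifier assigned when the upload is processed
  uploaderId: string; // ties the file to the user's account record
  format: string;     // format after transcoding
  tags: string[];     // metadata tags applied during processing
}

let nextVideoNumber = 0;

function ingestUpload(uploaderId: string, tags: string[]): IngestedVideo {
  // In a real pipeline the file would be transcoded and compressed here;
  // this sketch records only the resulting metadata.
  return {
    videoId: `vid-${nextVideoNumber++}`,
    uploaderId,
    format: "h264", // assumption: all uploads normalized to a single codec
    tags,
  };
}
```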
The user database 140 is responsible for maintaining a record of all users viewing videos on the website. Each individual user is assigned a user ID (also referred to as a user identity). The user ID can be based on any identifying information, such as the user's IP address, user name, or the like. The user database may also contain information about the reputation of the user in the video context, as well as through other applications, such as the use of email or text messaging. The user database may further contain information about membership in user groups. The user database may further contain, for a given user, a list of identities of other users who are considered friends of the user. (The term “list”, as used herein for concepts such as lists of authorized users, URL lists, and the like, refers broadly to a set of elements, where the elements may or may not be ordered.)
The video database 128 is used to store the received videos. The video database 128 stores video content and associated metadata, provided by their respective content owners. Each video file has associated metadata such as a video ID, artist, video title, label, genre, and time length.
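By way of example only, the records in the user database 140 and the video database 128 might take shapes such as the following TypeScript interfaces; all field names are hypothetical.

```typescript
// Hypothetical record shape for the user database 140.
interface UserRecord {
  userId: string;    // may derive from an IP address, user name, or the like
  reputation: number;
  groups: string[];  // user-group memberships
  friends: string[]; // IDs of users considered friends (an unordered set)
}

// Hypothetical record shape for the video database 128.
interface VideoRecord {
  videoId: string;
  artist: string;
  title: string;
  label: string;
  genre: string;
  lengthSeconds: number; // time length of the video
}
```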
A video editing server 150 provides the ability to create compilations from, and add annotations to, videos in the video database 128. The video editing server 150 has a user database 152 that maintains a record of all users using the video editing system. Each individual user is assigned a user ID (also referred to as a user identity). The user ID can be based on any identifying information, such as the user's IP address, user name, or the like. The user database 152 may also contain information about the reputation of the user in the video context. The user database may further contain information about membership in user groups. The user database may further contain, for a given user, a list of identities of other users who are considered friends of the user. In an embodiment in which the video hosting server 108 and the video editing server 150 are implemented using the same server system, the user database 140 and the user database 152 are implemented as a single database.
The video editing server 150 keeps a record of various user video editing actions, such as aggregating videos into a compilation, clipping videos to more restricted portions, annotating the videos with information about slides, people or events, adding popup visuals such as text boxes, and the like. It then stores these records within an editing database 154 in association with the user ID from the user database 152. The video editing server 150 also provides to entities such as the client 130 or the video hosting server 108, for a given video, records of editing actions stored within the editing database 154 for that video. In one embodiment, the video editing server 150 is on a separate physical server from the video hosting server 108, although in other embodiments the annotation functionality is included within the video hosting server 108. Video editing server 150 may be operated by the same entity that operates video hosting server 108, or may be a service provided by a third party, e.g., for a fee.
The editing database 154 stores information on video compilations, which are ordered collections of videos treated as a single composite video, and may include a single video. For example, editing database 154 stores in association with, e.g., a unique compilation identifier, an identifier of the user who created the compilation, a total number of videos within the compilation, an identifier of each video to be played as part of the compilation, as well as an indicator (such as a time range) of the portion of each video to be played and an indicator of the order of the video within the compilation (such as “1” to indicate the first video in the compilation ordering). The editing database also maintains an association between each annotation and the appropriate portion of the annotated video or video compilation. In one embodiment, for example, the editing database 154 stores an identifier of the annotation type (e.g., a text box, or an annotation corresponding to a person) along with any information associated with that type (e.g., a text caption, or an ID corresponding to the person), one or more time stamps of the portion of the video or compilation to which the annotation applies (e.g., from time 01:05 to time 01:26), an identifier of the video which the annotation annotates, and an identifier of the user who submitted the annotation (e.g., the user ID from the user database 152). Many other storage implementations for annotations would be equally possible to one of skill in the art.
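One such storage implementation, sketched below in TypeScript purely for illustration, encodes the compilation and annotation records described above; the field names and the seconds-based time representation are assumptions.

```typescript
// Hypothetical schema for the editing database 154.
interface CompilationRecord {
  compilationId: string;
  creatorUserId: string; // user ID from the user database 152
  videoCount: number;    // total number of videos within the compilation
  entries: CompilationEntry[];
}

interface CompilationEntry {
  videoId: string;
  order: number;        // 1 indicates the first video in the ordering
  startSeconds: number; // portion of the video to be played,
  endSeconds: number;   // expressed here as a time range
}

type AnnotationType = "textBox" | "slide" | "person" | "highlight";

interface AnnotationRecord {
  annotationType: AnnotationType;
  payload: string;      // e.g., a text caption, or an ID of a person
  startSeconds: number; // e.g., from time 01:05
  endSeconds: number;   // ...to time 01:26
  videoId: string;      // the video which the annotation annotates
  authorUserId: string; // the user who submitted the annotation
}
```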
Timeline bar 320 represents the video currently being played, and thus corresponds to one of the segments on the video segment bar 316. The current time marker 322 indicates the current playback location within the current video, and thus corresponds to the thumb marker 317. The timeline bar 320 shows the times of the video currently being played, including the total length. In the example of
The user interface 300 of
For example,
Any of the annotations displayed on the timeline 320—e.g., slide, people, and highlight tags—can be associated with different portions of the video compilation by dragging their tag icons 355-357 to a different location on the timeline. Similarly, the annotations may be removed entirely by dragging their icons off the timeline, selecting them and pressing the “Delete” key, or the like.
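The following TypeScript sketch illustrates, under assumed names and a seconds-based timeline, how such drag-based re-association and removal might be handled.

```typescript
// Hypothetical in-memory representation of a timeline annotation.
interface TimedAnnotation {
  id: string;
  startSeconds: number;
  endSeconds: number;
}

// Move an annotation so it begins at the drop position, preserving its
// duration and clamping it to the compilation's length.
function retimeAnnotation(
  a: TimedAnnotation,
  dropSeconds: number,
  totalSeconds: number
): TimedAnnotation {
  const duration = a.endSeconds - a.startSeconds;
  const start = Math.max(0, Math.min(dropSeconds, totalSeconds - duration));
  return { ...a, startSeconds: start, endSeconds: start + duration };
}

// Remove an annotation entirely, e.g., when its icon is dragged off the
// timeline or the "Delete" key is pressed while it is selected.
function removeAnnotation(all: TimedAnnotation[], id: string): TimedAnnotation[] {
  return all.filter((a) => a.id !== id);
}
```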
The resulting video compilation may then be saved, e.g., via the Save button 370, and published to the Internet. Related information, such as compilation title, compilation description, compilation tags (e.g., for finding the compilation as part of a search), whether the compilation is private to the author or also available to the public, and whether users other than the author can provide comments, can also be specified as part of publishing, e.g., as depicted in the sample user interface of
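Purely as an illustration, the publication settings listed above might be gathered into a structure such as the following; the field names are hypothetical.

```typescript
// Hypothetical publication settings for a saved compilation.
interface PublishSettings {
  title: string;
  description: string;
  tags: string[];        // e.g., for finding the compilation via search
  isPublic: boolean;     // private to the author, or available to the public
  allowComments: boolean;
}
```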
During video playback, either while the video is being edited or during post-publication viewing, any previously specified annotations are displayed and can be used to alter the flow of control of the video compilation. For example, any slides that were associated with the video compilation are displayed within the associated slides region 347 of
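A minimal sketch of this playback behavior, again in TypeScript with assumed names, selects the annotations active at the current playhead position and seeks to a point of interest:

```typescript
// Same hypothetical shape as in the earlier sketch.
interface TimedAnnotation {
  id: string;
  startSeconds: number;
  endSeconds: number;
}

// Annotations whose time range covers the current playback position; these
// are the ones whose slides or text should be displayed alongside the video.
function activeAnnotations(
  all: TimedAnnotation[],
  nowSeconds: number
): TimedAnnotation[] {
  return all.filter((a) => nowSeconds >= a.startSeconds && nowSeconds < a.endSeconds);
}

// Begin playback at a point of interest, i.e., where an annotation starts.
function seekToAnnotation(
  a: TimedAnnotation,
  setPlayhead: (seconds: number) => void
): void {
  setPlayhead(a.startSeconds);
}
```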
It is appreciated that the exact components and arrangement thereof, the order of operations, and other aspects of the above description are purely for purposes of example, and a wide variety of alternative component arrangements and operation orders would be equally possible to one of skill in the art. For example, the video editing server 150 of
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.
While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention.
Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.