A wide variety of devices enable the creation of videos. Examples include video cameras and handheld devices with an integrated camera.
In addition, a variety of web-based services exist that enable individuals to share videos via the Internet. For example, a web site may enable individuals to upload videos so that the videos may be viewed by friends and family.
It may be desirable to offer a web-based video browsing service. For example, a web site that enables individuals to share videos may benefit by offering a video printing service that enables clients to browse videos and select individual video frames for printing.
Providing web-based video browsing may present a number of problems. For example, web clients may connect to a web server using a variety of different network connections that yield different communication speeds. A video browsing and selection system that is adapted to a high speed connection may not work well over a low speed connection, and vice versa. In addition, the source videos from which video frames are selected may include a large number of video frames, and browsing large numbers of video frames may impose an undesirable burden on a user.
A system for web-based video browsing is disclosed including a web server and a web client that cooperatively provide a set of video browsing functions. The video browsing functions enable a user of the web client to browse a source video and select a video frame from the source video without imposing an excessive burden on the user. The distribution of the video browsing functions between the web server and the web client may be adapted to a communication speed between the web server and the web client.
Other features and advantages of the present invention will be apparent from the detailed description that follows.
The present invention is described with respect to particular exemplary embodiments thereof and reference is accordingly made to the drawings in which:
FIGS. 7a-7d illustrate a semi-automatic method for selecting a video frame.
The video browsing functions 18 are distributed among the web server 10 and the web client 12. The distribution of the video browsing functions 18 is adapted to the characteristics of a communication link 16 used by the web client 12 to reach the communication network 14. The distribution may be selected to enhance the experience of a user of the web client 12. For example, if the communication link 16 has a relatively limited bandwidth, then the video browsing functions 18 are distributed among the web server 10 and the web client 12 to minimize bandwidth utilization on the communication link 16. On the other hand, if the communication link 16 has a relatively high bandwidth, then the video browsing functions 18 are distributed among the web server 10 and the web client 12 to take full advantage of the available bandwidth of the communication link 16.
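The bandwidth-adaptive distribution described above can be sketched as a simple server-side policy. The function names, the threshold value, and the set of browsing functions below are illustrative assumptions, not part of the disclosure:

```python
LOW_BANDWIDTH_THRESHOLD_KBPS = 256  # assumed cutoff for a "limited" link


def distribute_functions(link_bandwidth_kbps):
    """Return an assumed mapping of each browsing function to the host
    (server or client) that runs it, given the measured link bandwidth."""
    if link_bandwidth_kbps < LOW_BANDWIDTH_THRESHOLD_KBPS:
        # Limited link: keep frame extraction and scaling on the server so
        # that only small thumbnails cross the communication link.
        return {"key_frame_extraction": "server",
                "thumbnail_generation": "server",
                "frame_selection_ui": "client"}
    # High-speed link: ship video sections to the client and let it extract
    # frames locally, exploiting the available bandwidth.
    return {"key_frame_extraction": "client",
            "thumbnail_generation": "client",
            "frame_selection_ui": "client"}
```

In either case the selection interface remains on the client; only the bandwidth-heavy extraction and scaling work moves between hosts.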
The components of the video browsing functions 18 described above, e.g. the video browser 300, may be implemented using software that runs on the web server 10 and the web client 12. For example, the web client 12 may be a personal computer that is capable of running the video browser 300 implemented in software. The software that runs on the web client 12 may be pre-downloaded and installed permanently at the web client 12, or may be downloaded as needed, installed in a temporary folder on the web client 12, and then removed after a working session is finished. Alternatively, the software may run remotely from the web server 10 and operate on the source video 22 at the web client 12.
FIGS. 7a-7d illustrate a semi-automatic method for selecting the video frame 30 from the source video 22. The semi-automatic method, depicted in steps 200-206, may be implemented in the video browser 300 that executes on the web server 10 or the web client 12.
FIG. 7a illustrates step 200, during which a set of key frames 40-44 is extracted from the source video 22. A set of blocks 50-53 located in between adjacent pairs of the key frames 40-44 represents the respective sections of the source video 22 in between the corresponding key frames. For example, the block 50 represents the section of the source video 22 located in between the key frames 40 and 41. A block 149 represents a section of the source video 22 before the key frame 40, and a block 154 represents a section of the source video 22 after the key frame 44.
Any known method for extracting a set of key frames may be employed at step 200. The number of key frames extracted at step 200 may be user-selectable or may be adaptively determined in response to the content of the source video 22.
The user of the web client 12 examines the key frames 40-44 from step 200 and decides that they subjectively prefer the portion of the source video 22 that is bounded by the key frames 40 and 41. The user of the web client 12 indicates this preference by selecting the block 50, e.g. using a keyboard or mouse of the web client 12.
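The blocks presented for this selection can be derived from the extracted key frame positions by pairing adjacent indices. The following is a minimal sketch under assumed frame-index conventions, not the disclosed implementation:

```python
def blocks_between(key_indices, total_frames):
    """Compute the sections of the source video bounded by adjacent key
    frames, plus the sections before the first and after the last key frame
    (analogous to the blocks 149 and 154). Returns (start, end) index pairs."""
    bounds = [0] + key_indices + [total_frames]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```

For key frames at indices [10, 30, 50, 70, 90] in a 100-frame video, this yields six blocks: a leading block (0, 10), four interior blocks such as (10, 30), and a trailing block (90, 100).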
FIG. 7b illustrates step 202, during which a set of key frames 60-64 is extracted from the portion of the source video 22 that corresponds to the block 50 selected at step 200. A block 169 represents a section of the source video 22 before the key frame 60, and a block 174 represents a section of the source video 22 after the key frame 64. The key frames 60-64 show additional detail of the section of the source video 22 that corresponds to the block 50. The user of the web client 12 examines the key frames 60-64 and decides that they subjectively prefer the key frame 63. The user of the web client 12 indicates this preference by selecting the key frame 63.
FIG. 7c illustrates step 204, during which the key frame 63 selected by the user is presented along with a set of M previous video frames 70-72 and a set of M subsequent video frames 80-82 from the source video 22. The video frames 70-72, followed by the key frame 63, followed by the video frames 80-82, form a continuous sequence of video frames from the source video 22 without any intervening video frames. The user of the web client 12 selects the video frame 70 from among the video frames 70-72, the key frame 63, and the video frames 80-82.
FIG. 7d illustrates step 206, during which the video frame 70 selected by the user at step 204 is presented along with a set of M previous video frames 90-92 and M subsequent video frames 120-122 from the source video 22. The user of the web client 12 selects the video frame 30 from among the video frames 90-92, the video frame 70, and the video frames 120-122.
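The drill-down of steps 200-206 can be sketched with two small helpers: one that extracts evenly spaced key frame indices from a section of the video, and one that gathers the M frames on either side of a selected frame. Both are illustrative assumptions; the disclosure permits any known key frame extraction method:

```python
def key_frame_indices(start, end, count):
    """Evenly spaced key frame indices within the section [start, end)."""
    step = (end - start) / count
    return [start + int(i * step + step / 2) for i in range(count)]


def neighborhood(center, m, total_frames):
    """The M frames before and after `center`, clipped to the video bounds,
    as presented to the user in steps 204 and 206."""
    return list(range(max(0, center - m), min(total_frames, center + m + 1)))
```

A session over a hypothetical 100-frame video might run as follows: step 200 extracts key_frame_indices(0, 100, 5); the user selects the block between the first two key frames; step 202 extracts key frames over just that block; steps 204 and 206 then present neighborhood(selected, M, 100) until the user settles on the video frame 30.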
The semi-automatic method enables a user to express the subjective desirability of the video frame 30. It helps avoid imposing tedious manual operations on the user while enabling the user to obtain the best video frames according to their own subjective preferences.
The video browser 300 may select the video frame 30 automatically. For example, any of a variety of known methods for selecting key frames from a video may be employed, and the video frame 30 may be one of the extracted key frames. One example is to extract a key frame once every N frames of the source video 22. Alternatively, the key frames may be selected based on a content analysis of the source video 22, so that more key frames are selected from a highlight portion of the source video 22.
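The two automatic strategies mentioned above, one key frame every N frames and key frames weighted toward high-interest content, can be sketched as follows. The per-frame interest scores are an assumed input from a separate content analysis, not something the disclosure specifies:

```python
def every_nth_key_frames(total_frames, n):
    """Uniform automatic selection: one key frame every N frames."""
    return list(range(0, total_frames, n))


def content_weighted_key_frames(scores, count):
    """Pick the `count` frames with the highest interest scores, so that
    highlight portions of the video contribute more key frames."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:count])
```

The uniform strategy needs no analysis at all, while the weighted strategy trades extra computation for key frames that track the content of the source video.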
Alternatively, a fully manual method may be used in which the user of the web client 12 browses all of the video frames in the source video 22 to select the video frame 30.
The frame enhancer 302 may employ one or more of a variety of methods to enhance the image quality of the video frame 30. Examples include increasing the resolution of the video frame 30 by applying a super-resolution process, reducing noise and artifacts in the video frame 30 using a de-noising process, sharpening edges of the video frame 30, correcting colors in the video frame 30 using a white balance process, etc.
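As one concrete illustration of such an enhancement, a gray-world white balance scales each color channel so that its mean matches the overall mean. This is a minimal sketch of one well-known color correction technique, not the frame enhancer 302 itself:

```python
def gray_world_white_balance(pixels):
    """White-balance a frame given as a list of (R, G, B) tuples: scale each
    channel so its average equals the average across all three channels
    (the gray-world assumption)."""
    n = len(pixels)
    # Per-channel means and the neutral-gray target they should share.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m if m else 1.0 for m in means]
    # Apply the per-channel gains, clamping to the 8-bit range.
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]
```

A frame with a strong color cast, e.g. every pixel (100, 50, 150), is pulled toward neutral gray, while an already-neutral frame passes through unchanged.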
The foregoing detailed description of the present invention is provided for the purposes of illustration and is not intended to be exhaustive or to limit the invention to the precise embodiment disclosed. Accordingly, the scope of the present invention is defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
5664227 | Mauldin et al. | Sep 1997 | A |
5835667 | Wactlar et al. | Nov 1998 | A |
6173317 | Chaddha et al. | Jan 2001 | B1 |
6230172 | Purnaveja et al. | May 2001 | B1 |
6535639 | Uchihachi et al. | Mar 2003 | B1 |
6631424 | McDonough et al. | Oct 2003 | B1 |
6782049 | Dufaux et al. | Aug 2004 | B1 |
6791579 | Markel | Sep 2004 | B2 |
20010013068 | Klemets et al. | Aug 2001 | A1 |
20020071651 | Wurz et al. | Jun 2002 | A1 |
20020135808 | Parry | Sep 2002 | A1 |
20030009488 | Hart | Jan 2003 | A1 |
20030033606 | Puente et al. | Feb 2003 | A1 |
20030122861 | Jun et al. | Jul 2003 | A1 |
20030212993 | Obrador | Nov 2003 | A1 |
20060238806 | Karaoguz et al. | Oct 2006 | A1 |
Number | Date | Country |
---|---|---|
0782085 | Jul 1997 | EP |
1996-334120 | Sep 1997 | JP |
2003-216406 | Jul 2003 | JP |
Entry |
---|
Internet Multimedia Management Systems V. Edited by Smith, John R.; Zhang, Tong; Panchanathan, Sethuraman. Proceedings of the SPIE, vol. 5601, pp. 25-35 (2004). |
Supplementary European Search Report dated Feb. 23, 2010 for Application No. 08724624.5-1527, 2 pages. |
Number | Date | Country |
---|---|---|
20080178086 A1 | Jul 2008 | US |