Claims
- 1. A method for generating annotations of viewable segments within a video sequence comprising the steps of:
selecting a start frame from a video sequence;
selecting an end frame from the video sequence to form, in conjunction with the selected start frame, a designated video segment;
associating an attribute with the designated video segment; and
storing the attribute as metadata within a lookup table for subsequent selection and presentation of the designated video segment to a viewer.
- 2. The method of claim 1, further including the step of automatically annotating scene division metadata within the lookup table.
- 3. The method of claim 1, further including the step of annotating a video segment responsive to an automated object recognition system.
- 4. The method of claim 3, wherein the objects automatically recognized by the system include a first-level attribute selected from the group consisting of scene boundaries, the presence of actors, the presence of specific objects, the occurrence of decipherable text in the video images, zoom or pan camera movements, and motion analysis.
- 5. The method of claim 1, further including the steps of:
selecting a second start frame from the video sequence;
selecting a second end frame from the video sequence to form, in conjunction with the selected second start frame, a second designated video segment, wherein said second designated video segment at least partially overlaps said designated video segment;
associating a second attribute with the second designated video segment; and
storing the second attribute as metadata within the lookup table for subsequent selection and presentation of the second designated video segment to a viewer.
- 6. The method of claim 1, wherein said annotation includes a plurality of elements, including a structural element and a thematic element.
- 7. The method of claim 1, wherein said metadata includes a low-level annotation comprising a type indicator, start time, duration or stop time, and a pointer to a label string.
- 8. The method of claim 7, wherein the type indicator refers to one selected from the group consisting of a person, an event, an object, and text.
- 9. The method of claim 7, wherein the start and stop times are given in absolute terms.
- 10. The method of claim 7, wherein the start and stop times are given relative to a reference point within the video sequence.
- 11. The method of claim 7, wherein said metadata includes a second-level annotation comprising a type indicator, a pointer to a label, and a pointer to a first of a linked list of elements.
- 12. The method of claim 1, further including the steps of:
presenting for visual inspection a list of the attributes contemporaneous with a timeline of the video sequence;
selecting at least one attribute from the list; and
performing the associating step responsive to the step of selecting at least one attribute from the list.
- 13. A method for retrieving and displaying segments from a video sequence comprising the steps of:
receiving a request for a video segment from a viewer;
retrieving a start frame and an end frame associated with said requested video segment from a memory lookup table;
finding a base frame associated with said start frame according to an offset associated with said start frame;
decoding from said base frame; and
displaying a video segment starting only from said start frame and continuing to said end frame.
- 14. The method of claim 13, further including the steps of:
displaying a list of thematic events; and
receiving a selection of one of the thematic events to form a video segment request.
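The annotation and retrieval structures recited in claims 1, 7, 11, and 13 can be illustrated with a minimal sketch. This is not the patent's implementation; all class and function names here (`LowLevelAnnotation`, `LookupTable`, `find_base_frame`, the 15-frame keyframe interval) are hypothetical, chosen only to show how a type indicator, start time, duration, label pointer, and base-frame offset might fit together:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LowLevelAnnotation:
    """Low-level annotation per claim 7: type indicator, start time,
    duration (stop time is derived), and a reference to a label string."""
    type_indicator: str   # e.g. "person", "event", "object", "text" (claim 8)
    start: float          # seconds, relative to a reference point (claim 10)
    duration: float
    label: str

    @property
    def stop(self) -> float:
        return self.start + self.duration

@dataclass
class SecondLevelAnnotation:
    """Second-level annotation per claim 11: type indicator, label,
    and a linked list of elements (modeled here as a Python list)."""
    type_indicator: str
    label: str
    elements: List[LowLevelAnnotation] = field(default_factory=list)

class LookupTable:
    """Stores segment attributes as metadata for later selection (claim 1)."""
    def __init__(self):
        self.annotations: List[LowLevelAnnotation] = []

    def annotate(self, type_indicator, start, duration, label):
        ann = LowLevelAnnotation(type_indicator, start, duration, label)
        self.annotations.append(ann)
        return ann

    def segments_with_label(self, label):
        """Return (start, stop) pairs for a requested attribute (claim 13).
        Overlapping segments (claim 5) coexist naturally in the list."""
        return [(a.start, a.stop) for a in self.annotations if a.label == label]

def find_base_frame(start_frame: int, keyframe_interval: int = 15) -> int:
    """Claim 13's offset step: locate the base (key) frame preceding the
    start frame, so decoding begins at the base frame while display
    starts only at the start frame. Assumes fixed keyframe spacing."""
    return start_frame - (start_frame % keyframe_interval)
```

For example, annotating an actor's appearance with `table.annotate("person", 12.0, 8.5, "Actor A")` lets `table.segments_with_label("Actor A")` return the segment `(12.0, 20.5)`, and `find_base_frame(37)` locates keyframe 30 as the decode entry point for a segment starting at frame 37.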
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application No. 60/266,010, filed Feb. 2, 2001, the contents of which are incorporated herein by reference for all purposes.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60/266,010 | Feb 2001 | US |