System for presenting audio-video content

Information

  • Patent Grant
  • Patent Number
    7,904,814
  • Date Filed
    Thursday, December 13, 2001
  • Date Issued
    Tuesday, March 8, 2011
Abstract
A system for presenting a summarization of audio and/or visual content having a plurality of segments to a user together with a graphical user interface that preferably indicates to the viewer the relative temporal position of video segments viewed in the summary within the content from which the summary was derived.
Description

The present invention relates to viewing audio-video content.


The amount of video content is expanding at an ever-increasing rate, some of which includes sporting events. Simultaneously, the available time for viewers to consume or otherwise view all of the desirable video content is decreasing. With the increased amount of video content coupled with the decreasing time available to view the video content, it becomes increasingly problematic for viewers to view all of the potentially desirable content in its entirety. Accordingly, viewers are increasingly selective regarding the video content that they choose to view. To accommodate viewer demands, techniques have been developed to provide a summarization of the video representative in some manner of the entire video. Video summarization likewise facilitates additional features including browsing, filtering, indexing, retrieval, etc. The typical purpose for creating a video summarization is to obtain a compact representation of the original video for subsequent viewing.


There are two major approaches to video summarization. The first approach for video summarization is key frame detection. Key frame detection includes mechanisms that process low level characteristics of the video, such as its color distribution, to determine those particular isolated frames that are most representative of particular portions of the video. For example, a key frame summarization of a video may contain only a few isolated key frames which potentially highlight the most important events in the video. Thus some limited information about the video can be inferred from the selection of key frames. Key frame techniques are especially suitable for indexing video content.
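By way of a concrete, non-limiting illustration of the low-level processing described above, the following sketch selects a key frame by comparing per-frame color histograms. It assumes frames have already been decoded into H×W×3 arrays; the histogram distance used here is one simple choice among many, not a method prescribed by this disclosure.

```python
import numpy as np

def key_frame_index(frames: list, bins: int = 16) -> int:
    """Pick the frame whose color distribution is closest to the
    average distribution of the group -- one simple notion of the
    'most representative' frame."""
    hists = []
    for f in frames:  # each frame: H x W x 3 uint8 array
        h = np.concatenate([
            np.histogram(f[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)
        ]).astype(float)
        hists.append(h / h.sum())  # normalize so frame size is irrelevant
    mean = np.mean(hists, axis=0)
    # Smallest L1 distance to the mean histogram wins.
    return int(np.argmin([np.abs(h - mean).sum() for h in hists]))
```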


The second approach for video summarization is directed at detecting events that are important for the particular video content. Such techniques normally include a definition and model of anticipated events of particular importance for a particular type of content. The video summarization may consist of many video segments, each of which is a continuous portion in the original video, allowing some detailed information from the video to be viewed by the user in a time effective manner. Such techniques are especially suitable for the efficient consumption of the content of a video by browsing only its summary. Such approaches facilitate what is sometimes referred to as “semantic summaries”.


There are numerous computer-based editing systems that include a graphical user interface. For example, U.S. Pat. No. 4,937,685 discloses a system that selects segments from image source material stored on at least two storage media and denotes serially connected sequences of the segments to thereby form a program sequence. The system employs pictorial labels associated with each segment for ease of manipulating the segments to form the program sequence. The composition control function is interactive with the user and responds to user commands for selectively displaying segments from the source material on a pictorial display monitor. The control function allows the user to display two segments, a “from” segment and a “to” segment, and the transition therebetween. The segments can be displayed in a film-style presentation or a video-style presentation directed to the end frame of the “from” segment and the beginning frame of the “to” segment. The system can selectively alternate between the film-style and video-style presentations. Such a system is suitable for a video editing professional to edit image source material and view selected portions of the image in a film-style or video-style presentation. However, such a system is ineffective for consumers of such video content to view the content of the source material in an effective manner.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an exemplary illustration of a graphical user interface for presenting video and a time line.



FIG. 2 is an exemplary illustration of an alternative time line.



FIG. 3 is an exemplary illustration of another alternative time line.



FIG. 4 is an exemplary illustration of yet another alternative time line.



FIG. 5 is an exemplary illustration of another graphical user interface for presenting video and a time line.



FIG. 6 is an exemplary illustration of a graphical user interface for modifying the presentation of the video.



FIG. 7 illustrates different presentation modes.



FIG. 8 illustrates hierarchical data relating to a video.



FIG. 9 is an exemplary illustration of yet another alternative time line.



FIG. 10 is an exemplary illustration of yet another alternative time line.



FIG. 11 is an exemplary illustration of yet another alternative time line.



FIG. 12 is an exemplary illustration of yet another alternative time line.



FIG. 13 illustrates additional navigational options.



FIG. 14 illustrates a regular scanning time line.



FIG. 15 illustrates a summary scanning time line.



FIG. 16 illustrates summary scanning with a thumbnail index of visual indications.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

A typical football game lasts about three hours, of which only about one hour turns out to include time during which the ball is in action. The time during which the ball is in action is normally the exciting part of the game, such as for example, a kickoff, a hike, a pass play, a running play, a punt return, a punt, a field goal, etc. The remaining time during the football game is typically not exciting to watch on video, such as for example, nearly endless commercials, the time during which the players change from offense to defense, the time during which the players walk onto the field, the time during which the players are in the huddle, the time during which the coach talks to the quarterback, the time during which the yardsticks are moved, the time during which the ball is moved to the spot, the time during which the spectators are viewed in the bleachers, the time during which the commentators talk, etc. While it may indeed be entertaining to sit in a stadium for three hours for a one hour football game, many people who watch a video of a football game find it difficult to watch all of the game, even if they are loyal fans. A video summarization of the football video, which provides a summary of the game having a duration shorter than the original football video, may be appealing to many people. The video summarization should provide nearly the same level of excitement (e.g. interest) that the original game provided.


It is possible to develop models of a typical football video to identify potentially relevant portions of the video. Desirable segments of the football game may be selected based upon a “play”. A “play” may be defined as a sequence of events defined by the rules of football. In particular, the sequence of events of a “play” may be defined as the time generally at which the ball is put into play (e.g., a time based upon when the ball is put into play) and the time generally at which the ball is considered out of play (e.g., a time based upon when the ball is considered out of play). Normally the “play” would include a related series of activities that could potentially result in a score (or a related series of activities that could prevent a score) and/or otherwise advance the team toward scoring (or prevent the team from advancing toward scoring).


An example of an activity that could potentially result in a score may include, for example, throwing the ball far down field, kicking a field goal, kicking a point after, and running the ball. An example of an activity that could potentially result in preventing a score may include, for example, intercepting the ball, recovering a fumble, causing a fumble, dropping the ball, and blocking a field goal, punt, or point after attempt. An example of an activity that could potentially advance a team toward scoring may be, for example, running the ball, catching the ball, and an on-side kick. An example of an activity that could potentially prevent the advancement of a team toward scoring may be, for example, tackling the runner, tackling the receiver, and a violation. It is to be understood that the temporal bounds of a particular type of “play” do not necessarily start or end at a particular instance, but rather at a time generally coincident with the start and end of the play or otherwise based upon, at least in part, a time (e.g., event) based upon a play. For example, a “play” starting with the hiking of the ball may include the time at which the center hikes the ball, the time at which the quarterback receives the ball, the time at which the ball is in the air, the time at which the ball is spotted, the time the kicker kicks the ball, and/or the time at which the center touches the ball prior to hiking the ball. A summarization of the video is created by including a plurality of video segments, where the summarization includes fewer frames than the original video from which the summarization was created. A summarization that includes a plurality of the plays of the football game provides the viewer with a shortened video sequence while permitting the viewer to still enjoy the game because most of the exciting portions of the video are provided, preferably in the same temporally sequential manner as in the original football video. Other relevant portions of the video may likewise be identified in some manner. Other types of content, such as baseball, are likewise suitable for similar summarization including the identification of plays.
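As a sketch of the summarization step just described (an illustration only, with hypothetical names): given detected play segments as (start, end) times, the summary is simply the temporally ordered, overlap-merged list of those segments, which necessarily contains fewer frames than the source video.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the original video
    end: float

def build_summary(plays: list) -> list:
    """Order detected 'play' segments temporally and merge overlaps,
    yielding a summary shorter than the original video."""
    ordered = sorted(plays, key=lambda s: s.start)
    merged = []
    for s in ordered:
        if merged and s.start <= merged[-1].end:
            # Overlapping or abutting plays collapse into one segment.
            merged[-1].end = max(merged[-1].end, s.end)
        else:
            merged.append(Segment(s.start, s.end))
    return merged
```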


The present inventors considered the aforementioned identification of a “play” from a video and considered a traditional presentation technique, namely, creation of another video by concatenation of the “play” segments into a single sequence for presentation to the user. In essence, such techniques mask any underlying description data regarding the video, such as data relating to those portions to include, and provide an extracted composite. The data may be, for example, time point/duration data and structured textual or binary descriptions (e.g., XML documents that comply with MPEG-7 and TV-Anytime standards). While such a presentation may be suitable for passive viewing, the present inventors consider it inadequate for effective consumption of audiovisual material by a user. The user does not have the ability to conceptualize the identified subset of the program in the context of the full program. This ability is important because the user needs to create a mental model of the temporal event relationships of the program being consumed (e.g., watched). For example, viewing a simple composite of a slam-dunk summary is a limited experience for viewing a sequence of events. In particular, the present inventors consider that a graphical user interface illustrating the temporal information regarding the location of the video segments within the original video enhances the viewing experience of the user and provides an improved dimension to the viewing experience.


Referring to FIG. 1, the system may present the video content to the user in one or more windows 20 and may present a corresponding time line 30, which may be referred to generally as temporal information, representative of the entire video or a portion thereof with the identified play segments 32 or otherwise identified thereon. The segments 32 may relate to any particular type of content, such as for example, interesting events, highlights, plays, key frames, events, and themes. A graphical indicator 35 illustrates the location in the time line 30 that corresponds with the presently displayed video. The system may present the play segments 32 in order from the first segment 34 to the last segment 36. The regions between the play segments 32 relate to non-play regions 38, which are typically not viewed when presenting a summarization of the video consisting of play segments 32. The time line 30 may be a generally rectangular region where each of the plurality of segments 32 is indicated within the generally rectangular region, preferably with the size of each of the plurality of segments indicated in a manner such that the plurality of segments with a greater number of frames are larger than the plurality of segments with a lesser number of frames. Also, the size of the regions 38 between each of the plurality of segments may be indicated in a manner such that the regions 38 with a greater number of frames are larger than the regions with a lesser number of frames. Moreover, the sizes of each region 38 and segment 32 are preferably generally consistent with the length of time of the respective portions of the video. The indicator changes location relative to the time line as the currently displayed portion of the video changes.
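The proportional geometry described above reduces to a linear mapping from media time to pixels. A minimal sketch, assuming a time line of fixed pixel width (all names are illustrative):

```python
def segment_spans(segments, video_len_s, width_px):
    """Map (start, end) segments, in seconds, to pixel spans whose widths
    are proportional to their share of the total running time."""
    return [(round(start / video_len_s * width_px),
             round(end / video_len_s * width_px))
            for start, end in segments]

def indicator_x(current_time_s, video_len_s, width_px):
    """Pixel position of the indicator 35 for the currently shown frame."""
    return round(current_time_s / video_len_s * width_px)
```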


In an alternative embodiment, the relevant segments may be identified in any manner and relate to any parts of the video that are potentially of interest to a viewer with the total of the identified segments being less than the entire video. In essence, a plurality of segments of the video are identified in some manner. Referring to FIGS. 2, 3, and 4, alternative representations of the time line 30 for the video and segments of potential interest are illustrated.


While the described system is suitable for indicating those portions of the video that are likely desirable for the user, the particular type of content that the time line indicates is unknown to the viewer. For example, during a basketball game the time line may select a large number of good defensive plays and only a few slam dunks. However, the particular viewer may be more interested in the slam dunks, and accordingly, will have to watch a significant series of undesired good defensive plays in order to watch the few slam dunks. Moreover, the system provides the viewer with no indication of when such slam dunks may occur, or whether all of the slam dunks for a particular video have already occurred. To overcome this limitation, the present inventors came to the realization that the time line should not only indicate those portions that are potentially desirable for the viewer, but also provide some indication of what type of content is represented by different portions of the time line. The indication may indicate simply that different portions relate to different content, without an identification of the content itself.


Referring to FIG. 5, the time line 48 may indicate a first type of content with first visual indications 50, a second type of content with second visual indications 52, and a third type of content with third visual indications 54. Additional visual indications may likewise be used, if desired. Moreover, the indications may be provided in any visually identifiable manner, such as color, shade, hatching, blinking, flashing, outlined, normal bands, grey scale bands, multi-colored bands, multi-textured bands, multi-height bands, etc. To provide further interactivity with the video, the system may provide a selectable indicator 56 that indicates the current position within the time line, which may be referred to generally as temporal information, of a currently displayed portion of the video. This permits the user to have a more accurate mental model of the temporal-event relationships of the program they are viewing and interact therewith.


The selectable indicator 56 changes location relative to the time line 48 as the currently displayed portion of the video changes. The user may select the selectable indicator 56, such as by using a mouse or other pointing device, and move the selectable indicator 56 to a different portion of the video. Upon moving the selectable indicator 56, the video being presented changes to the portion of the video associated with the modified placement of the selectable indicator 56. This permits the user to select those portions of the video that are currently of the greatest interest and exclude those that are less desirable. The user may modify the location of the selectable indicator 56 to any other location on the time line 48, including other indicated portions 50, 52, 54, and the regions in between. Typically, the presentation of the video continues from the modified location.
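Seeking from the indicator is the inverse mapping: pixels back to media time. A sketch, where `player` stands in for whatever playback API an implementation uses (its `seek`/`play` calls are assumptions, not a specified interface):

```python
def on_indicator_drop(x_px, video_len_s, width_px, player):
    """Convert the dropped indicator position back to a media time and
    continue presentation from that point."""
    t = max(0.0, min(video_len_s, x_px / width_px * video_len_s))
    player.seek(t)   # hypothetical playback API
    player.play()
```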


The system may include a set of selectors 58 that permits the user to select which portions of the video should be included in the summarized presentation. For example, if the slow motion segments are not desired, then the user may unselect the slow motion box 58 and the corresponding slow motion regions of the time line 48 will be skipped during the summary presentation. However, it is preferred that the slow motion portions are still indicated on the time line 48, while not presented to the user in the summary presentation.
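A sketch of the selector behavior, under the assumption that each segment carries a type label: unselected types are removed from the playback queue but remain on the drawn time line.

```python
def playback_queue(segments, enabled_types):
    """segments: (start, end, type) triples. All segments stay visible on
    the time line; only segments of enabled types are queued for playback."""
    return [s for s in segments if s[2] in enabled_types]

# e.g., the user unchecks the "slow motion" box 58:
# queue = playback_queue(all_segments, {"play", "replay"})  # no "slow_motion"
```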


Referring to FIG. 6, a time line 70 may include layered visual bands. The layered visual bands may indicate overlapping activities (e.g., two different characterizations of the content of the video that are temporally overlapping), such as for example, the team that is in possession of the ball and the type of play that occurred, such as a slam dunk. For purposes of illustration, indicated portions 72 may be team A in possession and indicated portions 74 may be team B in possession. Also, the indicated portions 76 and 78 may be representative of different types of content.
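One way (of many) to model the layered bands: keep each characterization as its own row of labeled intervals, so temporally overlapping descriptions never collide. The sample data below is invented for illustration.

```python
layers = {
    "possession": [(0.0, 310.0, "team A"), (310.0, 545.0, "team B")],
    "play type":  [(95.0, 120.0, "slam dunk"), (400.0, 430.0, "three pointer")],
}

def layer_bands(intervals, video_len_s, width_px):
    """Convert one layer's labeled intervals into labeled pixel bands;
    each layer is then drawn as its own horizontal strip."""
    return [(round(a / video_len_s * width_px),
             round(b / video_len_s * width_px), label)
            for a, b, label in intervals]
```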


The potential importance of displaying multiple different types of content, each having a visually distinguishable identifier, within the context of the video may be illustrated by the following example. Three-point summary segments in the game of basketball made toward the end of the game have more significance, and the possession summary provides the user context about each of the three-point segments without having to view the preceding portions. In essence, the three-point segments alone reveal limited contextual information, but taken in combination with the entire program time line and the overlaid “possession” summary, the summary provides a context that supports the temporal-event relationship model.


As previously indicated, the interface may support changing the current playback position of the video. More than merely permitting the user to select a new position in the video, the present inventors determined that other navigational options may be useful in the environment of presenting audiovisual materials. The other navigational modes should correspond to a consistent set of behaviors.


Referring to FIG. 7, the system may include a strong sense mode which, if selected, modifies the functionality of the selectable indicator 56. In the strong sense mode, the user may modify the location of the selectable indicator 56 to another position. In the event that the user selects a location within a region between the indicated segments, the system automatically relocates the selectable pointer 56 to the closest start of the indicated segments. Alternatively, the system may automatically relocate the selectable pointer 56 to the next indicated segment, or the previous indicated segment. In the event that the user selects a location within an indicated segment, the system automatically relocates the selectable pointer 56 to the start of the indicated segment. In essence, the system assists the user in relocating the selectable pointer 56 to the start of one of the indicated segments. After viewing the selected indicated segment, the system goes to the next indicated segment, and so on, until presenting the last temporally indicated segment. In this manner the regions between the indicated segments will not be inadvertently viewed. This is also useful for summaries of short events occurring in a relatively long video, because the resolution of the cursor may make it difficult to manually position the indicator at the beginning of a segment.


The system may also include a mild sense mode which, if selected, modifies the functionality of the selectable indicator 56. In the mild sense mode, the user may modify the location of the selectable indicator 56 to another position. In the event that the user selects a location within a region between the indicated segments, the system automatically relocates the selectable pointer 56 to the closest start of the indicated segments. Alternatively, the system may automatically relocate the selectable pointer 56 to the next indicated segment, or the previous indicated segment. In the event that the user selects a location within an indicated segment, the system does not relocate the selectable pointer 56 within the indicated segment. In essence, the system assists the user in relocating the selectable pointer 56 to the start of one of the indicated segments if located between indicated segments and otherwise does not relocate the indicator. After viewing the selected indicated segment, the system goes to the next indicated segment, and so on, until presenting the last temporally indicated segment. In this manner the regions between the indicated segments will not be inadvertently viewed. This is also useful for summaries of reasonably long events occurring in a relatively long video, because the viewer may not desire to view the entire event.


The system may also include a weak sense mode which, if selected, modifies the functionality of the selectable indicator 56. In the weak sense mode, the user may modify the location of the selectable indicator 56 to another position. In the event that the user selects a location within a region between the indicated segments, the system does not relocate the selectable pointer 56 to the closest start of the indicated segments. In the event that the user selects a location within an indicated segment, the system does not relocate the selectable pointer 56 within the indicated segment. In essence, the system does not assist the user in relocating the selectable pointer 56 to the start of one of the indicated segments, nor does it relocate the selectable pointer 56 within the region between indicated segments. After viewing the selected indicated segment, or otherwise the region between the indicated segments, the system goes to the next indicated segment, and so on, until presenting the last temporally indicated segment. In this manner the regions between the indicated segments are viewable while maintaining the summary characteristics. This is also useful for regions between indicated summaries that may be of potential interest to the viewer.


The system may also include a no sense mode which, if selected, modifies the functionality of the selectable indicator 56. In the no sense mode, the user may modify the location of the selectable indicator 56 to another position. In the event that the user selects a location within a region between the indicated segments, the system does not relocate the selectable pointer 56 to the closest start of the indicated segments. In the event that the user selects a location within an indicated segment, the system does not relocate the selectable pointer 56 within the indicated segment. In essence, the system does not assist the user in relocating the selectable pointer 56 to the start of one of the indicated segments, nor does it relocate the selectable pointer 56 within the region between indicated segments. After viewing the selected indicated segment, or otherwise the region between the indicated segments, the system continues to present the video in temporal order, including regions between the indicated segments. In this manner the regions between the indicated segments, together with the indicated segments, are viewable while maintaining the temporal graphical interface. It is to be understood that other navigational modes may likewise be used, as desired.
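The four modes differ chiefly in when a requested position is snapped to a segment start (they also differ in what happens once a segment finishes playing, which this seek helper does not model). A compact sketch of the snapping rules, with illustrative names:

```python
from enum import Enum

class Sense(Enum):
    STRONG = 1  # always snap to a segment start
    MILD = 2    # snap only when dropped between segments
    WEAK = 3    # never snap; gaps are skipped after a segment ends
    NONE = 4    # never snap; playback continues through gaps

def resolve_seek(t, segments, mode):
    """Apply the selected sense mode to a requested seek time t.
    segments: temporally ordered (start, end) pairs in seconds."""
    inside = next((s for s in segments if s[0] <= t < s[1]), None)
    if mode is Sense.STRONG:
        return inside[0] if inside else _nearest_start(t, segments)
    if mode is Sense.MILD and inside is None:
        return _nearest_start(t, segments)
    return t  # MILD inside a segment, WEAK, NONE: honor the request

def _nearest_start(t, segments):
    return min((s[0] for s in segments), key=lambda start: abs(start - t))
```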


The present inventors came to the realization that descriptions related to video content may include summarization data and preferences, such as the MPEG-7 standard and the TV-Anytime standard. These descriptions may also include navigational information. Moreover, the data within the descriptions may be hierarchical in nature, such as shown in FIG. 8. The most rudimentary presentation of this data is to instantiate a single sequence or branch from the full collection. For instance, presenting a summary of the “slam dunks” for a basketball game. One technique for the presentation of the hierarchical material is to indicate each segment on the time line and thereafter present the sequence, as previously described. After considering the hierarchical nature of the data and the time line presentation of the video material, it was determined that the visual indications on the time line may be structured to present the hierarchical information in a manner that retains a portion of the hierarchical structure. Referring to FIG. 9, one manner of maintaining a portion of the hierarchical structure is to graphically present the information in ever increasing specificity where at least two levels of the hierarchy, preferably different levels, are presented in an overlapping manner. For example, in baseball the time line may include data from the innings 80, the team at bat 82 (e.g., team A, team B), and the plays 84 which may be further differentiated. In the event that the data has hierarchical or non-hierarchical temporal information with overlapping time periods, the temporal information may be displayed in such a manner to maintain the differentiation of the overlapping time periods.
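A toy sketch of how hierarchical description data might be flattened into the overlapping rows of FIG. 9, assuming the description has already been parsed into nested dictionaries (the MPEG-7/TV-Anytime parsing itself is outside this sketch, and the sample data is invented):

```python
description = {
    "inning 1": {
        "team A at bat": {"single": (12.0, 21.0), "strikeout": (40.0, 52.0)},
        "team B at bat": {"home run": (95.0, 110.0)},
    },
}

def bands_at_depth(node, depth, label=""):
    """Flatten one hierarchy level into labeled (start, end) bands, so
    innings, at-bats, and plays can each become a time line row."""
    if depth == 0 or not isinstance(node, dict):
        lo, hi = _span(node)
        return [(lo, hi, label)]
    bands = []
    for name, child in node.items():
        bands += bands_at_depth(child, depth - 1, name)
    return bands

def _span(node):
    """Earliest start and latest end beneath a node."""
    if isinstance(node, tuple):
        return node
    spans = [_span(child) for child in node.values()]
    return min(s[0] for s in spans), max(s[1] for s in spans)

# bands_at_depth(description, 1) -> innings row
# bands_at_depth(description, 2) -> at-bat row; depth 3 -> individual plays
```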


In general, the time line may include multiple layers in a direction perpendicular to the length of the time line. This multiple-layer representation permits more information regarding the content of the video to be presented to the user in a more compact form and consistent format. The layers may be of different widths and heights, as desired. Also, the techniques for presenting the information in the time line may be associated with a particular layer of the time line. These layers may be managed, in the graphical user interface, as windows that may be minimized, reordered, shrunk or expanded, highlighted differently, etc. Also, the time line layering allows the particular presentation technique for each layer to be dynamically reconfigured by the user.


Referring to FIG. 10, to further annotate the time line, textual information may be included therein. The textual information may, for example, include the name of the summary segment overlaid on the associated band in the time line. For example, in a football game, the current “down” may be shown. Referring to FIG. 11, textual information may also be presented as floating windows that pop up when the user brings the cursor over the associated segment. For example, in a baseball game, the user may move the cursor over the player-at-bat summary to learn who is batting in each segment, etc. Referring to FIG. 12, audible information may be presented together with the presentation of the video and temporal information. For instance, in a baseball game, the last-pitch-for-player-at-bat and the last-pitch-of-inning may be associated with distinct audio clips that are played back at the beginning of, or otherwise associated with, these particularly interesting plays.


The techniques discussed herein may likewise be applied to audio content, such as for example, a song, a group of songs, or a classical music symphony. Also, the techniques discussed herein may likewise be applied to audio broadcasts, such as commentary from national public radio or “books on tape”. For example, the first paragraph, medical paragraphs, topical information, etc. may be summarized. Moreover, the techniques discussed herein may likewise be applied to audio/visual materials.


The strong sense, mild sense, weak sense, and no sense (see FIG. 7) navigation selections permit enhanced interactivity with the audiovisual material. However, such navigational selections are cumbersome and may not provide the functionality that may be desired by consumers of audiovisual materials. To provide an enhanced experience to consumers of audiovisual summaries additional navigational functionality should be provided, where the functionality is associated with the visual interface presented to the user.


Referring to FIG. 13, a summary/normal button 100 selection is provided to enable the user to select between the summary presentation (e.g., primarily the summary materials) and the normal presentation (e.g., include both the summary materials and non-summary materials) of the audiovisual materials. A play/pause button 102 begins playback from the current position or pauses the playback at the current position if the program is already playing. A reverse skip button 104 and a forward skip button 106 cause the program to skip rearward or skip forward in the audiovisual content a predetermined time duration or otherwise to another summary portion.


To reduce the time necessary for a user to consume a program, the user may use a forward scan button 108 or a reverse scan button 110. Referring to FIG. 14, the forward scan button 108, when coupled with the normal playback 100, may use a predetermined period of time to determine the amount to advance 120 and another predetermined period of time for the short playback portion 122. In essence, each portion is displayed briefly before jumping to the next segment, unless the user decides to terminate the scan and resume either normal or summary playback. It will be noted that this technique does not make use of the program summary description.


Referring to FIG. 15, the forward scan button 108, when coupled with the summary playback 100, may use the summary description depicted in the scroll bar to determine the amount to advance 124 and another predetermined period of time for the short playback portion 126. In essence, each summary portion is displayed briefly before jumping to the next segment, unless the user decides to terminate the scan and resume either normal or summary playback. It will be noted that this technique makes use of the program summary description. Different techniques may be used to determine the offset into the program segment as well as the duration of the playback. For example, the offset and duration may be based on the program description, or they may be based on a statistical analysis of the segment time boundaries. The example shown in FIG. 15 illustrates an offset of zero seconds (n) and a playback duration of an arbitrary number of seconds. That is, the viewer previews the first n seconds of each of the summary segments.
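A sketch of summary scanning under the description above: preview a short portion of each segment at a fixed offset (FIG. 15 uses an offset of zero). The `player` object and its methods are stand-ins, not a defined API.

```python
def summary_scan(segments, offset_s, preview_s, player):
    """Jump to each summary segment in turn and play a short preview,
    until the user cancels the scan."""
    for start, end in segments:
        t = min(start + offset_s, end)
        player.seek(t)                            # hypothetical API
        player.play_for(min(preview_s, end - t))  # hypothetical API
        if player.scan_cancelled():               # e.g., user pressed play/skip
            break
```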


Another technique to dynamically determine the offset and duration may be by permitting the user to configure the scanning parameters. For instance, the user may press the play or skip button prior to activating the scan operation. Then if the time between pressing the play button (or skip) and pressing the scan button is within a reasonable range, this duration may be used as the scan playback duration parameter. Alternatively, the user may manually select the duration and/or offset parameter. Similarly, the same techniques may be used for the reverse scan button 110.


The user interface may likewise permit the configuration of other scanning operations. For example, the scanning modes may be activated by pressing the skip buttons 104 or 106 for greater than a “hold” period of time, or the skip buttons 104 or 106 may have a “repeat key” behavior that is equivalent to being in the respective scan modes. The scan modes may be used as a fundamental technique for consuming the program, or as a rapid advance feature which will position the program for further operations. The scan mode may be terminated by any suitable action, such as for example, pressing another button while in the scan mode and/or activating another navigational option (e.g., play, reverse skip, forward skip, etc.).


A navigation example is described, for purposes of illustration, with respect to a baseball viewer who is interested in advancing to and watching all the plays of the game in which their favorite player is playing.

    • (a) The viewer activates the forward scanning mode by pressing the scan button. The viewer watches the program, waiting to detect their favorite player in the action, at which point they enter normal playback mode by pressing the play button.
    • (b) The game is then played back at normal rate without skipping or scanning anything. When the player is no longer in the action, the user may return to step (a), or they may,
    • (c) enter summary playback mode by pressing the summary/normal button 100. The game is played back in summary mode, just displaying the program summary segments. When the game becomes dull the user may return to step (a). Or, if the favorite player returns to action, the user may
    • (d) re-enter normal (default) playback mode by pressing the summary/normal button. This puts the user back into step (b).


The combined effect of the improved navigational functionality together with the visual information provides a powerful user interface paradigm. Several effects may be realized, such as for example, (a) the visual cues facilitate the navigational process of finding specific program locations, (b) the combination of visual cues and navigation components conveys an impression of the “big picture”, i.e., the essence of the whole time line, and (c) the combination forms a feedback loop where the visual cues provide the intuitive feedback for the operation of the navigation controls. As may be appreciated, the visual cues reinforce the commands and operations activated by the user, giving strong feedback to the user. For instance, as the user activates the scanning operation, they will observe the scroll bar behavior depicting the scanning action. This, in conjunction with the constantly updating main viewing area, gives a clear impression to the user of exactly what the system is doing. This likewise gives the user a stronger sense of control over the viewing experience.


Referring to FIG. 16, the indexed mode of the program summaries may likewise be associated with thumbnail images that are graphical indices into the program time line, which further enhance the viewing experience. The thumbnail images are associated with respective summary segments, and may be key frames if desired. In addition, the thumbnails presented may be dynamically modified to illustrate a selected set proximate the portion of the program currently being viewed. Also, the thumbnail associated with the summary segment currently being viewed may be highlighted.


As may be observed, during normal playback the program will highlight thumbnails at a rate based on the different gaps between each segment, which is typically irregular. However, when the program is played back in summary scanning mode, the highlighted thumbnails will advance at a regular pace from segment to segment. This regular (or linear) advancing of the thumbnail indices is a graphical mapping of the irregular (non-linear) advancement of the actual program. That is, the program is playing back in an irregular sequence, while the visual cues are advancing at a regular rate.
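The regular/irregular mapping can be seen directly in a small sketch: the thumbnail index advances by one per step while the media time jumps by whatever gap separates the segments.

```python
def scan_steps(segments):
    """Yield (thumbnail_index, seek_time) pairs for a summary scan: the
    index advances linearly while the seek times advance irregularly."""
    for index, (start, _end) in enumerate(segments):
        yield index, start  # highlight thumbnail `index`, seek to `start`
```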


The various navigational operations described herein, expanded by their specific configuration parameters, make possible a large number of complex navigation sequences. Depending on the user, the program genre, and/or the perspective the user has on a particular game (or program), there may be a wide variety of combinations that the user would like to include in a “macro” type navigation function (or button). A customized button (or function) may be provided for the user to perform a desirable sequence of operations. A sample list of navigation operations and their configuration parameters is illustrated below:













Navigation Operation    Configuration Parameters

Regular Skip            Direction
                        Period of time to advance/retreat
                        Audio and video fade in periods

Smart Skip              Direction
                        Number of segments to advance/retreat
                        Segment “theme” patterns (used to filter segments within summary)
                        Period of time to offset into segment
                        Base of offset (start or end)
                        Audio and video fade in periods

Regular Scan            Direction
                        Period of time to advance/retreat
                        Period of time to playback
                        Audio and video fade in and fade out periods

“Smart” Scan            Direction
                        Number of segments to advance/retreat
                        Segment “theme” patterns
                        Period of time to offset into segment
                        Base of offset (start or end)
                        Period of time to playback
                        Audio and video fade in and fade out periods

Play                    Duration
                        Smart or default playback mode

Pause                   Duration
One example of a personalized navigational control is a button configured to “replay the last two seconds of the segment previously viewed.” This macro button could be implemented as follows: smart skip, in reverse, one segment, no theme change, offset two seconds, from end of segment, with zero fade in; play, for two seconds, in default mode; and resume prior navigation operation.
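A sketch of how such a macro might be expressed as data, using the table's operations as primitives; the operation and parameter names are illustrative only.

```python
replay_last_two_seconds = [
    ("smart_skip", {"direction": "reverse", "segments": 1,
                    "theme": None, "offset_s": 2,
                    "offset_base": "end", "fade_in_s": 0}),
    ("play", {"duration_s": 2, "mode": "default"}),
    ("resume_prior_navigation", {}),
]

def run_macro(macro, controller):
    """controller is assumed to implement one method per primitive."""
    for op, params in macro:
        getattr(controller, op)(**params)
```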

Claims
  • 1. A method of presenting information regarding a video comprising a plurality of frames comprising: (a) summarizing a video, said summarization comprising a plurality of segments of said video, based upon an event characterized by a semantic event that includes a sports play, where each of said segments includes a plurality of sequential frames of said video;(b) displaying said summarization in a first portion of a display; and(c) displaying a graphical user interface on a second portion of said display, simultaneously with said summarization, said interface sequentially indicating the relative location of each of said plurality of segments within said summarization relative to at least one other of said segments as each of said plurality of segments is displayed, each of said plurality of segments represented by a bounded spatial region on said second portion of said display, said bounded spatial region having a respective size based on the number of sequential frames included in the respective segment represented by said bounded spatial region;(d) displaying to a user said relative location for a first semantic characterization of a said sports play in said video using a first visual indication and displaying said relative location for a second semantic characterization of a said sports play in said video using a second visual indication different from said first visual indication, where said first and second semantic characterizations are each individually distinguishable when their associated visual indications graphically overlap;(e) receiving from said user, by interaction with said graphical user interface, a selection of one of said plurality of segments;(f) in response to said selection, presenting a selected one of said plurality of segments and not presenting at least one other of said plurality of segments; and(g) wherein a user selects a portion of said video not included within said plurality of segments, in response thereto, said system presents said selected portion not included within said plurality of segments, and wherein after presenting said selected portion not included within said plurality of segments presents said selected plurality of segments in temporal order without portions of said video not included within said plurality of segments, and wherein said user selects a portion of said video included within said plurality of segments, in response thereto, said system presents said portion of said video within said plurality of segments.
  • 2. The method of claim 1 wherein said first and second semantic characterizations of a said sports play temporally overlap in said summarization.
  • 3. The method of claim 1 wherein said graphical user interface includes a generally rectangular region where each of said plurality of segments is indicated within said generally rectangular region.
  • 4. The method of claim 1 wherein the size of each of said plurality of segments is indicated in a manner such that said plurality of segments with a greater number of frames are larger than said plurality of segments with a lesser number of frames.
  • 5. The method of claim 4 wherein the size of the regions between each of said plurality of segments is indicated in a manner such that said regions between with a greater number of frames are larger than said plurality of segments with a lesser number of frames.
  • 6. The method of claim 4 where said user selects one of said plurality of segments by interacting with said graphical user interface at a point within the displayed bounded spatial region associated with the selected one of said plurality of segments.
  • 7. The method of claim 6 wherein presentation of a selected one of said plurality of segments begins at the first frame of said segment irrespective of which point within said displayed bounded spatial region that said user interacted with.
  • 8. The method of claim 7 including a user-moveable scroll bar on said graphical user interface indicating the relative temporal location of currently-presented frames of said summary, wherein said user selects one of said plurality of segments by moving said scroll bar over the selected one of said plurality of segments, and where said scroll bar snaps to the beginning of the selected one of said plurality of segments.
  • 9. The method of claim 6 wherein presentation of a selected one of said plurality of segments begins at a frame of said segment temporally corresponding to the point within said displayed bounded spatial region that said user interacted with.
  • 10. The method of claim 6 including a selector by which said user may alternatively select a chosen one of (i) presentation of a selected one of said plurality of segments beginning at the first frame of said segment irrespective of which point within said displayed bounded spatial region that said user interacted with; and (ii) presentation of a selected one of said plurality of segments beginning at a frame of said segment temporally corresponding to the point within said displayed bounded spatial region that said user interacted with.
  • 11. The method of claim 1 wherein at least two of said plurality of segments are temporally overlapping.
  • 12. The method of claim 11 wherein said temporally overlapping segments are visually indicated in a manner such that each of said overlapping segments are independently identifiable.
  • 13. The method of claim 1 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments.
  • 14. The method of claim 13 wherein said one of said plurality of segments is the segment most temporally adjacent to said portion of said video.
  • 15. The method of claim 13 wherein said one of said plurality of segments is the next temporally related segment.
  • 16. The method of claim 13 wherein said one of said plurality of segments is the previous temporally related segment.
  • 17. The method of claim 1 wherein a user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video from the start thereof.
  • 18. The method of claim 1 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video within said plurality of segments.
  • 19. The method of claim 1 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video within said plurality of segments starting from the beginning thereof.
  • 20. The method of claim 1 wherein said temporal information is hierarchical and is displayed in such a manner to retain a portion of its hierarchical structure.
  • 21. The method of claim 1 wherein said temporal information relates to overlapping time periods and said temporal information is displayed in such a manner to maintain the differentiation of said overlapping time periods.
  • 22. The method of claim 1 wherein said temporal information is displayed within a time line, wherein the temporal information is presented in a plurality of layers in a direction perpendicular to the length of said time line.
  • 23. The method of claim 1 wherein said temporal information is displayed within a time line, wherein textual information is included within said time line.
  • 24. The method of claim 1 wherein said temporal information is displayed within a time line, wherein additional textual information is displayed upon selecting a portion of said time line.
  • 25. The method of claim 1 wherein said temporal information is displayed together with a time line, wherein additional textual information is displayed together with selecting a portion of said time line.
  • 26. The method of claim 1 wherein said temporal information is displayed within a time line, wherein additional audio annotation is presented upon presenting a portion of said time line.
  • 27. A method of presenting information regarding a video comprising a plurality of frames comprising: (a) identifying a plurality of different segments of said video, where each of said segments includes a plurality of frames of said video;(b) displaying, simultaneously with a said segment of said video, a graphical user interface including information regarding the temporal location of a said segment relative to at least one other of said segments of said video;(c) displaying in an interactive display said temporal location for a first semantic characterization of an event in said video using a first visual indication and displaying said temporal location for a second semantic characterization of an event in said video using a second visual indication different from said first visual indication, where said first and second semantic characterizations are each individually distinguishable when their associated visual indications graphically overlap;(d) displaying to a user at least one selector by which said user may interact with said interactive display to select for viewing selective identified ones of said plurality of segments;(e) receiving user-selections of identified ones of said plurality of different segments;(f) presenting user-selected ones of said plurality of different segments; and(g) wherein a user selects a portion of said video not included within said plurality of different segments, in response thereto, said system presents said selected portion not included within said plurality of different segments, and wherein after presenting said selected portion not included within said plurality of different segments presents said selected plurality of different segments in temporal order without portions of said video not included within said plurality of different segments, and wherein said user selects a portion of said video included within said plurality of different segments, in response thereto, said system presents said portion of said video within said plurality of different segments.
  • 28. The method of claim 27 wherein said graphical user interface includes a generally rectangular region where each of said plurality of segments is indicated within said generally rectangular region.
  • 29. The method of claim 27 wherein the size of each of said plurality of segments is indicated in a manner such that said plurality of segments with a greater number of frames are larger than said plurality of segments with a lesser number of frames.
  • 30. The method of claim 29 wherein the size of the regions between each of said plurality of segments is indicated in a manner such that said regions between with a greater number of frames are larger than said plurality of segments with a lesser number of frames.
  • 31. The method of claim 27 further comprising an indicator that indicates the current position within said temporal information of a currently displayed portion of said video.
  • 32. The method of claim 31 wherein said indicator changes location relative to said temporal information as the portion of said currently displayed portion of said video changes.
  • 33. The method of claim 27 further comprising (a) indicating with an indicator the current position within said temporal information of a currently displayed portion of said video; and(b) modifying the position of said indicator within said temporal information which modifies the displayed portion of said video.
  • 34. The method of claim 33 wherein said indicator is modified to a portion of said video that is not included within said plurality of segments.
  • 35. The method of claim 27 wherein at least two of said plurality of segments are temporally overlapping.
  • 36. The method of claim 35 wherein said temporally overlapping segments are visually indicated in a manner such that each of said overlapping segments are independently identifiable.
  • 37. The method of claim 27 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments.
  • 38. The method of claim 37 wherein said one of said plurality of segments is the segment most temporally adjacent to said portion of said video.
  • 39. The method of claim 37 wherein said one of said plurality of segments is the next temporally related segment.
  • 40. The method of claim 37 wherein said one of said plurality of segments is the previous temporally related segment.
  • 41. The method of claim 27 wherein a user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video from the start thereof.
  • 42. The method of claim 27 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video within said plurality of segments.
  • 43. The method of claim 27 wherein a user selects a portion of said video not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said video included within said plurality of segments, wherein in response thereto, said system presents said portion of said video within said plurality of segments starting from the beginning thereof.
  • 44. The method of claim 27 wherein said temporal information is hierarchical and is displayed in such a manner to retain a portion of its hierarchical structure.
  • 45. The method of claim 27 wherein said temporal information relates to overlapping time periods and said temporal information is displayed in such a manner to maintain the differentiation of said overlapping time periods.
  • 46. The method of claim 27 wherein said temporal information is displayed within a time line, wherein the temporal information is presented in a plurality of layers in a direction perpendicular to the length of said time line.
  • 47. The method of claim 27 wherein said temporal information is displayed within a time line, wherein textual information is included within said time line.
  • 48. The method of claim 27 wherein said temporal information is displayed within a time line, wherein additional textual information is displayed upon selecting a portion of said time line.
  • 49. The method of claim 27 wherein said temporal information is displayed together with a time line, wherein additional textual information is displayed together with selecting a portion of said time line.
  • 50. The method of claim 27 wherein said temporal information is displayed within a time line, wherein additional audio annotation is presented upon presenting a portion of said time line.
  • 51. The method of claim 27 wherein a user selectable skip function skips a set of frames to a modified location of said video in at least one of a forward temporal direction or a reverse temporal direction, and displays said video at said modified location.
  • 52. The method of claim 27 wherein a user selectable skip function skips to a later temporal segment or a previous temporal segment, and displays said video at said later temporal segment or said previous temporal segment, respectively.
  • 53. The method of claim 27 wherein a user selectable scan function skips a set of frames to a modified location of said video in at least one of a forward temporal direction or a reverse temporal direction, and displays said video at said modified location, and thereafter automatically skips another set of frames to another modified location of said video in at least one of said forward temporal direction or said reverse temporal direction, and displays said video at said another modified location.
  • 54. The method of claim 53 wherein at least one of said forward temporal direction and said reverse temporal direction are consistent with said different segments.
  • 55. The method of claim 54 wherein said display of said video is at the start of the respective one of said different segments.
  • 56. The method of claim 54 wherein said display of said video is at a predetermined offset within the respective one of said different segments.
  • 57. The method of claim 56 wherein said respective image associated with the currently presented said different segments is visually highlighted.
  • 58. The method of claim 27 wherein said graphical user interface displays a respective image associated with at least a plurality of said different segments.
  • 59. The method of claim 58 wherein during presentation of said video said visually highlighted respective images are said highlighted in a substantially regular interval while the sequence of said presentation of said video is at substantially irregular intervals.
  • 60. A method of presenting information regarding audio comprising: (a) identifying a plurality of different segments of said audio, where each of said segments includes a temporal duration of said audio;(b) displaying simultaneously with said segment of said audio a graphical user interface including information regarding the temporal location of a said segment relative to at least one other of said segment of said audio;(c) displaying in an interactive display said temporal location for a first semantic characterization of an event in said audio using a first visual indication and displaying said temporal location for a second semantic characterization of an event in said audio using a second visual indication different from said first visual indication, where said first and second semantic characterizations are each individually distinguishable when their associated visual indications graphically overlap;(d) displaying to a user at least one selector by which said user may interact with said display to select for listening selective identified ones of said plurality of segments;(e) receiving user-selections of identified ones of said plurality of different segments; and(f) presenting user-selected ones of said plurality of different segments; and(g) wherein a user selects a portion of said audio not included within said plurality of different segments, in response thereto, said system presents said selected portion not included within said plurality of different segments, and wherein after presenting said selected portion not included within said plurality of different segments presents said selected plurality of different segments in temporal order without portions of said audio not included within said plurality of different segments, and wherein said user selects a portion of said audio included within said plurality of different segments, in response thereto, said system presents said portion of said audio within said plurality of different segments.
  • 61. The method of claim 60 further comprising (a) indicating with an indicator the current position within said temporal information of a currently displayed portion of said audio; and (b) modifying the position of said indicator within said temporal information which modifies the displayed portion of said audio.
  • 62. The method of claim 61 wherein said indicator is modified to a portion of said audio that is not included within said plurality of segments.
  • 63. The method of claim 60 wherein at least two of said plurality of segments are temporally overlapping.
  • 64. The method of claim 63 wherein said temporally overlapping segments are visually indicated in a manner such that each of said overlapping segments are independently identifiable.
  • 65. The method of claim 60 wherein a user selects a portion of said audio not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments.
  • 66. The method of claim 65 wherein said one of said plurality of segments is the segment most temporally adjacent to said portion of said audio.
  • 67. The method of claim 65 wherein said one of said plurality of segments is the next temporally related segment.
  • 68. The method of claim 65 wherein said one of said plurality of segments is the previous temporally related segment.
  • 69. The method of claim 60 wherein a user selects a portion of said audio included within said plurality of segments, wherein in response thereto, said system presents said portion of said audio from the start thereof.
  • 70. The method of claim 60 wherein a user selects a portion of said audio not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said audio included within said plurality of segments, wherein in response thereto, said system presents said portion of said audio within said plurality of segments.
  • 71. The method of claim 60 wherein a user selects a portion of said audio not included within said plurality of segments, wherein in response thereto, said system presents one of said plurality of segments, and wherein said user selects a portion of said audio included within said plurality of segments, wherein in response thereto, said system presents said portion of said audio within said plurality of segments starting from the beginning thereof.
  • 72. The method of claim 60 wherein said temporal information is hierarchical and is displayed in such a manner as to retain a portion of its hierarchical structure.
  • 73. The method of claim 60 wherein said temporal information relates to overlapping time periods and said temporal information is displayed in such a manner as to maintain the differentiation of said overlapping time periods.
  • 74. The method of claim 60 wherein said temporal information is displayed within a time line, wherein the temporal information is presented in a plurality of layers in a direction perpendicular to the length of said time line.
  • 75. The method of claim 60 wherein said temporal information is displayed within a time line, wherein textual information is included within said time line.
  • 76. The method of claim 60 wherein said temporal information is displayed within a time line, wherein additional textual information is displayed upon selecting a portion of said time line.
  • 77. The method of claim 60 wherein said temporal information is displayed together with a time line, wherein additional textual information is displayed together with selecting a portion of said time line.
  • 78. The method of claim 60 wherein said temporal information is displayed within a time line, wherein additional audio annotation is presented upon presenting a portion of said time line.
  • 79. The method of claim 60 wherein the presentation of said different segments may be modified by a plurality of different functions, and wherein the user may customize another function, not previously explicitly provided, by combining a plurality of said different functions into a single function.
  • 80. A method of presenting information regarding a video comprising a plurality of frames, the method comprising:
(a) identifying a plurality of different segments of said video, where each of said segments includes a plurality of frames of said video;
(b) displaying, simultaneously with a said segment of said video, a graphical user interface including information regarding the temporal location of a said segment relative to at least one other of said segments of said video;
(c) displaying in an interactive display said temporal location for a first semantic characterization of an event in said video using a first visual indication and displaying said temporal location for a second semantic characterization of an event in said video using a second visual indication different from said first visual indication;
(d) displaying to a user at least one selector by which said user may interact with said interactive display to select for viewing selective identified ones of said plurality of segments;
(e) receiving user-selections of identified ones of said plurality of segments;
(f) presenting user-selected ones of said plurality of different segments; and
(g) wherein when a user selects a portion of said video not included within said plurality of segments, in response thereto, said system presents said selected portion not included within said plurality of segments, and after presenting said selected portion not included within said plurality of segments said system presents said selected plurality of segments in temporal order without portions of said video not included within said plurality of segments; and wherein when said user selects a portion of said video included within said plurality of segments, in response thereto, said system presents said portion of said video within said plurality of segments.
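
The playback controls recited above lend themselves to brief illustration. The following is a minimal sketch, in Python, of the skip function of claim 52 and the automatically repeating scan function of claim 53. It assumes segments are stored as frame ranges; every identifier (Player, Segment, display) is a hypothetical illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:            # hypothetical representation of a summary segment
    start: int            # first frame of the segment
    end: int              # last frame of the segment

class Player:             # hypothetical player; not the patented embodiment
    def __init__(self, segments: List[Segment], total_frames: int):
        self.segments = sorted(segments, key=lambda s: s.start)
        self.total_frames = total_frames
        self.position = 0  # currently displayed frame

    def display(self, frame: int) -> None:
        print(f"displaying frame {frame}")  # stand-in for a real renderer

    def skip(self, forward: bool = True) -> None:
        # Claim 52: jump to the next or previous temporal segment and
        # display the video from that segment.
        if forward:
            later = [s for s in self.segments if s.start > self.position]
            if later:
                self.position = later[0].start
        else:
            earlier = [s for s in self.segments if s.start < self.position]
            if earlier:
                self.position = earlier[-1].start
        self.display(self.position)

    def scan(self, frames_per_step: int = 300, forward: bool = True,
             steps: int = 3) -> None:
        # Claim 53: skip a set of frames to a modified location, display
        # it, then automatically repeat in the same temporal direction.
        delta = frames_per_step if forward else -frames_per_step
        for _ in range(steps):
            self.position = max(0, min(self.total_frames - 1,
                                       self.position + delta))
            self.display(self.position)
```

The variant of claim 54 would constrain each scan step to land consistently with the segment boundaries, either at the start of a segment (claim 55) or at a predetermined offset within it (claim 56), rather than advancing a fixed frame count.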
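
Claims 63-64 and 72-74 describe a time line in which overlapping or hierarchical segments remain individually distinguishable, with information presented in layers perpendicular to the time line. One way such layering might be computed is a greedy interval-layering pass, sketched below; the function name and the (start, end) tuple representation are assumptions, not drawn from the disclosure.

```python
def assign_layers(segments):
    """Assign each (start, end) segment to a display layer so that
    temporally overlapping segments never share a layer, keeping each
    independently identifiable (claims 64 and 73-74)."""
    layer_ends = []   # layer_ends[i] = end time of last segment in layer i
    placement = {}
    for start, end in sorted(segments):
        for i, last_end in enumerate(layer_ends):
            if start > last_end:           # no overlap with this layer
                layer_ends[i] = end
                placement[(start, end)] = i
                break
        else:                              # every layer overlaps: add one
            placement[(start, end)] = len(layer_ends)
            layer_ends.append(end)
    return placement
```

For example, `assign_layers([(0, 10), (5, 15), (12, 20)])` returns `{(0, 10): 0, (5, 15): 1, (12, 20): 0}`: the two overlapping segments land on different layers, while the third reuses layer 0.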
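
Clause (g) of claims 60 and 80, together with claims 65-71, specifies how playback responds when the user selects a point on the time line that falls inside or outside the summarized segments. A minimal sketch of that decision, assuming segments are (start, end) pairs and all names are hypothetical:

```python
def on_user_seek(t, segments):
    """Decide what to present when the user selects time t
    (a sketch of claims 65-71 and clause (g) of claims 60/80)."""
    for start, end in sorted(segments):
        if start <= t <= end:
            # Claim 69: present the selected segment from the start thereof.
            return ("play_segment_from", start)
    # t lies outside the summary: present the selected portion first, then
    # resume the remaining summary segments in temporal order (clause (g)).
    remaining = [s for s in sorted(segments) if s[0] > t]
    return ("play_unsummarized_then_resume", t, remaining)
```

Claims 66-68 recite alternative responses to an off-summary selection, such as jumping directly to the most temporally adjacent, next, or previous segment instead of playing the unsummarized portion.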
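
Claim 79 lets a user build a new control, not previously explicitly provided, by combining existing presentation functions. A sketch of one plausible realization, treating each function as a state transformer (all names are assumptions):

```python
def compose(*functions):
    """Claim 79: combine several existing presentation functions into a
    single user-customized function; each function maps a player state
    to a new player state."""
    def combined(state):
        for f in functions:
            state = f(state)
        return state
    return combined

# Hypothetical use: a one-button "skip ahead, then play at half speed"
# control built from two existing functions:
#   custom = compose(skip_forward, set_half_speed)
```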
BACKGROUND OF THE INVENTION

This application claims the benefit of U.S. Patent Application Ser. No. 60/285,553 filed Apr. 19, 2001; U.S. Patent Application Ser. No. 60/297,091 filed Jun. 7, 2001; and U.S. Patent Application Ser. No. 60/329,771 filed Oct. 16, 2001.

Related Publications (1)
Number Date Country
20020180774 A1 Dec 2002 US
Provisional Applications (3)
Number Date Country
60/285,553 Apr 2001 US
60/297,091 Jun 2001 US
60/329,771 Oct 2001 US