Some interactive programs include digital video discs (DVDs) or resident client computer applications, which enable users to view ancillary content that relates to the primary content being viewed. For example, DVD movies (the primary content) can include other footage relating to the movie, such as interviews with the cast, the movie trailer, and outtakes. DVDs also exist that permit the user to connect to a uniform resource locator (URL) through a browser when the disc is viewed on a personal computer. This experience, however, is limited in that the user must use the DVD to make such a connection, and the user is unable to pause the video to interact with any ancillary content. In the case of DVDs without such Internet connectivity, the user can only navigate through the ancillary content embedded in the DVD, and cannot perform any real-time transactions (e.g., buy a copy of a movie soundtrack through an e-commerce transaction) or other user interaction.
Other interactive programs permit users to watch content (e.g., a television episode) and log onto a website afterwards to purchase items viewed during the show (e.g., a wristwatch worn by an actor in the show).
Some interactive programs in a broadcast environment utilize the vertical blanking interval (VBI) to insert data into the broadcast stream, thus enabling interactive functionality. For example, with web television, users may play along with game shows as they are being broadcast. However, this approach is limited to the broadcast arena where the primary content (e.g., a television show) cannot be interrupted while the user interacts with ancillary content.
While the foregoing interactive programs provide users with an enhanced experience, they are limited in providing real-time interactivity between the user and the content while the user is viewing the primary content, and they do not provide a user-friendly experience. Therefore, there exists a need for interactive video content programming that permits the user to stop the video play to view ancillary content, and then continue video play from the point in time where play was stopped.
The present invention is directed to a system and methods for creating and distributing interactive video content (IVC). IVC involves creating interactive content using software tools (e.g., Flash™ and Shockwave®) and digital assets (e.g., a movie or television commercial), and distributing the created interactive content in real time to a user over an Internet Protocol (IP)-based network (e.g., Internet and intranet), or other network supporting two-way communication, to provide an interactive user experience.
As used herein, the term “content” is meant to include all forms of viewable electronic information including, but not limited to, advertisements, promotions, music videos, motion pictures, and television programs. A preferred embodiment of the present invention is directed to a method for using an interactive video including displaying a video on a visual display, the video having at least one interface link associated therewith, the interface link adapted to be displayed on the visual display and being linked to ancillary content accessible over a network (wire or wireless); interacting with the interface link to access the ancillary content; interrupting the display of the video at a point in time; delivering the ancillary content to the visual display; and continuing the display of the video from the point in time where the display of the video was interrupted.
As used herein, the phrase “ancillary content” is meant to include any content or page of content linked to the primary content or content linked therefrom. Also as used herein, the phrase “visual display” is meant to include all types of video or audio-visual devices including, but not limited to, screens for computers and televisions, personal digital assistants, or any other device that provides visual content to a user. As used herein, the phrase “interface link” is meant to include any means that functions as a link between video content and another piece of content, for example, a hypertext link under an Internet protocol regime.
Each interface link is preferably associated with, or related to, content being displayed on the screen. For example, if the user is watching a basketball game, and the user is interested in a particular shoe worn by a basketball player, the user may select the interface link associated with the basketball player's shoe. Interacting with the interface link associated with the basketball shoe allows the user to access one or more pages of information or media content related to the shoe of interest, including retail information. During the user's interaction with the interface link, the video stream is paused until the user returns to or continues the video stream delivery. Thus, a user may freely interact with one or more interface links to gain more information about an object of interest being displayed without missing any of the primary content video. As used herein, the phrase “primary content” is meant to include any content first requested by or to be shown to the user.
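The pause-and-resume behavior described above can be sketched as a small state machine. This is a minimal illustration only; the class and method names are assumptions, and the specification does not prescribe a particular implementation:

```python
class InteractiveVideoSession:
    """Sketch of pausing primary content around ancillary content.

    All names here are illustrative, not taken from the specification.
    """

    def __init__(self):
        self.position = 0.0     # seconds into the primary content
        self.playing = True
        self.resume_point = None

    def tick(self, seconds):
        # Playback advances only while the primary content is playing.
        if self.playing:
            self.position += seconds

    def open_interface_link(self):
        # Interrupt the primary stream and remember where it stopped.
        self.resume_point = self.position
        self.playing = False

    def close_ancillary_content(self):
        # Continue from the exact point of interruption.
        self.position = self.resume_point
        self.playing = True


session = InteractiveVideoSession()
session.tick(300)                  # five minutes of primary content
session.open_interface_link()      # user selects the shoe's interface link
session.tick(120)                  # time spent browsing ancillary pages
session.close_ancillary_content()  # video resumes where it was paused
print(session.position)            # → 300.0 (no primary content missed)
```

The key design point is that the interruption point is captured before any ancillary content is delivered, so the user cannot miss a frame of the primary content.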
The present invention is also directed to a method for creating an interactive video, including creating a link program adapted to interrupt the delivery of video to a visual display and provide access to ancillary content accessible over a network; encoding the video onto a storage medium adapted to store video content; associating the link program with the video; delivering the video to the visual display; and displaying the video on the visual display.
Once an interactive video has been created, it may be distributed in several ways. A preferred distribution channel is to stream the video over an Internet Protocol (IP)-based network (e.g., Internet and intranet). Interface links may be displayed with the video stream in several ways. For example, interface links may be delivered separately from the video stream such that the links overlay the video stream content when displayed to the user (a “floating” interface link), or the interface links may be embedded in the video stream itself. Delivering interface links separately from the video stream eliminates any need to modify the original video content to support one or more interface links. Interaction with the interface link provides the user access to at least one IP address, for example, a web page address.
The present invention provides real-time interactivity that permits the user to effortlessly make a real-time transaction during the viewing of the program. The present invention may also be used for advertisements and specialized e-commerce opportunities.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention.
Reference will now be made in detail to the present preferred embodiments (exemplary embodiments) of the invention, examples of which are illustrated in the accompanying drawings.
The present invention is directed to a system and methods for creating and distributing interactive video content. Unless otherwise stated, the present invention will be described in relation to using streamed video over an IP-based network such as the Internet, although a person of ordinary skill in the art will appreciate that other means of video delivery are possible and within the scope of the present invention.
Endpoint servers 100 preferably include a video server 106, a web server 108, and a content database 110. It should be understood that endpoint servers 100 may include only one server. Video server 106 may be any server adapted to store and provide access to video content suitable for streaming to users. Web server 108 may be any server adapted to serve static images (e.g., JPEG or GIF), HTML assets (e.g., a retail website), text, and other IP-based file types (e.g., Flash™ and Shockwave®). A preferred form of web server 108 is an HTML server. Content database 110 preferably stores data for use with web server 108, and metadata associated with video content stored on video server 106, and may have a storage capacity expandable by known methods. It will be appreciated by those of ordinary skill in the art that in any of the embodiments of the present invention, the number of servers may range from one to many depending upon the system requirements to be met. Likewise, the system architecture between individual servers may be varied and load-balanced in known ways in order to provide optimal system efficiency.
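The division of labor among the endpoint servers can be illustrated with a simple routing table. The file extensions and server names below are assumptions chosen to match the asset types mentioned above, not part of the specification:

```python
# Illustrative mapping of asset types to the endpoint servers
# described above (extensions and server names are assumptions).
ASSET_ROUTES = {
    ".asf": "video_server",    # streamed video content
    ".mp4": "video_server",
    ".jpeg": "web_server",     # static images
    ".gif": "web_server",
    ".html": "web_server",     # HTML assets such as hub pages
    ".swf": "web_server",      # Flash/Shockwave files
}

def route(filename):
    """Pick the endpoint server for a requested asset."""
    for ext, server in ASSET_ROUTES.items():
        if filename.lower().endswith(ext):
            return server
    return "web_server"        # default for other IP-based file types

print(route("trailer.asf"))    # → video_server
print(route("hub_page.html"))  # → web_server
```

In a single-server deployment, both roles would collapse onto one endpoint, consistent with the statement that endpoint servers 100 may include only one server.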
In step 310, video and/or audio assets are encoded. Encoding step 310 includes converting video and/or audio assets into computer files that are readable by an application adapted to show video to a user, for example, a media player application. Video content is preferably encoded as ASF or MPEG-4 files to take advantage of encryption opportunities. However, interactive media files may also be encoded as, for example, QuickTime™ files or AVI video files. A preferred encoding software is Windows® Media Encoder 7.0™. Preferably, both primary video content (i.e., the video initially requested by the user) and any ancillary video content (i.e., video that may be viewed while the primary video content is paused) are encoded onto the same storage medium. As a person of ordinary skill in the art will recognize, various software applications may be used to encode content without departing from the scope of the present invention.
In step 320, one or more hub pages may be created. Hub pages are created using any software tool adapted to create and populate IP-based pages (e.g., web pages). Preferred software includes, for example, Flash™, Shockwave®, HTML, and DHTML. In step 330, one or more sub-pages are created using tools such as those used to create hub pages. Hub pages preferably include a link back to the point at which the video content stream was paused or interrupted, and one or more links to sub-pages. Sub-pages themselves may include links to more sub-pages, or a back link to the hub page or to the interruption point of the video. Each hub page or sub-page may include one or more links to commerce sites. As used herein, the phrase “commerce site” is meant to include a site residing at an electronic address that is adapted to handle commercial transactions, for example only, a retailer website using an IP address.
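The hub-page and sub-page structure described above forms a small link graph in which every page retains a way back to the interruption point. The sketch below illustrates that structure; the field and page names are assumptions, not taken from the specification:

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    """A hub page or sub-page in the non-linear storyboard.

    Field names here are illustrative only.
    """
    name: str
    links: list = field(default_factory=list)  # links to further sub-pages
    back_to_video: bool = True                 # link to the interruption point

# A hub page about a ski, with two sub-pages (hypothetical content).
hub = Page("ski_hub")
specs = Page("ski_specs")
retail = Page("ski_retailer")
hub.links += [specs, retail]
specs.links.append(retail)  # sub-pages may themselves link to sub-pages

# Every page offers a route back to the paused primary content.
print(all(p.back_to_video for p in (hub, specs, retail)))  # → True
```

Because sub-pages may link onward to further sub-pages, the graph is non-linear; the constant is the back link that returns the user to the point where the video was paused.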
After the hub pages and sub pages are created, they are preferably entered into a content management system for tracking and display purposes. “Tracking” includes accounting for the exhibition of the video asset by means of a unique identifier. A preferred example of a content management system operable with the present invention is taught in U.S. application Ser. No. 09/921,100, titled “Content Management System,” filed Jul. 31, 2001, which claims priority to U.S. application No. 60/280,691, the disclosures of which are hereby incorporated by reference herein.
The creation of links is described in more detail below. Hub pages and sub-pages may be stored, for example, on web server 108, 208, a retailer site, or on client software at the user's location.
In steps 340 and 350, interface links are programmed according to the intended method of presentation and associated with a piece of video content, whether primary or ancillary.
Interface links may be presented in several ways on a user's visual display. For example, interface links may be embedded in the video content such that the links are streamed with the video content from video server 106, 206. Embedded interface links may be created by online editing software such as Smoke® (available from Discreet Logic™), Final Cut Pro® (available from Apple Computer™), or Avid® (available from Avid Technology™). Preferably, the embedded interface link is located in the lower left-hand corner, inside the video delivery area intended for the visual display, just outside the intended delivery area for Internet protocol.
Interface links may also be hidden from view such that no icons are visible. In this instance a user may, for example, when selecting with a mouse, run the cursor over an object of interest (an on-screen object about which the viewer is interested in learning additional information). When the cursor contacts the on-screen object (e.g., a shoe worn by a player during a basketball game), an icon or other visual effect may appear signifying that an interface link is available for the object of interest. Hidden interface links may be created by embedding an interface link as an invisible layer on top of the streaming video with known editing applications that can generate navigational instructions via, for example, Lingo™ (available from Macromedia®), Visual Basic® (“VB”; available from Microsoft®), ActiveX® (available from Microsoft®), COM, or DirectX™ (available from Microsoft®).
Hidden interface links may be placed on the canvas of a video stream over a single pixel, or over a greater number of pixels about the display area. Hidden interface links may be adapted to serve a number of purposes, such as detecting a full-screen event like an indiscriminate keystroke or mouse function to trigger an event such as an HTML page call or a chapter advance to another video sequence. A hidden interface link may also be used with a single pixel to cause a cursor change indicating a hot mouse event on a specific part of the video stream, or be used for marketing and/or user feedback.
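A hidden interface link over a region of pixels amounts to a hit test against the cursor position. The sketch below shows the idea for a single-pixel hot spot; the function names and coordinates are hypothetical:

```python
def make_hidden_link(x, y, width=1, height=1):
    """A hidden interface link covering a rectangle of pixels.

    A one-pixel region can trigger a cursor change over a specific
    part of the video stream; a larger region can catch a broader
    event. Names and coordinates here are illustrative assumptions.
    """
    def hit_test(px, py):
        # True when the cursor is inside the link's pixel region.
        return x <= px < x + width and y <= py < y + height
    return hit_test

# Hypothetical one-pixel hot spot over a shoe at display position (120, 300):
shoe_link = make_hidden_link(120, 300)
print(shoe_link(120, 300))  # → True (cursor change / hot mouse event)
print(shoe_link(121, 300))  # → False
```

A full-screen variant would simply cover the entire display area, so that any keystroke or mouse function inside it triggers the programmed event.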
Interface links may also be delivered from web server 108, 208 and shown on the user's display as an overlay to the streaming video, for example, as a visible, translucent icon (e.g., a “floating bug”) or other user interface (UI). User 104, 204 would then be receiving two simultaneous transmissions: one from video server 106, 206 and one from web server 108, 208. Interface links delivered from web server 108, 208 may be delivered as a timed program that coincides with the video content being streamed. In such an instance, interface links may be preprogrammed to interact with, for example, time code markers embedded in the video stream, such that one or more interface links may appear or disappear based on the time elapsed. The association of interface links with time code markers may be achieved by known video editing or encoding applications. The appearance of a time code marker may be triggered when a time code window of the application delivering the video, for example, a media player, reaches a selected frame. For example, an interface link may appear in the right hand corner of the user's display after five minutes have elapsed during a video presentation to coincide with the entrance of an object of interest (e.g., an automobile coming from the right corner of the display). This process is akin to laying a template over the user's display, rather than embedding interface links in the video stream.
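The timed-program behavior can be sketched as a schedule that maps elapsed playback time to the set of visible overlay links. The link identifiers and times below are assumptions chosen to match the automobile example above:

```python
# Illustrative schedule associating interface links with time code
# markers; link names and times are assumptions, not specified.
LINK_SCHEDULE = [
    # (appear_at_s, disappear_at_s, link_id)
    (300, 360, "automobile_link"),   # appears after five minutes
    (420, 480, "wristwatch_link"),
]

def visible_links(elapsed_seconds):
    """Interface links to overlay at the given playback time."""
    return [link for start, end, link in LINK_SCHEDULE
            if start <= elapsed_seconds < end]

print(visible_links(310))  # → ['automobile_link']
print(visible_links(100))  # → []
```

Because the schedule lives outside the video stream, the original video content needs no modification to support the overlay links.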
It is appreciated that an interface link program may be delivered to client software operatively connected to the user's visual display to interact with video delivered from video server 106, 206. In this embodiment, the interface link program need not be delivered simultaneously with the video to the user, since the interface link program would already reside at the user's visual display. Links to and between non-video content pages may also be programmed as needed. Multiple links may be associated with the video using a variety of formats (e.g., hidden or translucent icons), which change with both time and location as the video plays.
In step 360, video content is distributed to one or more video servers 106, 206. Video may be distributed by any means adapted to deliver video content from one location to another, for example, manual delivery, satellite transmission, wireless delivery, digital subscriber line, and cable. In step 370, hub pages and sub-pages are distributed to web server 108, 208. In step 380, metadata is distributed to content database 110, 210. Distribution mediums may be the same as those already mentioned in conjunction with video content distribution. It should be understood that the aforementioned steps need not be performed in a particular order. For example, the video assets may be encoded before the creation of the storyboard in step 300. The creation and distribution of the hub and sub-pages to web server 108, 208 may occur independently of the creation and distribution of the video assets to video server 106, 206. In addition, the creation and distribution of metadata to content database 110, 210 may occur independently of either of the above.
As a user is receiving a requested video stream, the user may be presented with one or more interface links. If the user decides to interact with an interface link in step 404, then the user selects a desired interface link corresponding to an object of interest. A user may interact with an interface link by, for example, touching an area of the display, voicing a command, pointing and clicking with a mouse, using a beam of light aimed at an area of the display, or any other interaction that conveys the user's desire to interact with an interface link. Once an interaction with the interface link has been detected, in step 408 the video stream is paused or interrupted.
In step 410, the IP address associated with the interacted interface link is accessed. For example, in a web setting, a web page address is accessed and the user request is sent to the URL for the hub page, which is preferably served from a centrally located HTML server. In step 412, a hub page and any associated metadata with the accessed address are delivered to the user. The user may then view the hub page and make a sub-page selection in step 414. If the user selects a sub-page, then in step 416 a selected sub-page and any associated metadata is delivered to the user.
In step 418, the user may decide whether to request a different hub page, a previous hub page, or a new sub-page (though not illustrated, step 418 may loop to step 416 for as many times as a user desires to access a different sub-page). If the user decides not to select any further pages, then the user may decide in step 420 whether to continue the video stream in step 422. If the user does not elect to continue the video stream in step 420, then the user may continue to view the page that the user is viewing, or select one or more new pages and continue the video stream at any time. Choosing to continue or return to the video stream will bring the user back to the point where the video streaming was interrupted. This may be done by a user action which activates the browser window containing the video stream.
After the video streaming has been continued, the user may select another link and thus repeat steps 404-422. It should be understood that the aforementioned steps need not occur in a particular order, or include all steps. For example, hub pages are not required to have sub-pages associated therewith. Therefore, in instances where a hub page has no associated sub-page, steps 414-418 may be omitted.
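The interaction flow of steps 404-422 can be summarized with a short scripted walk-through. The action vocabulary and page names below are illustrative assumptions layered over the steps described above:

```python
def browse(session_actions):
    """Walk the interaction flow of steps 404-422 for a scripted user.

    `session_actions` is a list of (action, target) pairs; the action
    names here are illustrative, not taken from the specification.
    """
    paused_at = None
    trail = []      # pages visited while the video is paused
    position = 0    # seconds into the primary content
    for action, target in session_actions:
        if action == "play":
            position += target
        elif action == "select_link":      # steps 404-412: pause, load hub page
            paused_at = position
            trail.append(target)
        elif action == "select_subpage":   # steps 414-416: load a sub-page
            trail.append(target)
        elif action == "continue_video":   # steps 420-422: resume playback
            position = paused_at
    return position, trail

pos, trail = browse([
    ("play", 300),
    ("select_link", "ski_hub"),
    ("select_subpage", "ski_specs"),
    ("continue_video", None),
])
print(pos, trail)  # → 300 ['ski_hub', 'ski_specs']
```

As the specification notes, not every step is required: a hub page without sub-pages would simply omit the `select_subpage` actions, and the whole sequence may repeat for as many links as the user selects.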
Each hub page or sub-page may contain e-commerce opportunities, i.e., retail information and/or links to retail sites for ordering desired items and completing commercial transactions. For example, during the presentation of an action film showing a snow-ski chase, a user might be interested in the brand of skis that a particular actor may be wearing. The user may select an interface link associated with the ski shown on the user's display. If the user is using a computer with a mouse, the user may simply point and click on the ski of interest, thereby pausing or interrupting the movie and delivering a hub page showing retail information regarding the particular ski of interest. The user may then choose among different sub-pages showing more information about the desired ski, or may order the ski from one or more of the pages.
Sub-pages may themselves contain video assets. For example, if a hub page contains information about an automobile, a sub-page link might lead to a video demonstration of the automobile's performance in various conditions. After exploring the hub page and any of various sub-pages, the user may elect to return to the primary video content at the point of interruption. It will be appreciated that the hub pages may or may not include ancillary video assets depending upon, for example, the system requirements and the storyboard intended to be designed around the primary content.
Interface link graphical images or icons (for visible icons) are preferably translucent to provide little distraction to the user during the video content presentation. For example, a preferred interface link includes a graphic that is slightly beveled, fifty percent transparent, and approximately 60 by 50 pixels in size. The icons may be created using software such as, for example, Adobe® Photoshop™ and others.
Interactive Content Programming (ICP) includes several features:
1. ICP-enabled content will be visually distinguishable from other content via the translucent “bug” or other user interface (UI) linking element floating over the video content. The “floating bug” provides an interactive experience without modifying the primary content. This UI element both signals ICP availability, and accepts user “clicks” to trigger transition to the linked content.
2. If the ICP linking element is clicked, the UI is redirected to a programmed “place” (e.g., web page) that may include a variety of interactive content options. The place will be specified as a URL to be loaded over a current frame, or in place of the current page. The place may be an e-commerce opportunity, another video segment, or the like. In the most general case, any arbitrary URL (in any web-friendly format) may be the target. Preferably, multiple linkages from the video will be offered in a variety of formats (e.g., other than the translucent “bug”), which may be adapted to change with both time and space as the video plays.
3. When the user is done with the linked content and returns to the original video, it resumes at the point at which the user left it (i.e., the user does not miss a frame of original video).
An example of creating an ICP includes the following steps:
1. Create a non-linear storyboard. An example of a non-linear storyboard is described above.
2. Encode the video assets and/or audio assets (e.g., the primary content and any ancillary content used on sub and hub pages) using encoding software (such as Windows Media Encoder 7.0™).
3. Utilizing software tools (e.g., Flash™, Shockwave®, HTML), create the hub page (e.g., a web page using graphics, text).
4. Utilizing software tools (e.g., Flash™, Shockwave®, HTML), create one or more sub pages (e.g., a web page using graphics, text).
An example of deploying an ICP includes the following steps:
1. For deployment of an ICP, the content management system allows content to include a URL for linking and accommodates “floating bug” insertion. For each video asset associated with the ICP, the hub pages and sub pages are entered into the content management system for tracking and display purposes.
2. All HTML assets are placed on a central HTML server, which can be accessed by multiple versions of client applications (e.g., “.com” and “.tv” entities).
3. All of the video assets are distributed into the video server network (e.g., Akamai™). The video server network distributes the video content nationwide to individual users (i.e., client applications).
4. The client application is enabled to exhibit the ICP by modifying the content database accessible by the public to include the ICP.
Now the user can select the ICP and receive an interactive experience by requesting the primary content (e.g., movie). When such a request is made, the primary content is streamed from the video server network to the client application with the “floating bug.” Upon the user selecting the bug (e.g., by clicking on the “bug”), the primary content video stream is paused and the user request is sent to the URL for the hub page located on the central HTML server. The hub page is then served to the client application from that central HTML server. The hub page may or may not include ancillary video assets also served from the video server network. A user may explore the hub pages and sub pages, including any ancillary video assets, and at any point click to return to the primary content where they left off.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
This application claims the benefit of U.S. Provisional Application No. 60/255,541, filed Dec. 14, 2000, incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
5027400 | Baji et al. | Jun 1991 | A |
5191573 | Hair | Mar 1993 | A |
5235680 | Bijnagte | Aug 1993 | A |
5253275 | Yurt et al. | Oct 1993 | A |
5289371 | Abel et al. | Feb 1994 | A |
5307495 | Seino et al. | Apr 1994 | A |
5347632 | Filepp et al. | Sep 1994 | A |
5371532 | Gelman et al. | Dec 1994 | A |
5408630 | Moss | Apr 1995 | A |
5446919 | Wilkins | Aug 1995 | A |
5539450 | Handelman | Jul 1996 | A |
5539735 | Moskowitz | Jul 1996 | A |
5553281 | Brown et al. | Sep 1996 | A |
5557541 | Schulhof et al. | Sep 1996 | A |
5576951 | Lockwood | Nov 1996 | A |
5584025 | Keithley et al. | Dec 1996 | A |
5616876 | Cluts | Apr 1997 | A |
5636346 | Saxe | Jun 1997 | A |
5652615 | Bryant et al. | Jul 1997 | A |
5696965 | Dedrick | Dec 1997 | A |
5706448 | Blades | Jan 1998 | A |
5710884 | Dedrick | Jan 1998 | A |
5710887 | Chelliah et al. | Jan 1998 | A |
5717923 | Dedrick | Feb 1998 | A |
5724521 | Dedrick | Mar 1998 | A |
5729594 | Klingman | Mar 1998 | A |
5734961 | Castille | Mar 1998 | A |
5752238 | Dedrick | May 1998 | A |
5754787 | Dedrick | May 1998 | A |
5754938 | Herz et al. | May 1998 | A |
5758257 | Herz et al. | May 1998 | A |
5765152 | Erickson | Jun 1998 | A |
5767845 | Oashi et al. | Jun 1998 | A |
5790423 | Lau et al. | Aug 1998 | A |
5790426 | Robinson | Aug 1998 | A |
5790935 | Payton | Aug 1998 | A |
5802518 | Karaev et al. | Sep 1998 | A |
5805154 | Brown | Sep 1998 | A |
5815665 | Teper et al. | Sep 1998 | A |
5818935 | Maa | Oct 1998 | A |
5819271 | Mahoney et al. | Oct 1998 | A |
5825876 | Peterson, Jr. | Oct 1998 | A |
5826243 | Musmanno et al. | Oct 1998 | A |
5848396 | Gerace | Dec 1998 | A |
5864823 | Levitan | Jan 1999 | A |
5864871 | Kitain et al. | Jan 1999 | A |
5892508 | Howe et al. | Apr 1999 | A |
5892900 | Ginter et al. | Apr 1999 | A |
5894589 | Reber et al. | Apr 1999 | A |
5910987 | Ginter et al. | Jun 1999 | A |
5918014 | Robinson | Jun 1999 | A |
5926624 | Katz et al. | Jul 1999 | A |
5929849 | Kikinis | Jul 1999 | A |
5933811 | Angles et al. | Aug 1999 | A |
5953710 | Fleming | Sep 1999 | A |
5956693 | Geerlings | Sep 1999 | A |
5956700 | Landry | Sep 1999 | A |
5966440 | Hair | Oct 1999 | A |
5973683 | Cragun et al. | Oct 1999 | A |
5974396 | Anderson et al. | Oct 1999 | A |
5987509 | Portuesi | Nov 1999 | A |
6009407 | Garg | Dec 1999 | A |
6011537 | Slotznick | Jan 2000 | A |
6018768 | Ullman et al. | Jan 2000 | A |
6026369 | Capek | Feb 2000 | A |
6047296 | Wilmott et al. | Apr 2000 | A |
6058424 | Dixon et al. | May 2000 | A |
6065042 | Reimer et al. | May 2000 | A |
6094677 | Capek et al. | Jul 2000 | A |
6134593 | Alexander et al. | Oct 2000 | A |
6154738 | Call | Nov 2000 | A |
6157929 | Zamiska et al. | Dec 2000 | A |
6163272 | Goode et al. | Dec 2000 | A |
6163795 | Kikinis | Dec 2000 | A |
6166730 | Goode et al. | Dec 2000 | A |
6178407 | Lotvin et al. | Jan 2001 | B1 |
6184878 | Alonso et al. | Feb 2001 | B1 |
6185541 | Scroggie et al. | Feb 2001 | B1 |
6189008 | Easty et al. | Feb 2001 | B1 |
6199082 | Ferrel et al. | Mar 2001 | B1 |
6202056 | Nuttall | Mar 2001 | B1 |
6205432 | Gabbard et al. | Mar 2001 | B1 |
6226618 | Downs et al. | May 2001 | B1 |
6229895 | Son et al. | May 2001 | B1 |
6237022 | Bruck et al. | May 2001 | B1 |
6243725 | Hempleman et al. | Jun 2001 | B1 |
6247130 | Fritsch | Jun 2001 | B1 |
6269275 | Slade | Jul 2001 | B1 |
6269394 | Kenner et al. | Jul 2001 | B1 |
6292785 | McEvoy et al. | Sep 2001 | B1 |
6292797 | Tuzhilin et al. | Sep 2001 | B1 |
6314451 | Landsman et al. | Nov 2001 | B1 |
6317780 | Cohn et al. | Nov 2001 | B1 |
6334116 | Ganesan et al. | Dec 2001 | B1 |
6337901 | Rome et al. | Jan 2002 | B1 |
6338044 | Cook et al. | Jan 2002 | B1 |
6338094 | Scott et al. | Jan 2002 | B1 |
6345256 | Milsted et al. | Feb 2002 | B1 |
6381747 | Wonfor et al. | Apr 2002 | B1 |
6385596 | Wiser et al. | May 2002 | B1 |
6389403 | Dorak | May 2002 | B1 |
6418421 | Hurtado et al. | Jul 2002 | B1 |
6424998 | Hunter | Jul 2002 | B2 |
6457010 | Eldering et al. | Sep 2002 | B1 |
6483986 | Krapf | Nov 2002 | B1 |
6496802 | Van Zoest | Dec 2002 | B1 |
6526438 | Bienvenu et al. | Feb 2003 | B1 |
6535856 | Tal | Mar 2003 | B1 |
6574424 | Dimitri et al. | Jun 2003 | B1 |
6604224 | Armstrong et al. | Aug 2003 | B1 |
6615251 | Klug et al. | Sep 2003 | B1 |
6628302 | White et al. | Sep 2003 | B2 |
6637032 | Feinleib | Oct 2003 | B1 |
6640145 | Hoffberg et al. | Oct 2003 | B2 |
6718551 | Swix et al. | Apr 2004 | B1 |
6763345 | Hempleman et al. | Jul 2004 | B1 |
6799165 | Boesjes | Sep 2004 | B1 |
6801576 | Haldeman et al. | Oct 2004 | B1 |
6810527 | Conrad et al. | Oct 2004 | B1 |
6944585 | Pawson | Sep 2005 | B1 |
6959288 | Medina et al. | Oct 2005 | B1 |
6986156 | Rodriguez et al. | Jan 2006 | B1 |
7017173 | Armstrong et al. | Mar 2006 | B1 |
7103905 | Novak | Sep 2006 | B2 |
7203758 | Cook et al. | Apr 2007 | B2 |
7275254 | Jutzi | Sep 2007 | B1 |
20010025255 | Gaudian | Sep 2001 | A1 |
20010042043 | Shear et al. | Nov 2001 | A1 |
20020007493 | Butler et al. | Jan 2002 | A1 |
20020016736 | Cannon et al. | Feb 2002 | A1 |
20020056118 | Hunter et al. | May 2002 | A1 |
20020059574 | Tudor et al. | May 2002 | A1 |
20020062393 | Borger et al. | May 2002 | A1 |
20020065715 | Tennyson et al. | May 2002 | A1 |
20020072997 | Colson et al. | Jun 2002 | A1 |
20020083006 | Headings et al. | Jun 2002 | A1 |
20020087976 | Kaplan et al. | Jul 2002 | A1 |
20020095606 | Carlton | Jul 2002 | A1 |
20020112235 | Ballou et al. | Aug 2002 | A1 |
20020120564 | Strietzel | Aug 2002 | A1 |
20020138436 | Darling | Sep 2002 | A1 |
20020172362 | Wonfor et al. | Nov 2002 | A1 |
20020184255 | Edd et al. | Dec 2002 | A1 |
20030014328 | Lindner | Jan 2003 | A1 |
20030070167 | Holtz et al. | Apr 2003 | A1 |
20030120549 | Lindner | Jun 2003 | A1 |
20030120557 | Evans et al. | Jun 2003 | A1 |
20030126033 | Evans et al. | Jul 2003 | A1 |
20030191816 | Landress et al. | Oct 2003 | A1 |
20040002903 | Stolfo et al. | Jan 2004 | A1 |
Number | Date | Country |
---|---|---|
1016990 | Jun 2000 | EP |
WO 0002143 | Jan 2000 | WO |
WO 0127773 | Apr 2001 | WO |
WO 0161592 | Aug 2001 | WO |
Number | Date | Country | |
---|---|---|---|
20020078456 A1 | Jun 2002 | US |
Number | Date | Country | |
---|---|---|---|
60255541 | Dec 2000 | US |