The present invention relates to network synchronization and more particularly to synchronizing the playback of a multimedia event on a plurality of client apparatuses.
Systems such as the Internet typically are point-to-point (or unicast) systems in which a message is converted into a series of addressed packets which are routed from a source node through a plurality of routers to a destination node. In most communication protocols the packet includes a header which contains the addresses of the source and the destination nodes as well as a sequence number which specifies the packet's order in the message.
In general, these systems do not have the capability of broadcasting a message from a source node to all the other nodes in the network because such a capability is rarely of much use and could easily overload the network. However, there are situations where it is desirable for one node to communicate with some subset of all the nodes. For example, multi-party conferencing capability analogous to that found in the public telephone system and broadcasting to a limited number of nodes are of considerable interest to users of packet-switched networks. To satisfy such demands, packets destined for several recipients have been encapsulated in a unicast packet and forwarded from a source to a point in a network where the packets have been replicated and forwarded on to all desired recipients. This technique is known as IP Multicasting and the network over which such packets are routed is referred to as the Multicast Backbone or MBONE. More recently, routers have become available which can route the multicast addresses (class D addresses) provided for in communication protocols such as TCP/IP and UDP/IP. A multicast address is essentially an address for a group of host computers who have indicated their desire to participate in that group. Thus, a multicast packet can be routed from a source node through a plurality of multicast routers (or mrouters) to one or more devices receiving the multicast packets. From there the packet is distributed to all the host computers that are members of the multicast group.
These techniques have been used on the Internet to provide audio and video conferencing as well as radio-like broadcasting to groups of interested parties. See, for example, K. Savetz et al., MBONE: Multicasting Tomorrow's Internet (IDG Books Worldwide Inc., 1996).
Further details concerning technical aspects of multicasting may be found in the Internet documents Request for Comments (RFC) 1112 and 1458 which are reproduced at Appendices A and B of the Savetz book and in D. P. Brutzman et al., "MBone Provides Audio and Video Across the Internet," IEEE Computer, Vol. 27, No. 4, pp. 30-36 (April 1994), all of which are incorporated herein by reference.
Multimedia computer systems have become increasingly popular over the last several years due to their versatility and their interactive presentation style. A multimedia computer system can be defined as a computer system having a combination of video and audio outputs for presentation of audio-visual displays. A modern multimedia computer system typically includes one or more storage devices such as an optical drive, a CD-ROM, a hard drive, a videodisc, or an audiodisc, and audio and video data are typically stored on one or more of these mass storage devices. In some file formats the audio and video are interleaved together in a single file, while in other formats the audio and video data are stored in different files, many times on different storage media. Audio and video data for a multimedia display may also be stored in separate computer systems that are networked together.
In this instance, the computer system presenting the multimedia display would receive a portion of the necessary data from the other computer system via the network cabling.
Graphic images used in Windows multimedia applications can be created in either of two ways, these being bit-mapped images and vector-based images. Bit-mapped images comprise a plurality of picture elements (pixels) and are created by assigning a color to each pixel inside the image boundary. Most bit-mapped color images require one byte per pixel for storage, so large bit-mapped images create correspondingly large files. For example, a full-screen, 256-color image in 640-by-480-pixel VGA mode requires 307,200 bytes of storage, if the data is not compressed. Vector-based images are created by defining the end points, thickness, color, pattern and curvature of lines and solid objects comprised within the image. Thus, a vector-based image includes a definition which consists of a numerical representation of the coordinates of the object, referenced to a corner of the image.
Bit-mapped images are the most prevalent type of image storage format, and the most common bit-mapped-image file formats are as follows. A file format referred to as BMP is used for Windows bit-map files in 1-, 2-, 4-, 8-, and 24-bit color depths. BMP files contain a bit-map header that defines the size of the image, the number of color planes, the type of compression used (if any), and the palette used. The Windows DIB (device-independent bit-map) format is a variant of the BMP format that includes a color table defining the RGB (red green blue) values of the colors used. Other types of bit-map formats include the TIF (Tagged Image File Format), the PCX (ZSoft PC Paintbrush) file format, the GIF (Graphics Interchange Format), and the TGA (Truevision Targa) file format.
The standard Windows format for bit-mapped images is a 256-color device independent bit map (DIB) with a BMP (the Windows bit-mapped file format) or sometimes a DIB extension. The standard Windows format for vector-based images is referred to as WMF (Windows meta file).
Full-motion video implies that video images shown on the computer's screen simulate those of a television set with identical (30 frames-per-second) frame rates, and that these images are accompanied by high-quality stereo sound. A large amount of storage is required for high-resolution color images, not to mention a full-motion video sequence. For example, a single frame of NTSC video at 640-by-400-pixel resolution with 16-bit color requires 512K of data per frame. At 30 frames per second, over 15 Megabytes of data storage are required for each second of full motion video. Due to the large amount of storage required for full motion video, various types of video compression algorithms are used to reduce the amount of necessary storage. Video compression can be performed either in real-time, i.e., on the fly during video capture, or on the stored video file after the video data has been captured and stored on the media. In addition, different video compression methods exist for still graphic images and for full-motion video.
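The figures above follow directly from resolution, color depth, and frame rate. A short worked example of that arithmetic, in Java, with the constants taken from the text:

```java
// Worked example of the storage arithmetic described above.
public class VideoStorage {
    public static void main(String[] args) {
        // Uncompressed 256-color (1 byte per pixel) full-screen VGA image.
        int vgaBytes = 640 * 480 * 1;        // 307,200 bytes

        // Single NTSC frame at 640-by-400 pixels with 16-bit (2-byte) color.
        int frameBytes = 640 * 400 * 2;      // 512,000 bytes (512K)

        // One second of full-motion video at 30 frames per second.
        int secondBytes = frameBytes * 30;   // 15,360,000 bytes (over 15 MB)

        System.out.println("VGA image:    " + vgaBytes + " bytes");
        System.out.println("NTSC frame:   " + frameBytes + " bytes");
        System.out.println("1 s @ 30 fps: " + secondBytes + " bytes");
    }
}
```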
Examples of video data compression for still graphic images are RLE (run-length encoding) and JPEG (Joint Photographic Experts Group) compression. RLE is the standard compression method for Windows BMP and DIB files. The RLE compression method operates by testing for duplicated pixels in a single line of the bit map and stores the number of consecutive duplicate pixels rather than the data for the pixels themselves. JPEG compression is a group of related standards that provide either lossless (no image quality degradation) or lossy (imperceptible to severe degradation) compression types. Although JPEG compression was designed for the compression of still images rather than video, several manufacturers supply JPEG compression adapter cards for motion video applications.
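A minimal sketch of the run-length idea described above, encoding one scan line as (count, value) pairs; it illustrates the principle rather than the exact BMP RLE8 byte layout:

```java
import java.util.ArrayList;
import java.util.List;

public class RunLength {
    // Encode one scan line as (run length, pixel value) pairs.
    static List<int[]> encode(byte[] scanLine) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < scanLine.length) {
            int j = i;
            while (j < scanLine.length && scanLine[j] == scanLine[i]) j++;
            runs.add(new int[]{j - i, scanLine[i] & 0xFF});
            i = j;
        }
        return runs;
    }

    public static void main(String[] args) {
        byte[] line = {7, 7, 7, 2, 2, 9};
        for (int[] run : encode(line))
            System.out.println(run[0] + " x " + run[1]); // 3 x 7, 2 x 2, 1 x 9
    }
}
```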
In contrast to compression algorithms for still images, most video compression algorithms are designed to compress full motion video. Video compression algorithms for motion video generally use a concept referred to as interframe compression, which involves storing only the differences between successive frames in the data file. Interframe compression begins by digitizing the entire image of a key frame. Successive frames are compared with the key frame, and only the differences between the digitized data from the key frame and from the successive frames are stored. Periodically, such as when new scenes are displayed, new key frames are digitized and stored, and subsequent comparisons begin from this new reference point. It is noted that interframe compression ratios are content-dependent, i.e., if the video clip being compressed includes many abrupt scene transitions from one image to another, the compression is less efficient. Examples of video compression which use an interframe compression technique are MPEG, DVI and Indeo, among others.
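A minimal sketch of the key-frame differencing described above; real interframe codecs such as MPEG operate on blocks and motion vectors rather than individual pixels, so this only illustrates the principle:

```java
import java.util.ArrayList;
import java.util.List;

public class Interframe {
    // Store a successive frame as (pixel index, new value) differences
    // against the most recent key frame.
    static List<int[]> diffAgainstKey(byte[] keyFrame, byte[] frame) {
        List<int[]> deltas = new ArrayList<>();
        for (int i = 0; i < frame.length; i++) {
            if (frame[i] != keyFrame[i]) {
                deltas.add(new int[]{i, frame[i] & 0xFF});
            }
        }
        // Abrupt scene changes produce many deltas, hence poorer compression,
        // which is when storing a new key frame becomes worthwhile.
        return deltas;
    }
}
```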
MPEG (Moving Pictures Experts Group) compression is a set of methods for compression and decompression of full motion video images that uses the interframe compression technique described above. The MPEG standard requires that sound be recorded simultaneously with the video data, and the video and audio data are interleaved in a single file in an attempt to keep the video and audio synchronized during playback. The audio data is typically compressed as well, and the MPEG standard specifies an audio compression method referred to as ADPCM (Adaptive Differential Pulse Code Modulation) for audio data.
A standard referred to as Digital Video Interactive (DVI) format developed by Intel Corporation is a compression and storage format for full-motion video and high-fidelity audio data. The DVI standard uses interframe compression techniques similar to those of the MPEG standard and uses ADPCM compression for audio data. The compression method used in DVI is referred to as RTV 2.0 (real time video), and this compression method is incorporated into Intel's AVK (audio/video kernel) software for its DVI product line. IBM has adopted DVI as the standard for displaying video for its Ultimedia product line. The DVI file format is based on the Intel i750 chipset and is supported through the Media Control Interface (MCI) for Windows. Microsoft and Intel jointly announced the creation of the DV MCI (digital video media control interface) command set for Windows 3.1 in 1992.
The Microsoft Audio Video Interleaved (AVI) format is a special compressed file structure format designed to enable video images and synchronized sound stored on CD-ROMs to be played on PCs with standard VGA displays and audio adapter cards. The AVI compression method uses an interframe method, i.e., the differences between successive frames are stored in a manner similar to the compression methods used in DVI and MPEG. The AVI format uses symmetrical software compression-decompression techniques, i.e., both compression and decompression are performed in real time. Thus AVI files can be created by recording video images and sound in AVI format from a VCR or television broadcast in real time, if enough free hard disk space is available.
Despite these compression algorithms, it is very difficult to simultaneously multicast multimedia material due to bandwidth constraints. This problem is unavoidable with present technology since such large amounts of data must be transferred over networks such as the Internet from a single host server to numerous client computers.
A system, method and article of manufacture are provided for synchronizing an event on a plurality of client apparatuses. First, a plurality of client apparatuses are connected via a network. Next, an application program is embedded on a site on the network. In use, information is requested from a server on the network utilizing the application program. Such information relates to an event to be played back simultaneously on the client apparatuses. In response to such request, a script is received for displaying the information.
In one embodiment of the present invention, the application program is further adapted to send a request to retrieve commands from the server for use with a playback device of one of the client apparatuses. In accordance with a primary aspect of the present invention, the playback device includes a digital video disc (DVD) player.
In another embodiment of the present invention, the commands may be adapted to playback the event on the playback device simultaneous with the playback of the event on the remaining client apparatuses. Further, the command may include a start time when the playback of the event is to begin on each of the client apparatuses. In one aspect of the present invention, the application program is a JAVA applet and the script is JAVAscript.
The invention will be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:
In various embodiments, the client apparatuses may take the form of computers, televisions, stereos, home appliances, or any other types of devices. In one embodiment, the client apparatuses and the host computer each include a computer such as an IBM compatible computer, Apple Macintosh computer or UNIX based workstation.
A representative hardware environment is depicted in
A preferred embodiment is written using the JAVA, C, and C++ languages and utilizes object-oriented programming methodology. Object-oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP. A need exists for these principles of OOP to be applied to a messaging interface of an electronic messaging system such that a set of OOP classes and objects for the messaging interface can be provided.
OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures. Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.
In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.
OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.
OOP also allows creation of an object that “depends from” another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine. Rather it is merely one kind of piston engine that has one more limitation than the piston engine; its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine “depends from” the object representing the piston engine. The relationship between these objects is called inheritance.
When the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these thermal characteristics with ceramic-specific ones, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons. Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e.g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
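A minimal Java sketch of the inheritance and polymorphism just described; the class and method names are illustrative only:

```java
class PistonEngine {
    // Thermal behavior of a standard (metal) piston.
    double pistonOperatingTempC() { return 300.0; }
    int pistonCount() { return 4; }
}

// The derived class "depends from" PistonEngine: it inherits everything
// and overrides only the ceramic-specific thermal behavior.
class CeramicPistonEngine extends PistonEngine {
    @Override
    double pistonOperatingTempC() { return 900.0; } // ceramic tolerates more heat
}

class EngineDemo {
    public static void main(String[] args) {
        PistonEngine engine = new CeramicPistonEngine();
        // Polymorphism: the same call name dispatches to the overriding version.
        System.out.println(engine.pistonOperatingTempC()); // prints 900.0
        System.out.println(engine.pistonCount());          // inherited: 4
    }
}
```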
With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, one's logical perception of the reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:
With this enormous capability of an object to represent just about any logically separable matters, OOP allows the software developer to design and implement a computer program that is a model of some aspects of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.
If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.
This process closely resembles complex machinery being built out of assemblies and sub-assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increased speed of its development.
Programming languages are beginning to fully support the OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.
The benefits of object classes can be summarized as follows:
Class libraries are very flexible. As programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. This framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. They were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.
Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.
The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still “sits on top of” the system.
Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making these things all work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.
Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).
A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.
Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e.g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.
There are three main differences between frameworks and class libraries:
Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and the Newco. HTTP or other protocols could be readily substituted for HTML without undue experimentation. Information on these products is available in T. Berners-Lee, D. Connolly, "RFC 1866: Hypertext Markup Language—2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J. C. Mogul, "Hypertext Transfer Protocol—HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains. HTML has been in use by the World-Wide Web global information initiative since 1990. HTML is an application of ISO Standard 8879:1986, Information Processing; Text and Office Systems; Standard Generalized Markup Language (SGML).
To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources. Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
Sun Microsystems' Java language solves many of the client-side problems by:
With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Using the abovementioned custom UI components, dynamic, real-time Web pages can also be created.
Sun's Java language has emerged as an industry-recognized language for “programming the Internet.” Sun defines Java as: “a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets.” Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API) allowing developers to add “interactive content” to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically, “C++ with extensions from Objective C for more dynamic method resolution.”
Another technology that provides similar function to JAVA is provided by Microsoft and ActiveX Technologies, to give developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, small, fast components that enable developers to embed parts of software in hypertext markup language (HTML) pages. ActiveX Controls work with a variety of programming languages including Microsoft Visual C++, Borland Delphi, Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code named "Jakarta." ActiveX Technologies also includes ActiveX Server Framework, allowing developers to create server applications. One of ordinary skill in the art readily recognizes that ActiveX could be substituted for JAVA without undue experimentation to practice the invention.
Synchronization Overview
It should be noted that the event need not necessarily be stored in memory on all of the client apparatuses, but rather may be stored on one or some of the client apparatuses and streamed to the remaining client apparatuses at variant rates. This may be feasibly accomplished if the client apparatus(es) containing the stored event has a high-bandwidth connection with the remaining client apparatuses. For example, the client apparatus(es) containing the stored event may include a server that has a connection to a plurality of televisions via a cable network, i.e. WEBTV. Similar functionality may be achieved via a broadcast medium. The present invention is thus flexible, having the ability to host both user events and corporate events.
In one embodiment, the event includes a video and audio presentation such as a movie, a concert, and/or a theatrical event. It should be noted, however, that the event may include any recording capable of being played back for entertainment, educational, informative or other similar purposes.
In use, the client apparatuses and a host computer are adapted to be connected to a network. Such a network may include a wide area, local area, or any other type of communication network. For example, a wide area network such as the Internet may be employed which operates using TCP/IP or IPX protocols.
In operation 202, information is transmitted from the host computer to the appropriate client apparatuses utilizing the network. This information allows for the simultaneous and synchronous playback of the event on each of the client apparatuses. In one embodiment, the information may also include a start time when the playback of the event is to begin on each of the client apparatuses. Further, an ending time may be included when the playback of the event is to end on each of the client apparatuses. Still yet, “play” command information may be sent to the client apparatuses at the start time. As an option, input may be received from the user, and used to alter the playback of the event. The host server, or synchronization server, can also control various streams of a variant rate and different hardware associated with those streams.
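As an illustration of operation 202, the following is a minimal sketch of the client-side behavior, assuming the host transmits a start time and the client issues the play command when its local clock reaches it; the PlaybackDevice interface is a hypothetical stand-in for the actual playback device control:

```java
public class SyncClient {
    interface PlaybackDevice { void play(); }  // e.g., wraps a DVD player control

    // Block until the agreed start time, then issue the play command so that
    // every client apparatus begins the event simultaneously.
    static void playAt(long startTimeMillis, PlaybackDevice device)
            throws InterruptedException {
        long delay = startTimeMillis - System.currentTimeMillis();
        if (delay > 0) Thread.sleep(delay);
        device.play();
    }
}
```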
The present invention thus has the ability to synchronize video playback for one or multiple (thousands) users from one or multiple physical locations, and to synchronize with external video, audio and/or data streams.
Users of the present invention are at multiple physical locations and host servers may also be at different locations. The present invention is thus a scalable system which is capable of servicing an unlimited number of users. Since the content is local to the user machine, no high network bandwidth is required.
History Download Capabilities
In operation 302, information is stored on the host computer(s) for allowing the simultaneous playback of the event on each of the client apparatuses. In one embodiment, the information may include a history and data associated with the synchronous playback. In particular, the history may include any overlaid material (as will be described hereinafter in greater detail), any specific commands affecting the playback of the information, or any other type of general information, i.e. start time, end time, etc.
In operation 304, the information may be downloaded utilizing the network at any time after the synchronous playback of the event. Such downloaded information may then be used for playback after the simultaneous playback of the event. As such, the present invention has the ability to allow users to download a history and data associated with a particular synchronization event and play it later.
Overlay Synchronization
During the playback of the event, visual and/or audio material may also be overlaid on the event based on input received from at least one of the client apparatuses. See operation 404. This may be accomplished by transmitting the overlay material from one of the client apparatuses to the host computer or any other server, and multicasting the same to the remaining client apparatuses.
As an option, the overlay material may include annotations on a display of the client apparatus. For example, the overlay material may include sketches which are inputted by way of a stylus-based input screen or a keyboard or the like, along with a voiceover inputted by way of a microphone or voice synthesizer. Such capability may also be quite valuable in an educational environment.
In one embodiment, the overlay material may also be displayed on each of the client apparatuses utilizing the network. This allows each of the users to experience the overlay in real-time during the simultaneous playback of the event. As an option, the user inputting the overlay material may select which users may experience the overlay material. The client apparatus that provided the overlay material may also be identified to the users experiencing the overlay material.
It should be noted that various forms of bi-directional communication may be enabled to allow data to travel to and from the server. For instance, the playback of the event on the client apparatuses may be altered in any feasible way based on input from a user.
Late Synchronization
During the simultaneous playback, a request may be received from one of the client apparatuses for that particular client apparatus to be included in the synchronized event, as set forth in operation 504. This request may be received after the synchronized event has already begun, while it is still playing. Further, the request may be submitted via a site on a network, i.e. a website.
In response to the request, information is transmitted in operation 506 to the requesting client apparatus utilizing the network. This information is adapted for identifying a location in the memory where the event is currently being played back. This allows the simultaneous playback of the event on the requesting client apparatus.
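One plausible way to derive that playback location, assuming it can be expressed as elapsed time since the synchronized start (the names here are illustrative, not from the specification):

```java
public class LateJoin {
    // Offset (milliseconds into the event) to which a late-arriving client
    // should seek so that it plays back in step with the other apparatuses.
    static long currentOffsetMillis(long eventStartMillis) {
        long now = System.currentTimeMillis();
        if (now < eventStartMillis) return 0;  // event has not started yet
        return now - eventStartMillis;         // elapsed playback time
    }
}
```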
The end users are thus able to come in at a later time and to be synchronized with the event. Targeted synchronization and various filter criteria can be applied to target different audiences. Also, language and cultural differences can be taken into account. Still yet, the present invention may be adapted to address users on different hardware platforms (MAC, PC, set-top boxes). This may be accomplished by identifying the user using a cookie, a user profile which is identified by way of a log in, or the Burn Cut Area (BCA) of the disc.
An example setting forth details relating to identifying DVDs will now be set forth. First, a content owner (such as a studio) requests use of the BCA on their DVDs. Based on this request, the replicator (examples include WAMO, Panasonic, Nimbus, Technicolor, Pioneer, Crest) adds a unique BCA number to every DVD. Adding a BCA number to each DVD requires a special (YAG) laser. This may be the very last step in the manufacturing process. The BCA numbers for a specific DVD must then be entered into InterActual's BCA database. Information to track includes: DVD title, i.e. "Lost in Space"; BCA #/range, i.e. 12345687890; and Shipping Packaging/Tracking Container, i.e. Box 52221 to Hollywood Video.
After the BCA number is added to the DVDs, the DVDs are packaged/boxed for distribution to either the Distributor or the Retailer. It should be noted that many companies take multiple forms, so the replicator and distributor may be one and the same. Also, some retailers are large/important enough to get shipments directly from the replicator. The way in which the DVDs are packaged/shipped is very important because one must track the BCA numbers to actual shipping containers (box, etc.). Therefore, tracking information must also be added to the BCA database.
If packaged DVDs are then sent to a distributor, the distributor also has mechanisms, i.e. scanners, input devices, and monitoring devices, in place for tracking based on their distribution. For example, Deluxe may receive a "package" of 100,000 copies of "Lost in Space." However, the distributor ships 10,000 to Retailer A and 5,000 to Retailer B. The distributor should be able to "input" retailer A and B's distribution information into the system. Ideally, this becomes a seamless/automated process.
Once the DVDs reach the retailer (either from the replicator or distributor), the DVDs may be further divided and distributed to local stores/outlets. In such a situation, the retailer should be able to automatically "track" distribution of these DVDs through to their stores. Over time, all three entities (replicator, distributor, and retailer) are able to add tracking information to the BCA database. Due to complexity and dependencies on existing business systems, the retail tracking concept will be rolled out in phases: replicators first, most likely with key retail accounts; the distributors will then be brought in; and retailers will then begin to embrace the ability to track based on local outlet/store.
By the foregoing design, easy deployment is thus afforded and minimal hardware is required to allow the synchronization of content without significant capital investments and with a very efficient control mechanism. The content delivery does not rely on high network bandwidth and is independent from the synchronization.
Internet Server Application Program Interface (ISAPI) extensions will be used on the server. ISAPI extensions provide a mechanism to maintain a temporary or permanent connection with the users. These connections allow the Synchronization Server to process requests and to send the appropriate DVD commands. The permanent connections are known as "Keep Alive" connections. An ISAPI extension can also be used as an HTTP interface to a more traditional server, with all data returned as text.
On the client side, the approach is to use, but is not limited to, Java 1.1 applets to initiate event start-up for the Synchronization server. The advantage of using Java 1.1 applets is to achieve platform independence for existing and future Java-enabled devices. JavaScript will be used to provide user interface navigation by "wrapping" the applet.
An ISAPI (Internet Server Application Program Interface) is a set of Windows program calls that let one write a Web server application that will run faster than a Common Gateway Interface (CGI) application. A disadvantage of a CGI application (or “executable file,” as it is sometimes called) is that each time it is run, it runs as a separate process with its own address space, resulting in extra instructions that have to be performed, especially if many instances of it are running on behalf of users.
Using ISAPI, you create a Dynamic Link Library (DLL) application file that can run as part of the Hypertext Transport Protocol (HTTP) application's process and address space. The DLL files are loaded into the computer when the HTTP server is started and remain there as long as they are needed; they don't have to be located and read into storage as frequently as a CGI application.
Existing CGI applications can be converted into ISAPI application DLLs without having to rewrite their logic. However, they do need to be written to be thread-safe so that a single instance of the DLL can serve multiple users.
A special kind of ISAPI DLL is called an ISAPI filter, which can be designated to receive control for every HTTP request. One can create an ISAPI filter for encryption or decryption, for logging, for request screening, or for other purposes.
One can write ISAPI server extension DLLs (ISAs) that can be loaded and called by the HTTP server. Users can fill out forms and click a submit button to send data to a Web server and invoke an ISA, which can process the information to provide custom content or store it in a database. Web server extensions can use information in a database to build Web pages dynamically, and then send them to the client computers to be displayed. An application can add other custom functionality and provide data to the client using HTTP and HTML.
One can write an ISAPI filter. The filter is also a DLL that runs on an ISAPI-enabled HTTP server. The filter registers for notification of events such as logging on or URL mapping. When the selected events occur, the filter is called, and one can monitor and change the data (on its way from the server to the client or vice versa). ISAPI filters can be used to provide custom encryption or compression schemes, or additional authentication methods.
Both server extensions and filters run in the process space of the Web server, providing an efficient way to extend the server's capabilities.
Overall Component Design
The various functional components of the software associated with the present invention will now be set forth. Such components include a Java/JavaScript Component, Synchronizer Component, LayerImpl Component, Business Layer Component, Configuration Manager Component, and DBConnect Component.
Java/JavaScript Component
In use, information is requested from a server on the network utilizing the application program. See operation 604. Such information relates to an event to be played back simultaneously on the client apparatuses and may include general information such as a start and stop time of the event, or more specific information about the event itself.
In response to such request, a script is received for displaying the information. Note operation 606. The script may take any form such as Perl, REXX (on IBM mainframes), and Tcl/Tk, and preferably includes JAVAscript.
In one embodiment of the present invention, the JAVA applet may be further adapted to send a request to retrieve command information from the server for use with a playback device of one of the client apparatuses. The commands may be adapted to playback the event on the playback device simultaneous with the playback of the event on the remaining client apparatuses. Further, the commands may include a start time when the playback of the event is to begin on each of the client apparatuses.
The JAVA applets and JAVAscript are used to communicate with the playback device of the client apparatuses. In one embodiment, the playback device includes a PCFriendly™ video player manufactured by InterActual®.
The Java applet is embedded within a web page and uses HTTP protocol to communicate to the synchronization server. The applet could request event information from the server, and display it to the user via JavaScript. The applet could also send a “Broadcast VideoEvent” request to retrieve DVD commands that can be passed to the video component, as set forth hereinabove.
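A minimal sketch of that applet-to-server request over HTTP; the request path and parameter names are assumptions, and the code uses modern Java for brevity:

```java
import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class SyncApplet extends Applet {
    // Ask the synchronization server for event information; the returned
    // text can then be handed to JavaScript for display.
    String requestEventInfo(String eventId) throws Exception {
        URL url = new URL(getCodeBase(), "sync?cmd=EventInfo&event=" + eventId);
        URLConnection conn = url.openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
        }
        return body.toString();
    }
}
```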
Synchronizer Component
In response to the request, in operation 704, an object is created which is adapted to playback the event on a client apparatus simultaneous with the playback of the event on the remaining client apparatuses upon the receipt of an activation signal. As an option, the activation signal may be provided using a clock of the client apparatus, or a clock located at a different location, i.e. a server. To accomplish this, the object identifies a start time when the playback of the event is to begin on each of the client apparatuses.
In operation 706, the object is sent to one of the client apparatuses utilizing the network for being stored therein. In accordance with a primary aspect of the present invention, the object may be adapted to playback the event which is stored in memory of the client apparatus. This may be accomplished by activating a digital video disc (DVD) player.
In summary, when the Synchronizer component receives a “BroadcastVideoEvent” from the applet, it then places the request in the thread queue for processing. To process a request, the thread creates a “call back” object, if one does not exist for this event. The thread then adds the request to the “call back” object queue. This “call back” object will be invoked when it is time to play the DVD. The Synchronizer component creates a Call Back COM object, LayerSink. The Synchronizer component is also responsible for creating the LayerFactory interface which will be set forth hereinafter in greater detail.
LayerImpl Component
First, in operation 800, various values are determined including a current time, a start time when an event is to start, and a stop time when the event is to end. Thereafter, a length of the event is calculated based on the start time and the stop time in operation 802. As an option, the current time is determined by querying a clock of one of the client apparatuses.
If any portion of the length of the event takes place during a predetermined threshold period, a command is stored in memory in operation 804. The command may be adapted to automatically begin playing back the event at the start time. In one embodiment, the threshold period includes the time the users can be queued before the event. As an option, chapter information may be stored in the memory if any portion of the length of the event takes place during the predetermined threshold period. This allows the command to automatically begin playing back the event at a predetermined chapter.
In operation 806, a loop is created at the start time during which a lapsed time of the event is tracked. This information may be used for various tracking purposes to decide when to issue commands to the user. In another embodiment, a second loop may be created upon the beginning of a chapter during which information on a next chapter is retrieved.
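A minimal sketch of the scheduling logic of operations 800 through 806, assuming times are expressed in milliseconds and the threshold window extends forward from the current time; all names are illustrative:

```java
public class EventScheduler {
    static class PlayCommand {          // illustrative, not from the specification
        long startMillis;
        String chapter;                 // optional chapter to begin playback at
    }

    // Operations 800-806: compute the event length and, if any portion of the
    // event falls within the threshold period, store a timed play command.
    static PlayCommand schedule(long startMillis, long stopMillis,
                                long thresholdMillis, String chapter) {
        long now = System.currentTimeMillis();       // operation 800
        long length = stopMillis - startMillis;      // operation 802
        if (length <= 0) return null;                // nothing to schedule
        if (stopMillis >= now && startMillis <= now + thresholdMillis) {
            PlayCommand cmd = new PlayCommand();     // operation 804
            cmd.startMillis = startMillis;
            cmd.chapter = chapter;
            return cmd;
        }
        return null;                                 // event outside threshold
    }
}
```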
The “call back” object (LayerSink) is thus responsible for creating and communicating with the LayerImpl component. The LayerImpl component acts as a scheduler, determining when to issue commands to the user.
LayerImpl will issue different DVD commands, based on the type of decoder the user has in their PC. LayerImpl will differentiate between the decoders by using the decoder information submitted from the client. The LayerImpl will pass the correct DVD command to the client, based on the decoder's capabilities. For example, if the decoder does not support the TimePlay event, then the server may send a ChapterPlay event and wait appropriately.
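A minimal sketch of that decoder-dependent dispatch; the capability flag and command names are assumptions based on the TimePlay/ChapterPlay example above:

```java
public class DecoderDispatch {
    enum DvdCommand { TIME_PLAY, CHAPTER_PLAY }

    // Choose the DVD command based on the decoder capabilities uploaded
    // from the client.
    static DvdCommand commandFor(boolean decoderSupportsTimePlay) {
        return decoderSupportsTimePlay ? DvdCommand.TIME_PLAY
                                       : DvdCommand.CHAPTER_PLAY;
    }
}
```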
The following is an enumerated summary of the steps the component uses to determine when the users will receive the DVD commands:
First, in operation 900, a plurality of events are stored in memory on a plurality of client apparatuses. Each of the events is assigned a unique identifier which is stored in the memory.
In operation 902, the client apparatuses are adapted to be coupled to a host computer via a network, as set forth hereinabove. In operation 904, the identifier of the event which is stored in the memory of the client apparatuses is then retrieved utilizing the network. Such identifier is subsequently compared with an identifier of a scheduled event, as set forth in operation 906. If the comparison renders a match, the playback of the event is begun on the appropriate client apparatuses. Note operation 908.
CBusinessLayer thus differentiates events by the disk and location IDs uploaded by the client, to guarantee backwards compatibility. As set forth earlier, late arrivals can always re-sync with the event.
Configuration Manager Component
In operation 1000, a type of the playback devices of the client apparatuses is first identified. Such “type” may refer to a make, model, or any other distinguishing characteristic of the particular playback devices. A command associated with the identified type of the playback device is then looked up in a look-up table. Note operation 1002. Such table may be located at the host server, or at any other location such as the client apparatuses.
Thereafter, in operation 1004, the command is sent to the corresponding client apparatus for beginning the playback of the event simultaneously with the playback of the event on each of the remaining client apparatuses.
This component is thus responsible for identifying what type of reference player is hosting the event. The reference player can be the database, which contains the DVD commands, or a real-time player. When the initial DVD command is requested, the "Synchronizer" table is queried for the host type. From that point forward, the scheduler knows from whom to receive data.
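A minimal sketch of operations 1000 through 1004, assuming the look-up table maps a playback-device type string to the command that starts the event on it; the device names and commands are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigurationManager {
    // Look-up table from playback-device type to its start command.
    private final Map<String, String> commandTable = new HashMap<>();

    ConfigurationManager() {
        commandTable.put("PCFriendly", "TimePlay 00:00:00"); // illustrative
        commandTable.put("GenericDVD", "ChapterPlay 1");     // illustrative
    }

    // Operations 1000-1004: identify the device type, look up its command,
    // and return it for transmission to the corresponding client apparatus.
    String commandFor(String deviceType) {
        return commandTable.get(deviceType);
    }
}
```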
DBConnect Component
This component is responsible for communicating with the Synchronizer tables, and for providing access methods for the retrieved data. All interaction from the tables is on a read-only basis. The LayerImpl component communicates with this component to retrieve DVD commands and event information.
Even though the current implementation may be based on a Microsoft platform, hard dependencies on Microsoft or any other third-party development tools may be avoided. To address such issues, the following considerations may be made throughout the code:
MFC-specific code may be avoided. Instead, STL may be used. ATL and/or MFC code may be encapsulated into separate classes and partitioned from the rest of the code. Class implementations may use the aggregation pattern to delegate business logic to the portable classes. Database connection classes may be separated, and the communication protocol may be separated, with respect to portability to Oracle and other platforms.
Alternate Embodiments
To support future enhancements, further components may be included with extendibility as the major objective. Various future enhancements of the product and how they will be addressed will now be set forth.
While spirals may retrieve pre-recorded DVD commands from the database, alternate spirals may support a consumer as a host. The architecture may also support plug-in components. Alternate spirals may support the RealTimeConnector component, which accepts host user requests and forwards them to the clients. The instant architecture supports the DBConnector, which accepts events from the database.
Clients may maintain connections throughout the event. This allows the host to send any number of commands to the clients during the event. Although the spiral disconnects users once a PLAY command has been issued, the Synchronizer class (which will be set forth later) adds each connection to a Thread Pool. This pool of connections can be left open during the life of the event.
Each request may be logged into the database to provide a reference for the future.
As an option, connections may be pooled to allow the synchronization server to direct consumers' machines to certain locations throughout the entire event.
Synchronization events in alternate spirals may be defined as a combination of a play-from-location event and the actual event. This way, one describes each event in an unambiguous way on the client side and synchronizes it with the server. For example, consider a situation where one fast-forwards after a movie has played for 15 minutes and thereafter plays a scene in the movie. In such a situation, one has to submit the information to the client player, indicating that it (the player) has to start time play from 15 minutes into the movie and fast-forward to the certain location. A better way would be to analyze what the next event after the fast-forwarding is and to perform a combination of the play-from-location and next events. This design would require significant changes to the client infrastructure, including the video object, remote agent and provider, and should be taken into consideration in any alternate client design.
Classes/Component Diagrams
Sequence Diagrams
If the date/time of the user request lies within the event start threshold, the user is put into a wait queue and receives the appropriate data when the time elapses. Note steps 1, 2, 3, 5, 6, and 7 of the Logical Sequence diagram. Otherwise, a message is sent informing the user when the event will occur. Note step 4 of the Logical Sequence diagram.
Server Side Collaboration Diagram
At step 6, the ISAPI extension will call the IA_BusinessServer CompareTime method and, based on the results, will either send the user a predefined web page indicating to retry later or return control to the web server, notifying it (the web server) to keep the connection open. At this point the connection is pooled and will be processed by the IA_BusinessServer at the time of the event.
Client Collaboration Diagram
Classes/Interfaces Definition
Definitions of one embodiment of the various classes associated with the software which implements the present invention will now be set forth.
Class Applet1
This is the class that implements the applet. The browser will use it to bootstrap our applet.
BroadCastEvent, CITIEncrypt
Javax.Applet
This is the class that invokes the Synchronizer.
CITEEncrypt
Java.Thread
Class CDBConnect
This is the class that provides a public interface for components to request information from the DB tables.
This is the class that provides a public interface for components to determine the type of reference player hosting the event.
This class provides a threading model that classes can use to derive.
This creates an ISAPI thread.
Manages the layerSink and businessLayerProp objects.
layerSink represents a sink interface and stores a queue of requests. It creates a connection point object.
This call back object allows asynchronous processing.
Creates a layerthread object. This object is responsible for providing access methods, which provide event information.
This object acts as a scheduler, processing request from its queue.
This object manages businesslayer objects. Business layer objects communicate with the reference player and notify the user which DVD command to play.
Although only a few embodiments of the present invention have been described in detail herein, it should be understood that the present invention may be embodied in many other specific forms without departing from the spirit or scope of the invention. Therefore, the present examples and embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope of the appended claims.