The present invention relates to a system and method for rate limiting using PAUSE frame capability in a Local Area Network/Wide Area Network interface.
Synchronous optical network (SONET) is a standard for optical telecommunications that provides the transport infrastructure for worldwide telecommunications. SONET offers cost-effective transport both in the access area and core of the network. For instance, telephone or data switches rely on SONET transport for interconnection.
In a typical application, a local area network (LAN), such as Ethernet, is connected to a wide area network (WAN), such as that provided by SONET. In many applications, the data bandwidth of the LAN is greater than that of the WAN. For example, a common application is known as Ethernet over SONET, in which Ethernet LAN traffic is communicated using a SONET channel. The Ethernet LAN is typically 100Base-T, which has a bandwidth of 100 megabits per second (Mbps), while the connected SONET channel may be STS-1, which has a bandwidth of 51.840 Mbps. In such an application, the peak rate of data traffic to be communicated over the WAN from the LAN may exceed the bandwidth of the WAN; typically, however, the average rate of data traffic will not. In this situation, data traffic may be buffered to “smooth out” the peaks in data traffic so that the WAN can handle the traffic.
However, in some situations, the data traffic rate on the LAN may be high enough, for long enough, that the buffers fill up. In this case, the rate of traffic communicated over the WAN from the LAN must be limited. Conventional systems provide rate limiting by throwing away user traffic, such as by dropping frames. This greatly reduces the throughput of the system, since the thrown-away traffic must be re-transmitted by its source; with many common protocols, such as TCP and UDP, the process of recovering from thrown-away traffic is also time consuming. Thus, a need arises for a rate limiting technique in which user traffic is not thrown away and which provides improved performance over conventional techniques.
The present invention provides a rate limiting technique in which user traffic is not thrown away and which provides improved performance over conventional techniques. The present invention couples rate limiting with flow control using PAUSE frames, which allows buffers to fill and then generate flow control toward the attached switch or router, preventing frame drops.
In one embodiment of the present invention, a method of rate limiting in a Local Area Network/Wide Area Network interface comprises the steps of receiving data from the Local Area Network, storing the received data in a first buffer, transmitting the received data from the first buffer to the Wide Area Network, transmitting a PAUSE frame to the Local Area Network to cause the Local Area Network to stop transmitting data if the first buffer fills to an upper threshold, and transmitting a PAUSE frame with PAUSE=0 to the Local Area Network to cause the Local Area Network to start transmitting data if the first buffer empties to a lower threshold.
In one aspect of the present invention, the method further comprises the step of storing the data received from the Local Area Network in a second buffer in a Level 2 Switch before storing the received data in the first buffer. The method may further comprise the steps of transmitting a PAUSE frame to the Level 2 Switch to cause the Level 2 Switch to stop transmitting data if the first buffer fills to an upper threshold, and transmitting a PAUSE frame with PAUSE=0 to the Level 2 Switch to cause the Level 2 Switch to start transmitting data if the first buffer empties to a lower threshold. The method may further comprise the steps of transmitting a PAUSE frame to the Local Area Network to cause the Local Area Network to stop transmitting data if the second buffer fills to an upper threshold, and transmitting a PAUSE frame with PAUSE=0 to the Local Area Network to cause the Local Area Network to start transmitting data if the second buffer empties to a lower threshold.
The data received from the Local Area Network may be at a first data rate, the data transmitted to the Wide Area Network may be at a second data rate, and the first data rate may be higher than the second data rate.
The Local Area Network may be an Ethernet network and the Wide Area Network may be a Synchronous Optical Network or a Synchronous Digital Hierarchy network.
The rate limiter pulls data from the main Tx buffer. If the input data rate exceeds the WAN output data rate, then the buffer will fill. When the buffer reaches a pre-set threshold (high watermark), flow control is initiated: PAUSE frames are sent to the attached router or switch to prevent further frames from being sent. When the Tx buffer drains to the point where a low watermark threshold is crossed, flow control is de-activated by sending a second PAUSE frame, which causes the attached router or switch to start sending traffic again.
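The watermark mechanism described above can be sketched in software as follows. This is a minimal illustration only; the class name, buffer size, thresholds, and the `send_pause` callback are all assumptions for the sketch, not details of the embodiment, and in practice the mechanism resides in hardware.

```python
class RateLimitedTxBuffer:
    """Sketch of a Tx buffer with high/low watermark PAUSE flow control."""

    def __init__(self, capacity, high_watermark, low_watermark, send_pause):
        assert low_watermark < high_watermark <= capacity
        self.capacity = capacity
        self.high = high_watermark
        self.low = low_watermark
        self.fill = 0                 # bytes currently buffered
        self.paused = False           # whether a PAUSE is in effect upstream
        self.send_pause = send_pause  # callback: send_pause(pause_quanta)

    def on_lan_data(self, nbytes):
        """Data arriving from the LAN side fills the buffer."""
        self.fill = min(self.capacity, self.fill + nbytes)
        if self.fill >= self.high and not self.paused:
            self.send_pause(0xFFFF)   # PAUSE: ask the sender to stop
            self.paused = True

    def on_wan_drain(self, nbytes):
        """Data pulled out toward the WAN empties the buffer."""
        self.fill = max(0, self.fill - nbytes)
        if self.fill <= self.low and self.paused:
            self.send_pause(0)        # PAUSE=0: ask the sender to resume
            self.paused = False
```

Note that the two thresholds provide hysteresis: a PAUSE is sent once when the high watermark is crossed, and is not released until the fill falls all the way to the low watermark.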
The fact that the rate limiter resides on the output side of the buffers is advantageous in that it can be used in conjunction with the PAUSE mechanism to effectively throttle back a customer's incoming traffic in a lossless fashion. It also provides the flexibility of billing the customer for smaller quantities of bandwidth, with an easy growth path up to the full line rate of the customer's Ethernet port, whether 100 Mbps or 1 Gbps.
An exemplary block diagram of a system 100 in which the present invention may be implemented is shown in
There are many different types of LANs, Ethernet being the most common for Personal Computers (PCs). Most LANs are confined to a single building or group of buildings. However, one LAN can be connected to other LANs over any distance via longer distance transmission technologies, such as those included in WAN 102. A WAN is a computer network that spans a relatively large geographical area. Typically, a WAN includes two or more local-area networks (LANs), as shown in
Among the technologies that may be used to implement WAN 102 are optical technologies, such as Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH). SONET is a standard for connecting fiber-optic transmission systems. SONET was proposed by Bellcore in the mid-1980s and is now an ANSI standard. SONET defines interface standards at the physical layer of the OSI seven-layer model. The standard defines a hierarchy of interface rates that allow data streams at different rates to be multiplexed. SONET establishes Optical Carrier (OC) levels from 51.84 Mbps (about the same as a T-3 line) to 2.48 Gbps. Prior rate standards used by different countries specified rates that were not compatible for multiplexing. With the implementation of SONET, communication carriers throughout the world can interconnect their existing digital carrier and fiber optic systems.
SDH is the international equivalent of SONET and was standardized by the International Telecommunications Union (ITU). SDH is an international standard for synchronous data transmission over fiber optic cables. SDH defines a standard rate of transmission at 155.52 Mbps, which is referred to as STS-3 at the electrical level and STM-1 for SDH. STM-1 is equivalent to SONET's OC-3.
LAN/WAN interfaces 108 and 110 provide electrical, optical, logical, and format conversions to signals and data that are transmitted between a LAN, such as LANs 104 and 106, and WAN 102.
An exemplary block diagram of an optical LAN/WAN interface service unit 200 (SU) is shown in
In addition to EOS functions, SU 200 may support frame encapsulation, such as GFP, X.86 and PPP in HDLC Framing. High Order Virtual Concatenation may be supported for up to 24 STS-1 or 8 STS-3c channels and is required for full wire-speed operation on SU 200 when operating at 1 Gbps.
SU 200 includes three main functional blocks: Layer 2 Switch 202, ELSA 204 and MBIF-AV 206. ELSA 204 is further subdivided into functional blocks including a GMII interface 208 to Layer 2 (L2) Switch 202, receive Memory Control & Scheduler (MCS) 210 and transmit MCS 212, encapsulation 216 and decapsulation 214 functions (for GFP, X.86 and PPP), Virtual Concatenation 218, frame buffering provided by memories 220, 222, and 224, and SONET mapping and performance monitoring functions 226. MBIF-AV 206 is used primarily as a backplane interface device to allow 155 Mbps or 622 Mbps operation. In addition, SU 200 includes physical interface (PHY) 228.
PHY 228 provides the termination of each of the four physical Ethernet interfaces and performs clock and data recovery, data encode/decode, and baseline wander correction for the 10/100Base-T copper or 1000Base-LX or -SX optical interfaces. Autonegotiation is supported as follows:
PHY 228 block provides a standard GMII interface to the MAC function, which is located in L2 Switch 202.
L2 Switch 202, for purposes of EPORT and TPORT, is operated as a MAC device. L2 Switch 202 is placed in port mirroring mode to provide transparency to all types of Ethernet frames (except PAUSE, which is terminated by the MAC). L2 Switch 202 is broken up into four separate two-port bi-directional MAC devices, which perform MAC level termination and statistics gathering for each set of ports. Support for Ethernet and Ether-like MIBs is provided by counters within the MAC portion of L2 Switch 202. L2 Switch 202 also provides limited buffering of frames in each direction (L2 Switch 202->ELSA 204 and ELSA 204->L2 Switch 202); however, the main packet storage area is the Tx Memory 222 and Rx Memory 220 attached to ELSA 204. L2 Switch 202 is capable of buffering 64 to 9216 byte frames in its limited memory. Both sides of L2 Switch 202 interface to adjacent blocks via a GMII interface.
ELSA 204 provides frame buffering, SONET Encapsulation and SONET processing functions.
In the Tx direction, the GMII interface 208 of ELSA 204 mimics PHY 228 operation at the physical layer. Small FIFOs are incorporated into GMII interface 208 to adapt bursty data flow to the Tx Memory 222 interface. Enough bandwidth is available through the GMII 208 and Tx Memory 222 interfaces (8 Gbps) to support all data transfers without frame drop for all four interfaces, even when all four Ethernet ports are operating at 1 Gbps. The GMII interface 208 also supports the capability of flow controlling the L2 Switch 202. The GMII block 208 receives memory threshold information supplied to it by the Tx Memory Controller 212, which monitors the capacity of the Tx Memory 222 on a per port basis (per customer basis for TPORT), and is programmable to drop incoming frames or provide PAUSE frames to the L2 Switch 202 when a predetermined threshold has been reached in memory. When flow control is used, the memory thresholds are set to provide enough space to avoid dropping frames given the PAUSE frame reaction times. The GMII interface 208 must also calculate and add frame length information to the packet. This information is used for GFP frame encapsulation.
The Tx MCS 212 provides the low level interface functions to the Tx Memory 222, as well as providing scheduler functions to control pulling data from the GMII FIFOs and supplying data to the Encapsulation block 216. For practical purposes, the Tx Memory 222 is effectively a dual port RAM; so, two independent scheduler blocks are provided for reading from and writing to the Tx Memory 222. The scheduler functions for EPORT and TPORT will differ slightly, but these differences will be handled through provisioning information supplied to the scheduler.
The primary function of the Tx Memory 222 is to provide a level of burst tolerance to entering LAN data, especially in the case where the LAN bandwidth is much greater than the provisioned WAN bandwidth. A secondary function of this memory is for Jumbo frame storage; this allows cut through operation in the GMII block 208 to provide for lower latency data delivery by not buffering entire large frames. The Tx Memory 222 is typically divided into partitions, for example, one partition per port or one partition per customer (VLAN). For both cases, each partition is operated as an independent FIFO. Fixed memory sizes are chosen for each partition regardless of the number of ports or customers currently in operation. Partitioning in this fashion prevents dynamic re-sizing of memory when adding or deleting ports/customers and provides for hitless upgrades/downgrades. The memory is also sized independently of WAN bandwidth. This provides for a constant burst tolerance as specified from the LAN side (assuming zero drain rate on WAN side). This partitioning method also guarantees fair allocation of memory amongst customers.
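The fixed partitioning scheme described above may be illustrated with the following sketch; the class name and sizes are hypothetical, and each partition simply behaves as an independent, bounded FIFO.

```python
from collections import deque


class PartitionedTxMemory:
    """Sketch of fixed, equal per-port partitions, each an independent FIFO.

    Partition sizes are fixed regardless of how many ports/customers are
    active, so adding or deleting a port never re-sizes another partition.
    """

    def __init__(self, total_bytes, num_ports):
        self.quota = total_bytes // num_ports  # fixed share per partition
        self.fifos = [deque() for _ in range(num_ports)]
        self.used = [0] * num_ports            # bytes buffered per partition

    def enqueue(self, port, frame):
        """Buffer a frame; refusal signals the caller to start flow control."""
        if self.used[port] + len(frame) > self.quota:
            return False
        self.fifos[port].append(frame)
        self.used[port] += len(frame)
        return True

    def dequeue(self, port):
        """Drain the oldest frame from a partition toward the WAN side."""
        frame = self.fifos[port].popleft()
        self.used[port] -= len(frame)
        return frame
```

Because one partition filling cannot consume another partition's quota, each customer's burst tolerance is isolated, which is the fairness property the text describes.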
The Encapsulation block 216 has a demand based interface to the Tx MCS 212. Encapsulation block 216 provides three types of SONET encapsulation modes, provisionable on a per port/customer basis (although SW may limit encapsulation choice on a per board basis). The encapsulation modes are:
In each encapsulation mode, additional overhead is added to the pseudo-Ethernet frame format stored in the Tx Memory 222.
The Encapsulation block 216 will decide which of the fields are relevant for the provisioned encapsulation mode. For example, Ethernet Frame Check Sequence (FCS) may or may not be used in Point-to-Point (PPP) encapsulation; and, length information is used only in GFP encapsulation. Another function of the Encapsulation block is to provide “Escape” characters to data that appears as High Level Data Link Control (HDLC) frame delineators (7Es) or HDLC Escape characters (7Ds). Character escaping is necessary in PPP and X.86 encapsulation modes. In the worst case, character escaping can nearly double the size of an incoming Ethernet frame; as such, mapping frames from the Tx Memory 222 to the SONET section of the ELSA 204 is non-deterministic in these encapsulation modes and requires a demand based access to the Tx Memory 222. An additional memory buffer block is housed in the Encapsulation block 216 to account for this rate adaptation issue. Watermarks are provided to the Tx MCS 212 to monitor when the scheduler is required to populate each port/customer space in the smaller memory buffer block.
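The character escaping described above follows the usual HDLC-style byte stuffing convention: a flag (7E) or escape (7D) byte in the payload is replaced by 7D followed by the original byte XORed with 0x20. A minimal sketch (function name is illustrative):

```python
FLAG = 0x7E  # HDLC frame delineator
ESC = 0x7D   # HDLC escape character


def hdlc_escape(payload: bytes) -> bytes:
    """Escape 7E/7D bytes as ESC followed by (byte XOR 0x20)."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
            out.append(b ^ 0x20)  # 7E -> 7D 5E, 7D -> 7D 5D
        else:
            out.append(b)
    return bytes(out)
```

This also makes the worst case concrete: a payload consisting entirely of 7E or 7D bytes doubles in size after escaping, which is why the mapping from Tx Memory to the SONET section is non-deterministic in the PPP and X.86 modes.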
The Virtual Concatenation (VCAT) block 218 takes the encapsulated frames and maps them to a set of pre-determined VCAT channels. A VCAT channel can consist of the following permutations:
These channel permutations provide a wide variety of bandwidth options to a customer and can be sized independently for each VCAT channel. The VCAT block 218 encodes the H4 overhead bytes required for proper operation of Virtual Concatenation. VCAT channel composition is signaled to a receive side SU using the H4 byte signaling format specified in the Virtual Concatenation standard. The VCAT block 218 provides TDM data to the SONET processing block after the H4 data has been added.
The SONET Processing block 226 multiplexes the TDM data from the VCAT block 218 into two STS-12 SONET data streams. Proper SONET overhead bytes are added to the data stream for frame delineation, pointer processing, error checking and signaling. The SONET Processing block 226 interfaces to the MBIF-AV block 206 through two STS-12 interfaces. In STS-3 mode (155 Mbps backplane interface), STS-3 data is replicated four times in the STS-12 data stream sent to the MBIF-AV 206; the first of four STS-3 bytes in the multiplexed STS-12 data stream represents the STS-3 data that is selected by the MBIF-AV 206 for transmission.
The MBIF-AV block 206 receives the two STS-12 interfaces previously described and maps them to the appropriate backplane interface LVDS pair (standard slot interface or BW Extender interface). The MBIF-AV 206 also has the responsibility of syncing SONET data to the Frame Pulse provided by the Line Unit and ensuring that the digital delay of data from the frame pulse to the Line Unit is within specification. The MBIF-AV 206 block also provides the capability of mapping SONET data to a 155 Mbps or 622 Mbps LVDS interface; this allows SU 200 to interface to the OC3LU, OC12LU or OC48LU. 155 Mbps or 622 Mbps operation is provisionable and is upgradeable in-system with a corresponding traffic hit. When operating as a 155 Mbps backplane interface, the MBIF-AV 206 must select STS-3 data out of the STS-12 stream supplied by the SONET Processing block and format that for transmission over the 155 Mbps LVDS links.
In the WAN-to-LAN datapath, MBIF-AV 206 is responsible for Clock and Data Recovery (CDR) for the four LVDS pairs, at either 155 Mbps or 622 Mbps.
The MBIF-AV 206 also contains a full SONET framing function; however, for the most part, the framing function serves as an elastic store element for clock domain transfer that is performed in this block. SONET Processing that is performed in this block is as follows:
Additional SONET processing is provided in the SONET Processing block 226. Multiplexing of Working/Protect channels from the standard slot interface or Bandwidth Extender slot interface is also provided in the MBIF-AV block 206. Working and Protect selection is chosen under MCU control. After the proper working/protect channels have been selected, the MBIF-AV block 206 transfers data to the SONET Processing block through one or both STS-12 interfaces. When operating at 155 Mbps, the MBIF-AV 206 has the added responsibility of multiplexing STS-3 data into an STS-12 data stream which is supplied to the SONET Processing block 226.
On the receive side, the SONET Processing block 226 is responsible for the following SONET processing:
In STS-3 mode of operation (155 Mbps backplane interface), a single stream of STS-3 data must be extracted from the STS-12 data stream as it enters the SONET Processing block 226. The SONET Processing block 226 selects the first of the four interleaved STS-3 bytes to reconstruct the data stream. After SONET Processing has been completed, TDM data is handed off to the VCAT block 218.
The VCAT block 218 processing is more complicated on the receive side because the various STS-1 or STS-3c channels that comprise a VCAT channel may come through different paths in the network, causing varying delays between SONET channels. The H4 byte is processed by the VCAT block to determine:
This information is learned over the course of 16 SONET frames to determine how the VCAT block 218 should process the aggregate VCAT channel data. As data on each STS-1 or STS-3c is received, it is stored in VC Memory 224. Skews between each STS-1 or STS-3c are compensated for by their relative location in VC Memory 224 based on delay information supplied in the H4 information for each channel. The maximum skew between any two SONET channels is determined by the depth of the VC Memory 224. Bytes of data are spread one-by-one across each of the SONET channels that are members of a VCAT channel; so, if one SONET channel is lost, no data will be supplied through the aggregate VCAT channel.
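The byte-by-byte spreading across member channels can be sketched as a simple round-robin interleave (function names are illustrative; skew compensation via VC Memory addressing is omitted for brevity):

```python
def vcat_spread(payload: bytes, members: int):
    """Spread bytes one-by-one (round-robin) across member channels."""
    channels = [bytearray() for _ in range(members)]
    for i, b in enumerate(payload):
        channels[i % members].append(b)
    return [bytes(c) for c in channels]


def vcat_reassemble(channels):
    """Recover the aggregate stream by reading members in rotation."""
    members = len(channels)
    total = sum(len(c) for c in channels)
    out = bytearray()
    for i in range(total):
        out.append(channels[i % members][i // members])
    return bytes(out)
```

The reassembly step makes the failure mode in the text apparent: every member channel contributes every Nth byte, so losing any single SONET channel leaves no usable data on the aggregate VCAT channel.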
The Decapsulation block 214 pulls data out of the VC Memory 224 based on sequencing information supplied to it by the VCAT block 218. Data is pulled a byte at a time from different address locations in VC Memory 224 corresponding to each received SONET channel that is a member of the VCAT channel. The Decapsulation block 214 is a Time Division Multiplex (TDM) block that is capable of supporting multiple instances of VCAT channels (up to 24 in the degenerate case of all STS-1 SONET channels) as well as multiple encapsulation types, simultaneously. Decapsulation of PPP in HDLC framing, X.86 and GFP (frame mode) are all supported. The Decapsulation block 214 strips all encapsulation overhead data from the received SONET data and provides raw Ethernet frames to the Rx MCS 210. If Ethernet FCS data was stripped by the transmit-side Encapsulation block 216 (an option in PPP), then it is regenerated and added in the Decapsulation block 214. Length information, used by GFP, is stripped in this block.
Rx MCS 210 receives data from the Decapsulation block 214. In TPORT mode, the Rx Memory Controller block inserts a VLAN tag corresponding to the VCAT channel associated with a particular customer.
The scheduling function required for populating Rx Memory 220 from the SONET side is straightforward. As the Decapsulation block 214 provides data to Rx MCS 210, it writes the corresponding data to memory 220 in the order that it was received. There is a clock domain transfer from the Decapsulation block 214 to Rx MCS 210; so, a small amount of internal buffering is provided for rate adaptation within the ELSA 204. Through provisioning information, Rx MCS 210 creates associations of VCAT channels to memory locations. In the case of EPORT, four memory partition locations are supported, one for each possible LAN port. Data in each memory partition is organized and controlled as a FIFO.
The algorithm for scheduling data from the Rx Memory 220 to corresponding LAN ports is essentially a token-based scheduling scheme. Ports/customers are given a relative number of tokens based on the bandwidth that they are allocated on the WAN side. So, an STS-3c channel is allocated three times as many tokens as an STS-1 channel. Tokens are refreshed for each port/customer on a regular basis. When the tokens reach a predetermined threshold, a port/customer is allowed to transfer data onto the appropriate LAN port. If the threshold is not reached, additional token replenishment is required before data can be sent. This algorithm takes into account the relative size of frames (byte counts) as well as the allocated WAN bandwidth for a particular port/customer. Each port/customer receives a fair share of LAN bandwidth proportional to the WAN bandwidth that was provisioned.
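The token-based scheme described above may be sketched as follows. The class name, refresh granularity, and threshold value are assumptions for illustration; the key properties from the text are that tokens are granted in proportion to provisioned WAN bandwidth (an STS-3c port earns three times what an STS-1 port earns) and that transmission is charged by frame byte count.

```python
class TokenScheduler:
    """Sketch of proportional token-based scheduling of Rx Memory drains."""

    def __init__(self, weights, threshold):
        self.weights = list(weights)       # tokens granted per refresh, per port
        self.tokens = [0] * len(weights)   # accumulated tokens per port
        self.threshold = threshold         # tokens needed before sending

    def refresh(self):
        """Periodic replenishment in proportion to provisioned bandwidth."""
        for i, w in enumerate(self.weights):
            self.tokens[i] += w

    def eligible(self):
        """Ports/customers currently allowed to transfer onto the LAN."""
        return [i for i, t in enumerate(self.tokens) if t >= self.threshold]

    def send(self, port, frame_bytes):
        """Charge a transmitted frame against the port's tokens by size."""
        self.tokens[port] -= frame_bytes
```

With weights [3, 1] (e.g. an STS-3c port and an STS-1 port), the STS-3c port becomes eligible after one refresh while the STS-1 port needs three, so over time each receives LAN bandwidth proportional to its provisioned WAN bandwidth.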
The scheduler function also takes into account the possibility of WAN oversubscription. Since it is possible to provision an STS-24 worth of bandwidth, care must be taken when mapping this amount of bandwidth onto a 1 Gbps LAN link; maintaining fairness of bandwidth allocation among ports/customers is key. The scheduler algorithm provides fair distribution of bandwidth under these conditions. In the case where WAN oversubscription is persistent, Rx Memory 220 will fill and eventually data will be discarded; however, it will be discarded fairly, based on the amount of memory that each port/customer was provisioned.
The Rx Memory 220 is partitioned in the same manner as the Tx Memory 222. For EPORT, four partitions are created. Each port/customer gets an equal share of memory.
The GMII interface 208 provides the interface to the L2 switch 202 as described earlier for the Tx direction. In the Rx direction, the GMII interface 208 supplies PAUSE data as part of the data stream when the GMII has determined that watermarks were crossed in the Tx Memory 222.
The L2 Switch 202 operates the same in the Rx direction as in the Tx direction. It is completely symmetrical and uses port mirroring in this direction as well. It may receive PAUSE frames from the GMII I/F 208 in the ELSA 204, in which case, it will stop sending data to the ELSA 204. In turn, the L2 Switch 202 memory may fill (in the Tx direction) and eventually packets will be dropped, or the L2 Switch 202 will generate PAUSE to the attached router or switch. The L2 Switch 202 supplies the PHY 228 with GMII formatted data.
The PHY 228 converts the GMII information into appropriately coded information and performs a parallel to serial conversion and transfers the data out onto the respective LAN port.
A process 300 of operation of SU 200, implementing rate limiting using PAUSE frames, is shown in
If the upper storage limit within Tx Memory 222 is reached in step 304, then in step 306, a pause frame 404 is transmitted from Tx MCS 212 to L2 Switch 202. Upon receiving pause frame 404, L2 Switch 202 stops transmitting data to Tx MCS 212. With L2 Switch 202 not transmitting data, Tx Memory 222 begins to empty, while the buffers included in L2 Switch 202 begin to fill.
If there is a large data throughput mismatch, the buffers in L2 Switch 202 may, in step 308, themselves reach an upper limit or threshold of storage. If the upper storage limit of the buffers in L2 Switch 202 is reached in step 308, then, in step 310, a pause frame 406 is transmitted from L2 Switch 202 to the LAN through PHY 228. Upon receiving the pause frame, the LAN stops transmitting data to SU 200.
After step 310, with the LAN not transmitting data, L2 Switch 202 not transmitting data, and Tx Memory 222 emptying, in step 312, Tx Memory 222 will reach its lower limit. Likewise, after step 306, with L2 Switch 202 not transmitting data and Tx Memory 222 emptying, if the data throughput mismatch is not too large or too sustained, in step 312, Tx Memory 222 will reach its lower limit. In response, in step 314, a pause frame 408 with PAUSE=0 is transmitted from Tx MCS 212 to L2 Switch 202. Upon receiving pause frame 408 with PAUSE=0, L2 Switch 202 begins transmitting data to Tx MCS 212.
With L2 Switch 202 transmitting data, the buffers in L2 Switch 202 begin to empty. Eventually, in step 316, the buffers in L2 Switch 202 reach their lower limit. In response, a pause frame 410 with PAUSE=0 is transmitted from L2 Switch 202 to the LAN through PHY 228. Upon receiving pause frame 410 with PAUSE=0, the LAN begins transmitting data to SU 200.
It will be understood by those of skill in the art that there are other embodiments that may provide similar advantages to the described embodiments. For example, one of skill in the art would recognize that rate limiting using PAUSE frames may be advantageously applied to SDH networks, as well as SONET networks. Likewise, for another example, the technique shown in
The use of two numbers to control rate limiting makes the problem linear and requires only shallow counters. Use of a ratio scheme between two numbers provides a more exact rate limit. In general, rate limits for 10/100 Mbps Ethernet range from 1 Mbps in increments of 1 Mbps (1 . . . 10/100); for 1000 Mbps Ethernet, from 10 Mbps in increments of 10 Mbps (10 . . . 1000).
Two parameters that are software derived are n and m, as shown in the following general relationship:
If m=Rd (the desired limit rate) and n=L−Rd, then m and n will be integers that give the desired results (when L and Rd are integers).
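As a check of these substitutions, under the assumption (inferred from the substitutions, since the referenced figure with the general relationship is not reproduced here) that the limited rate is the fraction m/(m+n) of the line rate L:

```latex
R_d \;=\; L \cdot \frac{m}{m+n}
    \;=\; L \cdot \frac{R_d}{R_d + (L - R_d)}
    \;=\; L \cdot \frac{R_d}{L}
    \;=\; R_d
```

Since m+n=L by construction, the ratio scheme reproduces the desired rate exactly whenever L and Rd are integers, which is why only shallow counters (bounded by L) are needed.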
An exemplary logical block diagram 500 that implements two number rate limiting is shown in
A process of operation 600 of two number rate limiting is shown in
Mathematically any algorithm that uses a single number will fall into one of two types:
For 2) the values go asymptotically closer to 0 and the last useful value is 1−1/(1+K)=K/(K+1). Therefore, Max m=K^2. In this case L=1/(1+K)=1/100, K=99, and m=99^2=9801, which requires 14 bits. This works at the boundary conditions but is not a perfect match in the linear sense.
For non-integral WAN links, let W be the highest integral value of the rate limit on the WAN link, and let e be the remaining bandwidth.
An example of another embodiment of Rate Limiting is shown in
The debit based approach differs from the aforementioned credit based approach in that it is frame aware and does not rely on an idle insert method, but rather continually counts down at the provisioned rate 706, for example, in provisioned multiples of 10,000,000 bits per second. The debit based approach only allows the SONET interface to read entire frames from the Tx buffer when the UP/DOWN counter 708 is equal to zero 710. The variability in Ethernet frame sizes is handled by counting up on a per byte basis. Idle time on the LAN interface is not credited, as the UP/DOWN counter 708 does not decrement below zero. The fact that credits are not built up for intervals where the LAN side is not transmitting frames does not adversely affect the average rate transmitted over the SONET WAN, as idle periods of time on the LAN interface fall below the provisioned rate anyway. The frame level handshake 702 on the SONET side ensures that the number of bytes transmitted over time is accurately captured, and yields a smoothing effect on the traffic that generates a frame gap to the SONET interface proportional to the size of the last frame transmitted in conjunction with the provisioned rate. This implementation requires a single provisioned value which determines the multiple of discrete quanta, such as 10,000,000 bit/sec quanta, at which to rate limit out to the SONET interface. The accuracy is achieved via cascaded fractional dividers 712 which adapt the quanta into the SONET domain.
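The debit counter's behavior can be sketched in software as follows. This is an illustrative model only (names and the per-second tick granularity are assumptions); the embodiment implements the countdown and the fractional-divider accuracy in hardware.

```python
class DebitRateLimiter:
    """Sketch of the frame-aware debit counter: charges per byte sent,
    counts down at the provisioned rate, and clamps at zero so that
    idle LAN time earns no credit."""

    QUANTUM_BPS = 10_000_000  # provisioning quantum, bits per second

    def __init__(self, provisioned_units):
        self.rate_bps = provisioned_units * self.QUANTUM_BPS
        self.counter = 0  # outstanding debit, in bits

    def tick(self, seconds):
        """Provisioned-rate countdown; never decrements below zero."""
        self.counter = max(0, self.counter - int(self.rate_bps * seconds))

    def can_send(self):
        """The SONET side may read a whole frame only when the debit is zero."""
        return self.counter == 0

    def on_frame_sent(self, frame_bytes):
        """Count up per byte of the frame just read toward the SONET side."""
        self.counter += frame_bytes * 8
```

After a frame is read, the counter holds a debit proportional to the frame's size, so the enforced gap before the next frame is likewise proportional to frame size over provisioned rate, which produces the smoothing effect described above.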
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media, such as floppy disks, hard disk drives, RAM, and CD-ROMs, as well as transmission-type media, such as digital and analog communications links.
Although specific embodiments of the present invention have been described, it will be understood by those of skill in the art that there are other embodiments that are equivalent to the described embodiments. Accordingly, it is to be understood that the invention is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims.