The present invention relates generally to a system and method for providing streaming content to a requester and more specifically to streaming content from a stream engine node to a requester while meeting Quality of Service constraints.
Digital streaming data, which includes continuous media such as audio and video streams, must be delivered in a predictable way from source to destination in order to preserve and reproduce at the destination the timing relationships that existed at the source. This generally means delivering the packets making up the stream in order and on time. Failure to do so produces garbled and unusable results at the destination. One way to deliver the packets that make up digital streaming data reliably is to do so under a Quality of Service constraint.
Quality of Service (QoS) constraints are well known in the area of networking. For example, a QoS constraint on the transport service of a network can require that a transport connection be established within a specified period of time, that a certain level of throughput be maintained by the established connection over a specified time interval, and that the transport layer provide protection against unauthorized third parties accessing transported data. These constraints can be requested by the user when establishing the connection at the transport level. Very often, if the transport layer cannot achieve the requested constraints, a negotiation occurs between the requesting site and the remote sites of the connection for a set of constraints that are acceptable to both ends of the connection. If a set of constraints is found to be acceptable to both ends of the connection, the connection is set up and the constraints are maintained throughout the duration of the connection.
Though difficult, meeting reasonable QoS constraints in the network context is achievable because the times involved are on the order of microseconds. However, a storage system that is connected to a network meeting QoS constraints is another matter. Storage systems that provide streaming data operate with times on the order of milliseconds and, if rotating media are involved, may have unpredictable response times and throughputs. Streaming data includes an audio stream or video stream or other continuous media for which the receiver must reproduce the timing relationship that existed at the transmitter. The delivery of such data benefits from traversing a network having QoS constraints.
For a network that is connected to such storage systems, it does little good for streaming to have the network meet the QoS constraints while the storage system does not. The servicing of a streaming request by such a storage system will appear to the user as unpredictable, slow, and possibly unworkable if the stream is being viewed in real time, even though the network is performing adequately under its QoS constraints. One such prior art streaming storage system 100 is shown in the accompanying drawings.
Such a system configuration makes it difficult, if not impossible, to meet a QoS constraint on the response time and throughput in servicing the streaming request. The reason is that the requested data must flow through the server 103. Thus, a QoS constraint or guarantee can extend only as far as the routers 105 and the second set of switches 106, as shown. Prior art systems have sought to remedy this problem by providing more processors and memory in the servers, in effect over-provisioning the servers. However, this solution is expensive, wastes resources, and is not scalable, in the sense that the system cannot grow easily to handle more concurrent streaming requests.
Therefore, there is a need for a storage system, especially a storage system that supplies streaming data, that provides the data under QoS constraints just as the network does, in a scalable manner and without requiring excess resources.
The present invention is directed towards such a need. A method, in accordance with the present invention, includes receiving a request for a streaming media object at the stream director node and locating a stream engine node to which or from which the streaming media object is to be transferred. The method further includes verifying that sufficient system resources are available to service the request, and preparing a data transfer path between the stream engine node and the client system that made the streaming media request. The data transfer path includes one or more resources along the path, but not the stream director node. The method further includes causing resources along the prepared path to be reserved for use by the data transfer, passing the request for the streaming media object from the stream director node to the located stream engine node for servicing, and transferring, over the prepared path, data comprising the streaming media object between the client system that made the request and the located stream engine node.
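By way of a non-limiting illustration, the following sketch organizes this sequence of steps in code; the class names, fields, and the simple bandwidth check are assumptions made for the example and are not the disclosed implementation.

```python
# Non-limiting sketch only: names, fields, and helper logic are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StreamEngine:
    name: str
    objects: set              # identifiers of stream objects stored on this node
    free_bandwidth: float     # bandwidth (e.g. Mb/s) still available on this node

@dataclass
class Request:
    object_id: str
    client: str
    bandwidth: float          # bandwidth needed to sustain the stream

def handle_request(req: Request, engines: List[StreamEngine]) -> Optional[dict]:
    """Stream director logic: locate, verify, reserve, then hand off.

    Returns a description of the prepared transfer, or None if the request
    cannot be serviced now (the caller may queue, forward, or return it).
    """
    # 1. Locate a stream engine node holding the requested object.
    engine = next((e for e in engines if req.object_id in e.objects), None)
    if engine is None:
        return None
    # 2. Verify that sufficient resources are available to service the request.
    if engine.free_bandwidth < req.bandwidth:
        return None
    # 3. Reserve resources along the path; the director itself is not on the path.
    engine.free_bandwidth -= req.bandwidth
    path = [engine.name, "switch", req.client]        # illustrative path only
    # 4. Pass the request to the engine; data then flows engine <-> client.
    return {"engine": engine.name, "path": path, "bandwidth": req.bandwidth}

# Example: a single engine holding "object-42" serves a client needing 8 Mb/s.
engines = [StreamEngine("engine-1", {"object-42"}, free_bandwidth=100.0)]
print(handle_request(Request("object-42", "client-A", 8.0), engines))
```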
In one embodiment of the present invention, the transfer of the streaming media object is performed under a Quality of Service constraint.
A system in accordance with the present invention includes at least one stream director node that is configured to: receive a request for a streaming media object at the stream director node; locate a stream engine node to which or from which the streaming media object is to be transferred; verify that sufficient system resources are available to service the request; prepare a data transfer path between the stream engine node and the client system that made the streaming media request, where the stream director node is not included in the data transfer path and the data transfer path includes one or more resources along the path; cause resources along the prepared path to be reserved for use by the data transfer; pass the request for the streaming media object from the stream director node to the located stream engine node for servicing; and transfer, over the prepared path, data comprising the streaming media object between the client system that made the request and the located stream engine node. The system further includes at least one stream engine node for storing streaming media objects including the requested streaming media object, the stream engine node being configured to receive the request for a streaming media object and transfer, over the prepared path, data comprising the streaming media object between the stream engine node on which the object resides and the client system that made the request.
One advantage of the present invention is that the system servicing the streaming media request can operate under a Quality of Service constraint because the stream director node is not involved in the transfer of data comprising the streaming media object; the stream engine node on which the streaming media object resides handles the data transfer.
Another advantage of the present invention is that it is scalable to handle increased numbers of streaming media requests by adding stream engine nodes and without having to overprovision the stream director node or nodes.
These and other features, aspects and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings.
In the system of the present invention, a streaming request 107 arrives at a stream director node 205, which locates a stream engine node 204 that stores the requested streaming media object and verifies that sufficient system resources are available to service the request.
If sufficient resources are not available, the stream director node 205 may hold the request 107 in its queue until a later time, pass the queued request along to another stream director node 205, which may be able to obtain the needed resources to service the request, or simply return the request 107 to the requester.
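The disposition of a request that cannot be serviced immediately can be summarized by a small decision routine. The sketch below is illustrative only; the queue bound, the peer hand-off callables, and the return values are assumptions.

```python
from collections import deque

def dispose(request, have_resources, queue, peer_directors, queue_limit=100):
    """Hold, forward, or return a request that cannot be serviced right now."""
    if have_resources(request):
        return "service"                 # resources obtained: service the request
    if len(queue) < queue_limit:
        queue.append(request)            # hold the request in this director's queue
        return "queued"
    for forward in peer_directors:       # offer the request to another stream director
        if forward(request):
            return "forwarded"
    return "returned"                    # last resort: return the request to the requester

# Example: no resources are available and the queue has room, so the request is held.
pending = deque()
print(dispose("request-107", have_resources=lambda r: False, queue=pending, peer_directors=[]))
```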
In the configuration of the present invention there are no servers through which the streaming data 108 must pass to reach the ultimate requester. Therefore, a QoS constraint can be imposed on the system all the way to and including the stream engine node 204 servicing the request. Furthermore, the system is scalable without over-provisioning by simply adding more stream engine nodes 204 and not necessarily more stream director nodes 205, and the system is deadlock free because all resources needed to service the stream request 107 are obtained before the request 107 is serviced and as a condition of the request 107 being serviced.
MPLS (Multiprotocol Label Switching) 302A is a protocol that relies on a modified set of routing tables in the routers making up a network. The routing tables are modified to route based on a specific label rather than source and destination addresses in packet headers. This permits faster service through the routers 201 and guarantees a fixed transmission path throughout the network, which provides a mechanism by which a QoS constraint can be enforced.
In the protocol, routing occurs based on label-switched paths (LSPs), each of which is a sequence of labels, one at each node along the path from the source to the destination of a connection. An LSP is established prior to the data transmission by means of a label distribution protocol (LDP) or other similar protocol. Labels are spliced into a Layer 2 header. A router that receives such a packet examines the label to determine the next hop in the pre-established route. Information called a forward equivalence class (FEC) is bound to a label in each router that participates in the LSP. The FEC determines the service requirements that a packet or set of packets receives when traversing the LSP.
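By way of a non-limiting example, label-based forwarding can be pictured as a per-router table that maps an incoming label to the next hop and outgoing label bound to the same FEC; the router names and label values below are invented for the illustration.

```python
# Illustrative only: label values, router names, and table layout are made up.
forwarding_tables = {
    "LSR-1":      {17: ("LSR-2", 42)},          # in-label 17 -> next hop LSR-2, out-label 42
    "LSR-2":      {42: ("LER-egress", 99)},
    "LER-egress": {99: ("client", None)},       # label removed at the edge of the LSP
}

def switch(router: str, in_label: int):
    """One hop of an LSP: look up the incoming label, return (next hop, out-label)."""
    return forwarding_tables[router][in_label]

print(switch("LSR-1", 17))   # ('LSR-2', 42)
```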
Devices that participate in the LSP are Label Edge Routers (LERs) and Label Switching Routers (LSRs). Label Edge Routers operate at the edge of the LSP and LSRs operate in the core of the network to support the LSP.
In order for a data packet to travel through a network according to an LSP, several steps occur before the data actually traverses the LSP. First, labels are created and distributed to the various routers in the network from the source to the destination of the connection. In this step, the routers bind labels to FECs.
Second, tables in the routers are constructed. These tables contain the mappings between a label and a FEC. Third, an LSP is created starting from the destination and working towards the source from which label distribution started.
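These setup steps can be pictured with a small routine that distributes labels and builds per-router tables starting from the destination and working back toward the source; the label values and table layout below are assumptions made for the illustration.

```python
# Illustrative only: a toy LSP setup that binds labels to a FEC hop by hop,
# destination first, as described above. Label values are arbitrary.
def build_lsp(path, fec):
    """path: routers listed from source to destination; returns per-router label tables."""
    labels = iter(range(17, 17 + len(path)))   # arbitrary label values for the sketch
    tables = {router: {} for router in path}
    out_label, downstream = None, "host"       # the egress pops the label
    for router in reversed(path):              # start at the destination, work upstream
        in_label = next(labels)
        tables[router][in_label] = {"fec": fec, "next_hop": downstream, "out_label": out_label}
        out_label, downstream = in_label, router
    return tables

for router, entries in build_lsp(["LER-in", "LSR-1", "LER-out"], "video-stream").items():
    print(router, entries)
```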
RSVP (Resource Reservation Protocol) 304A is an application-level protocol that uses IP datagrams as the signaling mechanism for LSP setup communications. These communications include peer discovery, label requests and mapping, and management. The protocol supports a RESV message to reserve resources with traffic and QoS parameters (such as guaranteed bandwidth) in the LSR upstream direction (towards the ingress). In one embodiment, the upstream direction for RSVP 304A is away from the client/user system and the downstream direction is towards the client/user system. This means that, in this embodiment, the user/client system obtains information from the streaming server system in order to send the RESV message to reserve resources along an LSP. In an alternative embodiment, the upstream direction is towards the client/user system. A RESVConf message to confirm the LSP setup is sent in the downstream direction (towards the client/user system). Once the reservations have been set up in the LSP, refresh messages are required to maintain the path and the reservations.
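Because such reservations are soft state, they lapse unless refresh messages keep them alive. The sketch below illustrates that idea only; the lifetime value and class structure are assumptions and are not taken from the protocol specification.

```python
import time

class SoftReservation:
    """Illustrative soft-state reservation along an LSP; values are assumptions."""
    def __init__(self, hops, bandwidth, lifetime=30.0):
        self.hops = list(hops)          # routers holding the reservation
        self.bandwidth = bandwidth      # e.g. guaranteed bandwidth carried in the RESV
        self.lifetime = lifetime        # seconds the state survives without a refresh
        self.expires = time.monotonic() + lifetime

    def refresh(self):
        """A periodic refresh message extends the reservation's lifetime."""
        self.expires = time.monotonic() + self.lifetime

    def active(self):
        """The reserved resources are held only while refreshes keep arriving."""
        return time.monotonic() < self.expires

# Example: reserve 8 Mb/s along a three-hop LSP and keep it alive with a refresh.
resv = SoftReservation(["LSR-2", "LSR-1", "LER-ingress"], bandwidth=8.0)
resv.refresh()
print(resv.active())   # True while refreshes continue; False once they stop
```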
Directory Enabled Network Services
Because stream objects have a relatively long lifetime ranging from seconds to hours, there is no need for complex databases to keep track of the streams. A directory 700, as shown in the accompanying drawings, is sufficient for this purpose.
In the present invention, a directory 700 contains the paths or routes 705, 704 (the Border Gateway Protocol (BGP) may be used to share route information) to each stream object 707 and the resources 706 required along the path to sustain the stream. When a request arrives at the stream director node 205 containing the directory 700, the stream director node 205 determines the location of the desired object and possible routes the stream data may traverse.
In addition to the directory 700, the stream director node also contains a list of resources 706, such as available bandwidth and buffers. Leases associate an object with a resource for a specified amount of time. When a lease expires, the object no longer moves through the network and the associated resources are returned. The stream director nodes 205 track available resources, and inform each other whenever a lease is granted for a resource.
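One possible shape for a directory entry and its leases is sketched below; the field names and lease mechanics are assumptions intended only to mirror the description above.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Lease:
    resource: str         # e.g. "bandwidth:engine-1->switch-3"
    amount: float
    expires: float        # time after which the resource is returned

@dataclass
class DirectoryEntry:
    object_id: str                                 # stream object 707
    routes: List[list]                             # candidate paths 704, 705 to the object
    required: Dict[str, float]                     # resources 706 needed to sustain the stream
    leases: List[Lease] = field(default_factory=list)

def grant_lease(entry: DirectoryEntry, resource: str, amount: float, seconds: float) -> Lease:
    """Associate the object with a resource for a specified amount of time."""
    lease = Lease(resource, amount, time.monotonic() + seconds)
    entry.leases.append(lease)
    return lease

def reap_expired(entry: DirectoryEntry) -> float:
    """Drop expired leases and report the total resource amount returned."""
    now = time.monotonic()
    freed = sum(l.amount for l in entry.leases if l.expires <= now)
    entry.leases = [l for l in entry.leases if l.expires > now]
    return freed

# Example: one entry, one hour-long bandwidth lease along a single candidate route.
entry = DirectoryEntry("object-42", routes=[["engine-1", "switch-3", "client-A"]],
                       required={"bandwidth": 8.0})
grant_lease(entry, "bandwidth:engine-1->switch-3", 8.0, seconds=3600)
```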
Load Balancing
Because storage capacity is increasing faster than the speed of an individual stream engine node 204, load balancing is preferably accomplished by replicating the stream object on multiple stream engine nodes. Replication, i.e., keeping a complete copy of a stream object on another device, doubles or further multiplies the number of streams that may be served.
Within a directory 700, replicated stream objects are adjacent. A stream director node 205 can easily determine the load associated with access to and from a particular stream object by examining the lease reservations in the corresponding directory 700 entry. The stream director node 205 balances the load by choosing the stream engine node 204 with the lowest load (fewest or shortest lease reservations). If no stream engine node holding a copy of the stream object has sufficient resources, then the user's request is held until enough lease reservations expire to support the request.
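The choice of the least-loaded replica can be expressed compactly, as in the illustrative sketch below; the load and capacity figures are invented for the example.

```python
def choose_engine(replica_nodes, load, capacity, needed):
    """Pick the stream engine node with the lightest lease load that can still
    accommodate the new stream, or None if the request must be held."""
    candidates = [n for n in replica_nodes if capacity[n] - load[n] >= needed]
    if not candidates:
        return None                    # hold the request until lease reservations expire
    return min(candidates, key=lambda n: load[n])

# Example: both nodes hold a replica; engine-2 carries the lighter load and is chosen.
print(choose_engine(["engine-1", "engine-2"],
                    load={"engine-1": 90.0, "engine-2": 40.0},
                    capacity={"engine-1": 100.0, "engine-2": 100.0},
                    needed=8.0))       # engine-2
```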
In an alternative embodiment, a small amount of resources is allocated to a background task that creates another copy of the stream object dynamically, as needed. Once the replicated stream object is available on another stream engine node 204, the load on the overburdened node 204 is mitigated. After the demand for the stream object subsides, the replicated copy is abandoned.
Authenticating and Configuring Low Level Device Drivers for Streaming Data Operation
Device drivers operate to abstract the underlying hardware apparatus, such as a hard disk drive (HDD), for file systems and operating systems. In particular, the device driver abstracts the attributes of a variety of types of HDDs into a consistent interface, called an Application Programming Interface (API) or I/O Control Interface (IOCTL).
As part of this abstraction, present device drivers translate the logical block addressing (LBA) of the HDD into the cluster or block addressing of the file system. For example, HDD blocks are small, on the order of 512 bytes, while file system blocks are 2 KB to 8 KB. The file system block sizes align well with the paging memory subsystems that are used in virtual memory operating systems such as Unix, Linux, Windows NT, Solaris and VMS.
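The translation itself is simple arithmetic, as the following illustration shows; the 512-byte sector and 4 KB file-system block sizes are assumed only for the example.

```python
SECTOR_BYTES = 512        # bytes per HDD logical block (LBA unit)
FS_BLOCK_BYTES = 4096     # bytes per file-system block (within the 2 KB-8 KB range)

def fs_block_to_lbas(fs_block: int, partition_start_lba: int = 0) -> range:
    """Return the range of HDD logical block addresses backing one file-system block."""
    sectors_per_block = FS_BLOCK_BYTES // SECTOR_BYTES      # 8 sectors per block here
    first = partition_start_lba + fs_block * sectors_per_block
    return range(first, first + sectors_per_block)

print(list(fs_block_to_lbas(3)))   # file-system block 3 maps to LBAs 24..31
```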
A well-designed device driver attempts to minimize the movement of the HDD positioning arm and the rotational delays associated with HDD accesses. The device driver accomplishes this by accessing larger amounts of data than were requested of the device. These larger accesses effectively pre-fetch data into memory in anticipation of a future request. For example, the Linux operating system may obtain up to 64 KB from the HDD when a request is made of an HDD.
However, as large as these requests are, streaming requests are even larger. In the case of a video stream, a request of 1 megabyte is not uncommon. These large requests tend to flush other useful data out of the pre-fetch memory and degrade the performance of traditional applications. Furthermore, these large requests may cause positioning arm movements due to crossing cylinder boundaries and may require several rotations of the HDD to complete. It is desired that these transfers avoid positioning arm movements and rotational delays as much as possible in order to facilitate meeting QoS constraints.
Therefore, the invention dynamically configures device drivers for either traditional operation or streaming operation. Because device drivers are aware of the application or process requesting service, this becomes a matter of identifying or authenticating the process as a streaming process. Default device driver requests are sized according to the particular operating system. Streaming applications cause the device driver to reconfigure itself for the specific request size required to sustain the stream. For example, audio streams are expected to have smaller request sizes than video streams.
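One way to picture this reconfiguration is as a per-process request-size policy, sketched below; the specific sizes and the lookup structure are assumptions chosen to mirror the text, not the actual driver interface.

```python
DEFAULT_REQUEST_BYTES = 64 * 1024          # e.g. a typical operating-system prefetch size
STREAM_REQUEST_BYTES = {
    "audio": 256 * 1024,                   # audio streams are expected to need less
    "video": 1024 * 1024,                  # ~1 MB video requests are not uncommon
}

def request_size(process_is_streaming: bool, stream_kind: str = "") -> int:
    """Choose the I/O request size the device driver should issue for this process."""
    if process_is_streaming and stream_kind in STREAM_REQUEST_BYTES:
        return STREAM_REQUEST_BYTES[stream_kind]
    return DEFAULT_REQUEST_BYTES           # traditional (non-streaming) operation

print(request_size(False))                 # 65536
print(request_size(True, "video"))         # 1048576
```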
The streaming application authenticates by sending an authentication request to an authentication server located on the World Wide Web. The authentication server verifies that a valid license has been issued to the client/user of the streaming application.
Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
This application is a continuation of U.S. patent application Ser. No. 14/160,360, filed Jan. 21, 2014, which is a continuation of U.S. patent application Ser. No. 11/846,657, filed Aug. 29, 2007, which is a divisional of U.S. patent application Ser. No. 10/176,498, filed Jun. 21, 2002, which claims priority to U.S. Provisional Patent Application Ser. No. 60/308,918, filed Jul. 27, 2001. Each of the aforementioned applications is hereby incorporated by reference in its entirety.