The present invention relates to networked storage systems.
With the accelerating growth of Internet and intranet communication, high-bandwidth applications (such as streaming video), and large information databases, the need for networked storage systems has increased dramatically.
In networked storage systems, users access the data on the storage elements through host ports. The host ports may be located in close proximity to the storage elements or they may be several miles away. The storage elements used in networked storage systems are often hard disk drives. Unfortunately, when a drive fails, the data stored on it becomes inaccessible. In a system in which access to data is imperative, a backup mechanism is required. Most backup schemes today store the data on multiple disk drives so that, if one drive fails, another drive containing a copy of the data remains available. Such arrangements of multiple disk drives are known as redundant arrays of independent disks (RAIDs). The addition of RAIDs and their associated RAID controllers makes a networked storage system more reliable and fault tolerant. Because of these inherent advantages, RAID has quickly become an industry standard.
Conventional enterprise-class RAID controllers employ a backplane as the interconnect between the hosts and the storage devices. A series of host port interfaces are connected to the backplane, as are a series of storage element interfaces. Generally, a centralized cache and a transaction/RAID processor are also directly connected to the backplane. Unfortunately, the more host port interfaces and storage element interfaces that are added to the backplane, the lower the overall system performance becomes. A backplane offers only a fixed bandwidth and therefore cannot readily accommodate scaling. Currently, the only way to provide scalability is to add another enterprise-class RAID controller box to the networked storage system. Current RAID controller systems, such as Symmetrix by EMC, are large and costly, so adding an entire RAID controller box purely for the sake of scalability is often not economically viable.
The conventional system is also severely limited in flexibility because it does not offer an architecture that allows any host to access any storage element in the system if there are multiple controllers. Typically, the controller is programmed to control access to certain storage elements from only certain host ports. For other hosts, there is simply no path available to every storage element.
Nor does the conventional system offer a way to coordinate overlapped writes to the RAID with high accuracy, high performance, and few data collisions.
Attempts have been made to improve system performance by adding scalability enablers and incorporating a direct communications path between the host and the storage device. Such a system is described in U.S. Pat. No. 6,397,267, entitled “Redirected I/O for scalable performance storage architecture,” assigned to Sun Microsystems, Inc., which is hereby incorporated by reference. While the system described in that patent may improve performance by adding scalability, it does not offer an architecture in which any host can communicate with any storage element in a system with multiple controllers.
It is therefore an object of the invention to provide a RAID controller capable of allowing any host port access to any volume through request mapping.
It is yet another object of the present invention to provide a scalable networked storage system architecture.
It is another object of the invention to provide a scalable architecture that allows any host port to communicate with any logical or virtual volume.
It is yet another object of the invention to provide concurrent volume accessibility through any host port.
It is yet another object of this invention to provide a scalable networked storage system architecture that has significantly improved performance over conventional storage systems.
It is yet another object of this invention to provide a scalable networked storage system architecture that is more flexible than conventional storage system architectures.
It is yet another object of the present invention to provide a method and apparatus for coordinating overlapped writes in a networked storage controller/virtual storage engine architecture.
The present invention is a RAID controller architecture with integrated map-and-forward function, virtualization, scalability, and mirror consistency. The RAID controller architecture utilizes decentralized transaction processor controllers with decentralized cache to allow for unlimited scalability in a networked storage system. The system provides virtualization through a map-and-forward function in which a virtual volume is mapped to its logical volumes at the controller level. The system also provides a scalable networked storage system control architecture that supports any number of host and/or storage ports while significantly increasing system performance in a low-cost and efficient manner. The system further provides a controller/virtualizer architecture and associated methods for providing mirror consistency in a virtual storage environment in which different hosts may write to the same LBA simultaneously.
The foregoing and other advantages and features of the invention will become more apparent from the detailed description of exemplary embodiments of the invention given below with reference to the accompanying drawings, in which:
Now referring to the drawings, where like reference numerals designate like elements, there is shown in FIG. 1 a networked storage system architecture 100, in which hosts 115 are connected through a network communication fabric 110 to a storage controller system 180 that includes RAID controller 1 120, RAID controller 2 130, and RAID controller 3 140.
RAID controller 1 120 further includes a host port 1 (H1) 121, a host port 2 (H2) 122, a storage element port 1 (S1) 123, a storage element port 2 (S2) 124, an interconnect interface port 1 (I1) 125, and an interconnect interface port 2 (I2) 126. S1 123 is connected to a storage element 127. S2 124 is connected to storage element 128. I1 125 connects to an interconnect 1 150. I2 126 connects to an interconnect 2 160. RAID controller 1 120 also includes a cache 129.
RAID controller 2 130 further includes a host port 1 (H1) 131, a host port 2 (H2) 132, a storage element port 1 (S1) 133, a storage element port 2 (S2) 134, an interconnect interface port 1 (I1) 135, and an interconnect interface port 2 (I2) 136. S1 133 is connected to a storage element 137. S2 134 is connected to a storage element 138. I1 135 connects to interconnect 1 150. I2 136 connects to interconnect 2 160. RAID controller 2 130 also includes a cache 139.
RAID controller 3 140 further includes a host port 1 (H1) 141, a host port 2 (H2) 142, a storage element port 1 (S1) 143, a storage element port 2 (S2) 144, an interconnect interface port 1 (I1) 145, and an interconnect interface port 2 (I2) 146. S1 143 is connected to a storage element 147. S2 144 is connected to a storage element 148. I1 145 connects to interconnect 1 150. I2 146 connects to interconnect 2 160. RAID controller 3 140 also includes a cache 149.
The configuration shown in networked storage system architecture 100 may include any number of hosts, any number of controllers, and any number of interconnects. For simplicity and ease of explanation, only a representative sample of each is shown. In a topology with multiple interconnects, path load balancing algorithms generally determine which interconnect is used. Path load balancing is fully disclosed in U.S. patent application Ser. No. 10/637,533, filed Aug. 8, 2003, which is hereby incorporated by reference.
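By way of illustration only, such a path load balancing decision might reduce to choosing the least-loaded interconnect. The Python sketch below is a generic stand-in, not the algorithm of the incorporated application; the class, function, and attribute names are hypothetical.

```python
# Illustrative sketch only: a generic least-loaded path selector. The actual
# path load balancing algorithm is the one disclosed in application
# Ser. No. 10/637,533; the names and the policy below are assumptions.

class InterconnectPath:
    def __init__(self, name):
        self.name = name
        self.outstanding = 0  # commands currently in flight on this path

def select_path(paths):
    """Pick the interconnect with the fewest outstanding commands."""
    return min(paths, key=lambda p: p.outstanding)

paths = [InterconnectPath("interconnect 1"), InterconnectPath("interconnect 2")]
paths[0].outstanding = 3
chosen = select_path(paths)   # interconnect 2 is currently the less loaded path
chosen.outstanding += 1
print(chosen.name)            # -> interconnect 2
```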
RAID controller 1 120, RAID controller 2 130, and RAID controller 3 140 are each based on the Aristos Logic pipelined transaction processor-based I/O controller architecture as fully disclosed in U.S. patent application Ser. No. 10/429,048, entitled “System and Method for Scalable Transaction Processing,” and U.S. patent application Ser. No. 09/716,195, entitled “Integrated I/O Controller,” the disclosures of which are hereby incorporated by reference.
Storage controller system 180 may or may not physically include a system configuration controller 170. System configuration controller 170 may physically reside outside storage controller system 180, in which case its information enters through one of the host ports. The information provided by system configuration controller 170 may be obtained by the RAID controllers from hosts 115 or from another device connected to network communication fabric 110. System configuration controller 170 provides the information required by the RAID controllers to perform store-and-forward and map-and-forward operations. This information may include volume mapping tables, lists of volume controllers, setup information, and control information for volumes recently brought online. In this example, system configuration controller 170 has established logical volume 1 as residing on storage element 127 and storage element 128, both of which are controlled by RAID controller 1 120. Similarly, system configuration controller 170 may establish logical volume 2 as residing on storage element 137 and storage element 138, which are controlled by RAID controller 2 130. Finally, system configuration controller 170 may establish logical volume 3 as residing on storage element 147 and storage element 148, which are controlled by RAID controller 3 140. System configuration controller 170 updates each RAID controller with the logical volume assignments of all RAID controllers within storage controller system 180.
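By way of illustration only, the volume mapping information described above can be pictured as a small table keyed by logical volume. The Python sketch below reuses the reference numerals of this example as identifiers; the layout and field names are assumptions rather than a format defined by system configuration controller 170.

```python
# Sketch of the volume mapping information that the configuration controller
# might distribute to every RAID controller. The layout is an assumption.
VOLUME_MAP = {
    # logical volume: owning RAID controller and its storage elements
    1: {"controller": "RAID controller 1 (120)", "storage_elements": [127, 128]},
    2: {"controller": "RAID controller 2 (130)", "storage_elements": [137, 138]},
    3: {"controller": "RAID controller 3 (140)", "storage_elements": [147, 148]},
}

def owning_controller(logical_volume):
    """Return the RAID controller that owns the given logical volume."""
    return VOLUME_MAP[logical_volume]["controller"]

print(owning_controller(1))   # -> RAID controller 1 (120)
```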
In operation, any host 115 may send a write request to any volume in storage controller system 180 via any RAID controller, and the write will be performed correctly. In one example, host 1 115 requests a write to volume 1. In this example, host 1 115 sends the request to RAID controller 2 130 via network communication fabric 110. RAID controller 2 130 knows which elements own the volume from the volume mapping information supplied by system configuration controller 170; RAID controller 2 130 also knows that volume 1 is physically composed of storage element 127 and storage element 128, which belong to RAID controller 1 120. RAID controller 2 130 stores the write command in its cache 139 and forwards the write request to RAID controller 1 120 for storage element 127 and storage element 128. When RAID controller 1 120 has completed the write request, it sends a write complete status back to RAID controller 2 130. RAID controller 2 130 then forwards the write complete status back to host 1 115 and deletes the original stored command. This operation is explained in detail in reference to FIG. 2 and method 200.
Step 210: Requesting Volume Access
In this step, host 1 115 requests a write action on H1 131 of RAID controller 2 130. The request is routed through network communication fabric 110 to H1 131 of RAID controller 2 130. Method 200 proceeds to step 215.
Step 215: Receiving Command
In this step, RAID controller 2 130 receives the command from host 1 115 at port H1 131. Method 200 proceeds to step 220.
Step 220: Mapping Request Command Context
In this step, RAID controller 2 130 stores the context of the volume 1 request in cache 139. Method 200 proceeds to step 225.
Step 225: Identifying RAID Controller to Which Request Command Belongs
In this step, RAID controller 2 130 uses volume mapping information previously supplied by system configuration controller 170 to determine that RAID controller 1 120 controls the requested volume 1 on storage element 127 and storage element 128. Method 200 proceeds to step 230.
Step 230: Forwarding Command to Appropriate RAID Controller
In this step, RAID controller 2 130 forwards the write command from I1 135 through interconnect 1 150 to RAID controller 1 120. Method 200 proceeds to step 235.
Step 235: Receiving Request at RAID Controller
In this step, the command arrives at RAID controller 1 120 at port I1 125. Method 200 proceeds to step 240.
Step 240: Executing Request
In this step, RAID controller 1 120 executes the write command to volume 1 on storage element 127 and storage element 128. When the write operation is complete, method 200 proceeds to step 245.
Step 245: Sending Status to Mapping RAID Controller Via Interconnect
In this step, RAID controller 1 120 sends the status of the write operation back to RAID controller 2 130 via interconnect 1 150. RAID controller 1 120 sends the status through port I1 125 in this example. Method 200 proceeds to step 250.
Step 250: Forwarding Status to Host
In this step, RAID controller 2 130 forwards the status received from RAID controller 1 120 back through network communication fabric 110 to host 1 115. Method 200 proceeds to step 255.
Step 255: Deleting Context from List
In this step, RAID controller 2 130 deletes the original request from its list in cache 139. This concludes method 200 for executing a map-and-forward command. Method 200 repeats for the next map-and-forward transaction.
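By way of illustration only, the sequence of method 200, viewed from the mapping controller (RAID controller 2 in the example), might be sketched as follows. The classes, method names, and the stubbed interconnect and host are hypothetical simplifications, not the pipelined transaction processor implementation incorporated by reference.

```python
# Minimal sketch of method 200 from the mapping controller's point of view.
# All names and interfaces are hypothetical.

class Interconnect:
    """Stub transport standing in for interconnect 1/interconnect 2."""
    def forward(self, owner, cmd):
        print(f"forwarding command {cmd['id']} to {owner}")

class Host:
    """Stub host used to return completion status."""
    def complete(self, cmd_id, status):
        print(f"command {cmd_id} completed with status {status}")

class MappingController:
    def __init__(self, volume_owners, interconnect):
        self.volume_owners = volume_owners   # logical volume -> owning controller
        self.interconnect = interconnect
        self.pending = {}                    # stored command contexts (the cache)

    def handle_host_write(self, cmd):
        # Steps 215-225: store the command context and identify the owner.
        self.pending[cmd["id"]] = cmd
        owner = self.volume_owners[cmd["volume"]]
        # Step 230: forward the command over the interconnect.
        self.interconnect.forward(owner, cmd)

    def handle_status(self, cmd_id, status, host):
        # Steps 250-255: forward the status to the host, then drop the context.
        host.complete(cmd_id, status)
        del self.pending[cmd_id]

controller2 = MappingController({1: "RAID controller 1"}, Interconnect())
controller2.handle_host_write({"id": 7, "volume": 1, "lba": 0, "data": b"..."})
controller2.handle_status(7, "write complete", Host())
```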
Storage controller systems often employ several storage devices to redundantly store data in case one or more storage devices fail (e.g., mirroring). In a like manner, several storage devices may be used in parallel to increase performance (striping). In more complex systems, these combinations may span RAID controllers, so a “virtual” volume may reside on storage devices that are controlled by more than one RAID controller. This allows much greater flexibility in storage resource management, allowing volume size, performance, and reliability to change as users' needs change. However, it would be very inefficient to require hosts to keep track of all the various logical and physical combinations, so a layer of abstraction is needed. This is the concept of storage virtualization, in which the internal functions of a storage subsystem or service are essentially hidden from applications, computer servers, or general network resources for the purpose of enabling application- and network-independent management of storage or data. In a virtualized network storage system architecture, hosts request access to virtual volumes, which may consist of any number of storage elements controlled by any number of RAID controllers. For example, with reference to FIG. 3, method 300 illustrates a host write to a virtual volume 4 that is composed of logical volumes 1 and 3.
Step 310: Requesting Virtual Volume Access
In this step, host 1 115 sends a request for a write to virtual volume 4 to RAID controller 2 130 via network communication fabric 110. Method 300 proceeds to step 315.
Step 315: Receiving Command
In this step, RAID controller 2 130 receives the volume 4 write command at port H1 131. Method 300 proceeds to step 320.
Step 320: Mapping Request Command Context
In this step, RAID controller 2 130 stores the volume 4 request in cache 139. Method 300 proceeds to step 325.
Step 325: Mapping Request Command to One or More Logical Volumes
In this step, RAID controller 2 130 uses information previously supplied by system configuration controller 170 to determine that virtual volume 4 is composed of logical volumes 1 and 3. RAID controller 2 130 further determines that RAID controller 1 120 controls logical volume 1 and that RAID controller 3 140 controls logical volume 3. RAID controller 2 130 stores the context of each of these new commands. Method 300 proceeds to step 330.
Step 330: Forwarding Requests
In this step, RAID controller 2 130 forwards a request to one of the RAID controllers determined to control the involved logical volumes via the corresponding interconnect. Method 300 proceeds to step 335.
Step 335: Have All Requests Been Forwarded?
In this decision step, RAID controller 2 130 checks to see if all of the pending requests have been forwarded to the correct controller. If yes, method 300 proceeds to step 340; if no, method 300 returns to step 330.
Step 340: Waiting for Execution of Forwarded Commands
In this step, RAID controller 2 130 waits for the other RAID controllers to finish executing the commands. The flow of execution is identical to the execution of step 235, step 240, and step 245 of method 200. In this example, RAID controller 1 120 receives its command at I1 125 from interconnect 1 150. RAID controller 1 120 then executes the write command to storage element 127. Finally, RAID controller 1 120 sends a status packet back to RAID controller 2 130 via interconnect 1 150. RAID controller 2 130 receives the status packet at I1 135. Concurrently, RAID controller 3 140 receives its command at I2 146 from interconnect 2 160. RAID controller 3 140 then executes the write command to storage element 147. Finally, RAID controller 3 140 sends a status packet back to RAID controller 2 130 via interconnect 2 160. RAID controller 2 130 receives the status packet at I2 136. Method 300 proceeds to step 345.
Step 345: Have All Status Packets Been Received?
In this decision step, RAID controller 2 130 determines whether all of the forwarded requests have been processed by checking to see if a status packet exists for each transaction. If yes, method 300 proceeds to step 350; if no, method 300 returns to step 340.
Step 350: Aggregating Status Results
In this step, RAID controller 2 130 aggregates the status results from each transaction into a single status packet. Method 300 proceeds to step 355.
Step 355: Forwarding Status to Requesting Host
In this step, RAID controller 2 130 forwards the aggregated status packet back to the original requesting host 1 115 via network communication fabric 110. Method 300 proceeds to step 360.
Step 360: Deleting Context from List
In this step, RAID controller 2 130 deletes the original write request. Method 300 ends.
Network storage system architecture 100 can employ the map-and-forward function for storage virtualization. The map-and-forward function maps a single request to a virtual volume into several requests to its logical volumes and forwards those requests to the appropriate RAID controllers. A request that applies to a single logical volume is handled by the store-and-forward function, which is simply the special case of the map-and-forward function in which the controller maps one request to one logical volume.
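By way of illustration only, the fan-out at the heart of method 300 might be sketched as follows: one virtual volume request becomes several logical volume requests whose results are aggregated into a single status. The maps, the synchronous forwarding stub, and the status values are assumptions; store-and-forward is simply the case in which the map yields a single logical volume.

```python
# Sketch of the map-and-forward fan-out of method 300. The maps, the
# synchronous forwarding, and the status values are assumptions.

VIRTUAL_MAP = {4: [1, 3]}   # virtual volume 4 -> logical volumes 1 and 3
LOGICAL_OWNER = {1: "RAID controller 1", 3: "RAID controller 3"}

def forward(owner, logical_volume, request):
    """Stand-in for forwarding one request over the interconnect and waiting."""
    return {"owner": owner, "volume": logical_volume, "status": "good"}

def map_and_forward(request):
    # Step 325: map the virtual volume request onto its logical volumes.
    logical_volumes = VIRTUAL_MAP.get(request["virtual_volume"], [])
    # Steps 330-345: forward one request per logical volume and collect statuses.
    statuses = [forward(LOGICAL_OWNER[lv], lv, request) for lv in logical_volumes]
    # Step 350: aggregate the individual results into a single status packet.
    ok = all(s["status"] == "good" for s in statuses)
    # Store-and-forward is simply the case where len(logical_volumes) == 1.
    return {"status": "good" if ok else "error", "details": statuses}

print(map_and_forward({"virtual_volume": 4, "lba": 0, "data": b"..."}))
```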
Network storage system architecture 100 allows any port to request any volume, either logical or virtual, and to have that request accurately serviced in a timely manner. Network storage system architecture 100 provides this capability inherently, whereas conventional network storage system architectures require additional hardware, such as a switch, to provide the same functionality. Network storage system architecture 100 also provides a scalable architecture that allows any host port to communicate with any logical or virtual volume, regardless of the number of added hosts and/or volumes. Additionally, network storage system architecture 100 provides concurrent volume accessibility through any host port due to the incorporation of decentralized cache and processing. Finally, network storage system architecture 100 may be used in any loop topology system, such as InfiniBand, Fibre Channel, Ethernet, iSCSI, SATA, or other similar topologies.
In an alternative embodiment, network storage system architecture 100 may be configured as a modularly scalable networked storage system architecture with a serial interconnect.
SCM1 410, SCM2 420, and SCMn 430 are each modeled on the Aristos Logic pipelined transaction processor-based I/O controller architecture, as fully disclosed in U.S. patent application Ser. Nos. 10/429,048 and 09/716,195, previously incorporated herein by reference.
Scalable networked storage system control architecture 400 has a distributed cache, unlike the centralized cache of a conventional architecture. Each time an SCM is added to scalable networked storage system control architecture 400, more cache becomes available; therefore, cache throughput is no longer a factor in the degradation of system performance. Similarly, since each SCM has its own processing element, every time a new SCM is added to scalable networked storage system control architecture 400, more processing power is also added, thereby increasing system performance. In fact, the additional cache and processing elements significantly improve system performance by parallelizing transaction processing in networked storage systems.
Recently, Fibre Channel switches have become very inexpensive, making a switched Fibre Channel network a viable option for the inter-controller interconnect. With a switched Fibre Channel network, scalable networked storage system control architecture 400 scales proportionally with interconnect bandwidth; in other words, the more SCMs that are added to the system, the more bandwidth the interconnect fabric has to offer. A looped Fibre Channel is also an option. Although a looped Fibre Channel costs less to implement than a switched Fibre Channel, it offers only a fixed bandwidth, because data must always travel along a fixed path around the loop until it reaches its destination and cannot be switched to its destination directly. Scalable networked storage system control architecture 400 may also be used with a loop-switch type of topology, which is a combination of the loop and switched architectures. Other topologies, such as 3GIO, InfiniBand, and iSCSI, may also be used as the inter-controller interconnect.
As previously described, storage virtualization can hide the internal functions of a storage subsystem or service from applications, computer servers, or general network resources for the purpose of enabling application- and network-independent management of storage or data. For example, a hidden internal function exists where one storage element is a mirror of another storage element. Using virtualization, a scalable networked storage system control/virtualizer architecture may create a virtual volume that maps to both physical storage elements. Therefore, when a host wants to store data, it writes to the virtual volume, and the RAID controller system physically writes the data to both storage elements. Virtualization is becoming widely used in network storage systems due to the use of RAID architectures and the overhead reduction that it enables for the hosts: the hosts see only simplified virtual volumes and not the physical implementation of the RAID system.
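By way of illustration only, the mirroring case just described reduces to one host write fanning out to both physical storage elements. The sketch below uses in-memory stand-ins for the storage elements and is not the controller's actual write path.

```python
# Conceptual sketch of virtualized mirroring: one write to the virtual
# volume is duplicated to both physical storage elements.

mirror_members = {"storage element A": {}, "storage element B": {}}

def write_virtual(lba, data):
    """The host addresses one virtual volume; the data lands on both mirrors."""
    for element in mirror_members.values():
        element[lba] = data

write_virtual(100, b"payload")
assert mirror_members["storage element A"][100] == mirror_members["storage element B"][100]
```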
Another advantage of VM1 510 is that its interconnect ports may be used for any type of interconnect (e.g., host interconnect, storage interconnect, etc.). For example, interconnect port 511 is shown as an inter-controller interconnect port in FIG. 5, but it may equally well be configured as a host or storage interconnect.
In an alternative embodiment, network storage system architecture 100 may be configured to provide accurate handling of simultaneous, overlapped writes from multiple hosts to the same logical block address (LBA). This configuration assumes that the virtualizer engine does not employ a RAID 5 architecture, obviating stripe coherency as an obstacle.
SVE1 710 further includes a host interface 715, a storage interface 716, and an intercontroller interface 717.
SVE2 720 further includes a host interface 725, a storage interface 726, and an intercontroller interface 727.
SVEn 775 further includes a host interface 776, a storage interface 777, and an intercontroller interface 778.
For this example, SE1 760 is coupled to SVE1 710 through storage interface 716 via storage bus 765, SE2 770 is coupled to SVE2 720 through storage interface 726 via storage bus 775, and SEn 785 is coupled to SVEn 775 through storage interface 777 via storage bus 786. Furthermore, SVE1 710, SVE2 720, and SVEn 775 are coupled through their respective intercontroller interfaces via a virtualizer interconnect 790. In storage virtualization engine architecture 700, one storage virtualization engine is designated as the coordinator at the system level; the others are configured to recognize which SVE is the coordinator. The rule for coordination is as follows: any virtual volume request resulting in two or more storage element requests requires coordination, even if there is no conflict with another request. In other words, a request to a virtual volume that translates into either a read or a write request to two or more storage elements needs to be coordinated to avoid data mirroring inconsistencies. The flow diagram described below as method 800 illustrates the process for detecting a possible data inconsistency problem, coordinating the storage virtualization engines, and resolving any conflicts before they become problems.
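Before that flow is walked through, the coordination rule itself can be captured, by way of illustration only, as a small predicate together with a coordinator-designation lookup. The mirror-set table and all names in the Python sketch below are assumptions; the architecture specifies only the rule.

```python
# Sketch of the coordination rule and coordinator designation. The mirror-set
# table and the names are assumptions.

COORDINATOR_FOR = {frozenset({"SE1", "SE2"}): "SVE1"}   # designated at the system level

def needs_coordination(storage_element_targets):
    """Any virtual volume request that expands into two or more storage
    element requests must be coordinated, even absent a conflicting request."""
    return len(storage_element_targets) >= 2

def coordinator_for(storage_element_targets):
    """Return the SVE designated as coordinator for this set of storage elements."""
    return COORDINATOR_FOR[frozenset(storage_element_targets)]

targets = ["SE1", "SE2"]   # a write to a virtual volume mirrored on SE1 and SE2
if needs_coordination(targets):
    print("coordinate with", coordinator_for(targets))   # -> SVE1
```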
Step 805: Sending Request 1 to SVE1 and Sending Request 2 to SVE2
In this step, host 1 730 sends request 1 to SVE1 710, and host 2 720 sends request 2 to SVE2 720. Method 800 proceeds to step 810.
Step 810: Determining that Request 1 Needs Coordination
In this step, SVE1 710 determines that request 1 requires coordination because it is a write request to two mirrored logical volumes, i.e., SE1 760 and SE2 770. Method 800 proceeds to step 815.
Step 815: Coordinating Request 1 with No Conflict
In this step, SVE1 710 coordinates request 1 and determines that there is no conflict. The coordination process is described in more detail with reference to FIG. 9. Method 800 proceeds to step 820.
Step 820: Executing Request 1
In this step, SVE1 710 executes request 1. Method 800 proceeds to step 825.
Step 825: Determining that Request 2 Needs Coordination
In this step, SVE2 720 determines that request 2 needs coordination because it is a write request to two mirrored logical volumes, i.e., SE1 760 and SE2 770. Method 800 proceeds to step 830.
Step 830: Requesting Coordination for Request 2
In this step, because SVE2 720 recognizes that SVE1 710 is the system coordinator for requests involving SE1 760 and SE2 770, SVE2 720 requests coordination for request 2 from SVE1 710. Method 800 proceeds to step 835.
Step 835: Executing Coordination for Request 2
In this step, SVE1 710 executes coordination for request 2 and finds a conflict. Method 800 proceeds to step 840.
Step 840: Flagging Conflict
In this step, SVE1 710 flags the conflict and records it in a local table. Method 800 proceeds to step 845.
Step 845: Holding Request 2 Pending Conflict Resolution
In this step, SVE1 710 holds request 2 pending resolution of the conflict. Method 800 proceeds to step 850.
Step 850: Completing Request 1 and Resolving Conflict
In this step, SVE1 710 completes request 1 and resolves the conflict. The conflict resolution process is fully described with reference to FIG. 10. Method 800 proceeds to step 855.
Step 855: Releasing Request 2 to SVE2
In this step, SVE1 710 releases request 2 to SVE2 720. Method 800 proceeds to step 860.
Step 860: Executing and Completing Request 2
In this step, SVE2 720 executes and completes request 2. Method 800 proceeds to step 865.
Step 865: Notifying SVE1 of Request 2 Completion
In this step, SVE2 720 notifies SVE1 710 of the completion of request 2. Method 800 proceeds to step 870.
Step 870: Freeing Coordination Data Structure
In this step, SVE1 710 frees the coordination data structure. Method 800 ends.
This type of configuration may negatively impact overall system performance. The additional overhead required and the processing time lost while requests are being held are addressed in the preferred embodiment. The preferred embodiment of storage virtualization engine architecture 700 uses a pipelined transaction processor-based I/O controller architecture, as fully disclosed in U.S. patent application Ser. Nos. 10/429,048 and 09/716,195, previously incorporated by reference. The request coordination process is further described with reference to FIG. 9.
Step 910: Searching for Existing Data Structure for LBA Range
In this step, SVE1 710 searches for an existing data structure for the LBA range in question. Method 900 proceeds to step 920.
Step 920: Does a Data Structure Exist?
In this decision step, method 900 checks existing tables of data structures to determine whether a data structure exists for the particular LBA range in question. If yes, method 900 proceeds to step 940; if no, method 900 proceeds to step 930.
Step 930: Allocating Data Structure
In this step, SVE1 710 allocates a data structure for the required LBA range. Method 900 ends.
Step 940: Attempting to Reserve Data Structure
In this step, SVE1 710 attempts to reserve a data structure for the LBA range of the request. Method 900 proceeds to step 950.
Step 950: Is Reserve Successful?
In this decision step, method 900 determines whether the reserve is successful. If yes, method 900 ends; if no, method 900 proceeds to step 960.
Step 960: Creating Conflict Table Entry
In this step, SVE1 710 creates a record of conflict by adding an entry to a table that records all the conflicts. Method 900 proceeds to step 970.
Step 970: Holding Request
In this step, SVE1 710 holds the request (in this example, request 2) until the conflict has been resolved (see the method illustrated in FIG. 10). Method 900 ends.
Step 1010: Removing Reservation for Completed Request
In this step, SVE1 710 removes the reservation for the completed request. Method 1000 proceeds to step 1020.
Step 1020: Is There a Conflict?
In this decision step, SVE1 710 determines whether there is an existing conflict between two requests. If so, method 1000 proceeds to step 1030; if not, method 1000 ends.
Step 1030: Reserving LBA Range for First Held Request
In this step, SVE1 710 reserves the LBA range for the first held request (in this case, for request 2). Method 1000 proceeds to step 1040.
Step 1040: Releasing First Held Request
In this step, SVE1 710 releases the first held request by relinquishing execution to SVE2 720. Method 1000 ends.
In summary, method 900 and method 1000 each repeat as often as needed to provide request coordination and conflict resolution, respectively. As a rule, any request requiring access to multiple storage elements warrants coordination. Not every request flagged as needing coordination necessarily constitutes a conflict; however, those that do present conflicts are flagged and treated as such. As each conflict in storage virtualization engine architecture 700 is detected, the designated coordinating storage/virtualization controller adds the conflict to a conflict list and resolves each conflict in order of detection.
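By way of illustration only, the bookkeeping behind method 900 and method 1000 might be sketched as an LBA range reservation list with an ordered conflict queue; the range representation and all names below are assumptions. Resolving conflicts from the front of the queue preserves the order-of-detection behavior noted above.

```python
# Sketch of the coordination (method 900) and conflict resolution (method 1000)
# bookkeeping. The range representation and structure names are assumptions.

from collections import deque

class LbaRangeCoordinator:
    def __init__(self):
        self.reservations = []    # (start, end, request id) tuples currently active
        self.conflicts = deque()  # held requests, resolved in order of detection

    def _overlaps(self, start, end):
        return any(s <= end and start <= e for s, e, _ in self.reservations)

    def coordinate(self, req_id, start, end):
        """Method 900: reserve the LBA range, or record a conflict and hold."""
        if self._overlaps(start, end):                      # steps 940-970
            self.conflicts.append((req_id, start, end))
            return "held"
        self.reservations.append((start, end, req_id))      # steps 930/940
        return "reserved"

    def resolve(self, req_id):
        """Method 1000: drop the finished reservation and release the first
        held request whose range is now free."""
        self.reservations = [r for r in self.reservations if r[2] != req_id]  # step 1010
        if self.conflicts:                                                    # step 1020
            held_id, start, end = self.conflicts[0]
            if not self._overlaps(start, end):
                self.conflicts.popleft()
                self.reservations.append((start, end, held_id))               # step 1030
                return held_id                                                # step 1040
        return None

coordinator = LbaRangeCoordinator()
print(coordinator.coordinate("request 1", 0, 127))    # -> reserved
print(coordinator.coordinate("request 2", 64, 191))   # -> held (ranges overlap)
print(coordinator.resolve("request 1"))               # -> request 2 (released)
```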
While the invention has been described in detail in connection with the exemplary embodiment, it should be understood that the invention is not limited to the above-disclosed embodiment. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions, or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Accordingly, the invention is not limited by the foregoing description or drawings, but is only limited by the scope of the appended claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 60/497,918, filed Aug. 27, 2003, and is related to U.S. application Ser. No. 09/716,195, filed Nov. 17, 2000, entitled, “Integrated I/O Controller” and U.S. application Ser. No. 10/429,048, filed May 5, 2003, entitled “System and Method for Scalable Transaction Processing,” the entire disclosures of which are incorporated herein by reference.