A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings, wherein:
While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims.
For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU), hardware or software control logic, read only memory (ROM), and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
Referring now to the drawings, the details of specific example embodiments are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix.
Referring to
The first node 110 may include a first shared disk mapping driver 116 and a first device name table 162. The second node 112 may include a second shared disk mapping driver 118 and a second device name table 164. The third node 114 may include a third shared disk mapping driver 120 and an associated third device name table 166. As described herein, shared disk mapping drivers 116, 118 and 120 may be generally referred to as drivers and may comprise hardware and/or software, including executable instructions and controlling logic stored on suitable storage media, for carrying out the functions described herein. The first node 110 is in communication with the second node 112 via connection 117. The second node 112 is in communication with the third node 114 via connection 119, such that all three nodes 110, 112 and 114 may communicate with one another. In alternate specific example embodiments, nodes 110, 112 and 114 may be interconnected by a network, a bus or any other suitable connection(s).
The first node 110, the second node 112 and the third node 114 may be in communication with storage enclosure 130. Storage enclosure 130 may include a plurality of disks, e.g., disk A 132, disk B 134, disk C 136 and disk D 138. Disks 132, 134, 136 and 138 may represent any suitable storage media that may be shared by the nodes 110, 112 and 114 of cluster 150. The disks 132, 134, 136 and 138 may each include designated reserved spaces 133, 135, 137 and 139, respectively, into which data may be written for verification among the associated nodes 110, 112 and 114. Reserved spaces 133, 135, 137 and 139 may also be referred to herein as "offsets."
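By way of illustration only, the arrangement of nodes, device name tables, shared disks and reserved spaces described above may be sketched in Python roughly as follows; the class names, offset value and buffer size are assumptions of this sketch and are not drawn from the disclosure.

```python
# Illustrative data model only: all names, the offset value and the buffer
# size are assumptions of this sketch, not details of the disclosure.
from dataclasses import dataclass, field
from typing import Dict, List

RESERVED_OFFSET = 4096   # assumed byte offset of the reserved space ("offset")
RESERVED_LENGTH = 512    # assumed length of the reserved space

@dataclass
class SharedDisk:
    device_id: str                      # OS-assigned name, e.g. "/dev/sdb1"
    blocks: bytearray = field(default_factory=lambda: bytearray(1 << 20))

    def write_reserved(self, payload: bytes) -> None:
        # Write verification data into the designated reserved space.
        self.blocks[RESERVED_OFFSET:RESERVED_OFFSET + len(payload)] = payload

    def read_reserved(self, length: int) -> bytes:
        # Read verification data back from the reserved space.
        return bytes(self.blocks[RESERVED_OFFSET:RESERVED_OFFSET + length])

@dataclass
class Node:
    name: str
    device_name_table: Dict[str, str] = field(default_factory=dict)
    disks: List[SharedDisk] = field(default_factory=list)
```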
The RAC cluster 150 may include three nodes, 110, 112 and 114. It is contemplated and within the scope of this disclosure that cluster 150 may comprise more or fewer nodes, all of which may be interconnected. Also, cluster 150 is shown in communication with a single storage enclosure 130. In alternate specific example embodiments, cluster 150 and the nodes thereof may be in communication with multiple storage enclosures. According to the present specific example embodiment, storage enclosure 130 includes four storage disks 132, 134, 136 and 138. In alternate specific example embodiments, storage enclosure 130 may include more or fewer storage disks.
Drivers 116, 118 and 120 may preferably be configured to perform a number of different functions. For instance, drivers 116, 118 and 120 may be configured to determine a master shared disk mapping driver and one or more non-master shared disk mapping drivers. Non-master shared disk mapping drivers may be referred to as slave mapping drivers or "listener" drivers. A master driver may assign a common name or handle to the shared storage disks and communicate the common device names to the non-master drivers. The non-master drivers are then configured to adopt the common device name within an associated device name table or shared disk table. Drivers 116, 118 and 120 may arbitrate to determine which driver will be the master driver. In one embodiment, master driver status may be given to the driver associated with the first activated node; for instance, if first node 110 were activated first, first driver 116 would be deemed the master driver and drivers 118 and 120 would be non-master drivers. In alternate embodiments, any other suitable method may be used to arbitrate which of the drivers is to be the master driver within the cluster.
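A minimal sketch of one such arbitration rule (master status awarded to the driver of the first activated node) might read as follows; the timestamps and function name are illustrative assumptions.

```python
# Illustrative arbitration: the driver whose node was activated first becomes
# the master; all other drivers become non-master ("listener") drivers.
def arbitrate_master(activation_times: dict) -> str:
    """Return the name of the node that was activated earliest."""
    return min(activation_times, key=activation_times.get)

activation_times = {"node110": 17.2, "node112": 18.9, "node114": 21.4}
master = arbitrate_master(activation_times)               # -> "node110"
listeners = [n for n in activation_times if n != master]  # node112, node114
```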
In order to verify that the shared disks are appropriately identified between nodes 110, 112 and 114, a master driver such as, for instance, driver 116 may be configured to write a test message to a reserved space on a shared storage disk (such as reserved space 133 on shared disk 132). The non-master drivers (such as non-master drivers 118 and 120 in this example embodiment) may then validate the identity of the shared disk by reading the data within the reserved space.
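Continuing the sketch above, the write-then-read verification could look roughly like this; the test message value and helper names are assumptions.

```python
# Hedged sketch of the verification step: the master writes a known test
# message into the reserved space, and a listener validates the disk's
# identity by reading the same space back. TEST_MESSAGE is assumed.
TEST_MESSAGE = b"cluster150-verify"

def master_write_test(disk: SharedDisk) -> None:
    disk.write_reserved(TEST_MESSAGE)

def listener_validate(disk: SharedDisk) -> bool:
    return disk.read_reserved(len(TEST_MESSAGE)) == TEST_MESSAGE
```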
The proposed system utilizes drivers 116, 118 and 120 to communicate between nodes 110, 112 and 114 within cluster 150 and to perform device mapping for shared disks 132, 134, 136 and 138. Nodes 110, 112 and 114 may preferably listen on a port of an IP address for queries from a master node within the cluster 150. The master node may preferably log in to the listener's port and begin an exchange of information. The master node may preferably write to one or more shared disks 132, 134, 136 and 138, at an offset (such as one of reserved spaces 133, 135, 137 and 139), an encrypted signature or other specified test message that will allow the listener drivers to read and validate the encrypted signature information. The listener drivers may then read the shared disks at the same reserved space, decrypt the information and compare it to the signature or the known test message. If there is not a read-write match, the listener reports to the master that there is no match; in case there is a match, the listener preferably communicates the device ID string, such as "/dev/sdb1", to the master. The master may then check the device ID string for the device it had written to. If the device ID string reported by the listener matches the master's, the given device mapping (in this case, /dev/sdb1) is valid for both the master and the listener to be used for the shared disk (disk 132) in question. In this case, both master and listener may create an auxiliary file, handle or other identifier, such as "SHAREDISK1", for the shared disk 132. The master may then traverse the list of shared devices within device name table 162 and communicate with all listener nodes in the manner explained above. In this way, nodes 110, 112 and 114 within cluster 150 will have the same handles for the shared storage disks, providing a consistent view of the disks within storage enclosure 130.
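One round of the master/listener exchange might be sketched as follows. The disclosure does not specify a cipher, port number or message format, so this sketch stands in an HMAC-SHA256 digest for the "encrypted signature" and omits the network transport; the pre-shared key and helper names are assumptions.

```python
# Sketch of one verification round. HMAC-SHA256 stands in for the "encrypted
# signature"; the pre-shared key and all helper names are assumptions.
import hashlib
import hmac

SECRET_KEY = b"assumed-shared-cluster-key"

def make_signature(nonce: bytes) -> bytes:
    return hmac.new(SECRET_KEY, nonce, hashlib.sha256).digest()

def master_round(disk: SharedDisk, nonce: bytes) -> None:
    # The master writes the signature at the reserved offset of the disk.
    disk.write_reserved(make_signature(nonce))

def listener_round(disk: SharedDisk, nonce: bytes):
    # The listener reads the same reserved space and validates the signature.
    read_back = disk.read_reserved(32)            # 32 = SHA-256 digest size
    if hmac.compare_digest(read_back, make_signature(nonce)):
        return disk.device_id                     # match: report "/dev/sdb1"
    return None                                   # no match: report failure
```

When the listener's reported device ID matches the device the master wrote to, both sides may then record a common handle such as "SHAREDISK1" for that disk, as described above.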
Referring now to
In step 228, a determination is made whether the information written by the master is validated by the listener. If the information is not validated, then step 226 is performed again on the next listener. If the information is validated, then in step 230 the master driver may generate an auxiliary device handle for the shared disk in question and attach it to that shared disk. Then, in step 232, the listener updates its view to use the same device handle (name) to access the shared disk. In step 234, a determination is made whether all of the shared disks have been accounted for and labeled consistently. If all of the disks have not been accounted for, then step 226 is performed again on the next listener. If all of the disks have been accounted for, then in step 236 mapping of the disk drives stops.
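The loop implied by steps 226 through 236 might be rendered, loosely and under the same assumptions as the sketches above, as:

```python
# Loose rendering of steps 226-236: for each shared disk, the master writes
# test data, each listener validates it, and on a match both sides adopt the
# same auxiliary handle. Step mapping and helper names are illustrative.
def map_shared_disks(master: Node, listeners: List[Node],
                     disks: List[SharedDisk]) -> None:
    for index, disk in enumerate(disks, start=1):
        handle = f"SHAREDISK{index}"              # auxiliary device handle
        nonce = handle.encode()
        master_round(disk, nonce)                 # step 226: master writes
        for listener in listeners:
            device_id = listener_round(disk, nonce)
            if device_id is not None:             # step 228: validated
                master.device_name_table[handle] = device_id      # step 230
                listener.device_name_table[handle] = device_id    # step 232
    # steps 234-236: all shared disks traversed; mapping stops
```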
Referring now to
While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure.