Related Application

This application is related to co-owned U.S. Patent Application for “INTERFACE MANAGER AND METHODS OF OPERATION IN A STORAGE NETWORK” of Maddocks et al. (Attorney Docket No. HP1-707US; Client Docket No. HP200315416-1), filed the same day as the present application.

This invention relates to automated storage systems in general, and more specifically, to user interfaces for storage networks.
Automated storage systems are commonly used to store large volumes of data on various types of storage media, such as magnetic tape cartridges, optical storage media, and hard disk drives, to name only a few examples. System devices in the storage system can be logically configured or “mapped” for user access over a network. For example, the users may be given access to one or more data access drives, for read and/or write operations, and to transfer robotics to move the storage media between storage cells and the data access drives.
In order to logically map the storage system, a network administrator needs the configuration or physical layout of the storage system. The network administrator, for example, may have to trace the physical cables from each of the system devices to the internal routers that control the system devices. Next, the network administrator connects to the storage system via an out-of-band interface (e.g., a telnet command prompt) to generate a logical map that allows user access to the various system devices. The logical map is then assigned to a host connection that facilitates user access to the storage system. This process can be time-consuming and error prone, particularly in large storage systems that have more than one internal router with many system devices that need to be configured.
In addition, the network administrator has to generate a new logical map for each host connection. Alternatively, the network administrator may configure a default map that can be assigned to all or some of the host connections. However, assigning the same default map to more than one host connection may result in conflicting commands being issued to the system devices. For example, one host connection may issue a “rewind” command to a drive while another host connection is using the same drive for a backup operation.
An exemplary storage network comprises an automated storage system including data access drives and transfer robotics. An interface manager is communicatively coupled to each of the data access drives and transfer robotics, the interface manager aggregating configuration information for the data access drives and transfer robotics in the automated storage system. An interface application is provided in computer-readable storage at the interface manager, the interface application generating user interface rendering data for the configuration information. A graphical user interface is operatively associated with the interface application, the graphical user interface outputting the configuration information in accordance with the user interface rendering data.
In an exemplary automated storage system linked to a graphical user interface including a display and a user interface selection device, a method may comprise: aggregating configuration information at an interface manager for a plurality of system devices in the automated storage system, generating user interface rendering data at the interface manager, and displaying the configuration information in an application window at the graphical user interface in accordance with the user interface rendering data.
An exemplary method of operation comprises: aggregating configuration information for a plurality of system devices in a storage system, generating user interface rendering data, and displaying the configuration information as a logical map of the system devices at a graphical user interface in accordance with the user interface rendering data.
FIGS. 3a and 3b are illustrations of an exemplary graphical user interface.
Briefly, an implementation of the invention includes a user interface that enables a network administrator to centrally manage access to the system devices in a storage system. In addition, the network administrator can generate logical maps and assign the logical maps to host connections without having to determine the physical layout of the storage system. If the layout of the storage system changes, the network administrator can readily update the logical map of the storage system without having to individually configure each of the internal routers. This and other implementations are described in more detail below with reference to the figures.
Exemplary System
An exemplary storage area network (SAN), otherwise referred to as storage network 100, is shown in FIG. 1.
As used herein, the term “host” comprises one or more computing systems that provide services to other computing or data processing systems or devices. For example, clients 110a, 110b may access the storage device 101 via one of the hosts 120a, 120b. Hosts 120a, 120b include one or more processors (or processing units) and system memory, and are typically implemented as server computers.
Clients 110a, 110b can be connected to one or more of the hosts 120a, 120b or to the storage system 101 directly or over a network 115, such as a Local Area Network (LAN) and/or Wide Area Network (WAN). Clients 110a, 110b may include memory and a degree of data processing capability at least sufficient to manage a network connection. Typically, clients 110a, 110b are implemented as network devices, such as, e.g., wireless devices, desktop or laptop computers, workstations, and even as other server computers.
As previously mentioned, storage network 100 includes an automated storage system 101 (hereinafter referred to as a “storage system”). Data 130 is stored in the storage system 101 on storage media 135, such as magnetic data cartridges, optical media, and hard disk storage, to name only a few examples.
The storage system 101 may be arranged as one or more libraries (not shown) having a plurality of storage cells 140a, 140b for the storage media 135. The libraries may be modular (e.g., configured to be stacked one on top of the other and/or side-by-side), allowing the storage system 101 to be readily expanded.
Before continuing, it is noted that the storage system 101 is not limited to any particular physical configuration. For example, the number of storage cells 140a, 140b may depend upon various design considerations. Such considerations may include, but are not limited to, the desired storage capacity and frequency with which the computer-readable data 130 is accessed. Still other considerations may include, by way of example, the physical dimensions of the storage system 101 and/or its components. Consequently, implementations in accordance with the invention are not to be regarded as being limited to use with any particular type or physical layout of storage system 101.
The storage system 101 may include one or more data access drives 150a, 150b, 150c, 150d (also referred to generally by reference 150) for read and/or write operations on the storage media 135. In one exemplary implementation, each library in the storage system 101 is provided with at least one data access drive 150. However, in other implementations, data access drives 150 need not be included with each library.
Transfer robotics 160 may also be provided for transporting the storage media 135 in the storage system 101. Transfer robotics 160 are generally adapted to retrieve storage media 135 (e.g., from the storage cells 140a, 140b), transport the storage media 135, and eject the storage media 135 at an intended destination (e.g., one of the data access drives 150).
Various types of transfer robotics 160 are readily commercially available, and embodiments of the present invention are not limited to any particular implementation. In addition, such transfer robotics 160 are well known and further description of the transfer robotics is not needed to fully understand or to practice the invention.
It is noted that the storage system 101 is not limited to use with data access drives and transfer robotics. Storage system 101 may also include any of a wide range of other system devices that are now known or that may be developed in the future. For example, a storage system including fixed storage media, such as a redundant array of independent disks (RAID), may not include transfer robotics or separate data access drives.
Each of the system devices, such as the data access drives 150 and transfer robotics 160, is controlled by the interface controllers 170a, 170b, 170c. The interface controllers are operatively associated with the system devices via the corresponding device interfaces. For example, interface controller 170a is connected to drive interfaces 155a, 155b for data access drives 150a, 150b, respectively. Interface controller 170a is also connected to the robotics interface 165 for transfer robotics 160. Interface controller 170b is connected to drive interfaces 155c, 155d for data access drives 150c, 150d, respectively. Interface controller 170b is also connected to the robotics interface 165 for transfer robotics 160.
In an exemplary implementation, the interface controllers 170a, 170b, 170c may be implemented as Fibre Channel (FC) interface controllers and the device interfaces 155a, 155b, 155c, 155d may be implemented as small computer system interface (SCSI) controllers. However, the invention is not limited to use with any particular type of interface controllers and/or device interfaces.
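By way of a non-limiting illustration, the connections described above may be summarized as a simple mapping. The following Python sketch is illustrative only; the names merely echo the reference numerals and do not represent an actual device interface:

# Hypothetical summary of the cabling described above: each interface
# controller is associated with the device interfaces it controls.
topology = {
    "interface controller 170a": [
        "drive interface 155a",    # data access drive 150a
        "drive interface 155b",    # data access drive 150b
        "robotics interface 165",  # transfer robotics 160
    ],
    "interface controller 170b": [
        "drive interface 155c",    # data access drive 150c
        "drive interface 155d",    # data access drive 150d
        "robotics interface 165",  # transfer robotics 160
    ],
}

for controller, interfaces in sorted(topology.items()):
    print(controller, "->", ", ".join(interfaces))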
Storage system 101 also includes an interface manager 180. Interface manager 180 is communicatively coupled, internally, with the interface controllers 170a, 170b, 170c, and aggregates device information and management commands for each of the system devices. The interface manager 180 also allocates the system devices as uniquely identified logical units or LUNs. Each LUN may comprise a contiguous range of logical addresses that can be addressed by mapping requests from the connection protocol used by the hosts 120a, 120b to the uniquely identified LUN. Of course, the invention is not limited to LUN mapping, and other types of mapping now known or later developed are also within the scope of the invention.
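To make the LUN allocation concrete, the following Python sketch models a LUN as a contiguous range of logical addresses mapped onto a system device. The class, field, and value names are hypothetical, offered as one of many possible representations:

from dataclasses import dataclass

@dataclass(frozen=True)
class Lun:
    """One uniquely identified logical unit allocated by the interface manager."""
    lun_id: int         # unique identifier presented to the hosts
    device: str         # underlying system device, e.g., a drive or the robotics
    first_address: int  # start of the contiguous logical address range
    last_address: int   # end (inclusive) of the logical address range

    def contains(self, logical_address):
        """Return True if a mapped host request falls inside this LUN."""
        return self.first_address <= logical_address <= self.last_address

# Example allocation: one LUN for the transfer robotics, one per drive.
lun_map = [
    Lun(lun_id=0, device="transfer robotics 160", first_address=0, last_address=0),
    Lun(lun_id=1, device="data access drive 150a", first_address=0, last_address=1023),
]
assert lun_map[1].contains(512)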
Interface manager 180 is also communicatively coupled, externally, to user interfaces 125a, 125b via hosts 120a, 120b and/or clients 110a, 110b. In an exemplary implementation, the hosts 120a, 120b are connected by I/O adapters 122a, 122b, such as, e.g., host bus adapters (HBAs), to a switch 190. Switch 190 may be implemented as a SAN switch, and is connected to the storage system 101. Accordingly, the hosts 120a, 120b and/or clients 110a, 110b may access system devices, such as the data access drives 150 and transfer robotics 160, via the interface manager 180.
Interface manager 200 communicatively couples interface controllers 220a, 220b to the user interface 210 via hosts 230 and/or clients 231. Accordingly, the interface manager 200 includes a plurality of I/O modules or controller ports 225a, 225b, 225c, 225d (also referred to generally by reference 225). The controller ports 225 facilitate data transfer to and from the respective interface controllers 220a, 220b. Interface manager 200 also includes at least one network port 235.
In an exemplary implementation, the controller ports 225 and network port(s) 235 may employ Fibre Channel technology, although other bus technologies may also be used. Interface manager 200 may also include a converter (not shown) to convert signals from one bus format (e.g., Fibre Channel) to another bus format (e.g., SCSI).
It is noted that auxiliary components may also be included with the interface manager 200, such as, e.g., power supplies (not shown) to provide power to the other components of the interface manager 200. Auxiliary components are well understood in the art and further description is not necessary to fully understand or to enable the invention.
Interface manager 200 may be implemented on a computer board and includes a processor (or processing units) 240 and computer-readable storage or memory 245 (e.g., dynamic random access memory (DRAM) and/or Flash memory). Interface manager 200 also includes a transaction manager 250 to handle all transactions to and from the interface manager 200, and a management pipeline 260 to process the transactions.
Implementations of a transaction manager and management pipeline are described in co-owned U.S. Patent Application for “INTERFACE MANAGER AND METHODS OF OPERATION IN A STORAGE NETWORK” of Maddocks et al. (Attorney Docket No. HP1-707US; Client Docket No. HP200315416-1), filed the same day as the present application and referenced above under the section entitled “Related Application,” hereby incorporated by reference herein for all that it discloses. However, it is noted that the exemplary implementations shown and described therein are not intended to limit the interface manager of the present invention.
User interface 210 includes one or more output devices 211, such as, e.g., audio and/or video output. User interface 210 also includes one or more input devices, such as, e.g., a keyboard 212, pointing device 213, and/or microphone (not shown), to name only a few examples. User interface 210 is typically implemented at one or more of the hosts 230 and/or clients 231, although user interface 210 may be implemented at the storage system and/or at one or more clients (e.g., storage system 101 and/or clients 110a, 110b in FIG. 1).
User interface 210 is operatively associated with an interface application 270. In FIG. 2, the interface application 270 is provided in computer-readable storage (e.g., memory 245) at the interface manager 200.
In an exemplary implementation, interface application 270 generates a graphical user interface (see, e.g., FIGS. 3a and 3b). Interface application 270 may include a state machine 271 and a render engine 272.
State machine 271 determines the inventory and status of system devices connected to the interface controllers 220a, 220b (i.e., device information). State machine 271 also receives input from the user interface 210 (i.e., management commands). Render engine 272 formats output for the user interface 210, e.g., in response to processing device information and management commands by the state machine 271. For example, render engine 272 may format the inventory and status of devices connected to the interface controllers 220a, 220b and may also update the output at the user interface 210 in response to receiving management commands at the interface manager 200.
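A minimal sketch of this division of labor follows. It is illustrative only; the class and method names are hypothetical, and no particular implementation is prescribed:

class StateMachine:
    """Tracks device inventory/status and processes management commands."""

    def __init__(self):
        self.devices = {}  # device name -> status string

    def update_inventory(self, device_reports):
        """Record device information reported by the interface controllers."""
        self.devices.update(device_reports)

    def apply_command(self, device, command):
        """Record a management command received from the user interface."""
        self.devices[device] = "pending: " + command

class RenderEngine:
    """Formats state machine output for the user interface."""

    def render(self, state):
        return "\n".join(
            name + ": " + status for name, status in sorted(state.devices.items())
        )

state = StateMachine()
state.update_inventory([("drive D1", "idle"), ("transfer robotics", "ready")])
state.apply_command("drive D1", "rewind")
print(RenderEngine().render(state))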
Before continuing, it is noted that the exemplary interface application 270 shown and described herein is merely for purposes of illustration and is not intended to be limiting. For example, state machine 271 and render engine 272 do not need to be provided as separate functional components. In addition, other functional components may also be provided and are not limited to a state machine and render engine.
FIGS. 3a and 3b are diagrammatic illustrations of an exemplary user interface that may be implemented in a storage network (e.g., the storage network 100 shown in FIG. 1).
Graphical user interface 300 is associated with an interface application (e.g., the interface application 270 in FIG. 2).
The graphical user interface 300 supports user interaction through common techniques, such as a pointing device (e.g., mouse, stylus), keystroke operations, or touch screen. By way of illustration, the user may make selections using a mouse (e.g., mouse 213 in FIG. 2).
The graphical user interface 300 is displayed for the user in a window, referred to as the “application window” 320, as is customary in a windowing environment. The application window 320 may include customary window functions, such as a Minimize Window button 321, a Maximize Window button 322, and a Close Window button 323. A title bar 330 identifies the application window 320 (e.g., as “Command View” in FIGS. 3a and 3b).
Application window 320 also includes an operation space 350. Operation space 350 may include one or more graphics for displaying output and/or facilitating input from the user. Graphics may include, but are not limited to, subordinate windows, dialog boxes, icons, text boxes, buttons, and check boxes. Exemplary operation space 350 is shown having a text box 360 and a subordinate window 370.
Exemplary text box 360 may be used to select a storage system (e.g., storage system 101 in FIG. 1).
Exemplary subordinate window 370 provides the user with a number of functions that are available through the graphical user interface 300, e.g., by selecting one of the menu tabs 371-375 (e.g., “Identity,” “Status,” “Configuration,” “Operations,” and “Support”).
Selecting the “Identity” menu tab 371 displays general information regarding the selected storage system (e.g., name, manufacturer, network address, number of interface controllers, number of data access drives, firmware version, etc.). Selecting the “Status” menu tab 372 displays status information regarding the selected storage system and is discussed in more detail below. Selecting the “Configuration” menu tab 373 displays configuration information regarding the selected storage system and is also discussed in more detail below. Selecting the “Operations” menu tab 374 displays functions that are available for the selected storage system (e.g., reboot, upgrade firmware). Selecting the “Support” menu tab 375 displays support information regarding the selected storage system (e.g., current firmware version, instructions to access available updates).
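For purposes of illustration only, the relationship between the menu tabs and the information each displays can be sketched as a simple dispatch table in Python. The handler names and sample data are hypothetical:

def identity_view(system):
    """General information for the selected storage system."""
    return "Identity: " + system["name"] + ", firmware " + system["firmware"]

def support_view(system):
    """Support information, e.g., current firmware and update instructions."""
    return "Support: firmware " + system["firmware"] + "; see vendor for updates"

# Hypothetical dispatch from menu tabs 371-375 to their views.
TAB_VIEWS = {
    "Identity": identity_view,
    "Status": lambda s: "Status: see status tree 380",
    "Configuration": lambda s: "Configuration: see configuration tree 390",
    "Operations": lambda s: "Operations: reboot, upgrade firmware",
    "Support": support_view,
}

system = {"name": "storage system 101", "firmware": "1.0"}
print(TAB_VIEWS["Identity"](system))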
FIG. 3a illustrates selecting the “Status” menu tab 372 in application window 320, wherein a status tree 380 is displayed in subordinate window 370. Status tree 380 includes a number of nodes identifying various status functions that are available to the user for the selected storage system. The user may expand one or more parent nodes in the status tree 380 to “drill down” and view child nodes. The child nodes include further status operations that are available for the selected storage system.
A closed node may be displayed with a “+” symbol and an expanded node may be displayed with a “−” symbol. In FIG. 3a, for example, both closed and expanded nodes are shown in the status tree 380.
In an exemplary implementation, selecting a status function displays additional information adjacent the status tree 380. For example, in FIG. 3a, status information for the selected function is displayed adjacent the status tree 380 in the subordinate window 370.
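The drill-down behavior of the status tree (and of the configuration tree discussed below) can be illustrated with a short Python sketch; the node structure and labels are hypothetical:

from dataclasses import dataclass, field

@dataclass
class TreeNode:
    """One node of a status or configuration tree."""
    label: str
    children: list = field(default_factory=list)
    expanded: bool = False

    def toggle(self):
        """Expand or collapse this node, as by clicking its symbol."""
        self.expanded = not self.expanded

    def render(self, depth=0):
        # A "+" marks a closed parent node; a "-" marks an expanded one.
        symbol = ("-" if self.expanded else "+") if self.children else " "
        lines = ["  " * depth + symbol + " " + self.label]
        if self.expanded:
            for child in self.children:
                lines.extend(child.render(depth + 1))
        return lines

drives = TreeNode("Drives", [TreeNode("Drive D1"), TreeNode("Drive D2")])
drives.toggle()  # "drill down" to the child nodes
print("\n".join(drives.render()))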
FIG. 3b illustrates selecting the “Configuration” menu tab 373 in application window 320, wherein a configuration tree 390 is displayed in subordinate window 370. Configuration tree 390 includes a number of nodes identifying various configuration functions that are available to the user for the selected storage system. The user may expand one or more parent nodes in the configuration tree 390 to “drill down” and view child nodes. The child nodes include further configuration functions, in the same manner discussed above for the status tree 380.
In an exemplary implementation, selecting a configuration function displays additional information adjacent the configuration tree 390. In FIG. 3b, for example, a Host Access table 392 is displayed adjacent the configuration tree 390, identifying the hosts and the system devices that each host may access in the selected storage system.
Access permissions are indicated in table 392 by check marks. For example, the host identified as “winte14” is shown configured for access to the transfer robotics and drives D1-D2, but without access to drives D3-D6. The host identified as “2100e08b111” is shown configured for access to drives D1-D4, but without access to the transfer robotics or drives D5-D6. The host identified as “backup1” is shown configured for access to all of the drives D1-D6, but without access to the transfer robotics.
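The access permissions just described can be represented as a simple table in Python, where True corresponds to a check mark in table 392. This sketch mirrors the example hosts above and is illustrative only:

host_access = {
    "winte14":     {"Robotics": True,  "D1": True, "D2": True, "D3": False,
                    "D4": False, "D5": False, "D6": False},
    "2100e08b111": {"Robotics": False, "D1": True, "D2": True, "D3": True,
                    "D4": True,  "D5": False, "D6": False},
    "backup1":     {"Robotics": False, "D1": True, "D2": True, "D3": True,
                    "D4": True,  "D5": True,  "D6": True},
}

def has_access(host, device):
    """Return True if the host is configured for access to the device."""
    return host_access.get(host, {}).get(device, False)

assert has_access("winte14", "Robotics")
assert not has_access("backup1", "Robotics")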
A user may configure access permissions for one or more of the hosts by selecting a host from the Host Access table 392. For example, host “2100e08b111” is shown selected at 393 in FIG. 3b, whereupon a menu 395 may be displayed for the selected host.
Menu 395 is shown including menu options 396, 397, 398. The user may select a menu option from menu 395 to configure access permissions for the selected host. The graphical user interface that is displayed in response to the user selecting menu option 396 (“Edit Host Access”) is illustrated in FIG. 4.
Application window 400 includes a title bar 410 which identifies the application window, e.g., as “Edit Host Access.” Application window 400 may also include a customary menu bar 420 with pull-down menus (e.g., labeled “Actions”). For example, the user may select a refresh function (not shown) from the “Actions” menu.
Application window 400 also includes an operation space 430. Operation space 430 may include one or more graphics for displaying output and/or facilitating input from the user. For example, operation space 430 includes customary function buttons 440a, 440b, and 440c labeled “OK,” “Cancel,” and “Help,” respectively.
Operation space 430 includes a host access table 450, which may be displayed in a subordinate window. Host access table 450 is configured in columns and rows identifying the various hosts and corresponding system devices in the selected storage system. Access permissions are displayed for the hosts (e.g., indicated with check marks 455). A user may configure one or more of the hosts for the selected storage system, for example, by selecting and deselecting devices corresponding to the host. For purposes of illustration, the user may move graphical pointer 460 to the Robotics checkbox 457 and click the mouse button to place a check 455 in the checkbox. Accordingly, the host identified as “2100e08b111” is configured for access to the transfer robotics.
The user may also configure access permissions by selecting one or more rows and/or columns to grant or deny access for the hosts and/or system devices. In an exemplary implementation, the user may select one or more rows of system devices for a host (e.g., illustrated by outline 470 in FIG. 4) and grant or deny access to all of the selected system devices at once.
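Granting or denying access by rows and columns can be sketched as two small operations on such a table. The table values here are hypothetical stand-ins for host access table 450:

# A small stand-in for host access table 450.
host_access = {
    "2100e08b111": {"Robotics": False, "D1": True, "D2": True},
    "backup1":     {"Robotics": False, "D1": True, "D2": True},
}

def set_row(table, host, allow):
    """Grant or deny one host access to every system device (a whole row)."""
    for device in table[host]:
        table[host][device] = allow

def set_column(table, device, allow):
    """Grant or deny every host access to one system device (a whole column)."""
    for host in table:
        table[host][device] = allow

set_row(host_access, "2100e08b111", True)  # grant the selected host every device
set_column(host_access, "D2", False)       # deny drive D2 to all hosts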
Graphical user interfaces shown in FIGS. 3a, 3b, and 4 are provided merely for purposes of illustration and are not intended to be limiting.
It is also noted that, although not shown in the figures, still other graphics and functions may be provided at the graphical user interface.
Exemplary Operations
In operation 600, a logical configuration of the storage system is generated, for example, at the interface manager based on aggregated device information from the interface controllers. In an exemplary implementation, the logical configuration may include a plurality of logical devices (also called logical units or LUNs) allocated within the storage system. Each LUN comprises a contiguous range of logical addresses that can be addressed by host devices by mapping requests from the connection protocol used by the host device to the uniquely identified LUN.
In operation 610, the logical configuration generated in operation 600 is displayed in a graphical user interface, for example, as illustrated in FIGS. 3a and 3b. In operation 620, configuration commands are received, for example, in response to user selections at the graphical user interface.
In operation 630, the logical configuration of the storage system is updated. For example, the logical configuration may be updated to indicate that a host is granted access to the transfer robotics and another host is denied access to one or more of the data access drives, based on the configuration commands received during operation 620. Alternatively, the logical configuration may be updated to indicate that one or more of the system devices have been added to or removed from the storage system, based on the device information received from the interface controller(s). In operation 640, the updated logical configuration is displayed at the graphical user interface. Operations may then return to operation 620, for example, when the user makes another selection at the user interface.
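For illustration, operations 600-640 can be sketched as a short Python loop; the function names and sample reports are hypothetical and merely trace the sequence described above:

def aggregate_configuration(controller_reports):
    """Operation 600: generate a logical configuration from aggregated device info."""
    configuration = {}
    for report in controller_reports:  # each report maps device -> state
        configuration.update(report)
    return configuration

def display(configuration):
    """Operations 610/640: stand-in for output at the graphical user interface."""
    for device, state in sorted(configuration.items()):
        print(device + ": " + state)

def manage(controller_reports, commands):
    configuration = aggregate_configuration(controller_reports)  # operation 600
    display(configuration)                                       # operation 610
    for device, state in commands:     # operation 620: configuration commands
        configuration[device] = state  # operation 630: update the configuration
    display(configuration)             # operation 640: display the update

manage(
    controller_reports=[{"transfer robotics": "access denied"},
                        {"drive D1": "access granted"}],
    commands=[("transfer robotics", "access granted")],
)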
It is noted that the exemplary operations 600-640 shown and described above are provided merely for purposes of illustration and are not intended to be limiting.
In addition to the specific implementations explicitly set forth herein, other aspects and implementations will also be apparent to those skilled in the art from consideration of the specification disclosed herein. It is intended that the specification and illustrated implementations be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.