Aspects described herein relate to configuration of network features, and more particularly to scheduling data transfers. Bandwidth is conventionally provisioned to meet a projected peak data demand and paid for over the course of a contract that may stretch for several years. Peak demand may occur relatively infrequently, resulting in over-provisioning for a significant amount of time. This over-provisioning of the bandwidth results in excess costs to a customer who is paying for unused bandwidth over the course of the contract.
An attempt to lower costs by provisioning less bandwidth over the course of the contract is largely ineffective because of expensive overcharges when peak demand exceeds the amount of bandwidth provisioned. Bandwidth considerations and costs are especially important in large data center applications, such as data mirroring or backup, where the amount of data being transferred, and therefore the resulting bandwidth consumption, is potentially massive.
Shortcomings of the prior art are overcome and additional advantages are provided through the provision of a computer-implemented method that includes maintaining a plurality of bandwidth optimization criteria corresponding to a plurality of different types of data transfer event scenarios, the plurality of bandwidth optimization criteria indicating criteria for scheduling a data transfer and dynamically configuring bandwidth allocation from an elastic network service provider of an elastic cloud computing network; based on recognizing a data transfer event scenario, selecting a bandwidth optimization criteria of the plurality of bandwidth optimization criteria based on a type of the data transfer event scenario; determining a schedule for transferring data from a source storage location to a target storage location across the elastic network, the schedule determined according to the selected bandwidth optimization criteria; and using the elastic network in transferring the data to the target storage location, the using comprising dynamically configuring elastic network bandwidth allocation from the elastic network service provider and initiating transfer of the data to the target storage location according to the schedule.
Further, a computer program product including a computer readable storage medium readable by a processor and storing instructions for execution by the processor is provided for performing a method that includes: maintaining a plurality of bandwidth optimization criteria corresponding to a plurality of different types of data transfer event scenarios, the plurality of bandwidth optimization criteria indicating criteria for scheduling a data transfer and dynamically configuring bandwidth allocation from an elastic network service provider of an elastic cloud computing network; based on recognizing a data transfer event scenario, selecting a bandwidth optimization criteria of the plurality of bandwidth optimization criteria based on a type of the data transfer event scenario; determining a schedule for transferring data from a source storage location to a target storage location across the elastic network, the schedule determined according to the selected bandwidth optimization criteria; and using the elastic network in transferring the data to the target storage location, the using comprising dynamically configuring elastic network bandwidth allocation from the elastic network service provider and initiating transfer of the data to the target storage location according to the schedule.
Yet further, a computer system is provided that includes a memory and a processor in communications with the memory, wherein the computer system is configured to perform a method including: maintaining a plurality of bandwidth optimization criteria corresponding to a plurality of different types of data transfer event scenarios, the plurality of bandwidth optimization criteria indicating criteria for scheduling a data transfer and dynamically configuring bandwidth allocation from an elastic network service provider of an elastic cloud computing network; based on recognizing a data transfer event scenario, selecting a bandwidth optimization criteria of the plurality of bandwidth optimization criteria based on a type of the data transfer event scenario; determining a schedule for transferring data from a source storage location to a target storage location across the elastic network, the schedule determined according to the selected bandwidth optimization criteria; and using the elastic network in transferring the data to the target storage location, the using comprising dynamically configuring elastic network bandwidth allocation from the elastic network service provider and initiating transfer of the data to the target storage location according to the schedule.
An example bandwidth optimization criteria corresponds to a predictable type data transfer event scenario in which a data transfer event is planned, the bandwidth optimization criteria specifying an economical bandwidth allocation and transfer of data within a planned time constraint, where the data transfer event scenario comprises a predictable data transfer event scenario and the selected bandwidth optimization criteria is the aforementioned bandwidth optimization criteria, and where the schedule is determined and the dynamically configuring the elastic network bandwidth allocation is performed to optimize bandwidth cost in transferring the data within the time constraint.
Another example bandwidth optimization criteria corresponds to an unpredictable type data transfer event scenario in which a data transfer event is unplanned, the bandwidth optimization criteria specifying a priority bandwidth allocation and urgent transfer of data, where the data transfer event scenario comprises an unpredictable data transfer event scenario and the selected bandwidth optimization criteria is the aforementioned bandwidth optimization criteria, and where the schedule is determined and the dynamically configuring the elastic network bandwidth allocation is performed to optimize bandwidth cost in allocating a priority level of bandwidth for urgent transfer of data.
Aspects of the above have an advantage over conventional approaches to backup where a constant, static bandwidth allocation is utilized, by providing opportunities for scheduling and bandwidth allocation configuration to optimize the transfer in terms of cost or other properties. Advantageously, dynamic configuration and optimization criteria-based data transfer scheduling enables on-demand access to a variable amount of bandwidth tailorable based on cost, capacity, size of data, and other parameters. Another advantage is that the need for a dedicated link is eliminated, which saves costs, while providing the ability to schedule the backup more dynamically by leveraging the elastic network capability.
The dynamically configuring can include dynamically establishing a relationship with the network service provider to provide bandwidth to effect the transfer of the data. This has an advantage in that the relationship can be established dynamically, requesting only the bandwidth needed and then deallocating that bandwidth when finished in order to reduce costs.
The transfer of the data can occur across a period of time, where the method further includes facilitating testing of transferred data at the target storage location during the period of time, the facilitating testing including: dynamically provisioning elastic network bandwidth between the target storage location and at least one additional location from which test connections to the transferred data are made; and initiating the test connections to the transferred data at the target storage location from the at least one additional location. By utilizing dynamic bandwidth capacity at both the source and target sites, an advantage is provided in that additional bandwidth can be provided for testing purposes and streamlined migration when requested. This enables more extensive and accurate testing of the transferred data, bolstering the validity of the testing.
Additional features and advantages are realized through the concepts described herein.
Aspects described herein are particularly pointed out and distinctly claimed as examples in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings.
Aspects described herein leverage elastic network technologies that provide for dynamic provisioning of wide area network bandwidth and transfer capability between sites. More particularly, data transfer event scenarios are recognized and trigger criteria-based scheduling of data transfers, including dynamic configuration of elastic bandwidth allocation from an elastic network service provider.
First site 102 includes a first application server 108 (i.e., a computer) hosting one or more applications, a first application database 110, a first storage area network (SAN) volume controller (SVC) 112 (i.e., a first storage resource), a first SAN switch 114 and a first edge appliance 116, which may be a router or other edge device, for example. In one embodiment, application server 108 or SVC 112 runs a data replication application that replicates data in first application database 110 from first SVC 112 via first SAN switch 114 and first edge appliance 116.
Management of elastic network bandwidth allocation is provided in the environment. A feature of the environment 100 is that one or more processes can determine and inform a dynamic network control application programming interface (API) 118 of the network service provider about when and how much bandwidth of an elastic cloud computing network 120 should be allocated for transfer of data, which transfer may utilize a dedicated channel to the second site 104 via network 120. In this example, network 120 is an optical network provided by network service provider 106. In one embodiment, optical network 120 is used as a WAN. In another embodiment, optical network 120 is a Multiprotocol Label Switching (MPLS) network and application server 108 utilizes a Fibre Channel over Ethernet (FCoE) network interface to connect first SAN switch 114 and first edge appliance 116 to the MPLS network.
Dynamic network control API 118 is executed, in one example, by a transport device (not shown) that is managed by network service provider 106. Dynamic network control API 118 allows first SVC 112, second SVC 128, an edge appliance (116, 132), a PC 140, or any other component at site 102, 104, or another site to dynamically change bandwidth allocation from network service provider 106. This is leveraged in accordance with aspects described herein to optimize bandwidth allocation and usage, and therefore decrease the cost associated with transferring data using that bandwidth.
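By way of illustration only, a minimal sketch of how a component might invoke such a dynamic network control API follows. The endpoint URL, payload schema, circuit identifier, and client class are hypothetical assumptions; the actual interface is defined by the network service provider.

```python
import json
from urllib import request

class DynamicNetworkControlClient:
    """Hypothetical client for a provider's dynamic network control API.

    The endpoint and payload fields are illustrative assumptions; an actual
    elastic network service provider would publish its own interface.
    """

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def set_bandwidth(self, circuit_id: str, mbps: int) -> dict:
        """Ask the provider to allocate `mbps` of bandwidth on a circuit."""
        payload = json.dumps({"circuit": circuit_id, "bandwidth_mbps": mbps}).encode()
        req = request.Request(
            f"{self.endpoint}/circuits/{circuit_id}/bandwidth",
            data=payload,
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
            method="PUT",
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

# Example (hypothetical endpoint): raise the allocation for a transfer, then
# drop it back afterwards so charges stop accruing.
# client = DynamicNetworkControlClient("https://api.provider.example", "KEY")
# client.set_bandwidth("site102-site104", 10_000)   # 10 Gb/s
# client.set_bandwidth("site102-site104", 0)        # deallocate when finished
```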
Second site 104 can include components similar to those of first site 102. Thus, in this example, second site 104 similarly includes a second application server 122 (i.e., a computer), second application database 126, second SVC 128 (i.e., a second storage resource), second SAN switch 130, and a second edge appliance 132. In one embodiment, data is transferred from first site 102 to second site 104, i.e., from first SVC 112 via first SAN switch 114 and first edge appliance 116 over optical network 120 to second SVC 128 via second edge appliance 132 and second SAN switch 130. Data may be transferred similarly from second site 104 to first site 102.
Cloud backup services may be constrained by the cost of WAN capacity. Various types of cloud-provided backup services exist. One example is disaster prevention/recovery involving partial backup for an equipment repair or other purpose, or full backup of an at-risk data center. Another example is scheduled backups, usually being site-wide. For backups that are relatively large—potentially petabytes of data—a physical RAID pack may be copied and securely shipped to the target storage site. In this situation, a digital transfer from one site to another may not be practical. A 1 petabyte site backup would take at least 9 days using a steady 10 Gb/s Ethernet virtual connection (EVC). A more practical timeframe of 1 day using a 90 Gb/s EVC may not be economically feasible.
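The stated timeframes follow directly from the arithmetic; the short helper below makes the relationship explicit, assuming a fully utilized line rate, decimal units, and no protocol overhead.

```python
def transfer_days(data_petabytes: float, rate_gbps: float) -> float:
    """Days needed to move `data_petabytes` at a sustained `rate_gbps`."""
    bits = data_petabytes * 8 * 10**15        # 1 PB = 8e15 bits (decimal units)
    seconds = bits / (rate_gbps * 10**9)
    return seconds / 86_400

print(f"{transfer_days(1, 10):.1f} days at 10 Gb/s")   # ~9.3 days
print(f"{transfer_days(1, 90):.1f} days at 90 Gb/s")   # ~1.0 day
```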
Described herein are facilities for scheduling backup(s) to a target storage location. Scheduling may be in response to different data transfer event scenarios, which may be of different types, for example predictable or unpredictable type. Depending on the gravity of the event scenario, the transfer schedule can vary. A potentially devastating unpredictable type of event (such as a natural disaster) might warrant a schedule that begins the transfer immediately using a large amount of bandwidth despite the high costs. A less severe and more predictable type of event scenario might dictate that a backup is to occur within the next 36 hours, lending an opportunity to leverage off-peak rates for bandwidth capacity to minimize costs. This has advantages over conventional approaches to backup where a constant, static bandwidth allocation does not provide the same opportunities for scheduling and bandwidth allocation configuration to optimize the transfer in terms of cost or other properties.
Although disaster recovery (DR) backup to a disaster recovery site is a focus of the described examples, it is understood that the principles described herein can be leveraged for any type of backup, or more generally any transfer of data, to any kind of site for any reason. The backup is accomplished by exploiting elastic network features to transfer the data less expensively while enabling more comprehensive testing of the transferred data.
Aspects provide dynamic requests for DR both for scheduled and unscheduled data transfer events. For scheduled events, there is an opportunity to exploit differences in bandwidth pricing for different capacities, time of day, and other parameters. More bandwidth may be scheduled during off-peak hours, for instance, to reduce costs. Bandwidth optimization criteria for scheduling data transfer can focus on economical bandwidth allocation and transfer of data according to a planned timeframe. Bandwidth optimization criteria for unscheduled events (e.g. ones that indicate an anticipated or actual disaster) can focus on scheduling a more immediate or imminent transfer of some/all data, allowing higher costs to be incurred but optimizing those costs according to available network resources. In an unscheduled data transfer event scenario, the transfer is optimized to prioritize bandwidth allocation and urgent transfer of data. The dynamically configured bandwidth may be provisioned for transferring the data to a destination location as the target, or to an alternate or intermediate data center as the initial target for staging the data for later transfer to the destination. As data is transferred, testing of the data may be facilitated by requesting elastic bandwidth for testing purposes, dynamically allocating additional bandwidth to the target or destination site, and deallocating bandwidth to the source site as the data transfer winds down or a disaster at the source site is declared.
For DR purposes in particular, a conventional approach is to have a dedicated, static backup network connection for disaster recovery, the dedicated connection providing limited bandwidth and being a cost sink regardless of whether it is utilized. In contrast, and as advantageously provided herein, dynamic configuration and optimization criteria-based data transfer scheduling enables on-demand access to a variable amount of bandwidth tailorable based on cost, capacity, size of data, and other parameters. It also enables the backup relationship to be established on-demand and/or on a one-off basis, where capacity is requested as desired, potentially in response to a data transfer event indicating that a disaster or other type of event is imminent. Another advantage is that the need for a dedicated link is eliminated, which saves costs while providing the ability to schedule the backup more dynamically by leveraging the elastic network capability. In the case of an impending disaster, the customer is able to move a potentially significantly larger amount of data in a shorter amount of time as compared to a dedicated static connection. This is especially useful when disaster events can be at least somewhat anticipated or predicted, in which case the elastic network capacity of one or more providers can be queried, a schedule for transferring the data can be determined, and elastic network bandwidth can be allocated, used to perform the transfer, then deallocated.
As described above, the edge appliances 116, 132 or other components of the environment may include dynamic network controls that enable establishment of a relationship with elastic network service provider(s), provision/de-provision of bandwidth, retrieval of bandwidth pricing information, and/or other functions supporting dynamic configuration of network features.
In one example, the dynamic provisioning is dictated by a backup process. The backup process can be configured with bandwidth optimization criteria (e.g. policies) for different types of data transfer event scenarios, such as predictable and unpredictable event scenarios. The criteria can be specified by a user/administrator and modified as desired. Based on recognition of a data transfer event scenario (such as a predicted impending disaster), the backup process can signal an edge appliance, which may be consumer premise equipment (CPE), to dynamically configure the bandwidth according to a schedule that the backup process determines and uses to accomplish the backup. The edge appliance/CPE may include additional functionality including routing and firewall functions, and include separate policies, aside from DR or backup related policies of the backup process, that guide CPE functionality and, optionally, other elastic network feature configurations.
Recognition of a data transfer event scenario can be driven manually or automatically, for instance based on input information from various sources. An unpredictable event scenario might be a natural disaster like a hurricane and input information might be weather data. Another unpredictable event scenario might be a fire in the data center facility, and input information might be a signal from a local fire alarm/detector that a fire is present. A predictable event scenario might be an anticipated or scheduled maintenance of the data center and the input might be the schedule of maintenance or an administrator informing the data center of the maintenance. More generally, the input information can be any information from which a relevant event might be recognized.
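As a non-authoritative sketch of how such recognition and criteria selection might be wired together, the following maps illustrative input signals to a scenario type and looks up a configured policy. The signal names, scenario types, and criteria fields are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ScenarioType(Enum):
    PREDICTABLE = "predictable"      # e.g., scheduled data center maintenance
    UNPREDICTABLE = "unpredictable"  # e.g., fire alarm, severe-weather alert

@dataclass
class BandwidthOptimizationCriteria:
    """Illustrative policy record for one type of data transfer event scenario."""
    deadline_hours: float      # transfer must complete within this window
    max_cost_per_gb: float     # ceiling on acceptable bandwidth cost
    priority_allocation: bool  # whether to request a priority level of bandwidth

# One user/administrator-configurable policy per scenario type.
CRITERIA = {
    ScenarioType.PREDICTABLE: BandwidthOptimizationCriteria(
        deadline_hours=36, max_cost_per_gb=0.02, priority_allocation=False),
    ScenarioType.UNPREDICTABLE: BandwidthOptimizationCriteria(
        deadline_hours=2, max_cost_per_gb=0.50, priority_allocation=True),
}

def recognize_scenario(signals: dict) -> Optional[ScenarioType]:
    """Map raw input information (weather data, fire detector, maintenance
    schedule) to a scenario type; None means no relevant event is recognized."""
    if signals.get("fire_alarm") or signals.get("hurricane_warning"):
        return ScenarioType.UNPREDICTABLE
    if signals.get("maintenance_scheduled"):
        return ScenarioType.PREDICTABLE
    return None

scenario = recognize_scenario({"maintenance_scheduled": True})
if scenario is not None:
    criteria = CRITERIA[scenario]  # the selected bandwidth optimization criteria
```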
An example use case transfers data to a disaster recovery backup data center by exploiting the dynamic bandwidth capability described herein. A bandwidth optimization criteria is selected based on the type of data transfer event scenario recognized, and a schedule for transferring the data from the source location to the target location is determined according to that criteria. The elastic network is used to transfer the data according to the schedule and based on dynamically configuring the elastic network bandwidth allocation from the provider, optimized for the given bandwidth optimization criteria. Eventually, after the transfer is complete and if desired, the main data center is decommissioned and the backup data center takes over as the operational data center supporting the demand. This process also utilizes dynamic reconfiguration of the elastic network. Traffic that was previously directed to the main data center (now offline) is instead directed to the backup site (now considered the primary site). Network capacity provided to the backup site is provisioned accordingly (e.g. increased), and the traffic previously destined for the main data center is migrated to the backup site, which migrates the users, addresses, and so forth. The elastic network capacity provisioned to the main data center is de-provisioned and reallocated to the disaster recovery site, in one example.
Another example use case involves disaster testing. Data centers usually have a disaster contingency plan in place that offers a scripted process in which limited testing proceeds on application(s) at the DR site using pre-staged data and any other software needed to perform the test. One or more test sites connect to that DR site with limited connectivity options to perform testing. This process does not allow enterprise-level testing and usually involves test scenarios that may not accurately reflect actual use. Connection to the DR site for more accurate testing is impractical for various reasons. For instance, the addresses (IP, URLs, etc.) are not easily migrated and there is usually insufficient network capacity dedicated to testing. Moreover, there might be a significant delay in time before adequate capacity is provided to enable proper testing and configuration of the DR site. Because network circuits can be prohibitively expensive, additional capacity for testing purposes has not been available and/or practical.
An advantage of aspects described herein is the use of the dynamic nature of elastic network features to provide additional bandwidth for testing purposes and streamlined migration when requested. Instead of ordering new circuits, bandwidth can be dynamically allocated for use by test site(s) 170.
The provisioning of capacity to the test site(s), which may be standard locations accessing the main data center, may be performed by any suitable component(s), such as a backup process at the target data center or main data center, another component, or distributed among any of the foregoing.
The backup process then selects an appropriate bandwidth optimization criteria depending on the particular type of data transfer event scenario (204). Different bandwidth optimization criteria correspond to different types of data transfer event scenarios and specify criteria to apply in scheduling and transferring the data under the corresponding type of scenario. In an unpredictable data transfer event scenario, the criteria will specify that a priority level of bandwidth is appropriate to effect an urgent transfer of the data. This is despite the fact that the transfer might be less expensive if performed under different parameters, such as by deferring to an off-peak time of day. The criteria can specify different factors with weightings, thresholds, and the like for each factor to guide the decision making about exactly when to transfer the data, how much bandwidth to use, and other parameters. The criteria for a predictable data transfer event scenario may include relaxed constraints that enable greater flexibility in scheduling the transfer and therefore in optimizing total costs. It is also noted that criteria can vary for different predictable events, and likewise for different unpredictable events. A fire in the facility (one unpredictable event) might dictate a more urgent transfer than a hurricane (another unpredictable event) set to arrive in 3 hours, or a switch to battery backup after a power outage (yet another unpredictable event) wherein the battery backup has an expected life of 12 hours.
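A weighted-factor evaluation of candidate transfer plans might look like the following sketch; the factor names, weights, and scoring scheme are illustrative assumptions rather than a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class TransferPlan:
    start_delay_hours: float   # how long until the transfer begins
    bandwidth_gbps: float
    cost_per_gb: float

def plan_score(plan: TransferPlan, urgency_weight: float, cost_weight: float) -> float:
    """Lower is better: the urgency weight penalizes delay, the cost weight
    penalizes price. For a fire, urgency dominates; for a hurricane 3 hours
    out or a 12-hour battery window, the weights can be relaxed in proportion
    to the time remaining."""
    return urgency_weight * plan.start_delay_hours + cost_weight * plan.cost_per_gb

candidates = [
    TransferPlan(start_delay_hours=0, bandwidth_gbps=90, cost_per_gb=0.40),  # now, pricey
    TransferPlan(start_delay_hours=8, bandwidth_gbps=10, cost_per_gb=0.05),  # off-peak
]
urgent = min(candidates, key=lambda p: plan_score(p, urgency_weight=10.0, cost_weight=1.0))
economical = min(candidates, key=lambda p: plan_score(p, urgency_weight=0.01, cost_weight=1.0))
# `urgent` picks the immediate plan; `economical` defers to the off-peak window.
```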
The process continues by the backup facility checking capabilities of the target data center (206), such as the bandwidth availability and resources for receiving and storing the data, checking the network service provider's bandwidth terms and capabilities (208), any window(s) of maintenance for the data center when transfer is not possible (210), and the cost of bandwidth (212). In some examples, multiple network service providers are queried and a selection between them is made for performing the transfer. The backup facility can then determine a schedule for transferring the data from the source to the target site based on the selected bandwidth optimization criteria and some or all of the above-collected information. The backup facility can also determine the particular configuration for network features that satisfies the optimization criteria. In a predictable event, for example, the criteria might specify that the data is to be transferred within 48 hours and the collected information might indicate that relatively low off-peak rates will be available during a 3-hour timeframe starting at 11:00 PM that night. The determined schedule might dictate that a dynamic allocation is to be made starting at 11:00 PM and for an amount that provides enough bandwidth to transfer all of the desired data in the 3-hour timeframe. As another example, an unpredictable event criteria might emphasize that an immediate transfer of the data is to be made if bandwidth is available under a particular threshold cost, otherwise the data is to be held for retry every 4 hours until an acceptable rate offered by a service provider is found. The process might query multiple providers, find that one is able to offer immediate bandwidth below the threshold cost, and immediately begin the data transfer using that provider. The above example criteria are just examples, and many other examples of optimization criteria exist.
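The scheduling logic just described can be sketched as a search over priced rate windows collected from the provider(s). The window structure and the numbers below are illustrative assumptions; windows that collide with a maintenance outage (210) are presumed already filtered out by the caller.

```python
from dataclasses import dataclass

@dataclass
class RateWindow:
    start_hour: float       # hours from now until the window opens
    duration_hours: float
    bandwidth_gbps: float   # capacity offered in this window
    cost_per_gb: float

def schedule_predictable(windows, data_gb, deadline_hours):
    """Pick the cheapest window that can move all the data before the deadline."""
    feasible = []
    for w in windows:
        movable_gb = w.bandwidth_gbps / 8 * 3600 * w.duration_hours  # GB per window
        if movable_gb >= data_gb and w.start_hour + w.duration_hours <= deadline_hours:
            feasible.append(w)
    return min(feasible, key=lambda w: w.cost_per_gb, default=None)

def schedule_unpredictable(quote_cost_per_gb, threshold, retry_hours=4):
    """Start immediately if a quote is under the threshold cost; otherwise
    hold the data and retry on the stated interval."""
    return 0.0 if quote_cost_per_gb <= threshold else retry_hours

# Example: a 3-hour off-peak window at 11:00 PM (assumed 12 hours from now)
night = RateWindow(start_hour=12, duration_hours=3, bandwidth_gbps=100, cost_per_gb=0.01)
day = RateWindow(start_hour=0, duration_hours=3, bandwidth_gbps=100, cost_per_gb=0.10)
best = schedule_predictable([day, night], data_gb=100_000, deadline_hours=48)  # -> night
```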
Once the backup facility establishes the schedule, it can then request (or cause another component to request) that the appropriate level of bandwidth be allocated/deallocated at the appropriate times, and invoke the data transfer accordingly (214), after which the process ends.
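Executing the determined schedule then amounts to timed allocation, transfer, and deallocation; a minimal sketch, reusing the hypothetical set_bandwidth interface from the earlier client:

```python
import time

def execute_schedule(control, circuit: str, start_delay_s: float,
                     mbps: int, transfer_fn) -> None:
    """Allocate bandwidth at the scheduled time, run the transfer, then
    deallocate so charges stop accruing. `transfer_fn` stands in for
    whatever replication or copy mechanism actually moves the data."""
    time.sleep(start_delay_s)             # wait for the scheduled window
    control.set_bandwidth(circuit, mbps)  # dynamic allocation
    try:
        transfer_fn()                     # move the data
    finally:
        control.set_bandwidth(circuit, 0)  # deallocate even if the transfer fails
```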
Based on recognizing a data transfer event scenario, the backup facility selects a bandwidth optimization criteria of the plurality of bandwidth optimization criteria based on a type of the data transfer event scenario (304), for example. The backup facility determines a schedule for transferring data from a source storage location to a target storage location across the elastic network (306), the schedule determined according to the selected bandwidth optimization criteria. The schedule may be based on considering any one or more of the following, as examples: cost to transfer the data, taken across a plurality of different bandwidth levels for a plurality of different times of day, a maintenance window, or capabilities of the target storage location to receive and process the data being transferred. As an example, if cost to transfer the data during an off-peak time is significantly lower than peak time, the schedule may be set accordingly. As another example, a schedule may be set around a maintenance window. As yet another example, if the target storage location is expected to have the capacity to receive the data only during particular times or after a reconfiguration, the schedule can be set accordingly.
Then, the backup facility or other component such as the edge appliance uses the elastic network in transferring the data to the target storage location (308), which includes dynamically configuring elastic network bandwidth allocation from the elastic network service provider and initiating transfer of the data to the target storage location according to the schedule.
In one embodiment, the data transfer event scenario includes a predictable data transfer event scenario and the selected bandwidth optimization criteria is a corresponding bandwidth optimization criteria (see above), where the schedule is determined and the dynamically configuring the elastic network bandwidth allocation is performed to optimize bandwidth cost in transferring the data within the time constraint.
In another embodiment, the data transfer event scenario includes an unpredictable data transfer event scenario and the selected bandwidth optimization criteria is a corresponding bandwidth optimization criteria (see above), where the schedule is determined and the dynamically configuring the elastic network bandwidth allocation is performed to optimize bandwidth cost in allocating a priority level of bandwidth for urgent transfer of data.
Dynamically configuring the elastic network bandwidth allocation can include the source location (edge appliance or other component thereof) dynamically establishing a relationship with the network service provider to provide bandwidth to effect the transfer of the data. In this regard, multiple different network service providers may be available, and a dynamically established on-demand relationship may be formed between the source site and one (or more) of the service providers to perform the transfer. A selection can be made between these available providers, perhaps based on updated pricing and other information, thus providing another opportunity for cost optimization. Advantageously, the relationship can be established dynamically, requesting only the bandwidth needed and then deallocating that bandwidth when finished in order to reduce costs.
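Selection among multiple available providers could be as simple as the following sketch; the quote structure is a hypothetical placeholder for whatever pricing and capacity information the providers expose.

```python
from dataclasses import dataclass

@dataclass
class ProviderQuote:
    provider: str
    cost_per_gb: float
    available_gbps: float

def choose_provider(quotes, required_gbps):
    """Pick the cheapest provider able to supply the required bandwidth on
    demand; an on-demand relationship is then established with only that
    provider, and only for the duration of the transfer."""
    eligible = [q for q in quotes if q.available_gbps >= required_gbps]
    return min(eligible, key=lambda q: q.cost_per_gb, default=None)

quotes = [
    ProviderQuote("provider-a", cost_per_gb=0.05, available_gbps=40),
    ProviderQuote("provider-b", cost_per_gb=0.03, available_gbps=10),
]
best = choose_provider(quotes, required_gbps=25)  # -> provider-a (b lacks capacity)
```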
In this manner, dynamic configuration and optimization criteria-based data transfer scheduling enables on-demand access to a variable amount of bandwidth tailorable based on cost, capacity, size of data, and other parameters. This has advantages over conventional approaches for backing-up data where a constant, static bandwidth allocation does not provide the same opportunities for scheduling and bandwidth allocation configuration to optimize the transfer in terms of cost or other properties.
As an extension, in the case where the transfer of the data occurs across a period of time, testing of transferred data at the target storage location is facilitated during that period of time and thereafter. The facilitating includes dynamically provisioning elastic network bandwidth between the target storage location and at least one additional location from which test connections to the transferred data are made, and initiating the test connections to the transferred data at the target storage location from the at least one additional location.
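A sketch of these test-facilitation steps follows, again against the hypothetical set_bandwidth interface; the circuit naming and the test-connection placeholder are illustrative assumptions.

```python
def facilitate_testing(control, target_site: str, test_sites: list,
                       per_site_mbps: int) -> None:
    """Dynamically provision elastic bandwidth between the target storage
    location and each test location, then kick off test connections.

    `control` is any object exposing set_bandwidth(circuit, mbps), such as
    the hypothetical DynamicNetworkControlClient sketched earlier.
    """
    for site in test_sites:
        circuit = f"{target_site}-{site}"
        control.set_bandwidth(circuit, per_site_mbps)  # provision test bandwidth
        open_test_connection(site, target_site)        # initiate test connection

def open_test_connection(test_site: str, target_site: str) -> None:
    """Placeholder: an actual deployment would start application-level test
    clients at `test_site` against the replicated data at `target_site`."""
    print(f"testing {target_site} data from {test_site}")
```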
Based on completion of the transfer of the data, the backup facility or other component deallocates elastic network bandwidth provided to the source storage location from the elastic network service provider (314), which eliminates the continuing costs that would otherwise be incurred by carrying that additional bandwidth. The deallocating can include sending a command to the network service provider that adjusts bandwidth to the data center.
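The deallocation is just another dynamic configuration call; a minimal sketch under the same hypothetical interface:

```python
def release_source_bandwidth(control, source_circuit: str, baseline_mbps: int = 0) -> None:
    """On completion of the transfer (314), drop the source site's elastic
    allocation back to its baseline so no further charges accrue for the
    additional, now-unused capacity."""
    control.set_bandwidth(source_circuit, baseline_mbps)
```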
Processes described herein may be performed singly or collectively by one or more computer systems, such as the computer system(s) described below.
Computer system 400 is suitable for storing and/or executing program code and includes at least one processor 402 coupled directly or indirectly to memory 404 through, e.g., a system bus 420. In operation, processor(s) 402 obtain from memory 404 one or more instructions for execution by the processors. Memory 404 may include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during program code execution. A non-limiting list of examples of memory 404 includes a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Memory 404 includes an operating system 405 and one or more computer programs 406, for instance programs to perform aspects described herein.
Input/Output (I/O) devices 412, 414 (including but not limited to displays, microphones, speakers, accelerometers, gyroscopes, magnetometers, light sensors, proximity sensors, GPS devices, cameras, etc.) may be coupled to the system either directly or through I/O controllers 410.
Network adapters 408 may also be coupled to the system to enable the computer system to become coupled to other computer systems, storage devices, or the like through intervening private or public networks. Ethernet-based (such as Wi-Fi) interfaces and Bluetooth® adapters are just examples of the currently available types of network adapters 408 used in computer systems.
Computer system 400 may be coupled to storage 416 (e.g., a non-volatile storage area, such as magnetic disk drives, optical disk drives, a tape drive, etc.), having one or more databases. Storage 416 may include an internal storage device or an attached or network accessible storage. Computer programs in storage 416 may be loaded into memory 404 and executed by a processor 402 in a manner known in the art.
The computer system 400 may include fewer components than illustrated, additional components not illustrated herein, or some combination of the components illustrated and additional components. Computer system 400 may include any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld or mobile computer, tablet, wearable device, telephony device, network appliance (such as an edge appliance), virtualization device, storage controller, etc.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of one or more embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain various aspects and the practical application, and to enable others of ordinary skill in the art to understand various embodiments with various modifications as are suited to the particular use contemplated.