User load balancing systems and methods thereof

Information

  • Patent Grant
  • Patent Number
    8,260,924
  • Date Filed
    Wednesday, May 3, 2006
  • Date Issued
    Tuesday, September 4, 2012
  • Examiners
    • Bengzon; Greg C
  • Agents
    • LeClairRyan, a Professional Corporation
Abstract
A method, computer readable medium and system for user load balancing includes identifying when an overall load in at least one of two or more servers exceeds one or more thresholds that are related to one or more user loads. One or more of the user loads in the identified overall load in one of the servers are transferred to one or more of the other servers when the one or more thresholds is exceeded.
Description
FIELD OF THE INVENTION

This invention generally relates to data storage management systems and methods thereof and, more particularly, to automatic user load balancing systems and methods thereof.


BACKGROUND

In the emerging Software as a Service (“SaaS”) market, the majority of the processing occurs at the server and the data is centralized to take advantage of the anywhere, anytime connectivity that the Internet provides. The power of this connectivity to, and sharing of, information is leading to massively scalable applications that support hundreds of thousands to hundreds of millions of users.


Massively scalable applications are creating many new challenges in managing the users and data in an automated fashion. The ability to manage user load is particularly critical in data-heavy applications, such as email, file storage, and online backup. A user load is the load placed on a given server or servers by a user of one or more services, and an overall load includes the utilization of processor, memory, I/O (reads, writes, and transactions per second), network, disk space, power, and one or more applications. To manage overall loads, administrators currently are forced to move these data-heavy user loads between servers manually, which is time-consuming and inefficient. Additionally, these manual moves of user loads can lead to interruptions in service to users.


Load balancers have been developed, but they reside at the edge of the network in front of application servers and can only take incoming traffic and split it between servers based on low-level metrics. These low-level metrics consist of factors, such as the number of network connections on a server or how fast a server is responding to HTTP requests, and are unrelated to either user loads or overall loads on the servers. These prior load balancers work well for web servers where all of the data on each server is identical or “stateless,” but do not work in applications where each user has unique data, such as in the emerging SaaS market or Web 2.0, where the data on each server is unique to each user or “stateful” and thus is dynamically changing for each user. Additionally, these load balancers do not work well where each of the user loads utilizes different applications.


SUMMARY

A method for user load balancing in accordance with embodiments of the present invention includes identifying when an overall load in at least one of two or more servers exceeds one or more thresholds that are related to one or more user loads. One or more of the user loads in the identified overall load in one of the servers are transferred to one or more of the other servers when the one or more thresholds is exceeded.


A computer readable medium in accordance with embodiments of the present invention has stored thereon instructions for user load balancing which include identifying when an overall load in at least one of two or more servers exceeds one or more thresholds that are related to one or more user loads. One or more of the user loads in the identified overall load in one of the servers are transferred to one or more of the other servers when the one or more thresholds is exceeded.


A user load balancing system in accordance with embodiments of the present invention includes a threshold system and a transfer system. The threshold system identifies when an overall load in at least one of two or more servers exceeds one or more thresholds. Each of the overall loads comprises one or more user loads and each of the one or more thresholds is based on the user loads. The transfer system initiates a transfer of one or more of the user loads in the identified overall load in one of the servers to one or more of the other servers when the one or more thresholds is exceeded.


The present invention provides a number of advantages, including providing an effective and automatic method and system for balancing user loads, such as stateful data comprising user emails, between two or more servers, such as storage servers, partitions of a centralized storage, or application servers. Additionally, with the present invention the balancing of the user loads between storage servers and/or application servers is seamless to the end user and occurs without an interruption in service. Further, the present invention is able to ensure the balancing of the user loads without loss of any of the user loads. Even further, the present invention is able to distinguish between temporary heavy usage, where intervention is unnecessary and might even hinder performance, and more persistent heavy usage that requires user load balancing.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a user load balancing system in accordance with embodiments of the present invention; and



FIG. 2 is a flow chart of a method for user load balancing in accordance with embodiments of the present invention.





DETAILED DESCRIPTION

A system 10 with a user load balancing system 12 in accordance with embodiments of the present invention is illustrated in FIG. 1. The system 10 includes a user load balancing system 12, a global directory server 14, storage servers 16(1)-16(n), application servers 17(1)-17(n), and a communications network 18, although the system 10 can comprise other numbers and types of components in other configurations. The present invention provides an effective and automatic method and system for balancing user loads, such as stateful data comprising user emails, between two or more servers, such as storage servers, partitions of a centralized storage, or application servers.


Referring more specifically to FIG. 1, the user load balancing system 12 monitors and determines when the overall load for one or more of the storage servers 16(1)-16(n) and one or more application servers 17(1)-17(n) exceeds one or more of the thresholds that would require one or more user loads comprising stateful data to be moved to a different one of the storage servers 16(1)-16(n) and/or one or more application servers 17(1)-17(n). The user load balancing system 12 includes system monitoring tools and protocols, such as Simple Network Management Protocol (“SNMP”), although other types and manners for monitoring the loads in the storage servers 16(1)-16(n) can be used.


The user load balancing system 12 comprises a central processing unit (CPU) or processor 20, a memory 22, and an interface system 24 which are coupled together by a bus or other link 26, although the user load balancing system can comprise other numbers and types of each of the components, and other configurations and locations for each of the components can be used. The processor 20 executes a program of stored instructions for one or more aspects of the present invention as described and illustrated herein, including the method for user load balancing, although the processor 20 could execute other types of programmed instructions. The memory 22 stores these programmed instructions for one or more aspects of the present invention as described herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor 20, can be used for the memory 22.


The interface system 24 is used to operatively couple and communicate between the user load balancing system 12 and the global directory server 14, the storage servers 16(1)-16(n), and the application servers 17(1)-17(n) via the communications network 18, although other types and numbers of connections and other configurations could be used. In this particular embodiment, communication over the communications network 18 is via TCP/IP over Ethernet and uses industry-standard protocols including SOAP, XML, LDAP, and SNMP, although other types and numbers of communication systems, such as a direct connection, another local area network, a wide area network, the Internet, modems and phone lines, e-mails, and/or wireless communication technology, each having their own communications protocols, could be used.


The global directory server 14 includes a centralized master directory of locations where servers, services, and user data reside, although the global directory server can store other types of information. Any user via a service or application can query the global directory server 14 to find out where given data in a user load is located.
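
By way of illustration only, the directory lookup described above can be modeled as a simple mapping from users to server locations. The sketch below is a toy Python rendering of that idea; the dictionary contents, user identifiers, and server names are hypothetical, not taken from the patent.

```python
# Toy model of the master directory: each user maps to the storage and
# application servers currently holding that user's data and services.
directory = {
    "alice@example.com": {"storage": "storage-03", "application": "app-01"},
    "bob@example.com":   {"storage": "storage-07", "application": "app-02"},
}

def locate_user_data(user_id):
    """Answer the 'where does this user's data live?' query."""
    entry = directory.get(user_id)
    if entry is None:
        raise KeyError(f"no directory entry for {user_id}")
    return entry["storage"], entry["application"]

# Example query, as a service or application would issue it:
# locate_user_data("alice@example.com") -> ("storage-03", "app-01")
```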


The global directory server 14 also includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of each of the components and other configurations and locations for the components can be used. For example, the global directory server 14 can comprise a single server as shown or can be replicated in a series of masters and slaves. In a master/slave environment, the master global directory server will be in charge of all updates and the slave global directory servers will be read servers that respond to application requests. This enables the directory in the global directory server 14 to scale.


In this particular embodiment, the processor in the global directory server 14 shown in FIG. 1 executes a program of stored instructions for one or more aspects of the present invention as described herein, including instructions for storing locations for each of the user loads on the storage servers 16(1)-16(n) and on the application servers 17(1)-17(n) and for the services requested by users. The memory stores these programmed instructions for one or more aspects of the present invention as described herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, can be used for the memory in the global directory server 14. The interface system in the global directory server 14 is used to operatively couple and communicate between the global directory server 14 and the user load balancing system 12, the storage servers 16(1)-16(n), and the application servers 17(1)-17(n), although other types of connections could be used.


Each of the storage servers 16(1)-16(n) is used to store user loads comprising stateful and stateless data, although one or more of the storage servers 16(1)-16(n) could have other functions. This storage could be local or could be over a SAN, NAS, iSCSI, distributed file systems, or other types of local or remote network systems. Each of the user loads stored in the storage servers 16(1)-16(n) comprises stateful data and stateless data, although other types and combinations of data can be stored, such as user loads with just stateful data. In this particular embodiment, each of the storage servers 16(1)-16(n) includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of each of the components and other configurations and locations for the components can be used. Additionally, one or more other types of servers can be used with or in place of the storage servers 16(1)-16(n), such as network area storage or one or more application servers.


The processor in each of the storage servers 16(1)-16(n) executes a program of stored instructions for one or more aspects of the present invention as described herein, including instructions for storing stateful and stateless data and for transferring data between storage servers 16(1)-16(n) and/or application servers 17(1)-17(n). The memory stores these programmed instructions for one or more aspects of the present invention as described herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, can be used for the memory in each of the storage servers 16(1)-16(n). The interface system in each of the storage servers 16(1)-16(n) is used to operatively couple and communicate between the storage servers 16(1)-16(n) and the user load balancing system 12, the global directory server 14, and the application servers 17(1)-17(n), although other types and connections could be used.


Each of the application servers 17(1)-17(n) has one or more user loads relating to the execution of one or more applications, such as email or one or more other applications, although one or more of the application servers 17(1)-17(n) could have other functions and other types and numbers of systems could be used. Each of the application servers 17(1)-17(n) also includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of each of the components and other configurations and locations for the components can be used.


The processor in each of the application servers 17(1)-17(n) executes a program of stored instructions for one or more aspects of the present invention as described herein, including instructions for applications, such as email and/or one or more other applications, and for transferring data between application servers 17(1)-17(n) and/or storage servers 16(1)-16(n). The memory stores these programmed instructions for one or more aspects of the present invention as described herein, although some or all of the programmed instructions could be stored and/or executed elsewhere, such as in one or more memories of provider systems. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, can be used for the memory in each of the application servers 17(1)-17(n). The interface system in each of the application servers 17(1)-17(n) is used to operatively couple and communicate between the application servers 17(1)-17(n) and the user load balancing system 12, the global directory server 14, and the storage servers 16(1)-16(n), although other types and connections could be used.


Although an example of embodiments of the user load balancing system 12, the global directory server 14, the storage servers 16(1)-16(n), and the application servers 17(1)-17(n) is described and illustrated herein, each of the user load balancing system 12, the global directory server 14, the storage servers 16(1)-16(n), and application servers 17(1)-17(n) of the present invention could be implemented on any suitable computer system or computing device. It is to be understood that the devices and systems of the exemplary embodiments are for exemplary purposes, as many variations of the specific hardware and software used to implement the exemplary embodiments are possible, as will be appreciated by those skilled in the relevant art(s).


Furthermore, each of the systems of the present invention may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, micro-controllers, and the like, programmed according to the teachings of the present invention as described and illustrated herein, as will be appreciated by those skilled in the computer and software arts.


In addition, two or more computing systems or devices can be substituted for any one of the systems in any embodiment of the present invention. Accordingly, principles and advantages of distributed processing, such as redundancy, replication, and the like, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the exemplary embodiments. The present invention may also be implemented on computer systems that extend across any network using any suitable interface mechanisms and communications technologies including, for example, telecommunications in any suitable form (e.g., voice, modem, and the like), wireless communications media, wireless communications networks, cellular communications networks, 3G communications networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, a combination thereof, and the like.


The present invention may also be embodied as a computer readable medium having instructions stored thereon for user load balancing as described herein, which when executed by a processor, cause the processor to carry out the steps necessary to implement the methods of the present invention as described and illustrated herein.


The operation of the user load balancing system 12 in a system 10 will now be described with reference to FIGS. 1 and 2. In step 30, the user load balancing system 12 monitors the user loads in the overall loads in each of the storage servers 16(1)-16(n) and each of the application servers 17(1)-17(n) to determine if any thresholds have been exceeded, although the user load balancing system 12 could monitor other numbers and types of systems in other configurations, such as just storage servers 16(1)-16(n). Since the user loads in the overall loads in each of the storage servers 16(1)-16(n) include stateful data, the user loads and thus the overall loads can change at varying rates which can impact the operation of the storage servers 16(1)-16(n). For example, one or more of the storage servers 16(1)-16(n) might be reaching capacity or there could be a wide disparity between the size of the user loads on each of the storage servers 16(1)-16(n). Similarly, the user loads on one or more of the application servers 17(1)-17(n) may also be changing at varying rates based on user requests. As a result, one or more of the application servers 17(1)-17(n) might be reaching capacity or there could be a wide disparity between the size of the user loads on each of the application servers 17(1)-17(n).
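
A minimal sketch of this step 30 monitoring loop follows, assuming a hypothetical get_overall_load() helper that stands in for whatever SNMP or other instrumentation actually supplies the metrics; the metric names, sample values, and polling interval are illustrative, not prescribed by the patent.

```python
import time

def get_overall_load(server):
    # Hypothetical stand-in: in a real deployment these figures would come
    # from an SNMP agent or similar monitoring tool on each server.
    return {"cpu_pct": 42.0, "ram_pct": 35.0, "disk_pct": 60.0,
            "net_pct": 10.0, "io_pct": 25.0}

def watch_for_breaches(servers, thresholds, interval_s=60):
    """Poll every server's overall load and yield any threshold breaches."""
    while True:
        for server in servers:
            load = get_overall_load(server)
            breached = [name for name, limit in thresholds.items()
                        if load.get(name, 0.0) > limit]
            if breached:
                yield server, breached  # handed off to steps 32 and 34
        time.sleep(interval_s)
```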


By way of example only, the types of thresholds being monitored by the user load balancing system 12 include: disk space utilization (measured both in total gigabytes (GB) of usable storage and as a percentage of capacity); RAM utilization (measured as a percentage of capacity); I/O load (measured by the number of I/O reads per second, the number of I/O writes per second, and the number of transactions per second); and network load (measured as a percentage of capacity), although other types and numbers of thresholds related to user loads and/or overall loads can be monitored. Additionally, the user load balancing system 12 can be configured to allow an administrator to enter and/or adjust one or more of the particular thresholds as needed or desired for the particular application.
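
One plausible way to hold these administrator-adjustable thresholds is as plain data, as in the following sketch; the field names and default values are illustrative assumptions, not figures from the patent.

```python
# Illustrative threshold table; an administrator can adjust any entry.
thresholds = {
    "disk_total_gb":   2000,   # total usable storage, in gigabytes
    "disk_pct":        95.0,   # disk space utilization, % of capacity
    "ram_pct":         90.0,   # RAM utilization, % of capacity
    "io_reads_per_s":  5000,   # I/O reads per second
    "io_writes_per_s": 5000,   # I/O writes per second
    "io_tx_per_s":     8000,   # I/O transactions per second
    "net_pct":         90.0,   # network load, % of capacity
}

def set_threshold(name, value):
    """Enter or adjust a single threshold at runtime (admin-facing)."""
    if name not in thresholds:
        raise KeyError(f"unknown threshold: {name}")
    thresholds[name] = value
```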


More specifically, in these embodiments the user load balancing system 12 monitors for short and long term threshold trends in the overall loads on the storage servers 16(1)-16(n) and application servers 17(1)-17(n). These short and long term threshold trends or thresholds are entered by an administrator in the user load balancing system, although other configurations can be used, such as having the short and long term thresholds stored and retrieved from memory in the user load balancing system 12.


By way of example only, short term thresholds monitored by the user load balancing system 12 include: processor utilization at greater than 90% for longer than five minutes; RAM utilization at greater than 90% for longer than 20 minutes; network load higher than 90% for longer than five minutes; and I/O read/writes at greater than 90% of capacity for longer than 30 minutes, although these are exemplary and again other types and numbers of short term threshold trends can be monitored. Additionally, by way of example only, long term thresholds monitored by the user load balancing system 12 include: processor load greater than 90% average for more than 180 minutes; disk space utilization greater than 95%; RAM utilization at greater than 90% average for more than 300 minutes; and I/O reads/writes in excess of 85% of capacity for more than 120 minutes, although these are exemplary and again other types and numbers of long term threshold trends can be monitored.
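
The sketch below shows one way such short and long term rules could be evaluated against the example figures above; the per-minute sampling of readings and the rule encoding are assumptions, not details given in the patent.

```python
SHORT_TERM_RULES = [            # (metric, limit %, minutes over limit)
    ("cpu_pct", 90.0, 5),
    ("ram_pct", 90.0, 20),
    ("net_pct", 90.0, 5),
    ("io_pct",  90.0, 30),
]
LONG_TERM_RULES = [
    ("cpu_pct",  90.0, 180),
    ("disk_pct", 95.0, 1),      # disk space fires on a single reading
    ("ram_pct",  90.0, 300),
    ("io_pct",   85.0, 120),
]

def rules_fired(history, rules):
    """history maps metric -> per-minute readings, newest reading last.
    A rule fires only if its limit was exceeded for the whole window,
    which is what separates persistent usage from a temporary spike."""
    fired = []
    for metric, limit, minutes in rules:
        window = history.get(metric, [])[-minutes:]
        if len(window) >= minutes and all(r > limit for r in window):
            fired.append((metric, limit, minutes))
    return fired
```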


In step 32, the user load balancing system 12 determines if at least one of the thresholds has been exceeded, although other numbers and types of thresholds could be established and checked. If none of the thresholds has been exceeded, then the No branch is taken back to step 30 where the overall load on each of the storage servers 16(1)-16(n) and on each of the application servers 17(1)-17(n) is monitored by the user load balancing system 12. If one or more of the thresholds has been exceeded, then the one or more overall loads which exceeded the threshold are identified and the Yes branch is taken to step 34.


In step 34, the user load balancing system 12 determines which of the one or more user loads in the identified overall load or loads in one or more of the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) to move to one or more of the other storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) to even out the overall load among all of the available storage servers 16(1)-16(n) and/or application servers 17(1)-17(n), although the user load balancing system 12 can use other manners for identifying which of the user loads to move.
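
A greedy selection is one plausible reading of this step: keep picking user loads off the overloaded server until its projected load falls to the average across all available servers. The sketch below makes that concrete; the patent does not mandate this particular strategy.

```python
def pick_loads_to_move(user_loads, server_load, average_load):
    """user_loads: {user_id: load_size} on the overloaded server.
    Returns the user ids whose removal evens this server out."""
    excess = server_load - average_load
    chosen = []
    # Take the largest loads first so as few users as possible are moved.
    for user_id, size in sorted(user_loads.items(),
                                key=lambda kv: kv[1], reverse=True):
        if excess <= 0:
            break
        chosen.append(user_id)
        excess -= size
    return chosen
```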


In step 36, the user load balancing system 12 determines an order for moving the user loads identified for transfer in step 34 based on one or more characteristics. By way of example only, these characteristics include identifying which of the user loads: is currently inactive; caused the identified overall load in at least one of the storage servers or application servers to exceed the threshold; is the largest in the identified overall load; and has the last login date to an application, although other numbers and types of characteristics could be used for determining the order of transfer, such as identifying the services users are using. The particular parameters for determining the order based on these characteristics can be set by the administrator. By way of example only: the user load not currently logged on could be moved first; a single user load which caused the threshold to be exceeded could be moved first; the largest user load could be moved first; or the user load with the oldest login date could be moved first.
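
The following sketch folds the four example characteristics into a single sort key; the UserLoad record and the precedence shown are illustrative, since the text presents these as alternative, administrator-set policies rather than one fixed ordering.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserLoad:
    user_id: str
    size_gb: float
    active: bool           # is the user currently logged on?
    caused_breach: bool    # did this load push the server over a threshold?
    last_login: datetime

def transfer_order(loads):
    """Inactive users first, then breach-causers, then the largest load,
    then the stalest login date."""
    return sorted(loads, key=lambda u: (u.active,
                                        not u.caused_breach,
                                        -u.size_gb,
                                        u.last_login))
```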


Next, in step 38 the user load balancing system 12 determines which of the one or more other storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) to move each of the determined one or more user loads, although the user loads could be moved to other types of servers, such as application servers. By way of example only, the user load balancing system 12 may determine which of the other storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) has the lowest average utilization of at least one of processor space and memory space, although other utilizations could be examined, such as I/O reads, writes, and transactions per second, network, disk space, power, and applications. Additionally by way of example only the user load balancing system 12 may determine which of the one or more other storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) need to receive one or more of the user loads from the identified overall load in the at least one of the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) to balance the stored load between the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n). As part of this process, the user load balancing system 12 monitors for the addition of any storage server and/or application server that could be considered for transferring some of the user load or user loads. The user load balancing system 12 could actively scan for the addition of new storage servers and/or application servers, although other manners for identifying new storage servers and/or application servers could be used, such as by a notification input by an administrator.
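
A sketch of this destination choice, using the lowest average of processor and memory utilization mentioned above; the shape of the metrics dictionary is an assumption, and other utilizations could be averaged in the same way.

```python
def pick_destination(candidates, source):
    """candidates: {server: {"cpu_pct": x, "ram_pct": y}, ...}.
    Returns the server with the lowest average CPU/RAM utilization."""
    def avg_util(server):
        metrics = candidates[server]
        return (metrics["cpu_pct"] + metrics["ram_pct"]) / 2.0
    eligible = [s for s in candidates if s != source]
    if not eligible:
        raise RuntimeError("no destination server available for transfer")
    return min(eligible, key=avg_util)
```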


Next, in step 40 the user load balancing system 12 initiates the transfer of the one or more user loads which have been identified and the transfer takes place directly between the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n), although other manners for initiating the transfer and for transferring the user loads could be used.


In step 42, the user load balancing system 12 optionally determines if the transferred one or more user loads match identically with the original one or more user loads, although other types of verifications may be conducted. By way of example only, the user load balancing system 12 performs a checksum, although other methods for assuring an accurate transfer of data can be used or no check may be performed. If the one or more of the user loads have not been correctly copied, then the No branch is taken back to step 40 where the user load balancing system 12 transfers the one or more of the user loads again. If the one or more identified user loads have been correctly transferred, then the Yes branch is taken to step 44.
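
A checksum comparison like the one mentioned could look as follows; the assumption that a user load is stored as a directory tree of files is illustrative, and SHA-256 is just one suitable digest.

```python
import hashlib
import os

def user_load_checksum(root_dir):
    """Hash every file under root_dir in a stable, sorted order."""
    digest = hashlib.sha256()
    for dirpath, _dirnames, filenames in sorted(os.walk(root_dir)):
        for name in sorted(filenames):
            with open(os.path.join(dirpath, name), "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
    return digest.hexdigest()

def transfer_verified(source_dir, dest_dir):
    """Step 42: the copy is accepted only if both sides hash identically."""
    return user_load_checksum(source_dir) == user_load_checksum(dest_dir)
```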


In step 44, since stateful data, such as email, may continue to flow to the user load being transferred, following the transfer the user load balancing system 12 checks for any incremental updates in the original user loads which may have occurred following the transferring, although other methods for handling incremental updates can be used. For example, when a user load is designated for transfer, the user load could be placed in a queue and only a read only version would be accessible by a user until the transfer of the user load has been completed. Alternatively, during the transfer the system 10 could prevent access to the user load being transferred until the transfer is completed. Returning to step 44, if there are no incremental updates, then the No branch is taken to step 50. If there are incremental updates, then the Yes branch is taken to step 46. In step 46, the user load balancing system 12 transfers the incremental updates into the transferred user loads.
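
One way to realize this incremental-update pass (steps 44 and 46) is to re-scan the source for items modified since the bulk copy began and copy just those, as sketched below; modification-time detection is an assumption, and the read-only queue described above is an alternative the text also allows.

```python
import os
import shutil

def apply_incremental_updates(source_dir, dest_dir, transfer_started_at):
    """Copy files changed after transfer_started_at (a Unix timestamp),
    preserving their relative paths under dest_dir."""
    updated = []
    for dirpath, _dirnames, filenames in os.walk(source_dir):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > transfer_started_at:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(dest_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                updated.append(rel)
    # An empty list corresponds to the No branch out of step 44.
    return updated
```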


In step 48, the user load balancing system 12 determines if the incremental updates have been correctly transferred, although other types of verifications may be conducted. If the transfer is incorrect, then the No branch is taken back to step 46 where the user load balancing system 12 attempts the transfer of the incremental updates again. Additionally, the user load balancing system 12 may provide a notification that the transfer has failed, although the user load balancing system 12 can provide notification relating to the success or failure of other events or not provide any notification. Further, the user load balancing system 12 can include a history of transactions which have taken place that can be accessed by an administrator, if desired. If the transfer is correct, then the Yes branch is taken to step 50.


In step 50, the user load balancing system 12 updates the directory with the new address or addresses for the user loads. In step 52, the user load balancing system 12 deletes the original user loads in the one or more of the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) that exceeded the threshold. This frees up resources and should place the overall load in the identified one or more of the storage servers 16(1)-16(n) and/or application servers 17(1)-17(n) below the one or more thresholds. Following step 52, the user load balancing system 12 returns to step 30 for ongoing monitoring of the overall loads in storage servers 16(1)-16(n) and application servers 17(1)-17(n).
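
The final bookkeeping could be as simple as the sketch below; the directory mapping and the delete_original() callback are hypothetical. Repointing the directory before deleting the original means a lookup never resolves to a user load that no longer exists.

```python
def finish_transfer(directory, user_id, new_server, delete_original):
    """Steps 50 and 52: publish the new address, then free the old copy."""
    directory[user_id]["storage"] = new_server  # step 50: update directory
    delete_original(user_id)                    # step 52: reclaim resources
```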


Accordingly, the present invention provides an efficient system for balancing user loads comprising stateful data between servers. Additionally, the present invention is able to balance the user loads without requiring any downtime to move user loads between the servers. Further, the present invention is able to ensure the balancing of the user loads without loss of any of the user loads. Even further, as illustrated above the present invention is able to distinguish between temporary heavy usage which would not require intervention and more persistent heavy usage requiring user load balancing through the use of short and long term thresholds.


Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for user load balancing, the method comprising: identifying with a user load balancing system when an overall load of one or more user data loads in at least one of two or more storage servers exceeds one or more thresholds, wherein the one or more user data loads in at least one of the two or more storage servers comprises stateful data; determining with the user load balancing system whether the overall load of one or more user data loads exceeds the one or more thresholds temporarily or persistently; transferring with the user load balancing system one or more of the user data loads in the identified overall load in one of the storage servers to one or more of the other storage servers only when the one or more thresholds is determined to be exceeded persistently based upon the identifying with the user load balancing system, wherein the transferring further comprises placing the one or more user data loads to be transferred in storage which is read only accessible until the transferring one or more user data loads is completed; identifying with the user load balancing system any changes to the one or more transferred user data loads which occurred during the transferring; and updating with the user load balancing system the one or more transferred user data loads with the identified differences; and updating with the user load balancing system a directory with one or more new locations to the determined one or more other storage servers for each of the one or more transferred user data loads to be configured to respond to a query from an application.
  • 2. The method as set forth in claim 1 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage for a greater than a second period of time, and at least one of I/O read/writes and I/O transactions per second with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 3. The method as set forth in claim 1 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage average for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage average for a greater than a second period of time, and I/O read/writes with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 4. The method as set forth in claim 1 wherein the transferring further comprises: determining which of the one or more user data loads in the identified overall load to move to one or more of the other storage servers; determining which of the one or more other storage servers to move each of the determined one or more user data loads; and moving the determined one or more user data loads in the identified overall load in the at least one of the storage servers to the determined one or more other storage servers.
  • 5. The method as set forth in claim 4 wherein the moving the determined one or more user data loads further comprises: transferring the one or more determined user data loads to the determined one or more other storage servers; and obtaining a verification that each of the one or more transferred user data loads is identical to the corresponding one of the determined one or more user data loads.
  • 6. The method as set forth in claim 4 wherein the determining which of the one or more user data loads in the identified overall load in the at least one of the storage servers to move to one or more of the other storage servers further comprises: determining an order for moving the determined one or more user data loads based on one or more characteristics; and moving the determined one or more user data loads in accordance with the determined order.
  • 7. The method as set forth in claim 6 wherein the one or more characteristics comprise at least one of the user data loads which is currently inactive, which caused the identified overall load in the at least one of the storage servers to exceed the one or more thresholds, which is the largest in the identified overall load in the at least one storage server, and which has a last login date to an application.
  • 8. The method as set forth in claim 4 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the other storage servers has a lowest average utilization of at least one of processor space and memory space and moving the determined one or more user data loads to the determined storage server with the lowest average utilization of at least one of processor space and memory space.
  • 9. The method as set forth in claim 4 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the one or more other storage servers need to receive one or more of the user data loads from the identified overall load in the at least one of the storage servers to balance the stored load between the storage servers and moving the determined one or more user data loads to the determined one or more other storage servers to balance the stored load.
  • 10. The method as set forth in claim 4 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises identifying when another storage server has been added to the one or more other storage servers before the determining which of the one or more other storage servers to move each of the determined one or more user data loads.
  • 11. The method as set forth in claim 1 wherein the transferring further comprises deleting the transferred one or more of the user data loads in the identified overall load in the at least one of the storage servers after the transferring.
  • 12. A non-transitory computer readable medium having stored thereon instructions for user load balancing comprising machine executable code which when executed by at least one processor, causes the processor to perform steps comprising: identifying when an overall load of one or more user data loads in at least one of two or more storage servers exceeds one or more thresholds, wherein the one or more user data loads in at least one of the two or more storage servers comprises stateful data; determining whether the overall load of one or more user data loads exceeds the one or more thresholds temporarily or persistently; transferring one or more of the user data loads in the identified overall load in one of the storage servers to one or more of the other storage servers only when the one or more thresholds is determined to be exceeded persistently based upon the identifying, wherein the transferring further comprises placing the one or more user data loads to be transferred in storage which is read only accessible until the transferring one or more user data loads is completed; identifying any changes to the one or more transferred user data loads which occurred during the transferring; and updating the one or more transferred user data loads with the identified differences; and updating a directory with one or more new locations to the determined one or more other storage servers for each of the one or more transferred user data loads to be configured to respond to a query from an application.
  • 13. The medium as set forth in claim 12 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage for a greater than a second period of time, and at least one of I/O read/writes and I/O transactions per second with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 14. The medium as set forth in claim 12 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage average for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage average for a greater than a second period of time, and I/O read/writes with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 15. The medium as set forth in claim 12 wherein the transferring further comprises: determining which of the one or more user data loads in the identified overall load to move to one or more of the other storage servers; determining which of the one or more other storage servers to move each of the determined one or more user data loads; and moving the determined one or more user data loads in the identified overall load in the at least one of the storage servers to the determined one or more other storage servers.
  • 16. The medium as set forth in claim 15 wherein the moving the determined one or more user data loads further comprises: transferring the one or more determined user data loads to the determined one or more other storage servers; and obtaining a verification that each of the one or more transferred user data loads is identical to the corresponding one of the determined one or more user data loads.
  • 17. The medium as set forth in claim 15 wherein the determining which of the one or more user data loads in the identified overall load in the at least one of the storage servers to move to one or more of the other storage servers further comprises: determining an order for moving the determined one or more user data loads based on one or more characteristics; and moving the determined one or more user data loads in accordance with the determined order.
  • 18. The medium as set forth in claim 17 wherein the one or more characteristics comprise at least one of the user data loads which is currently inactive, which caused the identified overall load in the at least one of the storage servers to exceed the one or more thresholds, which is the largest in the identified overall load in the at least one storage server, and which has a last login date to an application.
  • 19. The medium as set forth in claim 15 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the other storage servers has a lowest average utilization of at least one of processor space and memory space and moving the determined one or more user data loads to the determined storage server with the lowest average utilization of at least one of processor space and memory space.
  • 20. The medium as set forth in claim 15 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the one or more other storage servers need to receive one or more of the user data loads from the identified overall load in the at least one of the storage servers to balance the stored load between the storage servers and moving the determined one or more user data loads to the determined one or more other storage servers to balance the stored load.
  • 21. The medium as set forth in claim 15 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises identifying when another storage server has been added to the one or more other storage servers before the determining which of the one or more other storage servers to move each of the determined one or more user data loads.
  • 22. The medium as set forth in claim 12 wherein the transferring further comprises deleting the transferred one or more of the user data loads in the identified overall load in the at least one of the storage servers after the transferring.
  • 23. A user load balancing apparatus comprising: one or more processors; a memory coupled to the one or more processors, the one or more processors configured to execute programmed instructions stored in the memory comprising: identifying when an overall load of one or more user data loads in at least one of two or more storage servers exceeds one or more thresholds, wherein the one or more user data loads in at least one of the two or more storage servers comprises stateful data; determining whether the overall load of one or more user data loads exceeds the one or more thresholds temporarily or persistently; transferring one or more of the user data loads in the identified overall load in one of the storage servers to one or more of the other storage servers only when the one or more thresholds is determined to be exceeded persistently based upon the identifying, wherein the transferring further comprises placing the one or more user data loads to be transferred in storage which is read only accessible until the transferring one or more user data loads is completed; identifying any changes to the one or more transferred user data loads which occurred during the transferring; and updating the one or more transferred user data loads with the identified differences; and updating a directory with one or more new locations to the determined one or more other storage servers for each of the one or more transferred user data loads to be configured to respond to a query from an application.
  • 24. The apparatus as set forth in claim 23 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage for a greater than a second period of time, and at least one of I/O read/writes and I/O transactions per second with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 25. The apparatus as set forth in claim 23 wherein the one or more thresholds comprise at least one of processor utilization with respect to one of the overall loads exceeding a first percentage average for greater than a first period of time, disk space utilization with respect to one of the overall loads exceeding a second percentage, RAM utilization with respect to one of the overall loads exceeding a third percentage average for a greater than a second period of time, and I/O read/writes with respect to one of the overall loads at greater than a fourth percentage for greater than a third period of time.
  • 26. The apparatus as set forth in claim 23 wherein the transferring further comprises: determining which of the one or more user data loads in the identified overall load to move to one or more of the other storage servers; determining which of the one or more other storage servers to move each of the determined one or more user data loads; and moving the determined one or more user data loads in the identified overall load in the at least one of the storage servers to the determined one or more other storage servers.
  • 27. The apparatus as set forth in claim 26 wherein the moving the determined one or more user data loads further comprises: transferring the one or more determined user data loads to the determined one or more other storage servers; and obtaining a verification that each of the one or more transferred user data loads is identical to the corresponding one of the determined one or more user data loads.
  • 28. The apparatus as set forth in claim 26 wherein the determining which of the one or more user data loads in the identified overall load in the at least one of the storage servers to move to one or more of the other storage servers further comprises: determining an order for moving the determined one or more user data loads based on one or more characteristics; and moving the determined one or more user data loads in accordance with the determined order.
  • 29. The apparatus as set forth in claim 28 wherein the one or more characteristics comprise at least one of the user data loads which is currently inactive, which caused the identified overall load in the at least one of the storage servers to exceed the one or more thresholds, which is the largest in the identified overall load in the at least one storage server, and which has a last login date to an application.
  • 30. The apparatus as set forth in claim 26 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the other storage servers has a lowest average utilization of at least one of processor space and memory space and moving the determined one or more user data loads to the determined storage server with the lowest average utilization of at least one of processor space and memory space.
  • 31. The apparatus as set forth in claim 26 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises determining which of the one or more other storage servers need to receive one or more of the user data loads from the identified overall load in the at least one of the storage servers to balance the stored load between the storage servers and moving the determined one or more user data loads to the determined one or more other storage servers to balance the stored load.
  • 32. The apparatus as set forth in claim 26 wherein the determining which of the one or more other storage servers to move each of the determined one or more user data loads further comprises identifying when another storage server has been added to the one or more other storage servers before the determining which of the one or more other storage servers to move each of the determined one or more user data loads.
  • 33. The apparatus as set forth in claim 23 wherein the transferring further comprises deleting the transferred one or more of the user data loads in the identified overall load in the at least one of the storage servers after the transferring.
Related Publications (1)
Number Date Country
20070260732 A1 Nov 2007 US