Preferential loading in data centers

Information

  • Patent Grant
  • Patent Number
    11,683,263
  • Date Filed
    Monday, June 21, 2021
  • Date Issued
    Tuesday, June 20, 2023
Abstract
A method comprises receiving, at a system from an application server, a request for a service, the system comprising two or more global session databases, and the request associated with a session; identifying among the two or more global session databases, a first global session database to fulfill the request based on a criteria; storing, by the application server, the session at the first global session database; and transmitting, by the session, data associated with the request for the service in accordance with a configuration table. Additional methods, systems, and non-transitory computer-readable media or computer program products provide similar or alternative functionality.
Description
COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


BACKGROUND

Embodiments disclosed and taught herein relate generally to data centers and, more specifically, to methods and systems for containing a load within a data center.


A data center, in general, is a physical location or facility that houses computing systems for a particular business, industry, governmental entity, or other organization. The computing systems may include, for example, one or more server cells or clusters that perform various functions for the organization. Examples of such functions may include Web site hosting, information storing, business processing, and the like. Other computing systems may also be housed in the data center for performing still other functions. Clients may then access the organization, typically via the Internet for external clients and an intranet for internal clients, to obtain services from the data center.


When one or more computing systems or the entire data center fails or becomes unavailable, service may be disrupted, and the organization's image or brand may suffer. Therefore, many organizations provide redundant computing systems in a second data center that is connected to the first data center over a high-speed network. The second data center typically operates simultaneously with the first data center such that clients may obtain the same services through either data center. The particular data center a client may be assigned to upon accessing the organization may follow a random, round robin, or other suitable process. Thereafter, if one or more computing systems in one data center is unavailable, the client may be automatically redirected to computing systems in the other data center (i.e., a failover situation). This “always-on” or high-availability approach allows the organization to maintain substantially continuous and uninterrupted service to clients.


But simply having multiple data centers may not be sufficient if there are performance issues with the data centers. For example, data centers are typically located far enough away from one another so that a catastrophic event (e.g., fire, explosion, chemical spill, etc.) at one data center does not take down another data center. However, this physical separation may also introduce unnecessary latency, for example, when a client initially assigned to one data center is routed to another data center for service during normal operation (i.e., a non-failover situation). Such routing of service between data centers may additionally increase the potential for disruption of service should either data center fail, relative to routing that is confined to one data center.


Accordingly, what is needed is a way to provide services to clients that optimizes performance during normal operation of the data centers and minimizes potential for disruption of service should one of the data centers fail. More specifically, what is needed is a way to minimize the probability that a client assigned to one data center is unnecessarily routed to another data center for service during normal operation.


SUMMARY

The disclosed embodiments are directed to an autonomous system for managing global sessions on behalf of clients or users. In an embodiment, a method comprises receiving, at a system from an application server, a request for a service, the system comprising two or more global session databases, and the request associated with a session; identifying among the two or more global session databases, a first global session database to fulfill the request based on a criteria; storing, by the application server, the session at the first global session database; and transmitting, by the session, data associated with the request for the service in accordance with a configuration table.


Additional methods, systems, and non-transitory computer-readable media or computer program products are disclosed elsewhere herein for similar, complementary, or alternative aspects. The summary should not be deemed limiting in this regard, and the claims or applications claiming the benefit of this application may pursue scope commensurate with the entirety of the disclosure, both by way of explicit discussion and based on what would be understood by one of ordinary skill in the art upon review of this disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and other advantages of the disclosed embodiments will become apparent from the following detailed description and upon reference to the drawings, wherein:



FIG. 1 illustrates an exemplary computing system that may be used to implement various aspects of the autonomous intranet system of the disclosed embodiments;



FIG. 2 illustrates an exemplary autonomous intranet system for providing preferential loading in multiple data centers according to disclosed embodiments;



FIG. 3 illustrates an exemplary autonomous intranet system for providing preferential loading of business servers according to disclosed embodiments;



FIG. 4 illustrates another exemplary autonomous intranet system for providing preferential loading of business servers according to disclosed embodiments;



FIG. 5 illustrates an exemplary autonomous intranet system for providing preferential loading of application servers according to disclosed embodiments; and



FIG. 6 illustrates an exemplary autonomous intranet system for providing preferential loading of global session databases according to disclosed embodiments.





DETAILED DESCRIPTION

The figures described above and the written description of specific structures and functions provided herein are not presented to limit the scope of what Applicants have invented or the scope of the appended claims. Rather, the figures and written description are provided to teach any person skilled in the art to make and use that for which patent protection is sought. Those skilled in the art will appreciate that not all features of a commercial embodiment are described or shown for the sake of clarity and understanding. Persons of skill in the art will also appreciate that the development of an actual commercial embodiment incorporating various aspects of this disclosure may require numerous implementation-specific decisions to achieve the developer's ultimate goal for the commercial embodiment. Such implementation-specific decisions may include, and likely are not limited to, compliance with system-related, business-related, government-related standards and other constraints, which may vary over time by location and specific implementation. While a developer's efforts might be complex and time-consuming in an absolute sense, such efforts would be, nevertheless, a routine undertaking for those of skill in this art having benefit of this disclosure. It must be understood that the embodiments disclosed and taught herein are susceptible to numerous and various modifications and alternative forms. Also, the use of a singular term, such as, but not limited to, “a,” is not intended as limiting of the number of items. Furthermore, the use of relational terms, such as, but not limited to, “top,” “bottom,” “left,” “right,” “upper,” “lower,” “down,” “up,” “side,” and the like are used in the written description for clarity in specific reference (e.g., to the figures) and are not intended to limit the scope of embodiments or the appended claims.


Particular embodiments may be described below with reference to block diagrams and/or operational illustrations of methods. It will be understood that each block of the block diagrams and/or operational illustrations, and combinations of blocks in the block diagrams and/or operational illustrations, may be implemented by analog and/or digital hardware, and/or computer program instructions. Such computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, ASIC (application specific integrated circuit), and/or other programmable data processing system. The executed instructions may create structures and functions for implementing the actions specified in the block diagrams and/or operational illustrations. In some alternate implementations, the functions/actions/structures noted in the figures may occur out of the order noted in the block diagrams and/or operational illustrations. For example, two operations shown as occurring in succession, in fact, may be executed substantially concurrently or the operations may be executed in the reverse order, depending upon the functionality/acts/structure involved.


Computer programs for use with or by the embodiments disclosed herein may be written in an object oriented programming language, conventional procedural programming language, or lower-level code, such as assembly language and/or microcode. The program may be executed entirely on a single processor and/or across multiple processors, as a stand-alone software package or as part of another software package.


As mentioned above, the disclosed embodiments provide high-availability data centers that minimize the probability of a client assigned to one data center being unnecessarily routed to another data center for service during normal operation. In general, the data centers of the disclosed embodiments implement a “stovepipe” philosophy in which processing activities related to fulfillment of a given service request are deliberately and specifically confined to a single data center unless required otherwise. Services that may be affected may include any service provided by the data centers, including, for example, banking services (e.g., a deposit, a withdrawal, etc.), insurance services (e.g., a premium quotation, a coverage change, etc.), investment services (e.g., a stock purchase, a stock sale, etc.), and the like.


In accordance with the disclosed embodiments, where a service request may be fulfilled by more than one data center, a preference may be provided for fulfilling the service request in the data center that originally received the service request. This confinement to one data center may help optimize performance for each data center during normal operation by minimizing latency to the extent the data centers are geographically separated. The confinement may also reduce the likelihood of service disruption should one of the data centers fail (i.e., compared to a service request that is being processed in more than one data center). As used herein, “service disruption” refers to any actual or perceived disruption in the service being requested, whether by a person or an application, and may include, for example, a dropped connection, a “page not found” error, slow or sluggish responses, and the like.


In one embodiment, systems and/or resources that are replicated in each data center under a common IP address may have service requests routed to them using routing tables that prefer one of the systems and/or resources over another. For example, the routing tables may prefer the system and/or resource that resides in the “local” data center, which is the data center where the service requests originated. Alternatively, the routing tables may prefer the system and/or resource that meets some other criteria, such as the one that is topographically closest, has the fastest processing capability, and the like. For systems and/or resources that are replicated in the data centers under different IP addresses, service requests may be routed using configuration tables that favor one of the systems and/or resources over another. As with the routing tables, the configuration tables may be more favorable to the system and/or resource that resides in the “local” data center, or they may be more favorable to the system and/or resource that meets some other criteria. In general, the system and/or resource that provides optimal performance based on one or more criteria, such as minimizing latency, failure potential, and the like, may be preferred in accordance with the disclosed embodiments. While this system and/or resource is typically the one that resides in the “local” data center, those having ordinary skill in the art will understand that the disclosed embodiments are not so limited.
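
By way of illustration only, the preference logic described above may be summarized as selecting, among replicated systems and/or resources, the candidate that best satisfies a configured criterion, and falling back to another candidate only when the preferred one is unavailable. The following Python sketch is a simplified, hypothetical rendering of that idea; the resource names, fields, and criteria labels are assumptions for illustration and not the claimed implementation:

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        data_center: str
        latency_ms: float      # measured or estimated round-trip time
        available: bool = True

    def prefer_resource(resources, local_dc, criteria="local"):
        """Pick the replicated resource that best satisfies the criterion.

        "local" prefers a resource in the data center that received the
        request; "latency" prefers the lowest-latency resource.  Unavailable
        resources are never selected.
        """
        candidates = [r for r in resources if r.available]
        if not candidates:
            raise RuntimeError("no available resource replica")
        if criteria == "local":
            local = [r for r in candidates if r.data_center == local_dc]
            return local[0] if local else min(candidates, key=lambda r: r.latency_ms)
        if criteria == "latency":
            return min(candidates, key=lambda r: r.latency_ms)
        raise ValueError("unknown criteria: " + criteria)

    if __name__ == "__main__":
        replicas = [
            Resource("business-server-a", "A", latency_ms=0.4),
            Resource("business-server-b", "B", latency_ms=8.0),
        ]
        # A request that originated in Data Center A stays in Data Center A.
        print(prefer_resource(replicas, local_dc="A").name)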


Referring now to FIG. 1, a computing system 100 is shown that may be used to implement various aspects of the high-availability data centers according to the disclosed embodiment. Such a computing system 100 may be a server, workstation, mainframe, and the like. As can be seen, the computing system 100 typically includes a bus 102 or other communication mechanism for communicating information and a processor 104 coupled with the bus 102 for processing information. The computing system 100 may also include a main memory 106, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 102 for storing computer-readable instructions to be executed by the processor 104. The main memory 106 may also be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor 104. The computing system 100 may further include a read-only memory (ROM) 108 or other static storage device coupled to the bus 102 for storing static information and instructions for the processor 104. A non-volatile computer-readable storage device 110, such as a magnetic, optical, or solid state device, may be coupled to the bus 102 for storing information and instructions for the processor 104.


The computing system 100 may be coupled via the bus 102 to a display 112, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a user. An input device 114, including, for example, a keyboard having alphanumeric and other keys, may be coupled to the bus 102 for communicating information and command selections to the processor 104. Another type of user input device may be a cursor control 116, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processor 104, and for controlling cursor movement on the display 112. The cursor control 116 typically has two degrees of freedom in two axes, a first axis (e.g., X axis) and a second axis (e.g., Y axis), that allow the device to specify positions in a plane.


The term “computer-readable instructions” as used above refers to any instructions that may be performed by the processor 104 and/or other components. Similarly, the term “computer-readable medium” refers to any storage medium that may be used to store the computer-readable instructions. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Transmission media may include coaxial cables, copper wire and fiber optics, including wires of the bus 102, while transmission may take the form of acoustic, light, or electromagnetic waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media may include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH EPROM, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.


The computing system 100 may also include a communication interface 118 coupled to the bus 102. The communication interface 118 typically provides a two way data communication coupling between the computing system 100 and a network. For example, the communication interface 118 may be an integrated services digital network (ISDN) card or a modem used to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface 118 may be a local area network (LAN) card used to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. Regardless of the specific implementation, the main function of the communication interface 118 is to send and receive electrical, electromagnetic, optical, or other signals that carry digital data streams representing various types of information.


Referring now to FIG. 2, the computing system 100 described above may be used to implement various aspects of an autonomous intranet system 200 according to the disclosed embodiments. An autonomous intranet system, in general, is a network or collection of networks under the control of a single organization, for example, where the organization sets the routing policy for the autonomous intranet system 200. The autonomous intranet system 200 may include an interior network 202, such as an Interior Gateway Protocol (IGP) network or other suitable network, to which data centers 204a and 204b (e.g., Data Center A and Data Center B, respectively) may be connected.


Each data center 204a and 204b may be a fully active and redundant version of the other data center 204b and 204a so that a given service request may be fulfilled by either data center 204a or 204b. Referring to Data Center A, each data center 204a and 204b may include a number of computing systems and/or resources, such as http (hypertext transfer protocol) servers 206a, application servers 208a, business servers 210a, and databases 212a. In general, the http servers 206a manage static content (e.g., graphics, images, etc.) on the organization's Web site, the application servers 208a receive and process service requests, the business servers 210a fulfill the service requests, and the databases 212a store data and information used by the organization. Data Center B may include the same or similar counterpart systems and/or resources (not expressly shown). These computing systems and/or resources are typically organized as cells or clusters in which multiple servers operate as a single virtual unit. Other types of computing systems and/or resources in addition to or instead of those mentioned above, such as file servers, email servers, print servers, and the like, may also be present in the data centers 204a and 204b.


Routers, some of which are shown at 214a and 214b, may be provided to route network traffic to/from the data centers 204a and 204b from/to the interior network 202. Within the interior network 202, additional routers may be provided, some of which are shown at 216a & 216c and 216b & 216d, to route the network traffic to/from its intended destination.


Various clients may then connect to the interior network 202 to obtain services from the data centers 204a and 204b. For example, internal clients 218a and 218b, which may be employees of the organization, may connect to the interior network 202 via routers 220a and 220b, respectively, to obtain services from the data centers 204a and 204b. Similarly, external clients 222a and 222b, which may be customers of the organization, may connect to the interior network 202 via access points 224a and 224b, respectively, to obtain services from the data centers 204a and 204b. The external clients 222a and 222b may be routed to the access points 224a and 224b over the Internet, for example, via the Border Gateway Protocol (BGP) or other suitable protocols known to those having ordinary skill in the art in light of the disclosure herein.


When a request for service from either the internal clients 218a and 218b or the external clients 222a and 222b arrives at the interior network 202, the service request is routed to one of the data centers 204a and 204b. The particular data center 204a or 204b that the service request is routed to may be selected using a random, round robin, or other suitable process known to those having ordinary skill in the art. In accordance with the disclosed embodiments, the autonomous intranet system 200 may be designed such that whichever data center 204a or 204b receives the service request, that data center 204a or 204b performs the activities related to fulfilling the service request unless there is a failover or similar event requiring routing to the other data center 204a or 204b.
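
The confinement behavior described above may be illustrated, purely as a hypothetical sketch, by pairing an initial assignment step (random, round robin, etc.) with fulfillment that stays in the assigned data center unless that data center is unhealthy; the health map and names below are assumptions for illustration:

    import itertools

    DATA_CENTERS = ["A", "B"]
    _round_robin = itertools.cycle(DATA_CENTERS)

    def assign_data_center():
        # Initial assignment may be random, round robin, or another process.
        return next(_round_robin)

    def fulfill(request, healthy):
        """Confine fulfillment to the assigned data center unless it failed."""
        if "assigned_dc" not in request:
            request["assigned_dc"] = assign_data_center()
        dc = request["assigned_dc"]
        if healthy.get(dc, False):
            return "fulfilled in Data Center " + dc
        # Failover: only now is the request routed to the other data center.
        for other in DATA_CENTERS:
            if other != dc and healthy.get(other, False):
                return "failed over to Data Center " + other
        raise RuntimeError("no data center available")

    if __name__ == "__main__":
        print(fulfill({"client": "internal-1"}, {"A": True, "B": True}))
        print(fulfill({"client": "external-2", "assigned_dc": "B"},
                      {"A": True, "B": False}))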


A specific example of the above “stovepipe” arrangement is illustrated in FIG. 3 where portions of exemplary data centers, Data Center A and Data Center B, are shown in more detail. As can be seen, an autonomous intranet system 300 may include an interior network 302 similar to the interior network 202 (see FIG. 2) to which application servers 304a of Data Center A may connect via a local subnet 306a and a router 308a. Application servers 304b of Data Center B may similarly connect to the interior network 302 via a local subnet 306b and a router 308b. Routers 310a & 310c and 310b & 310d within the interior network 302 route network traffic from the application servers to business servers 312a and 312b in each of the Data Centers A and B, respectively.


The particular business servers 312a and 312b shown in this example are database servers for “active-active” databases (not expressly shown) in each of the Data Centers A and B. Such “active-active” databases update changes in each other in real time across the Data Centers A and B. Examples of databases that may be “active-active” include customer relationship management (CRM) databases that store customer-related information for the organization. A Layer 2 network 314 (see Open Systems Interconnection (OSI) model) may be used to provide high-speed connectivity between the databases, as compared to a Layer 3 or higher network, which requires processing that can delay network traffic between the databases. Local subnets 316a and 316b and routers 318a and 318b connect the business servers 312a and 312b to the interior network 302. Note that the subnets 306a and 316a in Data Center A may be the same subnet in some embodiments, or they may be different subnets. A similar situation may exist for the subnets 306b and 316b in Data Center B.


In an “active-active” database arrangement, both databases are immediately updated whenever there is a change in either database so that they are effectively a single database. As such, the business servers 312a and 312b that access these databases may be advertised on the interior network 302 under a common IP address (e.g., 1.1.1.1). This allows service requests that are sent to the common IP address to be fulfilled by either of the business servers 312a and 312b. Network monitors 320a and 320b may then be provided to check the status of the business servers 312a and 312b and advertise their availability (or lack thereof) to the interior network 302. The network monitors 320a and 320b may be, for example, Layer 4 router switches available from F5 Networks, Inc. of Seattle, Wash., that have a Route Health Injection (RHI) feature.
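
Conceptually, such a network monitor advertises the shared address into the routing domain only while its local server passes health checks, and withdraws it otherwise. The following simplified Python sketch illustrates that behavior in the abstract; it is not the F5 Route Health Injection implementation, and the probe, address, and names are assumptions:

    import socket

    SHARED_ADDRESS = "1.1.1.1"   # common address advertised by both data centers

    def server_is_healthy(host, port, timeout=1.0):
        """Crude health probe: can a TCP connection to the server be opened?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def update_advertisement(advertised_routes, local_server):
        """Advertise the shared address only while the local server is healthy,
        mimicking route-health-injection style behavior."""
        host, port = local_server
        if server_is_healthy(host, port):
            advertised_routes.add(SHARED_ADDRESS)       # inject/keep the route
        else:
            advertised_routes.discard(SHARED_ADDRESS)   # withdraw it so traffic shifts
        return advertised_routes

    if __name__ == "__main__":
        # With nothing listening locally, the probe fails and the shared
        # address is withdrawn (an empty advertisement set is printed).
        print(update_advertisement(set(), ("127.0.0.1", 9)))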


In accordance with the disclosed embodiments, the routers 310a & 310c and 310b & 310d of the interior network 302 may be provided with routing tables that have preferential routing for the business servers 312a and 312b. Routing tables are well known to those having ordinary skill in the art and will not be described in detail here. Suffice it to say, the preferential routing may be implemented in the form of a weight applied to one or more routes that a service request to the business servers 312a and 312b may traverse from the routers 310a & 310c and 310b & 310d. In some embodiments, the routing tables may have a higher preference for routes that send the service request to a business server 312a and 312b within a local Data Center A or B, whereas routes that send the service request to a business server 312a and 312b outside the local Data Center A or B may have a lower preference. This is illustrated in FIG. 3 via the relatively thicker lines between the routers 310a and 310c of Data Center A and similar lines between the routers 310b and 310d of Data Center B.


Alternatively, the routing tables of the routers 310a & 310c and 310b & 310d may prefer a business server 312a or 312b that meets one or more other criteria, such as the one that is topographically nearest (e.g., according to the Open Shortest Path First (OSPF) protocol), has the fastest processing capability, and the like.
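
For illustration, a weighted routing table of the kind described above may be sketched as a list of candidate routes to the common address in which a lower weight indicates a stronger preference; the table entries and names below are hypothetical:

    # Hypothetical routing table for Data Center A's interior routers: a lower
    # weight means a stronger preference, so the intra-data-center path to the
    # shared address is tried first.
    ROUTING_TABLE = [
        {"destination": "1.1.1.1", "next_hop": "business-servers-dc-a", "weight": 10},
        {"destination": "1.1.1.1", "next_hop": "business-servers-dc-b", "weight": 100},
    ]

    def select_route(table, destination, reachable):
        """Return the usable route with the strongest (lowest-weight) preference."""
        usable = [r for r in table
                  if r["destination"] == destination and r["next_hop"] in reachable]
        if not usable:
            raise RuntimeError("no route to " + destination)
        return min(usable, key=lambda r: r["weight"])

    if __name__ == "__main__":
        up = {"business-servers-dc-a", "business-servers-dc-b"}
        print(select_route(ROUTING_TABLE, "1.1.1.1", up)["next_hop"])  # local Data Center A
        up.remove("business-servers-dc-a")                             # local path fails
        print(select_route(ROUTING_TABLE, "1.1.1.1", up)["next_hop"])  # falls back to Data Center B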



FIG. 4 illustrates another example of the “stovepipe” philosophy of the disclosed embodiments. In FIG. 4, an autonomous intranet system 400 may include an interior network 402 similar to the interior network 202 (see FIG. 2) to which application servers 404a of one data center, Data Center A, may connect via a local subnet 406a and a router 408a. Application servers 404b of another data center, Data Center B, may similarly connect to the interior network 402 via a local subnet 406b and a router 408b. Routers 410a & 410c and 410b & 410d within the interior network 402 route network traffic from the application servers 404a and 404b to business servers 412a and 412b in each of the Data Centers A and B, respectively.


In accordance with the disclosed embodiments, the application servers 404a and 404b may have a persistent affinity for one of the business servers 412a and 412b upon receiving a service request for one of the business servers 412a and 412b, as described below.


The particular business servers 412a and 412b shown here may be backend servers for one of the organization's lines of business, such as its banking business, insurance business, investment business, credit card business, and the like. As in the example of FIG. 3, a request for service may be fulfilled by either of the business servers 412a and 412b. Unlike the example of FIG. 3, however, the business servers 412a and 412b have many different IP addresses. Therefore it is possible to direct the service request to a specific one of the business servers 412a and 412b. To this end, configuration tables for the business servers 412a and 412b may be provided in the application servers 404a and 404b, respectively, that have a persistent affinity for one of the business servers 412a and 412b. Configuration tables are well known to those having ordinary skill in the art and will not be described in detail here. The persistent affinity may then be implemented in the form of a listing that causes the application servers 404a and 404b to send the service request to a particular one of the business servers 412a and 412b before sending the request to the other one.


In some embodiments, the persistent affinity of the application servers 404a and 404b may be for the particular business server 412a and 412b within a local Data Center A or B. If the business server 412a or 412b within a local Data Center A or B is not available, then the application servers 404a and 404b may try the business server 412a or 412b outside the local Data Center A or B. Alternatively, the configuration tables of the application servers 404a and 404b may hold an affinity for a business server 412a or 412b that meets some other criteria, such as the one that is topographically nearest, has the fastest processing capability, and the like.
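
As a purely illustrative sketch of such a configuration table, the application server may hold an ordered listing of business servers, local first, and attempt each entry in order; the addresses and names below are assumptions rather than the claimed implementation:

    # Hypothetical configuration table held by an application server in Data
    # Center A: business servers are listed in preference order, local first.
    BUSINESS_SERVER_CONFIG = [
        {"name": "business-server-412a", "address": "10.1.0.10", "data_center": "A"},
        {"name": "business-server-412b", "address": "10.2.0.10", "data_center": "B"},
    ]

    def send_with_affinity(config, send):
        """Try each configured business server in listed order; fall back to
        the next entry only if the preferred server cannot take the request."""
        errors = []
        for entry in config:
            try:
                return send(entry["address"])
            except ConnectionError as exc:
                errors.append(entry["name"] + ": " + str(exc))
        raise RuntimeError("all business servers unavailable: " + "; ".join(errors))

    if __name__ == "__main__":
        def send(address):
            if address.startswith("10.1."):
                raise ConnectionError("local business server down")
            return "request handled by " + address
        print(send_with_affinity(BUSINESS_SERVER_CONFIG, send))

The same ordered-fallback pattern applies one tier up, where http servers prefer a local application server (FIG. 5), and to session storage, where application servers prefer a local global session database (FIG. 6).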



FIG. 5 illustrates yet another example of the “stovepipe” philosophy of the disclosed embodiments. In FIG. 5, an autonomous intranet system 500 may include an interior network 502 similar to the interior network 202 (see FIG. 2) to which http servers 504a of one data center, Data Center A, may connect via a local subnet 506a and a router 508a. Http servers 504b of another data center, Data Center B, may similarly connect to the interior network 502 via a local subnet 506b and a router 508b. Routers 510a & 510c and 510b & 510d within the interior network 502 route network traffic from the http servers 504a and 504b to application servers 512a and 512b in each of the Data Centers A and B, respectively.


In accordance with the disclosed embodiments, the http servers 504a and 504b may have a persistent affinity for one of the application servers 512a and 512b upon receiving a service request that needs to be forwarded to the application servers 512a or 512b. To this end, configuration tables for the application servers 512a and 512b may be provided in the http servers 504a and 504b, respectively, that cause the http servers 504a and 504b to send the service request to a particular one of the application servers 512a and 512b before sending the request to the other one.


In some embodiments, the persistent affinity of the http servers 504a and 504b may be for the particular application server 512a or 512b within a local Data Center A or B. If the application server 512a or 512b within a local Data Center A or B is not available, then the http servers 504a and 504b may try the application server 512a or 512b outside the local Data Center A or B. Alternatively, the configuration tables of the http servers 504a and 504b may have an affinity for an application server 512a or 512b that meets some other criteria, such as the one that is topographically nearest, has the fastest processing capability, and the like.



FIG. 6 illustrates still another example of the “stovepipe” philosophy of the disclosed embodiments. In FIG. 6, an autonomous intranet system 600 may include an interior network 602 similar to the interior network 202 (see FIG. 2) to which application servers 604a of one data center, Data Center A, may connect via a local subnet 606a and a router 608a. Application servers 604b of another data center, Data Center B, may similarly connect to the interior network 602 via a local subnet 606b and a router 608b. Routers 610a & 610c and 610b & 610d within the interior network 602 route network traffic from the application servers 604a and 604b to global session databases 612a and 612b in each of the Data Centers A and B, respectively.


The global session databases 612a and 612b basically store administrative information about a person's access session when the person accesses the organization's Web site. Such information may include, for example, the username, identity, and other security credentials of the person. Such information allows a person to navigate various areas of the Web site without having to reenter his/her security credentials at each area. The information may also track which Web pages the person visited on the Web site, the person's activities on the Web site, and the like.
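
For illustration only, the administrative information described above might be organized as a record similar to the following Python sketch; the field names are assumptions rather than a required schema:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class GlobalSessionRecord:
        """Illustrative shape of the administrative information a global
        session database might hold for one access session."""
        session_id: str
        username: str
        security_credentials: dict                    # e.g., token or assertion data
        pages_visited: list = field(default_factory=list)
        activities: list = field(default_factory=list)
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))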


In accordance with the disclosed embodiments, the application servers 604a and 604b may have a persistent affinity for one of the global session databases 612a and 612b upon opening of a new session that needs to be stored to the global session databases 612a and 612b. To this end, configuration tables for the global session databases 612a and 612b may be provided in the application servers 604a and 604b, respectively, that cause the application servers 604a and 604b to store the session to a particular one of the global session databases 612a and 612b before sending the request to the other one.


In some embodiments, the persistent affinity of the application servers 604a and 604b may be for the particular global session database 612a or 612b within a local Data Center A or B. If the global session database 612a or 612b within a local Data Center A or B is not available, then the application servers 604a or 604b may try the global session database 612a or 612b outside the local Data Center A or B. Alternatively, the configuration tables of the application servers 604a and 604b may have an affinity for a global session database 612a or 612b that meets one or more other criteria, such as the one that is topographically nearest, has the fastest processing capability, and the like.
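
As a final illustrative sketch, the session-storage affinity described above may be rendered as an ordered configuration table of global session databases, with the local database listed first and the remote database used only on failure; the names and addresses below are hypothetical:

    # Hypothetical configuration table in a Data Center A application server:
    # global session databases in priority order, local database first.
    SESSION_DB_CONFIG = [
        {"name": "gsdb-612a", "dsn": "sessiondb-a.example.internal", "data_center": "A"},
        {"name": "gsdb-612b", "dsn": "sessiondb-b.example.internal", "data_center": "B"},
    ]

    def store_session(session_id, record, write):
        """Store the session at the first (preferred) global session database
        that accepts the write; fall back to the next entry only on failure."""
        for entry in SESSION_DB_CONFIG:
            try:
                write(entry["dsn"], session_id, record)
                return entry["name"]
            except ConnectionError:
                continue
        raise RuntimeError("no global session database available")

    if __name__ == "__main__":
        def write(dsn, session_id, record):
            if "sessiondb-a" in dsn:
                raise ConnectionError("local global session database unreachable")
            print("stored " + session_id + " at " + dsn)
        print(store_session("sess-42", {"username": "example"}, write))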


Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results may be substituted for the specific embodiments disclosed. This disclosure is intended to cover adaptations or variations of various embodiments. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description herein. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.


Furthermore, various features in the foregoing are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may also be found in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims
  • 1. A method comprising: receiving, at a system from an application server, a request for a service, wherein the request is associated with a user access session, and wherein the system comprises two or more global session databases configured to store administrative information about a user access session; identifying among the two or more global session databases, using a configuration table configured to identify a priority among the two or more global session databases, a first global session database to fulfill the request based on a criteria; storing, by the application server, the administrative information about the user access session at the first global session database; and transmitting, by the session, data associated with the request for the service in accordance with the configuration table.
  • 2. The method of claim 1, wherein the application server is configured to connect to an interior network via a subnet.
  • 3. The method of claim 2, wherein the subnet connects to the interior network via a router.
  • 4. The method of claim 2, wherein the application server is configured to connect to at least the first global session database via the subnet.
  • 5. The method of claim 1, wherein the administrative information about the user includes security credentials associated with the user.
  • 6. The method of claim 1, wherein the configuration table priority identifies that the application server has a persistent affinity for the first global session database.
  • 7. A computer system comprising: a system server configured to receive, from an application server, a request for a service, wherein the request is associated with a user access session; and two or more global session databases configured to store administrative information about a user access session, wherein the server is configured to identify among the two or more global session databases, using a configuration table configured to identify a priority among the two or more global session databases, a first global session database to fulfill the request based on a criteria, and wherein the first global session database is configured to store the administrative information about the user access session, and wherein data associated with the request for the service is transmitted by the session in accordance with the configuration table.
  • 8. The computer system of claim 7, wherein the application server is configured to connect to an interior network via a subnet.
  • 9. The computer system of claim 8, wherein the subnet connects to the interior network via a router.
  • 10. The computer system of claim 8, wherein the application server is configured to connect to at least the first global session database via the subnet.
  • 11. The computer system of claim 7, wherein the administrative information about the user includes security credentials associated with the user.
  • 12. The computer system of claim 7, wherein the application server has a persistent affinity for the first global session database.
  • 13. A non-transitory computer-readable medium storing instructions that when executed by a processor are configured to effectuate: receiving, at a system from an application server, a request for a service, wherein the request is associated with a user access session, and wherein the system comprises two or more global session databases configured to store administrative information about a user access session; identifying among the two or more global session databases, using a configuration table configured to identify a priority among the two or more global session databases, a first global session database to fulfill the request based on a criteria; storing, by the application server, the administrative information about the user access session at the first global session database; and transmitting, by the session, data associated with the request for the service in accordance with the configuration table.
  • 14. The non-transitory computer-readable medium of claim 13, wherein the configuration table priority identifies that the application server is configured to connect to an interior network via a subnet.
  • 15. The non-transitory computer-readable medium of claim 14, wherein the subnet connects to the interior network via a router.
  • 16. The non-transitory computer-readable medium of claim 14, wherein the application server is configured to connect to at least the first global session database via the subnet.
  • 17. The non-transitory computer-readable medium of claim 13, wherein the administrative information about the user includes security credentials associated with the user.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/268,304, filed Feb. 5, 2019, which is a continuation of U.S. patent application Ser. No. 15/359,547, filed Nov. 22, 2016, now U.S. Pat. No. 10,243,843 issued Mar. 26, 2019, entitled PREFERENTIAL LOADING IN DATA CENTERS, which is a continuation application of U.S. patent application Ser. No. 14/257,646, filed Apr. 21, 2014, now U.S. Pat. No. 9,503,530 issued Nov. 22, 2016, entitled PREFERENTIAL LOADING IN DATA CENTERS, which is a continuation application of U.S. patent application Ser. No. 12/196,275, filed Aug. 21, 2008, now U.S. Pat. No. 8,706,878 issued Apr. 22, 2014 each of which is incorporated herein in its entirety. This application is related in subject matter to, and incorporates herein by reference in its entirety, each of the following: U.S. application Ser. No. 12/196,276 entitled “PREFERENTIAL LOADING IN DATA CENTERS,” filed Aug. 21, 2008, now abandoned; and U.S. application Ser. No. 12/196,277 entitled “PREFERENTIAL LOADING IN DATA CENTERS,” filed Aug. 21, 2008, now abandoned. This application is further related in subject matter to, and incorporates herein by reference in its entirety, each of the following U.S. patent applications: application Ser. No. 11/533,248 entitled HIGH-AVAILABILITY DATA CENTER, filed Sep. 19, 2006, now U.S. Pat. No. 7,685,465; application Ser. No. 11/533,262 entitled HIGH-AVAILABILITY DATA CENTER, filed Sep. 19, 2006, now U.S. Pat. No. 7,680,148; and application Ser. No. 11/533,272 entitled HIGH-AVAILABILITY DATA CENTER, filed Sep. 19, 2006, now U.S. Pat. No. 7,747,898. This application is further related in subject matter to, and incorporates herein by reference in its entirety, each of the following U.S. patent applications: application Ser. No. 12/188,187 entitled “SYSTEMS AND METHODS FOR NON-SPECIFIC ADDRESS ROUTING”, filed Aug. 7, 2008 now U.S. Pat. No. 8,171,111; application Ser. No. 12/188,188 entitled “SYSTEMS AND METHODS FOR NON-SPECIFIC ADDRESS ROUTING”, filed Aug. 7, 2008, now abandoned; and application Ser. No. 12/188,190 entitled “SYSTEMS AND METHODS FOR NON-SPECIFIC ADDRESS ROUTING”, filed Aug. 7, 2008, now abandoned. This application is further related in subject matter to, and incorporates herein by reference in its entirety, each of the following U.S. patent applications: application Ser. No. 12/191,979, entitled “SYSTEMS AND METHODS FOR DATA CENTER LOAD BALANCING”, filed Aug. 14, 2008, now abandoned; application Ser. No. 12/191,985 entitled “SYSTEMS AND METHODS FOR DATA CENTER LOAD BALANCING”, filed Aug. 14, 2008, now abandoned; and application Ser. No. 12/191,993 entitled “SYSTEMS AND METHODS FOR DATA CENTER LOAD BALANCING”, filed Aug. 14, 2008, now U.S. Pat. No. 8,243,589.

US Referenced Citations (144)
Number Name Date Kind
5329531 Diepstraten et al. Jul 1994 A
5452447 Nelson et al. Sep 1995 A
5594863 Stiles Jan 1997 A
5611049 Pitts Mar 1997 A
5634122 Loucks et al. May 1997 A
5689706 Rao et al. Nov 1997 A
5706435 Barbara et al. Jan 1998 A
5717897 McCrory Feb 1998 A
5740370 Battersby et al. Apr 1998 A
5805809 Singh et al. Sep 1998 A
5864837 Maimone Jan 1999 A
5878218 Maddalozzo, Jr. et al. Mar 1999 A
5881229 Singh et al. Mar 1999 A
6012085 Yohe et al. Jan 2000 A
6049874 McClain et al. Apr 2000 A
6076108 Courts Jun 2000 A
6119151 Cantrell et al. Sep 2000 A
6122629 Walker et al. Sep 2000 A
6134673 Chrabaszcz Oct 2000 A
6185695 Murphy et al. Feb 2001 B1
6243760 Armbruster et al. Jun 2001 B1
6292832 Shah et al. Sep 2001 B1
6366952 Pitts Apr 2002 B2
6377993 Brandt et al. Apr 2002 B1
6377996 Lumelsky et al. Apr 2002 B1
6397307 Ohran May 2002 B2
6415323 McCanne et al. Jul 2002 B1
6453404 Bereznyi et al. Sep 2002 B1
6505241 Pitts Jan 2003 B2
6577609 Sharony Jun 2003 B2
6587921 Chiu et al. Jul 2003 B2
6597956 Aziz et al. Jul 2003 B1
6601084 Bhaskaran et al. Jul 2003 B1
6609183 Ohran Aug 2003 B2
6694447 Leach et al. Feb 2004 B1
6728896 Forbes et al. Apr 2004 B1
6760861 Fukuhara et al. Jul 2004 B2
6816905 Sheets et al. Nov 2004 B1
6816980 Fukuhara et al. Nov 2004 B1
6842774 Piccioni Jan 2005 B1
6944676 Armbruster et al. Sep 2005 B1
6944788 Dinker et al. Sep 2005 B2
6973033 Chiu et al. Dec 2005 B1
7020132 Narasimhan et al. Mar 2006 B1
7039709 Beadle et al. May 2006 B1
7103617 Phatak Sep 2006 B2
7127638 Sardella et al. Oct 2006 B1
7272613 Sim et al. Sep 2007 B2
7284055 Oehrke et al. Oct 2007 B1
7434087 Singh Oct 2008 B1
7474898 Yamazaki Jan 2009 B2
7490164 Srivastava Feb 2009 B2
7512702 Srivastava et al. Mar 2009 B1
7567504 Darling et al. Jul 2009 B2
7600148 Shaw et al. Oct 2009 B1
7609619 Naseh et al. Oct 2009 B2
7680148 Nishibayashi et al. Mar 2010 B2
7685465 Shaw et al. Mar 2010 B1
7697416 Shand et al. Apr 2010 B2
7710865 Naseh et al. May 2010 B2
7734787 Huff Jun 2010 B2
7747898 Shaw et al. Jun 2010 B1
7769886 Naseh et al. Aug 2010 B2
7783777 Pabla et al. Aug 2010 B1
7961625 Raciborski et al. Jun 2011 B2
8141164 Kamath et al. Mar 2012 B2
8166197 Hoynowski et al. Apr 2012 B2
8171111 Niedzielski et al. May 2012 B1
8243589 Trost et al. Aug 2012 B1
8312120 Ram et al. Nov 2012 B2
8706878 Niedzielski et al. Apr 2014 B1
8959523 Patil et al. Feb 2015 B2
9503530 Niedzielski Nov 2016 B1
9553809 Sorenson, III et al. Jan 2017 B2
9621468 Sorenson, III Apr 2017 B1
20010011300 Pitts Aug 2001 A1
20010016896 Pitts Aug 2001 A1
20010047482 Harris et al. Nov 2001 A1
20010049741 Skene et al. Dec 2001 A1
20010052058 Ohran Dec 2001 A1
20020042818 Helmer et al. Apr 2002 A1
20020049841 Johnson et al. Apr 2002 A1
20020083111 Row et al. Jun 2002 A1
20020107977 Dunshea et al. Aug 2002 A1
20020112087 Berg Aug 2002 A1
20020141343 Bays Oct 2002 A1
20020144068 Ohran Oct 2002 A1
20020165944 Wisner et al. Nov 2002 A1
20020194324 Guha Dec 2002 A1
20030009707 Pedone et al. Jan 2003 A1
20030014526 Pullara et al. Jan 2003 A1
20030067890 Goel et al. Apr 2003 A1
20030135640 Ho et al. Jul 2003 A1
20030169769 Ho et al. Sep 2003 A1
20030171977 Singh et al. Sep 2003 A1
20030214930 Fischer Nov 2003 A1
20030220990 Narayanan et al. Nov 2003 A1
20040260745 Gage et al. Dec 2004 A1
20040260768 Mizuno Dec 2004 A1
20050010653 McCanne Jan 2005 A1
20050071469 McCollom et al. Mar 2005 A1
20050135284 Nanda et al. Jun 2005 A1
20050144292 Banga et al. Jun 2005 A1
20050157715 Hiddink et al. Jul 2005 A1
20050165950 Takagi et al. Jul 2005 A1
20050195858 Nishibayashi et al. Sep 2005 A1
20050220145 Nishibayashi et al. Oct 2005 A1
20050238016 Nishibayashi et al. Oct 2005 A1
20050265302 Nishibayashi et al. Dec 2005 A1
20060036761 Amra et al. Feb 2006 A1
20060036895 Henrickson Feb 2006 A1
20060064478 Sirkin Mar 2006 A1
20060075279 Cameras et al. Apr 2006 A1
20060083233 Nishibayashi et al. Apr 2006 A1
20060092871 Nishibayashi et al. May 2006 A1
20060167883 Boukobza Jul 2006 A1
20060190602 Canali et al. Aug 2006 A1
20060193247 Naseh et al. Aug 2006 A1
20060193252 Naseh et al. Aug 2006 A1
20060195607 Naseh et al. Aug 2006 A1
20060210663 Castillo Sep 2006 A1
20060230122 Sutou et al. Oct 2006 A1
20060259815 Graham et al. Nov 2006 A1
20070006015 Rao et al. Jan 2007 A1
20070014237 Nishibayashi et al. Jan 2007 A1
20070168690 Ross Jul 2007 A1
20070174660 Peddada Jul 2007 A1
20070208852 Wexler et al. Sep 2007 A1
20070255916 Hiraiwa et al. Nov 2007 A1
20080005349 Li et al. Jan 2008 A1
20080043622 Kamath et al. Feb 2008 A1
20080072226 Armes et al. Mar 2008 A1
20080077981 Meyer et al. Mar 2008 A1
20080140844 Halpern Jun 2008 A1
20090043805 Masonis et al. Feb 2009 A1
20090164646 Christian Jun 2009 A1
20090172192 Christian et al. Jul 2009 A1
20090201800 Naseh et al. Aug 2009 A1
20090217083 Hatasaki et al. Aug 2009 A1
20090259768 McGrath et al. Oct 2009 A1
20100076930 Vosshall et al. Mar 2010 A1
20110231888 Sequeira Sep 2011 A1
20130054813 Bercovici et al. Feb 2013 A1
20190297083 Li Sep 2019 A1
Foreign Referenced Citations (12)
Number Date Country
0540387 May 1993 EP
0932319 Jul 1999 EP
200022618 Jan 2000 JP
200341202 Dec 2000 JP
2001168784 Jun 2001 JP
2002026800 Jan 2002 JP
2002152114 May 2002 JP
2002314546 Oct 2002 JP
2004536502 Dec 2004 JP
2005184839 Jul 2005 JP
02089413 Nov 2002 WO
2005083951 Sep 2005 WO
Non-Patent Literature Citations (33)
Entry
U.S. Appl. No. 12/188,188, filed Aug. 7, 2008, entitled “Systems and Methods for Non-Specific Address Routing”, Inventor: David Michael Niedzielski, 41 pages.
U.S. Appl. No. 12/188,190, filed Aug. 7, 2008, entitled “Systems and Methods for Non-Specific Address Routing”, Inventor: David Michael Niedzielski, 42 pages.
U.S. Appl. No. 12/191,979, filed Aug. 14, 2008, entitled “Systems and Methods for Data Center Load Balancing”, Inventor: Christopher S. Trost, 38 pages.
U.S. Appl. No. 12/191,985, filed Aug. 14, 2008, entitled “Systems and Methods for Data Center Load Balancing”, Inventor: Christopher S. Trost, 40 pages.
U.S. Appl. No. 12/196,276, filed Aug. 21, 2008, entitled “Preferential Loading In Data Centers”, Inventor: David Michael Niedzielski, 28 pages.
U.S. Appl. No. 12/196,277, filed Aug. 21, 2008, entitled “Preferential Loading in Data Centers”, Inventor David Michael Niedzielski, 28 pages.
Jean Lorchat, et al., “Energy Saving in IEEE 802.11 Communications using Frame Aggregation”, Globecom 2003, IEEE. vol. 3, Dec. 5, 2003, pp. 1296-1300.
U.S. Appl. No. 11/853,437, filed Sep. 11, 2007, Hirano, et al.
“Computer Network”, Xie Xiren, fourth edition, Jun. 2003,1 front page and p. 72.
IBM, “System Automation for z/OS; Planning and Installation,” Version 2, Release 3, Twelfth Edition (Nov. 2004) [online], Copyright International Business Machines Corporation 1996, 2004, [retrieved on Feb. 12, 2007] pp. 1-249. Retrieved from the Internet s390id@de.ibm.com. [IBM Deutschland Entwicklung GmbH].
IBM, “International Information Bulletin for Customers—Capacity On Demand Offerings,” [online], Dec. 31, 2004 [retrieved Feb. 12, 2007], pp. 1-8. Retrieved from the Internet International Business Machines Bulletin No. GM 13-0696-00 Feb. 2005.
IBM, “System z9 and eServer zSeries Capacity on Demand User's Guide,” [online] Copyright IBM Corporation 2005, Armonk, NY U.S.A, [retrieved Feb. 12, 2007] pp. 1-0. Retrieved from the Internet: International Business Machines Bulletin No. GM13-0716-00 [ZSWO1604-USEN-03].
IBM, “IBM GDPS business continuity solutions,” [online] Copyright IBM Corporation 2004, Somers, NY U.S.A, [retrieved Feb. 12, 2007] pp. 1-2. Retrieved from the Internet: International Business Machines Bulletin No. G510-5050-02.
Loveland, S., Miller, G., Prewitt, R, Shannon, M., “Testing z/OS: The premier operating system for IBM's zSeries server”, [online] Copyright 2002 [retrieved Feb. 12, 2007] pp. 55-76. Retrieved from the Internet: IBM Systems Journal, vol. 41, No. 1, 2002.
IBM, “IBM Tivoli SA z/OS: Technical Resources—High-availability solution for WebSphere Application Server for z/OS and OS/390,” [online] Sep. 14, 2006. [retrieved Feb. 12, 2007] p. 1. Retrieved from the Internet: <URL: http://.03.IBM.com/servers/eserver/zseries/software/sa/techresources/washa.html>.
IBM, “An Automation and High Availability Solution for WebSphere Application Server for OS/390 and z/OS Based on System Automation for OS/390,” [online] 2003. [retrieved Feb. 12, 2007] pp. 1-61. Retrieved from the Internet: “System Automation Development, IBM Boeblingen—Mar. 2003.”
IBM, “Geographically Dispersed Parallel Sysplex (GDPS®)—IBM eServer System z,” [online] Sep. 14, 2006 [retrieved Feb. 12, 2007]. pp. 1-2. Retrieved from the Internet: <URL: http://www-03.ibm.com/systems/z/gdps/>.
Gray, C.G. and D.R. Cheriton “Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency”, Proceedings of the 12th ACM Symposium on Operating Systems Principles, pp. 202-210, Nov. 1989.
Satyanarayanan, M. et al. “Coda File System User and System Administrators Manual”, Carnegie Mellon University, Aug. 1997.
Braam, P.J. and P.A. Nelson “Removing Bottlenecks in Distributed Filesystems: Coda & InterMezzo as Examples”, Proceedings of the Linux Expo 1999, May 1999.
Braam, P.J., M. Callahan and P. Schwan “The InterMezzo File System”, Proceedings of the Perl Conference 3, O'Reilly Open Source Convention, Aug. 1999.
Phatak, S.H. and B.R. Badrinath “Data Partitioning for Disconnected Client Server Databases”, Proceedings of the 1st ACM International Workshop on Data Engineering and Wireless Mobile Access, pp. 102-109, 1999.
Tierney, B.L. et al. “A Network-Aware Distributed Storage Cache for Data Intensive Environments”, Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing, pp. 185-193, 1999.
Braam, P.J. “InterMezzo: File Synchronization with InterSync”, Carnegie Mellon University, Mar. 20, 2002.
PCWebopaedia “What is Streaming?”, downloaded from www.pcwebopaedia.com, Mar. 28, 2002.
Tacit Networks, Inc. ("Tacit Networks Delivers LAN-Speed Access to Data over WANs") press release, Dec. 9, 2002.
Carey, M.J., M.J. Franklin, M. Livny and E.J. Shekita “Data Caching Tradeoffs in Client-Server DBMS Architectures”, Proceedings of the 1991 ACM Sigmod International Conference on Management of Data, Feb. 1991, pp. 357-366.
Cox, A.L. and R.J. Fowler “Adaptive Cache Coherency for Detecting Migratory Shared Data”, Proceedings of the 20th Annual International Symposium on Computer Architecture, 1993, pp. 98-108.
Cortes, T., S. Girona and J. Labarta “Avoiding the Cache Coherence Problem in a Parallel/Distributed File System”, Proceedings of the High-Performance Computing and Networking Conference, Apr. 1997, pp. 860-869.
Cortes, T., S. Girona and J. Labarta “Design Issues of a Cooperative Cache with No Coherence Problems”, Proceedings of the 5th Workshop on I/O in Parallel and Distributed Systems, Nov. 17, 1997, pp. 37-46.
Wang, J. “A Survey of Web Caching Schemes for the Internet”, ACM Sigcomm Computer Communication Review, vol. 29, No. 5, Oct. 1999, pp. 36-46.
Wu, K-L and P.S. Yu “Local Replication for Proxy Web Caches with Hash Routing”, Proceedings of CIKM '99, Nov. 1999, pp. 69-76.
Luo, Q. et al. “Middle-Tier Database Caching for e-Business”, Proceedings of the 2002 ACM Sigmod Conference, Jun. 4-6, 2002, pp. 600-611.
Continuations (4)
Number Date Country
Parent 16268304 Feb 2019 US
Child 17353302 US
Parent 15359547 Nov 2016 US
Child 16268304 US
Parent 14257646 Apr 2014 US
Child 15359547 US
Parent 12196275 Aug 2008 US
Child 14257646 US