Authentication server auditing of clients using cache provisioning

Information

  • Patent Application
  • 20080098120
  • Publication Number
    20080098120
  • Date Filed
    October 23, 2006
  • Date Published
    April 24, 2008
Abstract
Sharing resources on a network includes, for example, a domain controller hierarchy scheme, which is used in some implementations to organize and share both secure and non-secure resources in an efficient manner. Authentication information can be used to architect a trustworthy system for divulging sensitive client data (such as user/computer passwords) to a host system. The sensitive client data can be released to the host system when a client establishes a relationship having a degree of trust with the host.
Description

BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an illustration of an example operating environment and system for authentication server auditing of clients using cache provisioning.



FIG. 2A is a schematic diagram of a domain controller hierarchy showing one or more central hub domain controllers connected over a network to one or more local domain controllers at corresponding one or more branch locations.



FIG. 2B is the domain controller hierarchy as shown in FIG. 2A, in which a user at a local branch attempts to logon to a client computer system.



FIG. 3 is a top-level illustration of a flow diagram for authentication server auditing of clients using cache provisioning.





DETAILED DESCRIPTION

As briefly described above, embodiments are directed to authentication server auditing of clients using cache provisioning. With reference to FIG. 1, one example system includes a computing device, such as computing device 100. Computing device 100 may be configured as a client, a server, a mobile device, or any other computing device that interacts with data in a network-based collaboration system. In a very basic configuration, computing device 100 typically includes at least one processing unit 102 and system memory 104. Depending on the exact configuration and type of computing device, system memory 104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. System memory 104 typically includes an operating system 105, one or more applications 106, and may include program data 107 such that host 120, server 122, and cache 124 can be implemented (which are discussed below).


Computing device 100 may have additional features or functionality. For example, computing device 100 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 1 by removable storage 109 and non-removable storage 110. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. System memory 104, removable storage 109 and non-removable storage 110 are all examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 100. Any such computer storage media may be part of device 100. Computing device 100 may also have input device(s) 112 such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 114 such as a display, speakers, printer, etc. may also be included.


Computing device 100 also contains communication connections 116 that allow the device to communicate with other computing devices 118, such as over a network. Networks include local area networks and wide area networks, as well as other large scale networks including, but not limited to, intranets and extranets. Communication connection 116 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.


In accordance with the discussion above, system memory 104 of computing device 100 (along with processing unit 102 and related peripherals) can be used to implement host 120, server 122, and cache 124. Host 120, server 122, and cache 124 in an embodiment can be used for authentication server auditing of clients using cache provisioning (described below with reference to FIGS. 2-3). Host 120 can be used for receiving and forwarding an authentication request from a client, wherein the authentication request comprises affinity information for approximating a physical locality of the client. Server 122 (which is normally on a different computer than host 120) can be used for receiving and authenticating the authentication request forwarded from the client. Cache 124 is usually associated with host 120 and can be used for persisting information associated with a successfully authenticated authentication request.
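
For illustration only, the interplay of host 120, server 122, and cache 124 can be sketched in a few lines of Python. The class and method names below (AuthRequest, AuthServer, Host, forward_request) are assumptions introduced for the sketch and do not appear in the specification; the fragment is a minimal model, not an implementation of the claimed system.

```python
# Minimal sketch (assumed names) of host 120, server 122, and cache 124.
from dataclasses import dataclass

@dataclass
class AuthRequest:
    client_id: str
    credentials: str
    affinity_info: str          # approximates the client's physical locality

class AuthServer:               # server 122: authenticates forwarded requests
    def __init__(self, known_clients):
        self.known_clients = known_clients   # client_id -> credentials

    def authenticate(self, request: AuthRequest) -> bool:
        return self.known_clients.get(request.client_id) == request.credentials

class Host:                     # host 120: forwards requests, owns cache 124
    def __init__(self, server: AuthServer):
        self.server = server
        self.cache = {}          # cache 124: persists info from successful authentications

    def forward_request(self, request: AuthRequest) -> bool:
        ok = self.server.authenticate(request)
        if ok:
            # Persist information associated with the successful authentication.
            self.cache[request.client_id] = request.affinity_info
        return ok

# Usage example
server = AuthServer({"alice": "pw-hash"})
host = Host(server)
print(host.forward_request(AuthRequest("alice", "pw-hash", "branch-a")))  # True
print(host.cache)  # {'alice': 'branch-a'}
```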



FIG. 2A is a schematic diagram of a domain controller hierarchy showing one or more central hub domain controllers connected over a network to one or more local domain controllers at corresponding one or more branch locations. As can be understood more fully from the following specification and claims, a company or organization can place domain controllers in a local branch. For example, in one implementation, each local branch is provided with a Read-Only Domain Controller (“RODC”) that is essentially independent of other local domain controllers in a domain controller hierarchy. The local (read-only) domain controller can only be written-to by a central hub domain controller. As such, the local domain controller cannot normally be written-to by a local user, by another user at another local domain controller, or even by a malicious user from an outside network. This provides a number of security and ease-of-use benefits and limits potential liabilities from misuse.


In addition, the local (read-only) domain controller is configured to only store the resources (e.g., user account “secrets”) that are needed for that branch location. For example, as will be understood more fully from the following specification and claims, the hub domain controller partitions, for each branch, which users can log in to client computer systems at the given branch. The hub domain controller, however, does not automatically provide these resources to all local domain controllers at each branch, but provides only those authorized secrets to the given branch upon an appropriate login by the user. Thus, in one implementation, the local (read-only) domain controller is configured to receive and store only a select few of the company's or organization's secrets, which can limit the potential security exposure of the server.


For example, FIG. 2A illustrates a domain controller system 200, where one or more central hub domain controllers (e.g., 203) are connected to one or more local (read-only) domain controllers (e.g., 255) at one or more corresponding local branches 250 over a network 205. In general, the hub domain controller 203 is writable, meaning that an authorized network administrator can write, change, update or delete configuration preferences, user accounts, and/or a variety of other components at the hub domain controller 203. By contrast, the local (read-only) domain controller 255 generally cannot be written-to except by a trusted source in the domain hierarchy 200 (e.g., the hub domain controller 203), and not typically by another user at the branch or by another local domain controller.


As shown, the hub domain controller 203 includes a resources component 210, which comprises all of the configuration, non-secret, and secret information that is used or available with each branch domain controller (e.g., 255). For example, in one implementation, the resources component 210 contains user accounts in the company or organization, including the corresponding user names and passwords. The resources component 210 also contains user name and password versioning information, as well as versioning information for various configuration information used at a given branch. The hub domain controller 203 is configured to change configuration resources and/or location of resources/secrets for each different local domain controller.


For example, FIG. 2A shows that the resources component 210 has a partition for “Branch A” 215a that identifies “Configuration A” 220a information, and includes “User Account A” 225a, “User Account B” 230, and “User Account C” 235. The resources component 210 also includes a partition for “Branch B” 215b that identifies “Configuration B” 223 information, and includes “User Account A” 225a, and “User Account D” 240. The resources component 210 further includes a partition for “Branch C” 215c that identifies at least “Configuration C” 227 information. Notably, FIG. 2A shows that “User Account A” 225a is present in both the branch 215a and branch 215b partitions since the corresponding user is allowed to access client computer systems at both branches. For example, the user may be a company manager visiting a given branch office in the company later in the day.
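
As a purely illustrative aid, the per-branch partitioning of the resources component 210 can be pictured as a simple mapping from branch partitions to configuration and user-account entries. The Python structure below is a hypothetical sketch; the field names and layout are assumptions and are not taken from the specification.

```python
# Hypothetical sketch of the partitions held by resources component 210.
resources_component = {
    "Branch A": {                              # partition 215a
        "configuration": "Configuration A",    # 220a
        "user_accounts": ["User Account A",    # 225a
                          "User Account B",    # 230
                          "User Account C"],   # 235
    },
    "Branch B": {                              # partition 215b
        "configuration": "Configuration B",    # 223
        "user_accounts": ["User Account A",    # 225a (allowed at both branches)
                          "User Account D"],   # 240
    },
    "Branch C": {                              # partition 215c
        "configuration": "Configuration C",    # 227
        "user_accounts": [],
    },
}

def account_allowed_at(branch: str, account: str) -> bool:
    """Return True if the hub would release this account's secret to the branch."""
    return account in resources_component.get(branch, {}).get("user_accounts", [])

print(account_allowed_at("Branch A", "User Account A"))  # True
print(account_allowed_at("Branch C", "User Account A"))  # False
```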



FIG. 2A also shows a branch office 250 having a local (read-only) domain controller 255 (or “local domain controller”) that is connected to one or more client computer systems 270 and 275. In at least one implementation, the local domain controller is read-only to protect the computer system from malicious or inadvertent configuration errors, as well as to protect against other problems that can occur when the controller is inappropriately written-to by a local user or other non-trusted source. FIG. 2A further shows that the local domain controller 255 comprises at least configuration information 220a received from the hub domain controller 203, as well as a cache 265 for storing secrets, such as resources (e.g., secure user accounts), or the like. In particular, FIG. 2A shows that the local domain controller 255 is in a default configuration, where no local user accounts are stored in cache 265.


Thus, as shown in FIG. 2B, when a user at the local branch 250, such as a generic employee or a local administrator, attempts to log on to a client computer system 270, the logon request 260 is not necessarily authenticated directly by the local domain controller 255. Rather, the local domain controller 255 passes the logon request 260 along with the local domain controller's secret in a separate message 280 through a secure communication channel. (The local domain controller 255 can also be configured to perform basic, preliminary authentication measures to ensure that random unauthorized users do not attempt to pull secrets from the hub domain controller by spoofing accounts.) In one implementation, the local domain controller's secret is a secret provided previously by the hub domain controller 203, and accessible only to the local domain controller 255. The message 280 is then received and processed by an authentication module 245 at the hub domain controller 203.


The authentication module 245 identifies whether the local domain controller's secret and the user's logon credentials provided in message 280 are authentic and current. If either the local domain controller's secret or the user's logon credentials are not current, not valid, or not authentic for some other reason, the hub domain controller 203 returns an error to the local domain controller. Assuming, nevertheless, that the local domain controller's secret is valid, the authentication module 245 also checks to see if “User A” is allowed to access the resource (e.g., logon) at “Branch A” 250. For example, if User A is allowed to logon at Branch B (not shown), but not allowed to logon at the requested branch (i.e., “Branch A”), the authentication module 245 might allow the login, but will not allow the branch domain controller to cache the user's secret (e.g., user account 225a). Alternatively, the hub domain controller can return an error, if appropriate.


As shown in FIG. 2B, the local domain controller's secret and the user's provided logon credentials (e.g., message 260) are valid. In addition, the user account 225a is found in the partition 215a for the Branch A domain controller. As such, the authentication module 245 of the hub controller 203 returns the current user account 225a to the local domain controller 255 through a secure communication channel. That is, the hub domain controller 203 returns the user account 225a back to the local domain controller 255, along with a message indicating the user's initial logon 260 was acceptable. Upon receipt, the local domain controller 255 then stores the user account 225a in cache 265, and tells (not shown) the client computer system 270 to allow access to the user. Since the local domain controller 255 now has the user's account 225a in cache 265, the local domain controller 255, rather than the central hub domain controller 203, can handle future logon requests by this user for the action (i.e., logon request in this case).
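
The hub-side decision just described (verify the local domain controller's secret, verify the user's credentials, and release the account for caching only when the user appears in that branch's partition) can be summarized with the following hypothetical sketch. The helper name handle_branch_logon and the message fields are assumptions made for illustration, not part of the specification.

```python
# Hypothetical sketch of authentication module 245 at the hub domain controller.
def handle_branch_logon(message, partitions, dc_secrets, passwords):
    """message contains: branch, dc_secret, user, credentials."""
    # 1. The local domain controller's secret must be current and authentic.
    if dc_secrets.get(message["branch"]) != message["dc_secret"]:
        return {"status": "error", "reason": "invalid domain controller secret"}

    # 2. The user's logon credentials must be valid.
    if passwords.get(message["user"]) != message["credentials"]:
        return {"status": "error", "reason": "invalid user credentials"}

    # 3. Release the account for caching only if the user is in this branch's partition.
    branch_accounts = partitions.get(message["branch"], [])
    if message["user"] in branch_accounts:
        return {"status": "ok", "cacheable": True, "account": message["user"]}

    # Logon may still be allowed, but the branch may not cache the user's secret.
    return {"status": "ok", "cacheable": False, "account": None}

result = handle_branch_logon(
    {"branch": "Branch A", "dc_secret": "s-a", "user": "User A", "credentials": "pw"},
    partitions={"Branch A": ["User A"]},
    dc_secrets={"Branch A": "s-a"},
    passwords={"User A": "pw"},
)
print(result)  # {'status': 'ok', 'cacheable': True, 'account': 'User A'}
```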


As such, FIGS. 2A and 2B show that the local domain controller 255, and hence the local branch 250, are only given cacheable access to a secret upon a valid request by a user who is allowed to logon at the particular branch, and who is allowed to have an account cached at the branch. Thus, potential liability is limited even in situations where a malicious person might try to simulate all possible logon requests at a given branch, and “pull” those accounts down to the branch. In particular, secure account information can only be “pulled” when properly authenticated at multiple levels (e.g., basic authentication at the local level, full authentication of secrets at the hub domain controller, and/or verification of appropriate cacheability status for the local domain controller and the user).


The illustrated “as needed” or “on-demand” type of approach, however, is not required in all situations. For example, an authorized branch manager (of another branch) or company president may be visiting branch 250 that day, and will need to access one or more of the client computer systems for presentation purposes. An authorized user, such as the local network administrator for the local domain controller, can request the visitor's account in advance. For example, the local network administrator can send a request through the local domain controller 255, or through another local client computer system (e.g., 270) to the hub that requests the visitor's account.


As with prior requests, the request for advance access also includes authentication information for the requester, as well as the secret for the local domain controller provided earlier by the hub domain controller 203. The authentication module 245 at the hub domain controller 203 then checks to see if the visitor's account is one that can be provided in advance and, if appropriate, checks the credentials of the requester. For example, the hub domain controller can check the requester's credentials if the requester has not yet been cached at the local domain controller where the request is being made.


In addition, the hub domain controller 203 can check to see if the secret provided by the local domain controller is accurate. If appropriate, the hub domain controller 203 then passes the visitor's user account to the local domain controller, where it can be stored in cache 265 for an appropriate amount of time. When that amount of time has expired (e.g., when periodic updates are scheduled to be sent and received next), the hub domain controller can send information that invalidates the metadata of the secret received in advance. As will be understood more fully from the following text and claims, the message invalidating the secret's metadata itself comprises one or more timestamps to ensure proper ordering, prioritization, and discarding of invalid secrets cached or received by the local domain controller.
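
A minimal sketch of time-limited caching with timestamp-ordered invalidation, under the assumption that each cached secret carries an expiration time and a timestamp, might look as follows. The SecretCache class and its methods are illustrative inventions, not part of the specification.

```python
# Hypothetical sketch of time-limited caching with timestamp-ordered invalidation.
import time

class SecretCache:
    def __init__(self):
        self.entries = {}   # account -> {"secret": ..., "expires": ..., "stamp": ...}

    def store(self, account, secret, ttl_seconds, stamp=None):
        self.entries[account] = {
            "secret": secret,
            "expires": time.time() + ttl_seconds,
            "stamp": stamp if stamp is not None else time.time(),
        }

    def invalidate(self, account, stamp):
        """Discard the entry only if the invalidation is at least as new as it."""
        entry = self.entries.get(account)
        if entry and stamp >= entry["stamp"]:
            del self.entries[account]

    def lookup(self, account):
        entry = self.entries.get(account)
        if entry is None or time.time() > entry["expires"]:
            self.entries.pop(account, None)   # expired entries are discarded
            return None
        return entry["secret"]

cache = SecretCache()
cache.store("visitor", "visitor-secret", ttl_seconds=3600)
print(cache.lookup("visitor"))          # 'visitor-secret'
cache.invalidate("visitor", time.time())
print(cache.lookup("visitor"))          # None
```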


In an implementation using Kerberos, the login process of a client includes finding a Key Distribution Center (KDC) by using indirection. Indirection in the login process typically does not specifically target a unique KDC, but rather uses a generic name that will return any available KDC, typically a nearby one. This allows automatic affiliation between the KDC and the client, although this affiliation is normally unknown to the client (which normally only knows that the request is sent to an arbitrary KDC).
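
A toy sketch of locating a KDC by indirection is shown below, assuming a generic service name that resolves to any available KDC. The directory contents and the random selection policy are stand-ins for whatever locator mechanism a deployment actually uses.

```python
# Hypothetical sketch of KDC location by indirection: the client asks for a
# generic service name and gets back whichever nearby KDC the locator chooses.
import random

KDC_DIRECTORY = {
    "kerberos.example.com": ["kdc-branch-a.example.com",
                             "kdc-branch-b.example.com",
                             "kdc-hub.example.com"],
}

def locate_kdc(generic_name: str) -> str:
    """Return some available KDC; the client does not choose a specific one."""
    candidates = KDC_DIRECTORY[generic_name]
    return random.choice(candidates)   # stand-in for a 'nearby' selection policy

# The client only knows that its request went to *a* KDC, not which one.
print(locate_kdc("kerberos.example.com"))
```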


The first information passed is an AS_REQ (authentication server request) from the client to the KDC. This is information that can identify the client to the KDC, and normally has a limited lifetime. These messages can be snooped upon by unauthorized parties, and because the messages are also sent independently to other KDCs, they are not typically good identifiers of client affinity to a specific KDC.


The KDC responds with an encrypted package for the client identified by the AS_REQ. This package is normally only decryptable by someone holding the client's password (not available from the AS_REQ alone)—which in an embodiment is the identified client itself. The contents of the package are typically a session key and a TGT (ticket granting ticket).


Further, when the client wishes to make a connection to some resource (such as another client, computer, resource, and the like), it creates a request, encrypts it with the session key, and includes the TGT. This request, called a TGS (ticket granting server) request, can be copied and sent to other KDCs, but the internal information (the session key, the TGT, the identification of the resource requested, and the type of service requested) is very resistant to being modified or spoofed (at least by network sniffing). Therefore, the TGS request itself is not a good identifier of client affinity to a specific KDC, but the data in the TGS can be used as an identifier of the client affinity. Specifically, the identity of the resource requested and the type of service requested are good identifiers of client affinity to a specific KDC.
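
For reference, the message contents discussed above can be modeled roughly as the following data structures. The field names are illustrative simplifications and do not follow the actual Kerberos wire format.

```python
# Hypothetical sketch of the message contents described above; field names are
# illustrative and do not follow the exact Kerberos wire format.
from dataclasses import dataclass

@dataclass
class AsReq:                 # AS_REQ: identifies the client to the KDC
    client_id: str
    lifetime_seconds: int    # limited lifetime; may be observed by others

@dataclass
class AsRep:                 # KDC response: decryptable only with the client's password
    encrypted_session_key: bytes
    encrypted_tgt: bytes     # ticket granting ticket

@dataclass
class TgsReq:                # TGS request: carries the TGT and names the target service
    tgt: bytes
    resource_requested: str  # the server the client wants to reach
    service_type: str        # e.g., "LDAP" or "HOST"
    # resource_requested and service_type are the fields the text identifies
    # as good indicators of client affinity to a specific KDC

req = TgsReq(tgt=b"...", resource_requested="kdc-branch-a.example.com",
             service_type="LDAP")
print(req.service_type)      # 'LDAP'
```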


In a Windows® logon process, a client typically requests from the affiliated KDC the services of LDAP (lightweight directory access protocol) or HOST. These are normally used for querying or downloading a Group Policy. Therefore, if the rule is applied that any client in an authorized list for a specific KDC that requests an LDAP or HOST service from that KDC in a TGS_REQUEST is allowed to be cached, the list for tracking approximate client physical locality will be created automatically as described above.
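
That rule can be expressed compactly. In the sketch below, the function update_locality_list and its parameters are hypothetical names chosen for illustration; the point is only that an authorized client requesting an LDAP or HOST service on a given KDC is added to that KDC's allow-cache list.

```python
# Hypothetical sketch of the caching rule: a client in the authorized list that
# requests an LDAP or HOST service on a KDC becomes cacheable at that KDC.
CACHEABLE_SERVICES = {"LDAP", "HOST"}

def update_locality_list(kdc_allow_cache, authorized_clients, client_id,
                         service_type, target_kdc, this_kdc):
    """Add client_id to this KDC's allow-cache list when the rule is satisfied."""
    if (client_id in authorized_clients
            and service_type in CACHEABLE_SERVICES
            and target_kdc == this_kdc):
        kdc_allow_cache.setdefault(this_kdc, set()).add(client_id)
    return kdc_allow_cache

allow_cache = {}
allow_cache = update_locality_list(allow_cache, {"alice", "bob"}, "alice",
                                   "LDAP", "kdc-branch-a", "kdc-branch-a")
print(allow_cache)   # {'kdc-branch-a': {'alice'}}
```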


Because the user who is logging in is, by definition, not already cached, all requests of the caching-KDC are forwarded to a full KDC. Accordingly, a full KDC makes the decision whether to allow caching of the TGS_REQUEST information. If the list of clients that are allowed to have their information and passwords cached at any single KDC is too broad (for example, if too many users are allowed, or the list is a large superset of the actual physically local users), a large part of the benefit of knowing the client affinity can be lost. If a very small set of clients, which directly relates to the actual physically local users and computers, is used, then the security of the solution is maximized.


However, independently managing such a small list for each locale for any large or dynamic organization can be time-consuming and error prone. One question is how to automatically create the list of allowed users while ensuring the system is secure. In an embodiment, a second list of clients who are authorized to be cached is kept, which adds a (simplifying) level of indirection to the process. This list can be relatively large, and can include all possible clients except those explicitly denied (such a list is relatively easy to manage, and in fact already exists in many environments). When the large list is used, the determining factor becomes which of the clients authorized by the large list should actually be allowed to be cached. A “deny list” can also be used. As discussed above, clients that have their locations approximated via the affinity with a particular KDC can thus be cached with a level of trust.
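
A rough sketch of this two-level policy, assuming a broad authorized list, an explicit deny list, and a per-KDC record of observed affinity, might look like the following; the function names are illustrative only.

```python
# Hypothetical sketch of the two-level policy: a broad authorized list (minus an
# explicit deny list) gates who *may* be cached; observed KDC affinity decides
# who actually is cached at a given caching-KDC.
def may_be_cached(client_id, all_clients, deny_list):
    return client_id in all_clients and client_id not in deny_list

def should_cache(client_id, all_clients, deny_list, affinity_observed):
    # affinity_observed: clients whose locality has been approximated for this
    # particular caching-KDC (e.g., via LDAP/HOST TGS requests).
    return (may_be_cached(client_id, all_clients, deny_list)
            and client_id in affinity_observed)

all_clients = {"alice", "bob", "carol"}
deny_list = {"carol"}
affinity = {"alice"}
print(should_cache("alice", all_clients, deny_list, affinity))  # True
print(should_cache("bob", all_clients, deny_list, affinity))    # False
print(should_cache("carol", all_clients, deny_list, affinity))  # False
```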



FIG. 3 is a top-level illustration of a flow diagram for authentication server auditing of clients using cache provisioning. In operation 302, a first client who wishes to log on sends an AS_REQ, encrypted with the client's password, to a nearby caching-KDC. The AS_REQ can be directed to a single KDC using a locator mechanism such that the located KDC is unknown to the client.


In operation 304, it is determined that the client is not cached at the caching-KDC, and because the AS_REQ cannot be locally processed, the AS_REQ request is forwarded to a full KDC.


In operation 306, the full KDC validates the AS_REQ as if it had directly originated with the first client. If the validation is successful, the full KDC creates a response, which includes a session key and a TGT, and encrypts the response with the client password. It sends this to the caching-KDC from which it received the forwarded AS_REQ request. In operation 308, the caching-KDC returns it intact to the client, and in operation 310, the client receives the response and decrypts it.


In operation 312, the first client wishes to query the Group Policy as part of the logon process. The first client creates a TGS_REQUEST for the LDAP service on the caching-KDC. The TGS_REQUEST (comprising server and service information, along with the TGT) is encrypted with the session key and sent to the caching-KDC itself.


In operation 316, the caching-KDC determines that it does not have the session key needed to decrypt the request, and forwards the request to the full KDC.


In operation 318, the full KDC decrypts the information and validates that the request came from the original client by verifying the use of the correct session key and a valid TGT. The full KDC notes that the request is for the LDAP service on the caching-KDC, and marks that client as allowed to be cached by the caching-KDC. The full KDC responds to the request appropriately, and sends the response to the caching-KDC.


In operation 320, the caching-KDC forwards the response from the full KDC to the client.


In operation 322, the client initiates a connection to the LDAP service on the caching-KDC using the Kerberos information in the TGS response.


In operation 324, the caching-KDC (because the client made an LDAP service request) requests of the full KDC that it be allowed to cache the client's information and password.


In operation 326, the full KDC determines that the client has been marked to be cached by the caching-KDC, and grants the request, sending the caching-KDC the requested information and passwords. Further requests (for TGSs to other site-affiliated resources, and for additional AS_REQs and TGT requests) from the client to the caching-KDC can be serviced by the caching-KDC itself, with no forwarding required.
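
For illustration, the overall flow of FIG. 3 can be condensed into a short, heavily simplified simulation in which encryption is replaced by plain field comparisons. All names and data shapes below are assumptions made to show the ordering of operations 302 through 326, not an implementation of the protocol.

```python
# Hypothetical, heavily simplified simulation of the FIG. 3 flow (operations 302-326).
full_kdc = {
    "passwords": {"alice": "pw"},         # the full KDC knows client passwords
    "marked_cacheable": set(),            # clients marked during TGS validation
}
caching_kdc = {"name": "kdc-branch-a", "cache": {}}

# 302-310: the AS_REQ is forwarded (client not cached locally) and validated by
# the full KDC, which returns a session key and TGT via the caching-KDC.
as_req = {"client": "alice", "password": "pw"}
assert full_kdc["passwords"][as_req["client"]] == as_req["password"]
as_rep = {"session_key": "sk-1", "tgt": "tgt-alice"}

# 312-318: the client asks for LDAP on the caching-KDC; the full KDC validates
# the forwarded TGS_REQUEST and marks the client as cacheable there.
tgs_req = {"client": "alice", "tgt": "tgt-alice",
           "service": "LDAP", "target": caching_kdc["name"]}
if (tgs_req["tgt"] == as_rep["tgt"]
        and tgs_req["service"] in {"LDAP", "HOST"}
        and tgs_req["target"] == caching_kdc["name"]):
    full_kdc["marked_cacheable"].add(tgs_req["client"])

# 324-326: the caching-KDC asks to cache the client; the full KDC grants the
# request because the client has been marked.
if tgs_req["client"] in full_kdc["marked_cacheable"]:
    caching_kdc["cache"][tgs_req["client"]] = full_kdc["passwords"][tgs_req["client"]]

print(caching_kdc["cache"])   # {'alice': 'pw'}; later requests need no forwarding
```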


The above specification, examples and data provide a complete description of the manufacture and use of embodiments of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.

Claims
  • 1. A computer-implemented method for authentication server auditing of clients, comprising: receiving an authentication request for authenticating a client, wherein the authentication request comprises affinity information for approximating a physical locality of the client, and wherein the affinity information is unknown to the client for which the authentication request is made; logging information associated with the authentication request in response to a successful authentication; and in response to the logged information, dynamically granting security data based on the logged information.
  • 2. The method of claim 1 wherein the authentication request is made according to the Kerberos protocol.
  • 3. The method of claim 1 wherein the authentication request causes a locator to identify a key distribution center (KDC) that has a location that is approximately in the same location as the client.
  • 4. The method of claim 1 wherein the dynamically granted security data is also granted on the basis of a list of accepted users.
  • 5. The method of claim 1 wherein the dynamically granted security data is also granted on the basis of a deny list of possible users.
  • 6. The method of claim 1 further comprising sending a session key and a ticket-granting ticket in response to the successful authentication.
  • 7. The method of claim 6 wherein the dynamically granted security data is client user or client computer password information.
  • 8. The method of claim 7 wherein the dynamically granted security data is granted from a full KDC.
  • 9. The method of claim 7 wherein the dynamically granted security data is received by a caching KDC.
  • 10. The method of claim 1 wherein the KDC is a read-only domain controller (RODC).
  • 11. The method of claim 1 wherein the logged information comprises a service principal name (SPN) wherein the SPN is derived from the authentication request.
  • 12. The method of claim 1 wherein the dynamically granted security data is used to allow caching in a caching KDC.
  • 13. The method of claim 1 wherein the dynamically granted security data is used to allow a caching KDC to assume the identity information of the client.
  • 14. The method of claim 13 wherein the successful authentication comprises generating a package that can only be decrypted by an entity holding the client's password.
  • 15. The method of claim 14 wherein the package comprises a session key and a ticket-granting ticket.
  • 16. A system for authentication server auditing of clients, comprising: a host for receiving and forwarding an authentication request from a client, wherein the authentication request comprises affinity information for approximating a physical locality of the client; a server for receiving and authenticating the authentication request forwarded from the client; and a cache that is associated with the host for persisting information associated with a successfully authenticated authentication request.
  • 17. The system of claim 16 wherein the host dynamically grants security data based on the cached information.
  • 18. A tangible medium comprising computer-executable instructions for: receiving an authentication request for authentication of a client, wherein the authentication request comprises affinity information, and wherein the affinity information is unknown to the client for which the authentication request is made; logging information associated with the authentication request in response to a successful authentication; and in response to the logged information, dynamically granting security data based on the logged information.
  • 19. The tangible medium of claim 18 further comprising logging information for a service principal name (SPN) and deriving the SPN from the authentication request.
  • 20. The tangible medium of claim 18 further comprising using the logged information to dynamically grant access to the client.