Continuous universal trust architecture and method

Information

  • Patent Grant
  • Patent Number
    12,309,132
  • Date Filed
    Friday, July 12, 2024
  • Date Issued
    Tuesday, May 20, 2025
  • Inventors
    • Gaist; Nir (Dallas, TX, US)
  • Original Assignees
    • Cortwo Corp. (Dallas, TX, US)
  • Examiners
    • Moorthy; Aravind K
  • Agents
    • Judson; David H.
Abstract
A method for continuous “universal” trust (CUT) leverages a core network that manages authentication set-up and continuous trust logic for entities that use the network. During communications through the network, and regardless of source or destination, CUT logic is applied to each entity independently, and without requiring authentication to be done directly between entities that desire to communicate with one another. In a typical implementation involving a user and a service, CUT authenticates both user and service independently. To facilitate both initial and continuous authentication, a CUT engine applies authentication factors to an entity, independently of factors that may be applied by the CUT engine to any other entity that uses the network. In this solution, the CUT engine continues to authenticate and maintain trust between each entity and the core network, irrespective of the underlying communication to be carried out or occurring between the entities themselves.
Description
BACKGROUND OF THE INVENTION

This disclosure relates generally to trust-based network technologies and services.


Conventional network security has focused on network perimeter defenses, but many organizations no longer have a clearly-defined network perimeter. Remote work environments and the extensive use of mobile computing devices have dissolved traditional network boundaries. To protect a modern digital enterprise, organizations need a comprehensive strategy for secure access to their corporate resources, regardless of where they are located or how they are configured. To address this need, a security model, known as zero trust, has been developed. Zero trust security provides network security in which no entity is trusted by default from inside or outside the network. Zero trust solutions work by establishing strong identity verification, validating device compliance prior to granting access, and ensuring least privilege access to only explicitly authorized resources.


Current tools and security technologies (e.g., login methods, Multi-Factor Authentication (MFA), CAPTCHA, behavioral-based and other solutions, whether basic or advanced) typically operate to facilitate trust/authentication per specific interaction, or set of interactions (i.e., a session), between a particular source (e.g., a user) and a particular destination (e.g., a service). Common implementations of such solutions, such as gateways, are positioned at or in association with the destination's premises, and they operate to identify and authenticate pre-defined users/connections that are targeting the specific service.


Continuous trust refers to the notion of trust that develops and/or evolves over time as a result of continuing interactions. For example, a system that provides contextual-based zero trust access may implement context-aware access controls that allow or authenticate access to protected resources after performing risk assessment based on security posture information; a continuous trust evaluation may then be implemented for each of the connected devices, users, and application instances in the operating environment. In this approach, the system may start with zero trust and gradually increase the trust score of a subject based on analysis of the network data, identity information, and security posture information. The system may then increase or decrease the trust level for the subject based on a risk score determined using the security posture information. A system of this type is described in U.S. Publication No. 2022/0210173.


While continuous trust solutions provide advantages, typically they are implemented on a per-site basis, and they are designed to operate within or in association with given network domains.


SUMMARY OF THE INVENTION

According to this disclosure, a method for continuous “universal” trust (CUT) leverages a core network that manages authentication set-up and continuous trust logic for all entities that use the network. During all communications through the network (e.g., through a dedicated CUT engine), and regardless of source or destination, the CUT logic is applied to each entity independently, and without requiring authentication to be done directly between entities that desire to communicate with one another. Thus, in a typical implementation involving a user and a service, CUT authenticates both user and service independently and separately. To facilitate both initial and continuous authentication, the CUT engine applies one or more authentication factors to an entity, once again independently of factors that may be applied by the CUT engine to any other entity that uses the network. In this solution, the CUT engine continues to authenticate and maintain trust between each entity and the core network, irrespective of the underlying communication to be carried out or occurring between the entities themselves. Thus, if each of a pair of entities establishes trust through the core network, that trust is retained throughout the communication session that occurs between the pair. Further, the notion of universality as provided for herein means that the trust is retained not just between the pair of entities, but also universally across the core network. Thus, if a first entity independently (and separately from a second entity) has established trust with the core network, this trust is retained and applicable to other communications with the core network.


According to a more specific aspect, a method of continuous universal trust is carried out in a network operating environment. The method begins by configuring a secure communication path between each of a set of first entities and a second entity, wherein all traffic and data flows between a first entity and the second entity pass through the secure communication path, which in this embodiment corresponds to the core network referenced above. As explained, preferably all communications flow through the secure communication path, and each endpoint for a communication (e.g., between a given first entity and the second entity) is authenticated to the core network independently and continuously. During a communication session originating at each first entity and directed to the second entity, the following sub-steps are then carried out. Initially, a determination is made whether an authentication requirement associated with a permitted access to a protected resource has been met. The authentication requirement is specified in a security policy associated with a trust circle, where the set of first entities comprises the trust circle. Upon a determination that the authentication requirement has been met, the first entity is permitted to access the protected resource associated with the second entity via the secure communication path. Based on one or more authentication factors and other data, and while the communication session persists, a determination is then made whether the authentication requirement should be modified (e.g., stepped-up, stepped-down, or the like). Upon a determination that the authentication requirement should be modified, the authentication requirement is adjusted and then enforced for at least some remaining portion of the communication session.
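
By way of a non-limiting illustration, the following sketch shows one way the above sub-steps might be expressed in code. The class and function names (TrustCirclePolicy, Session, adjust_requirement), the numeric authentication levels, and the score thresholds are assumptions made for the example only; they are not part of the method itself.

```python
# Minimal sketch (not the patented implementation) of evaluating and adjusting
# an authentication requirement for a session under a trust-circle policy.
from dataclasses import dataclass


@dataclass
class TrustCirclePolicy:
    # Maps a requested action to the minimum authentication level it needs,
    # e.g. {"read": 1, "transfer": 3}. Levels are invented for the example.
    min_level_by_action: dict
    default_level: int = 1


@dataclass
class Session:
    entity_id: str
    current_level: int    # authentication level the entity has satisfied so far
    required_level: int   # level currently being enforced for the session


def requirement_met(session: Session) -> bool:
    return session.current_level >= session.required_level


def adjust_requirement(session: Session, policy: TrustCirclePolicy,
                       action: str, trust_score: float) -> int:
    """Step the requirement up or down for the remainder of the session."""
    base = policy.min_level_by_action.get(action, policy.default_level)
    if trust_score < 0.5:        # low confidence -> step-up
        session.required_level = base + 1
    elif trust_score > 0.9:      # high confidence -> permit step-down to base
        session.required_level = base
    else:                        # otherwise keep enforcing at least the base
        session.required_level = max(session.required_level, base)
    return session.required_level
```

In an actual deployment, the trust score fed into such a routine would come from the continuous trust engine described below rather than from a fixed threshold scheme.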


Preferably, the above-described continuous trust establishment and enforcement is enabled for each of the first entities in the trust circle, and for the second entity, thereby providing for continuous universal trust with respect to all the traffic flows.


According to another aspect, a computing infrastructure is provided for continuous and universal trust in a network operating environment, including the publicly-routable Internet. In this environment, one or more first entities desire to obtain access to protected resources associated with one or more second entities. A subset of the first entities may be associated with one another in a trust circle, which is a logical entity over which a security policy is enforced. The infrastructure comprises a core network comprising a relay through which all traffic and data flows between any first entity and any second entity pass, together with a controller, which includes an authentication module, and a continuous trust engine. Preferably, each first entity and each second entity are independently and separately authenticated to the core network. For each communication session established between a first entity and a second entity, and in this embodiment, the controller is configured to: (i) determine whether an authentication requirement associated with a permitted access to a protected resource associated with the second entity has been met; (ii) upon a determination that the authentication requirement has been met, permit the first entity to access the protected resource associated with the second entity via the secure communication path; (iii) based on one or more authentication factors and other data, determine whether the authentication requirement should be modified; (iv) upon a determination that the authentication requirement should be modified, adjust the authentication requirement; and (v) enforce the adjusted authentication requirement for at least some remaining portion of the communication session.


Generalizing, in a preferred embodiment, the above-described trust circle paradigm is enforced for all interactions of an entity with other entities, such that every entity on the core network is trusted and each entity's trust is universal, rather than being built and managed according to a per-interaction or per-service security model.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 depicts a representative continuous universal trust architecture of this disclosure; and



FIG. 2 depicts a visual representation of representative interactions among a user, the core network, and a service provider in the architecture depicted in FIG. 1.





DETAILED DESCRIPTION OF THE INVENTION

As noted above, the technique of this disclosure provides for a Continuous Universal Trust (CUT) architecture that enables permitted entities (e.g., users, user groups, machines, devices, applications and the like) to access protected resources (e.g., services, sites, other machines, and the like). In one embodiment, the CUT is implemented in a core network positioned as an overlay on the publicly-accessible Internet, and preferably operates in a Software-as-a-Service manner. In a variant embodiment, the core network is physically separated (in effect “air-gapped”) from the public Internet and runs as a separate and distinct network (as opposed to being an overlay), in which case communications to and from core network elements are enabled through an Internet bridge, as will be described further below. In either case, the core network may leverage a cloud computing infrastructure, and conceptually the core network sits in-between (as an intermediary) requesting entities and the protected resources (typically provided by third party service providers) that the entities are attempting to interact with. According to a first aspect of this disclosure, all (or substantially all) traffic intended for the destination entities (e.g., third party service providers, machines, or the like), and typically all responses from the destination entities and intended for the requesting entities, are routed through the CUT architecture, e.g., in an encrypted manner. This routing method and system, together with the establishment and maintenance of trust between each entity and the CUT architecture directly (as opposed to between a particular requesting entity and an associated destination entity), enables continuous trust to be provided, as will be described. While all traffic is routed through the core in this paradigm, preferably the core network components themselves (including any CUT system operator(s)) do not have access to the content portions of the traffic due to the encryption or like protection.


As used herein, the term “site” typically refers to a website (or some protected portion thereof), but the reference to a “site” should be broadly construed to refer to any protected resource available from a server or other computing entity. The resource may be the overall site, a service endpoint, a portion of a site, a web service, an application, a web page, a document or other file, another machine or computing device, or a single protected object. When the requesting entity is a human being, that entity has an associated mobile or other computing device running one or more mobile applications (or “apps”) including in one embodiment the agent application (“agent”) of this disclosure, as will be described below. In a variant embodiment, an agent is not required. The mobile device may be a smartphone, tablet or wearable, an IOT device or appliance, laptop or the like. The mobile app can be downloaded and installed via a mobile application delivery service, or it may be provided as a native application on the mobile device. Just as the entity seeking the protected resource (at a third party service provider) runs an agent application, the service provider likewise runs a CUT application that is referred to herein as a daemon. Each agent (on the end user client device) or daemon (on the target service provider computing system) typically is implemented in software, e.g., as a program, process, execution thread, or the like.


In another embodiment, a source or requesting entity is a computing machine that runs the agent, and a target or destination entity is a computing machine that runs the daemon. Thus, machine-to-machine interactions through the CUT architecture are also enabled.


In a more specific embodiment, the core network in the CUT comprises one or more trust-based services, each of which is typically implemented as a set of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services). Typically, the service is multi-tenant based, and it is provided on behalf of each service customer (an organization) that desires to enable end users to obtain secure access to the organization's protected resources, which may be hosted on one or more servers that are distinct from the core network (although this is not a limitation). As will be described, the continuous trust provided by the solution herein typically is implemented as an adjunct to the organization's existing authentication process flow. As noted above, in one embodiment an end user entity or device has an associated agent (that is a component of the CUT) configured to execute in the user's device at a network interface therein, e.g., a local loopback interface. To obtain the agent, the user of the device (or the service in the case of a web site, IOT device, or any service or business account) registers that device (or other component) with the service, e.g., according to a Know Your Client (KYC) verification (or other identity management) scheme. The particular manner by which the agent is registered to the service may vary and is not a limitation. Each registered user, service or device has an associated CUT Identifier. A typical device includes a browser. Among other tasks, the agent has the capability to monitor “ambient” factors that operate in the background and that do not require active user interaction, as well as “active” factors that involve or require active user interaction (e.g., login operations, user interface interactions, biometrics, data generated as a result of any challenge/response, and the like). Representative ambient factors are quite varied and may include, without limitation, the following: the user's local operating environment, including device characteristics, operating system, installed applications, application versions, scripts, and the like; the set of icons on the user's home screen of a mobile application; a set of bookmarks that has been established for a browser; a list of then-running processes on a device; a set of mouse movements generally; and other general or specific user behaviors that are not tied to some prompting, challenge or the like. To protect privacy, ambient factor data does not expose personally-identifiable information (PII) or the like. This locally-captured ambient and active factor data is provided as telemetry to the core network.
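
For illustration only, the following sketch shows how an agent might package ambient and active factor telemetry for the core network. The field names, the hashing choice, and the payload layout are assumptions made for the example and do not reflect an actual CUT wire format; the hashing simply illustrates that raw environment details (potential PII) need not leave the device.

```python
# Illustrative-only sketch of agent-side telemetry packaging for ambient and
# active authentication factors. Field names and layout are hypothetical.
import hashlib
import json
import platform
import time


def fingerprint(value: str) -> str:
    # One-way hash so raw environment details are never transmitted.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


def collect_ambient_factors() -> dict:
    # Placeholders stand in for the ambient signals described above
    # (installed applications, bookmarks, running processes, and so on).
    return {
        "os": fingerprint(platform.platform()),
        "installed_apps": fingerprint("app-list-placeholder"),
        "browser_bookmarks": fingerprint("bookmarks-placeholder"),
        "running_processes": fingerprint("process-list-placeholder"),
    }


def build_telemetry(cut_identifier: str, active_factors: dict) -> bytes:
    record = {
        "cut_id": cut_identifier,
        "ts": time.time(),
        "ambient": collect_ambient_factors(),
        "active": active_factors,  # e.g. outcome of a login or challenge/response
    }
    return json.dumps(record).encode("utf-8")
```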


According to a further aspect of this disclosure, the Continuous Universal Trust (CUT) architecture is accessible by multiple entities (typically end users, devices, services, enterprises or combinations thereof) that themselves are configured according to a “circle” construct. In this paradigm, a “circle” (sometimes referred to as a “trust circle”) refers to a set of entities to which a particular continuous trust policy (or, more generally, a security policy) is applied by the CUT architecture. Thus, for a particular enterprise, a circle may include all of the registered users, user groups, or other accounts of that enterprise that are subject to a particular continuous trust policy that will be enforced (in the same manner) for all of the entities that comprise the circle. As noted above, an end user client entity, or any service or other entity participating in CUT, that registers with the system has an associated CUT Identifier, and this identifier has one or more profiles, each of which corresponds to a particular circle. A given CUT Identifier may be associated with multiple circles.
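
A minimal data-structure sketch of the circle construct follows. The class names and fields are hypothetical and serve only to show the relationship among a CUT Identifier, its per-circle profiles, and a trust circle's security policy.

```python
# Hypothetical data model (not the actual CUT schema) relating identifiers,
# per-circle profiles, and trust circles.
from dataclasses import dataclass, field


@dataclass
class Profile:
    circle_name: str
    attributes: dict = field(default_factory=dict)


@dataclass
class CutIdentifier:
    identifier: str
    profiles: dict = field(default_factory=dict)   # circle_name -> Profile

    def join_circle(self, circle_name: str, **attributes) -> Profile:
        profile = Profile(circle_name, dict(attributes))
        self.profiles[circle_name] = profile
        return profile


@dataclass
class TrustCircle:
    name: str
    security_policy: dict                           # continuous trust policy
    members: set = field(default_factory=set)       # registered CUT Identifiers

    def register(self, cut_id: CutIdentifier) -> None:
        # The same identifier may be registered in multiple circles,
        # gaining a distinct profile in each.
        self.members.add(cut_id.identifier)
        cut_id.join_circle(self.name)
```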


Generalizing, and according to a preferred embodiment, the notion of trust with respect to the CUT architecture is built upon all interactions of an entity with other entities to provide for a universal trust paradigm. In this context, universal trust ensures that every entity on the network can be trusted (a global circle of trust), and each entity's trust is universal (a trust index or score for the entity is global). This trust is facilitated in the preferred embodiment by requiring each entity that operates in association with the CUT architecture to independently and separately authenticate itself to the core network and without regard to an authentication paradigm specific to a particular destination. Further, the notion of universality as provided for herein means that the trust is retained not just between, for example a pair of entities, but also universally across the core network. Thus, if a first entity independently (and separately from a second entity) has established trust with the core network, this trust is retained and applicable to all other communications with the core network.


As will be described, and according to a further important benefit herein, the core network facilitates authentication by the registered entities of a given circle to the protected resources defined by the circle's security policy, as well as enforcing continuous trust. Advantageously, and as will be further described, continuous trust is enabled throughout a particular interaction session between a circle's registered user and a target service provider. Thus, in a typical use case, the core network provides a registered user single sign-on (SSO) or other authentication services to facilitate the user's initial access to a target service provider, and then further enforces a continuous trust paradigm (as defined by the circle's security policy) as the user continues its interactions with the service provider during a session. Throughout this interaction, all traffic flows through the core network, but with all or part of the contents encrypted, as previously described.



FIG. 1 depicts the CUT architecture in a representative (but non-limiting) embodiment. In this embodiment, the end user 100 seeking access to a protected resource associated with a service 102 has an associated agent 104, and the service provider that provides the service 102 has an associated daemon 106. An application 101 executing on the end user device is any application functionality that generates a request for a protected resource located at the service 102. A typical application is a web browser, mobile app, native application code, or the like. As shown, the core network 108 (sometimes referred to herein as an orchestration layer) comprises a number of components, including an authentication factors runtime 110 that receives and manages ambient and active factors data, an engine 112 that uses the authentication factors and other data to manage and control the continuous trust processing, typically according to a circle's security policy, a relay 114 (such as Wireguard™, a secure VPN protocol and tunneling software) through which traffic flows pass, and a DNS proxy 116 to facilitate hostname-based access to the continuous trust architecture and to locate services within the core network, together with one or more back-end databases and data stores, and related services. These databases and data stores include, for example: a key-value store 118 that stores authentication data, and a time-series database 120 that stores telemetry extracted from the traffic flows. As also depicted, an SDK 122 (or other such library) is provided to facilitate one or more ancillary functions, such as routing service traffic through the core, integrating identity management and other such services with the network, and enabling a third party service to leverage core services. The SDK 122 typically is implemented as one or more application programming interfaces (APIs) through which third party service functions are hooked into the core network. Using these services, the SDK associates security profile(s) with circle(s) and their associated registered entities.
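
To make the data paths among the FIG. 1 components easier to follow, the sketch below wires them together as plain objects. Only the component labels (runtime 110, engine 112, relay 114, and so on) come from the figure; the class definitions and dependencies are assumptions of the sketch and do not represent the actual core network implementation.

```python
# Hypothetical wiring of the FIG. 1 components, shown only to clarify which
# component feeds which; not the core network implementation.
class KeyValueStore:          # 118: authentication data
    def __init__(self): self.data = {}

class TimeSeriesDb:           # 120: telemetry extracted from traffic flows
    def __init__(self): self.samples = []

class AuthFactorsRuntime:     # 110: receives/manages ambient and active factors
    def __init__(self, kv_store): self.kv_store = kv_store

class ContinuousTrustEngine:  # 112: applies the circle's security policy
    def __init__(self, tsdb, policy): self.tsdb, self.policy = tsdb, policy

class Relay:                  # 114: all traffic and data flows pass through here
    def __init__(self, engine): self.engine = engine

class DnsProxy:               # 116: hostname-based access into the core
    def __init__(self): self.records = {}

class CoreNetwork:            # 108: the orchestration layer
    def __init__(self, circle_policy):
        self.kv_store = KeyValueStore()
        self.tsdb = TimeSeriesDb()
        self.auth_runtime = AuthFactorsRuntime(self.kv_store)
        self.engine = ContinuousTrustEngine(self.tsdb, circle_policy)
        self.relay = Relay(self.engine)
        self.dns_proxy = DnsProxy()
```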


In this example embodiment, the DNS proxy 116 interacts with other Domain Name System (DNS) services on the publicly-routable Internet 124 in a known manner. To this end, hostnames that trigger access to the CUT architecture are resolved by the DNS proxy 116 (and, as necessary, Internet DNS) to obtain IP addresses of orchestration layer components, such as the relay through which traffic flows. In this embodiment, the CUT network itself is distinct from the Internet (as opposed to being an overlay), in which case an Internet bridge 125 provides a mechanism by which traffic flows enter and exit the core network, which otherwise is in effect air-gapped from the public Internet 124. An example Internet bridge is a switch, router, proxy (e.g., Hyper), or the like. With this arrangement, the end user 100 or the participating service 102, as the case may be, safely interacts (with other entities) on the core network as opposed to the Internet generally, thereby significantly reducing the exposure of the end user or the service to Internet-based threats and attacks. While it is preferred that the end user or service always use the core network, this is not a requirement; for example, the service 102 may interact with the CUT-registered end user 100 over the core network 108 while also interacting with other end users (who are not registered with the core network) directly. In the latter case, a return path (not depicted) is provided from the service (or the daemon) to the public Internet, possibly through the Internet bridge. As also noted above, while in the preferred embodiment all communications with the Internet (by the participating entities) occur through the Internet bridge, this is not required, e.g., when the CUT is configured as an overlay on top of the publicly-routable Internet.


Referring back to FIG. 1, and as noted above, preferably all traffic originating from the client side and intended for the server side, and vice-versa, during a session flows through the core network 108 and, in particular, through the relay 114 that is logically and physically positioned between the agent 104 and the daemon 106. As noted above, a representative relay is a secure VPN tunnel-based solution, such as Wireguard, although this is not a limitation. Wireguard provides a virtual network interface that has a unique public key (pkey) that is used to encrypt or decrypt the traffic at the tunnel endpoints. The pkey can also be used to facilitate validation of user identity, as will be described in further detail below. In an alternative embodiment, the traffic (or at least content portions thereof) is secured and delivered end-to-end with IPSec tunnels, HTTP/TCP with custom encryption, firewall-punching protocols (STUN, TURN, etc.), or the like, depending on the implementation. Generalizing, the nature and type of traffic that is delivered over the end-to-end communication path should be broadly construed as covering any IP-based traffic regardless of the application layer, security wrapper, or the like.
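
Conceptually, each tunnel endpoint holds a keypair, the two sides derive a shared key, and traffic is encrypted end-to-end so the core sees only ciphertext. The short sketch below illustrates that idea with X25519 key agreement and ChaCha20-Poly1305; it is a simplification for exposition, not the actual WireGuard handshake (WireGuard uses the Noise protocol framework), and the key labels and the "cut-tunnel" info string are assumptions of the example.

```python
# Simplified stand-in for the tunnel's end-to-end encryption (not WireGuard).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates its own keypair; the public half is the "pkey"
# identifying that tunnel endpoint.
agent_priv = X25519PrivateKey.generate()
daemon_priv = X25519PrivateKey.generate()
agent_pkey = agent_priv.public_key()


def derive_key(own_priv, peer_pub) -> bytes:
    # Both sides arrive at the same symmetric key from the ECDH shared secret.
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"cut-tunnel").derive(shared)


key = derive_key(agent_priv, daemon_priv.public_key())
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"GET /resource", None)

# The daemon derives the same key from its private key and the agent's pkey,
# and recovers the request; the relay only ever handles ciphertext.
plaintext = ChaCha20Poly1305(derive_key(daemon_priv, agent_pkey)).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"GET /resource"
```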


Once again referring back to FIG. 1, the authentication factors runtime 110, operating in conjunction with the engine 112, provides initial and continuous authentication of the user or device during a given interaction session with the service provider. Together, module 110 and engine 112 are sometimes referred to as a controller (or continuous trust controller). As depicted, and as will be described further below, typically continuous authentication is based on authentication factors that are passed by the agent 104, such as one or more ambient factors, one or more active factors, or combinations thereof, as well as telemetry 113 extracted from the traffic flow(s). The telemetry 113 is collected as time-series data in the time-series database 120 and provided to the engine 112 for processing according to the security policy provided from the SDK 122. As depicted by the arrows, and depending on the determination(s) made by the engine, the agent may be instructed to increase the number of authentication factors that it provides to the authentication module (for evaluation of the entity's continued right to have access), or the daemon may be instructed to block the session from proceeding further.


More generally, and based on one or more authentication factors and/or the telemetry, the engine is configured to dynamically change a given authentication requirement during an interaction session. Thus, in one example, at the outset of a workflow, only a first authentication requirement (e.g., weak authentication) is enforced; as the workflow continues, the engine determines that a second authentication requirement (e.g., somewhat stronger authentication) is needed, e.g., to ensure continued compliance with the security policy. The nature and scope of this authentication step-up in this example depends on the policy, the telemetry, and perhaps other factors (e.g., time of day, client location, service load, and others). Even later, and as the session continues, the engine may then determine to step-up the authentication requirement to an even higher level, e.g., because of the nature or type of follow-on request being made by the client. This is just a representative example, as the change that occurs to the authentication requirements may be a step-down (a relaxation) of the authentication requirement. Another example would be the authentication requirement being stepped-up, but then later returned to a lower authentication requirement. All of these examples are merely illustrative. Upon the initial authentication request, and/or whenever the engine determines that the authentication requirement should change, the engine receives the session, circle and user identifiers (and any associated PKC values), and makes the determination(s) about the nature and type of authentication requirement(s) that are then enforced (or continued to be enforced) by the system for the on-going session. To this end, the engine computes a set of trust indexes (also referred to herein as scores), typically one trust index per authentication factor, and uses this scoring data to determine if, when and to what degree the authentication requirement should be adjusted to maintain the continuous trust.
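
The following sketch illustrates one non-limiting way an engine could fold per-factor trust indexes into a step-up, step-down, or no-change decision during a live session. The thresholds, weights, and factor names are invented for the example; the disclosure does not prescribe any particular scoring function.

```python
# Illustrative scoring sketch only; thresholds and weights are invented.
def trust_indexes(factors: dict) -> dict:
    """factors maps factor name -> normalized observation in [0, 1]."""
    return {name: max(0.0, min(1.0, value)) for name, value in factors.items()}


def decide_adjustment(indexes: dict, weights: dict,
                      step_up_below: float = 0.4,
                      step_down_above: float = 0.85) -> str:
    # Weighted average of the per-factor trust indexes.
    total_weight = sum(weights.get(name, 1.0) for name in indexes)
    score = sum(indexes[name] * weights.get(name, 1.0)
                for name in indexes) / total_weight
    if score < step_up_below:
        return "STEP_UP"      # e.g. demand an additional active factor
    if score > step_down_above:
        return "STEP_DOWN"    # e.g. relax back to the baseline requirement
    return "NO_CHANGE"


# Example: a mid-session check combining one ambient and one active factor.
decision = decide_adjustment(
    trust_indexes({"device_posture": 0.9, "behavioral": 0.3}),
    weights={"device_posture": 1.0, "behavioral": 2.0})
print(decision)   # NO_CHANGE for these sample values
```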


As noted above, in one embodiment a tunnel established between an end user and a service may have an associated unique public key (pkey). FIG. 1 depicts the public key being used. In addition to its use for encryption and decryption, this key may be leveraged to validate user identity. For example, assume that during a particular interaction or session the system determines or reaches a “lower confidence” in the user's identity. At this point, an agent that has access to the pkey can trigger processing of an ambient (or even an active) authentication factor, thus enabling the system to associate the identity with the pkey. (As a variant, some other local key may be tied to the user's identity, or a key exchange may be used). The service daemon then uses the pkey both to decrypt the traffic and to validate that the traffic indeed originated from the identity. How the pkey is transmitted to the daemon is implementation-specific.
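
For the “other local key” variant mentioned above, one non-limiting way to bind identity to the tunnel is for the agent to hold a separate signing key, enrolled at registration, and to sign fresh factor evidence together with the tunnel pkey; the verifier can then check origin as well as decrypt. The sketch below is purely illustrative and is not taken from the disclosure beyond the variant it describes.

```python
# Illustrative identity-binding variant using a separate agent-held signing key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

identity_key = Ed25519PrivateKey.generate()   # enrolled at registration time
identity_pub = identity_key.public_key()      # known to the core network


def attest(tunnel_pkey: bytes, factor_digest: bytes) -> bytes:
    # Bind the ambient/active factor evidence to this specific tunnel endpoint.
    return identity_key.sign(tunnel_pkey + factor_digest)


def verify(tunnel_pkey: bytes, factor_digest: bytes, signature: bytes) -> bool:
    try:
        identity_pub.verify(signature, tunnel_pkey + factor_digest)
        return True
    except InvalidSignature:
        return False
```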



FIG. 2 depicts a typical session among the user and the service provider, with traffic flows through the core. As used herein, a session typically involves one or more client requests directed to the service provider service 202, and one or more responses (from the service) directed to the client. The UML diagram depicted shows a single request, and an associated response. There is no requirement for a one-to-one correspondence between requests and responses. In this example, the request originates from application 201 and is intercepted by agent 204 operating on the device's loopback address in the usual case. As depicted, an encrypted tunnel 226 is or has been established between the agent 204 on the client side, and the daemon 206 on the server side. The tunnel is associated with a public key (pkey) uniquely associated with the virtual network interface of the tunnel for the agent-daemon pair. Typically, there is a distinct tunnel per agent-daemon pair. At step (1), the agent 204 makes a request to the authentication module 210, passing a message comprising {user/pass/src/pkey}, where “user” is an identifier associated with the user, “pass” is the user's password or other secret, “src” is a source identifier for the client entity (e.g., an IP address), and “pkey” is the tunnel public key. The authentication factors runtime module 210 checks the information against the key-value store (in FIG. 1) and, if the authentication is accepted, forwards a message to the engine 212 at step (2) that comprises {id/session/src/pkey}, wherein “id” refers to the CUT Identifier, and “session” refers to the session. At step (3), the authentication module 210 notifies the agent 204 that the session (here, with the core network) is established. In response, and at step (4), the agent 204 encrypts the original request and forwards the encrypted request, together with a source identifier and a destination identifier associated with the service provider, to the relay 214. In response, and at step (5), the relay 214 provides the engine 212 a message comprising {src/dest/len/timestamp}, where “len” is the length in bytes of the encrypted request and associated metadata, and “timestamp” corresponds to a time at which the relay received that encrypted request. At step (6), the relay 214 forwards the encrypted request package (including the “src” and “dest” metadata) to the daemon 206. At step (7), the daemon responds by sending the engine 212 a message comprising {src/pkey/dest/timestamp/request body hash}, where “request body hash” is a hash value that the daemon has computed over the encrypted request. The request body hash typically is computed using a hash function such as MD5, SHA-1, or the like. As depicted in step (8), the {id/session} message is periodically called and cached on the daemon 206 as a keep-alive indication for the session.
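
To make the per-step fields easier to track, the following sketch renders the FIG. 2 messages as simple records. The field names mirror the brace notation above; the concrete types and the use of SHA-256 for the request body hash are assumptions of the sketch (the text notes MD5, SHA-1, or the like).

```python
# Hypothetical record types mirroring the FIG. 2 message contents.
import hashlib
import time
from dataclasses import dataclass


@dataclass
class AuthRequest:          # step (1): agent -> authentication module
    user: str
    password: str
    src: str                # e.g. client IP address
    pkey: bytes             # tunnel public key


@dataclass
class SessionNotice:        # step (2): authentication module -> engine
    cut_id: str
    session: str
    src: str
    pkey: bytes


@dataclass
class RelayRecord:          # step (5): relay -> engine
    src: str
    dest: str
    length: int             # bytes of encrypted request plus metadata
    timestamp: float        # time the relay received the encrypted request


@dataclass
class DaemonReceipt:        # step (7): daemon -> engine
    src: str
    pkey: bytes
    dest: str
    timestamp: float
    request_body_hash: str


def receipt_for(encrypted_request: bytes, src: str, pkey: bytes,
                dest: str) -> DaemonReceipt:
    # The daemon hashes the encrypted request it received and reports it.
    return DaemonReceipt(src, pkey, dest, time.time(),
                         hashlib.sha256(encrypted_request).hexdigest())
```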


To complete the request-response flow, at step (9) the daemon 206, which has applied the public key and recovered the request (from the encrypted request), forwards that request to the service 202 in a message comprising {src/dest/request/id/session}. At step (10), the service 202 responds to the daemon 206 with the response to the request. The nature of the response of course will depend on the request. The daemon then re-applies the encryption to the service response using the key to generate an encrypted response. At step (11), the encrypted response is shown as being returned through the core network to the agent 204, which in turn removes the encryption and passes the response (in the clear) back to the application 201. This completes the request-response flow.


As also depicted in FIG. 2, and during the request-response flow (which as noted above may involve one or more requests, and one or more responses), the engine 212 performs the continuous trust analysis as indicated by the analysis operation at step (12). This is a continuous operation that, as noted above, takes into consideration the policy (circle), the authentication factors (ambient, active, or combinations), the nature of a given request, the nature of a given response, other system, application or device metadata, and so forth, and that results in one or more control signals that maintain or modify the initial or continuing authentication requirement(s) for the given session, to thereby facilitate the continuous trust. One such control signal is depicted at step (13), as a BLOCK session signal provided by the engine 212 to the daemon 206. This results (in this example) in a termination of the session. As noted above, blocking the on-going session from continuing is just one of many possible authentication control options, which include: no change, an authentication step-up, an authentication step-down, and combinations thereof.


As noted, the above-described architecture is configured for multi-tenancy, and it is highly scalable to handle large numbers of requests from large numbers of requesting entities. As the system scales out, a continuous universal trust solution is then provided. Because preferably all traffic flows through the core network, and in addition to providing the per-session continuous authentication, the system can readily terminate sessions in the event of a cyber or other attack, and it can readily enforce circle policy compliance. Although it is described that all traffic flows occur through the core network, this may not always be possible, e.g., some video chat applications require peer-to-peer connectivity; in these circumstances, the system may whitelist certain such applications while enforcing that the remaining traffic to and from a registered end user client, and to and from a registered service provider, otherwise flows through the network.


As noted above, it is not required that the system be implemented with an “agent” or a “daemon,” as the functionality of these components may be built into or otherwise integrated with the client or server, as the case may be. An “agent” that allows the system to collect and implement at least the ambient authentication factors greatly improves user experience, as this obviates one or more logins to the participating services. Also, the use of a dedicated agent that hooks into and intercepts potentially the entire traffic profile to and from the end user client enables the system to carry out more detailed behavioral analysis that is then leveraged to provide the continuous authentication. Telemetry data is readily captured as it flows through the infrastructure, all without requiring explicit SDK calls from the service provider.


Enabling Technologies


Typically, the computing platform or infrastructure that provides the core network is managed and operated “as-a-service” by a service provider entity. In one embodiment, the platform is accessible over the publicly-routed Internet at a particular domain, or sub-domain. The platform is a securely-connected infrastructure (typically via SSL/TLS connections), and that infrastructure includes data encrypted at rest, e.g., in an encrypted database, and in transit. The computing platform typically comprises a set of applications implemented as network-accessible services. One or more applications (services) may be combined with one another. An application (service) may be implemented using a set of computing resources that are co-located or themselves distributed. Typically, an application is implemented using one or more computing systems. The computing platform (or portions thereof) may be implemented in a dedicated environment, in an on-premises manner, as a cloud-based architecture, or some hybrid.


The system may be implemented on-premises (e.g., in an enterprise network), in a cloud computing environment, or in a hybrid infrastructure. An individual end user or an autonomous application typically accesses the system using a user application executing on a computing device (e.g., mobile phone, tablet, laptop or desktop computer, Internet-connected appliance, etc.). In a typical use case, a user application is a mobile application (app) that a user obtains from a publicly-available source, such as a mobile application storefront. The platform may be managed and operated by a service provider. Although typically the platform is network-accessible, e.g., via the publicly-routed Internet, the computing system may be implemented in a standalone or on-premises manner. In addition, one or more of the identified components may interoperate with some other enterprise computing system or application.


Authentication step-up or step-down determinations made by the engine may leverage machine learning. Thus, in a variant embodiment, the core network includes a machine learning system. The nature and type of Machine Learning (ML) algorithms that are used for determining whether changes to the continuous authentication should be implemented may vary. As is known, ML algorithms iteratively learn from the data, thus allowing the system to find hidden insights without being explicitly programmed where to look. ML tasks are typically classified into various categories depending on the nature of the learning signal or feedback available to a learning system, namely supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the algorithm trains on labeled historic data and learns general rules that map input to output/target. The discovery of relationships between the input variables and the label/target variable in supervised learning is done with a training set, and the system learns from the training data. In this approach, a test set is used to evaluate whether the discovered relationships hold and the strength and utility of the predictive relationship is assessed by feeding the model with the input variables of the test data and comparing the label predicted by the model with the actual label of the data. The most widely used supervised learning algorithms are Support Vector Machines, linear regression, logistic regression, naive Bayes, and neural networks.
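
As a toy illustration of the supervised approach just described, the following sketch trains a classifier on labeled session features to predict whether an authentication step-up is warranted. The features, labels, and sample values are fabricated for the example and are not drawn from the disclosure; a real model would be trained on the telemetry described earlier.

```python
# Fabricated example: supervised classifier for step-up decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Columns: [trust_score, anomalous_requests, new_location (0/1)]
X = np.array([[0.9, 0, 0], [0.8, 1, 0], [0.4, 3, 1], [0.2, 5, 1],
              [0.7, 0, 1], [0.3, 4, 0], [0.95, 0, 0], [0.1, 6, 1]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = historical step-up decision

# Train on labeled history, hold out a test set to check the learned rule.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# At run time the engine would feed current session features into the model.
print("step-up?", bool(model.predict([[0.35, 4, 1]])[0]))
```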


One or more functions of the computing platform of this disclosure may be implemented in a cloud-based architecture. As is well-known, cloud computing is a model of service delivery for enabling on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. Available services models that may be leveraged in whole or in part include: Software as a Service (SaaS) (the provider's applications running on cloud infrastructure); Platform as a service (PaaS) (the customer deploys applications that may be created using provider tools onto the cloud infrastructure); Infrastructure as a Service (IaaS) (customer provisions its own processing, storage, networks and other computing resources and can deploy and run operating systems and applications).


The platform may comprise co-located hardware and software resources, or resources that are physically, logically, virtually and/or geographically distinct. Communication networks used to communicate to and from the platform services may be packet-based, non-packet based, and secure or non-secure, or some combination thereof.


More generally, the techniques described herein are provided using a set of one or more computing-related entities (systems, machines, processes, programs, libraries, functions, or the like) that together facilitate or provide the functionality described above. In a typical implementation, a representative machine on which the software executes comprises commodity hardware, an operating system, an application runtime environment, and a set of applications or processes and associated data, which provide the functionality of a given system or subsystem. As described, the functionality may be implemented in a standalone machine, or across a distributed set of machines.


Typically, but without limitation, a client device is a mobile device, such as a smartphone, tablet, or wearable computing device, laptop or desktop. A typical mobile device comprises a CPU (central processing unit), computer memory, such as RAM, and a drive. The device software includes an operating system (e.g., Google® Android™, or the like), and generic support applications and utilities. The device may also include a graphics processing unit (GPU). The mobile device also includes a touch-sensing device or interface configured to receive input from a user's touch and to send this information to the processor. The touch-sensing device typically is a touch screen. The mobile device comprises suitable programming to facilitate gesture-based control, in a manner that is known in the art.


Generalizing, the mobile device is any wireless client device, e.g., a cellphone, pager, a personal digital assistant (PDA, e.g., with GPRS NIC), a mobile computer with a smartphone client, network-accessible Internet of Things (IOT) device, or the like. Other mobile devices in which the technique may be practiced include any access protocol-enabled device (e.g., an Android™-based device, or the like) that is capable of sending and receiving data in a wireless manner using a wireless protocol. Typical wireless protocols are: WiFi, GSM/GPRS, CDMA or WiMax. These protocols implement the ISO/OSI Physical and Data Link layers (Layers 1 & 2) upon which a traditional networking stack is built, complete with IP, TCP, SSL/TLS and HTTP.


Each above-described process preferably is implemented in computer software as a set of program instructions executable in one or more processors, as a special-purpose machine.


While the above describes a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary, as alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, or the like. References in the specification to a given embodiment indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic.


While the disclosed subject matter has been described in the context of a method or process, the subject matter also relates to apparatus for performing the operations herein. This apparatus may be a particular machine that is specially constructed for the required purposes, or it may comprise a computer otherwise selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including an optical disk, a CD-ROM, and a magneto-optical disk, a read-only memory (ROM), a random access memory (RAM), a magnetic or optical card, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.


A given implementation of the computing platform is software that executes on a hardware platform running an operating system such as Linux. A machine implementing the techniques herein comprises a hardware processor, and non-transitory computer memory holding computer program instructions that are executed by the processor to perform the above-described methods.


The functionality may be implemented with other application layer protocols besides HTTP/HTTPS, or any other protocol having similar operating characteristics.


There is no limitation on the type of computing entity that may implement the client-side or server-side of the connection. Any computing entity (system, machine, device, program, process, utility, or the like) may act as the client or the server.


While given components of the system have been described separately, one of ordinary skill will appreciate that some of the functions may be combined or shared in given instructions, program sequences, code portions, and the like. Any application or functionality described herein may be implemented as native code, by providing hooks into another application, by facilitating use of the mechanism as a plug-in, by linking to the mechanism, and the like.


The platform functionality may be co-located, or various parts/components may be separated and run as distinct functions, perhaps in one or more locations (over a distributed network).


What is claimed follows below.

Claims
  • 1. A method of continuous and universal trust in a network operating environment, comprising: configuring a secure communication path between each of a set of first entities, and a second entity, wherein each of the set of first entities and the second entity are each authenticated independently and separately from one another and all traffic and data flows between a first entity and the second entity pass through the secure communication path; during a communication session originating by each first entity and directed to the second entity: determining whether an authentication requirement associated with a permitted access to a protected resource associated with the second entity has been met, wherein the authentication requirement is specified in a security policy associated with a trust circle, and wherein the set of first entities comprise a trust circle; upon a determination that the authentication requirement has been met, permitting the first entity to access the protected resource associated with the second entity via the secure communication path; based on one or more authentication factors and other data, determining whether the authentication requirement should be modified; upon a determination that the authentication factor should be modified, adjusting the authentication requirement; and enforcing the adjusted authentication requirement for at least some remaining portion of the communication session; wherein for, at least first and second first entities, authentication requirements are dynamically adjusted during the communication sessions with the second entity such that the first and second first entities have differing authentication requirements in their respective communication sessions.
  • 2. The method as described in claim 1, wherein the trust circle is associated with a security policy.
  • 3. The method as described in claim 1, further including registering each first entity of the set of first entities in the trust circle.
  • 4. The method as described in claim 1, wherein the authentication factors include one or more ambient authentication factors, one or more active authentication factors, and combinations thereof.
  • 5. The method as described in claim 1, wherein adjusting the authentication requirement is one of: maintaining the authentication requirement, an authentication step-up, and an authentication step-down.
  • 6. The method as described in claim 1, provided as-a-service.
  • 7. The method as described in claim 1, wherein the other data comprises telemetry collected in association with the first entity.
  • 8. The method as described in claim 1, wherein the traffic and data flows include a request for the protected resource, and a response to the request.
  • 9. The method as described in claim 1, wherein the request and the response are encrypted end-to-end through the secure communication path.
  • 10. A computing infrastructure for continuous and universal trust in a network operating environment, comprising: at least first and second first entities; a second entity distinct from the at least first and second first entities; and a core network configured to establish and maintain a secure communication path between the first or second first entity and the second entity, each of the first and second first entities and the second entity authenticated to the core network independently and separately from one another; the core network comprising a relay through which all traffic and data flows between the first or second first entity and the second entity pass, and a controller, the controller configured for each respective first or second first entity to: determine whether an authentication requirement associated with a permitted access to a protected resource associated with the second entity has been met, wherein the authentication requirement is specified in a security policy; upon a determination that the authentication requirement has been met, permitting the respective first or second first entity to access the protected resource associated with the second entity via the secure communication path; based on one or more authentication factors and other data, determining whether the authentication requirement should be modified; upon a determination that the authentication factor should be modified, adjusting the authentication requirement; and enforcing the adjusted authentication requirement for at least some remaining portion of the communication session; wherein authentication requirements are dynamically adjusted during the communication sessions with the second entity such that the first and second first entities have differing authentication requirements in their respective communication sessions.
  • 11. The computing infrastructure as described in claim 10, wherein the first and second first entities and the second entity are organized in a trust circle associated with the security policy.
  • 12. The computing infrastructure as described in claim 10, wherein the core network is configured as an overlay network in association with the network operating environment.
  • 13. The computing infrastructure as described in claim 10, wherein the core network is configured as a standalone network in association with the network operating environment.
  • 14. The computing infrastructure as described in claim 10, wherein authentication requirements applied to the first first entity are different from authentication requirements applied to the second first entity.
  • 15. The computing infrastructure as described in claim 10, wherein an authentication requirement is associated with one of: an ambient authentication factor, an active authentication factor, and a combination of an ambient authentication factor and an active authentication factor.
US Referenced Citations (185)
Number Name Date Kind
7788700 Feezel et al. Aug 2010 B1
7788708 Doane Aug 2010 B2
7836510 Angal Nov 2010 B1
7891001 Greenawalt et al. Feb 2011 B1
8832848 Banerjee Sep 2014 B1
8925053 Mehta Dec 2014 B1
9077715 Satish Jul 2015 B1
9213855 Corrales Dec 2015 B1
9608964 Stutch Mar 2017 B2
10110569 Coelho Oct 2018 B1
10164982 Lazarovitz Dec 2018 B1
10673862 Threlkeld Jun 2020 B1
10693914 Sander Jun 2020 B2
10701044 Sander Jun 2020 B2
10771572 Gupta Sep 2020 B1
10951606 Shahidzadeh Mar 2021 B1
10963842 Resheff Mar 2021 B1
11095662 Bhatnagar Aug 2021 B2
11412003 Lyon Aug 2022 B1
11483316 Rajiyah Oct 2022 B1
11539533 Hamel Dec 2022 B1
11743256 Chen Aug 2023 B1
11838762 Shahidzadeh Dec 2023 B1
11886607 De Mastri Jan 2024 B1
11936671 Meller et al. Mar 2024 B1
11997219 Russell May 2024 B1
12130941 Burckard Oct 2024 B2
12170669 Friend Dec 2024 B2
20040111398 England Jun 2004 A1
20050021964 Bhatnagar Jan 2005 A1
20070136786 Le Van Gong Jun 2007 A1
20070143128 Tokarev Jun 2007 A1
20070180078 Murphy Aug 2007 A1
20070297405 He Dec 2007 A1
20080083021 Doane Apr 2008 A1
20080092223 Setia Apr 2008 A1
20080130522 Pennington Jun 2008 A1
20080228775 Abhyanker Sep 2008 A1
20080301779 Garg Dec 2008 A1
20090094041 Buss Apr 2009 A1
20100071056 Cheng Mar 2010 A1
20100107219 Thompson Apr 2010 A1
20100287600 King et al. Nov 2010 A1
20110072274 Leoutsarakos Mar 2011 A1
20120042160 Nakhjiri Feb 2012 A1
20120209970 Scipioni et al. Aug 2012 A1
20120232945 Tong Sep 2012 A1
20120302212 Ross Nov 2012 A1
20130086669 Sondhi Apr 2013 A1
20130091582 Chen Apr 2013 A1
20130185772 Jaudon Jul 2013 A1
20130191903 Jaudon Jul 2013 A1
20130195026 Johnsson Aug 2013 A1
20130205373 Jaudon Aug 2013 A1
20130232543 Cheng Sep 2013 A1
20130252585 Moshir Sep 2013 A1
20130311582 Thai Nov 2013 A1
20130332257 Scheinost Dec 2013 A1
20140136349 Dave May 2014 A1
20140143826 Sharp May 2014 A1
20140164305 Lynch Jun 2014 A1
20140282942 Berkman Sep 2014 A1
20140289528 Baghdasaryan Sep 2014 A1
20140359789 Pitt Dec 2014 A1
20150072654 Moshir Mar 2015 A1
20150082396 Theebaprakasam Mar 2015 A1
20150207813 Reybok Jul 2015 A1
20150227727 Grigg Aug 2015 A1
20150227728 Grigg Aug 2015 A1
20160013966 Vaidyanathan Jan 2016 A1
20160028737 Srinivasan Jan 2016 A1
20160055324 Agarwal Feb 2016 A1
20160135020 Moshir May 2016 A1
20160156609 Grigg Jun 2016 A1
20160373352 Sharma Dec 2016 A1
20160373932 Yang Dec 2016 A1
20170093829 Gitlin Mar 2017 A1
20170093899 Horesh Mar 2017 A1
20170111364 Rawat Apr 2017 A1
20170124506 Khan May 2017 A1
20170171231 Reybok, Jr. Jun 2017 A1
20170331826 Rodniansky Nov 2017 A1
20170346823 Wadley Nov 2017 A1
20180012433 Ricci Jan 2018 A1
20180077745 Hase Mar 2018 A1
20180121646 Sakanashi May 2018 A1
20180129797 Rush May 2018 A1
20180131692 Katz-Oz May 2018 A1
20180150890 Gupta May 2018 A1
20180191501 Lindemann Jul 2018 A1
20180278600 Eleish Sep 2018 A1
20180309570 Amini et al. Oct 2018 A1
20180324197 Zettel, II Nov 2018 A1
20180324207 Reybok, Jr. Nov 2018 A1
20180332017 Childress Nov 2018 A1
20180332065 Gupta Nov 2018 A1
20180337911 Coggin Nov 2018 A1
20190013948 Mercuri Jan 2019 A1
20190058702 Kurian Feb 2019 A1
20190058992 Kurian Feb 2019 A1
20190109839 Reston Apr 2019 A1
20190199813 Mohammed Jun 2019 A1
20190251765 Masters Aug 2019 A1
20190272361 Kursun Sep 2019 A1
20190281465 Moshir Sep 2019 A1
20190311791 St. Paul Oct 2019 A1
20190312871 Syomichev Oct 2019 A1
20190325154 Divakaran Oct 2019 A1
20190335002 Bogineni Oct 2019 A1
20190342298 Chen Nov 2019 A1
20190349469 Skogen Nov 2019 A1
20190378078 Áristeía-Æsír Dec 2019 A1
20200004520 Storm Jan 2020 A1
20200007530 Mohamad Abdul Jan 2020 A1
20200021591 Hecht Jan 2020 A1
20200052882 Tummalapenta Feb 2020 A1
20200074333 Bandyopadhyay Mar 2020 A1
20200117442 Robertson Apr 2020 A1
20200145425 Chauhan et al. May 2020 A1
20200196378 Cao Jun 2020 A1
20200322169 Michaud Oct 2020 A1
20200322321 Lynn Oct 2020 A1
20200322330 Lynn et al. Oct 2020 A1
20200382323 Keselman Dec 2020 A1
20200395107 Griffin Dec 2020 A1
20200412541 Yang Dec 2020 A1
20210004454 Chester Jan 2021 A1
20210036865 Young Feb 2021 A1
20210124817 Todd Apr 2021 A1
20210160261 Kwatra May 2021 A1
20210173910 Kursun Jun 2021 A1
20210218732 Lakshminarayana Jul 2021 A1
20210226944 Michaud et al. Jul 2021 A1
20210306456 Kanagala Sep 2021 A1
20210374709 Suresh Dec 2021 A1
20210377056 Grainger Dec 2021 A1
20210385202 Halford Dec 2021 A1
20210392048 Olden Dec 2021 A1
20220012256 Robertson Jan 2022 A1
20220014359 Spencer Jan 2022 A1
20220067669 Griffin Mar 2022 A1
20220075695 Khan Mar 2022 A1
20220078194 Grover Mar 2022 A1
20220085984 Khandani Mar 2022 A1
20220094722 Lutton Mar 2022 A1
20220095116 Abdollahian Mar 2022 A1
20220103568 Suenaga Mar 2022 A1
20220138692 Abraham May 2022 A1
20220150313 Borg May 2022 A1
20220166883 Bohannon May 2022 A1
20220210173 Katmor et al. Jun 2022 A1
20220245222 Boyd Aug 2022 A1
20220337631 Bhattacharya et al. Oct 2022 A1
20220360607 Amiga Nov 2022 A1
20220398319 Cho Dec 2022 A1
20220406304 Marzorati Dec 2022 A1
20230054446 LaFever Feb 2023 A1
20230170956 Elshafie Jun 2023 A1
20230179605 Ali Jun 2023 A1
20230222047 Simonetto Jul 2023 A1
20230237349 Donoho et al. Jul 2023 A1
20230244775 Alexander Aug 2023 A1
20230247003 Chanak et al. Aug 2023 A1
20230251936 Chang Aug 2023 A1
20230300930 Suto Sep 2023 A1
20230306428 Ratnakaram Sep 2023 A1
20230336663 Lekas Oct 2023 A1
20230379319 Saini Nov 2023 A1
20230379350 Barton et al. Nov 2023 A1
20230384996 Koujiro Nov 2023 A1
20240040383 Saini et al. Feb 2024 A1
20240073251 Nowak Feb 2024 A1
20240098097 Hinchliffe Mar 2024 A1
20240098577 Purkayastha Mar 2024 A1
20240152857 Mass May 2024 A1
20240243904 Rosonina Jul 2024 A1
20240244068 Crabtree Jul 2024 A1
20240273066 Neystadt Aug 2024 A1
20240281796 Finlow-Bates Aug 2024 A1
20240314551 Ping Sep 2024 A1
20240323034 Kumar Sep 2024 A1
20240364730 Jones Oct 2024 A1
20240378423 Gunnai Nov 2024 A1
20240380575 Movva Nov 2024 A1
20240414061 Palavalli Dec 2024 A1
Foreign Referenced Citations (1)
Number Date Country
WO-2018069566 Apr 2018 WO