The present disclosure generally relates to computer networking systems and methods. More particularly, the present disclosure relates to systems and methods for enforcing policy based on assigned user risk scores in a cloud-based system.
The traditional view of an enterprise network (i.e., corporate, private, etc.) included a well-defined perimeter defended by various appliances (e.g., firewalls, intrusion prevention, advanced threat detection, etc.). In this traditional view, mobile users utilized a Virtual Private Network (VPN), etc., and had their traffic backhauled into the well-defined perimeter. This worked when mobile users represented a small fraction of the users, i.e., most users were within the well-defined perimeter. However, this is no longer the case—the definition of the workplace is no longer confined to within the well-defined perimeter, and with applications moving to the cloud, the perimeter has extended to the Internet. This results in an increased risk for the enterprise data residing on unsecured and unmanaged devices as well as the security risks in access to the Internet. Cloud-based security solutions have emerged, such as Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA), available from Zscaler, Inc., the applicant and assignee of the present application.
The present disclosure relates to systems and methods for enforcing policy based on assigned user risk scores in a cloud-based system. In an embodiment, steps include receiving a request from a device to access a resource; determining whether a user associated with the request is allowed to access the resource, wherein the determining is based on a risk score of the user; and responsive to the user being permitted to access the resource, stitching together a connection between a cloud-based system, the resource, and the device to provide access to the resource.
The steps can further include receiving the risk score from a security system associated with the cloud-based system; storing the risk score in a user database; and retrieving the risk score from the user database prior to the determining. The determining can be based on any of an original risk score and an override risk score. The original risk score is the score received from security software, such as ZIA, for the user's risk level, while the override risk score is a score that overrides the original score via an admin UI. This may be needed if the security software erroneously determines (i.e., a false positive) a high risk for a user; an administrator can then override the score so that the user is not blocked from accessing the resource. In various embodiments, the override score always takes precedence over the original score. The steps can include receiving the override risk score from an admin User Interface (UI) prior to the determining. The steps can include receiving a policy configuration from an admin User Interface (UI) prior to the determining, and determining whether the user is allowed to access the resource based on the policy and the risk score. The stitching together of the connections can include the device creating a connection to the cloud-based system and a connector associated with the resource creating a connection to the cloud-based system, to enable the device and the resource to communicate. The steps can include determining, based on the risk score, that the user is not allowed to access the resource; and notifying the user that the resource does not exist. The steps can include identifying the user as belonging to one of a plurality of risk levels, wherein the risk levels include any of low, medium, high, and critical based on the risk score; and one of allowing or blocking the user from accessing the resource based on the user's risk level.
The resource can be located in one of a public cloud, a private cloud, and an enterprise network, and wherein the request originates from a device that is remote over the Internet.
The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate, and in which:
Zscaler Private Access (ZPA) is a cloud service that provides seamless, zero trust access to private applications running on the public cloud, within the data center, within an enterprise network, etc. As described herein, ZPA is referred to as zero trust access to private applications or simply a zero trust access service. Here, applications are never exposed to the Internet, making them completely invisible to unauthorized users. The service enables the applications to connect to users via inside-out connectivity versus extending the network to them. Users are never placed on the network. This Zero Trust Network Access (ZTNA) approach supports both managed and unmanaged devices and any private application (not just web apps).
This Zero Trust Network Access (ZTNA) approach provides significant security by avoiding direct exposure of applications to the Internet. Rather, this ZTNA approach dials out from a connector. However, enterprise applications contain critical resources, and it is critical that any device accessing such applications, even through a ZTNA approach, is monitored.
The paradigm of the virtual private access systems and methods is to give users access to a specific application, not to the entire network. If a user is not authorized to access the application, the user should not even be able to see that it exists, much less access it. The virtual private access systems and methods provide a new approach to deliver secure access by decoupling applications from the network, instead providing access via a lightweight software connector in front of the applications, an application on the user device, a central authority to push policy, and a cloud to stitch the applications and the software connectors together, on a per-user, per-application basis.
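The stitching described above can be sketched as a broker that pairs two outbound connections. The class and method names here are hypothetical illustrations, not Zscaler components.

```python
# Illustrative sketch of "stitching": the user device and the application's
# connector each dial OUT to the cloud, which pairs the two connections
# per user and per application. The application is never exposed inbound.
class CloudBroker:
    def __init__(self):
        self.connector_links = {}   # application name -> connector-side link
        self.sessions = []          # stitched (user, app, link) sessions

    def register_connector(self, app, link):
        # The connector in front of the application dials out to the cloud.
        self.connector_links[app] = link

    def request_access(self, user, app, allowed):
        # Policy pushed from a central authority decides per-user,
        # per-application access.
        if not allowed(user, app) or app not in self.connector_links:
            return None  # the app stays "dark" to unauthorized users
        session = (user, app, self.connector_links[app])
        self.sessions.append(session)
        return session
```

An unauthorized user receives the same answer as for a nonexistent application, which is what makes unauthorized applications effectively invisible.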
With the virtual private access, users can only see the specific applications allowed by policy. Everything else is “invisible” or “dark” to them. Because the virtual private access separates the application from the network, the physical location of the application becomes irrelevant; if applications are located in more than one place, the user is automatically directed to the instance that will give them the best performance. The virtual private access also dramatically reduces configuration complexity, such as policies/firewalls in the data centers. Enterprises can, for example, move applications to Amazon Web Services or Microsoft Azure, and take advantage of the elasticity of the cloud, making private, internal applications behave just like the market-leading enterprise applications. Advantageously, there is no hardware to buy or deploy because the virtual private access is a service offering to users and enterprises.
The cloud-based firewall can provide Deep Packet Inspection (DPI) and access controls across various ports and protocols as well as being application and user aware. The URL filtering can block, allow, or limit website access based on policy for a user, group of users, or entire organization, including specific destinations or categories of URLs (e.g., gambling, social media, etc.). The bandwidth control can enforce bandwidth policies and prioritize critical applications relative to recreational traffic. DNS filtering can control and block DNS requests against known and malicious destinations.
The cloud-based intrusion prevention and advanced threat protection can deliver full threat protection against malicious content such as browser exploits, scripts, identified botnets and malware callbacks, etc. The cloud-based sandbox can block zero-day exploits (just identified) by analyzing unknown files for malicious behavior. Advantageously, the cloud-based system 100 is multi-tenant and can service a large volume of the users 102. As such, newly discovered threats can be promulgated throughout the cloud-based system 100 for all tenants practically instantaneously. The antivirus protection can include antivirus, antispyware, antimalware, etc. protection for the users 102, using signatures sourced and constantly updated. The DNS security can identify and route command-and-control connections to threat detection engines for full content inspection.
The DLP can use standard and/or custom dictionaries to continuously monitor the users 102, including compressed and/or SSL-encrypted traffic. Again, being in a cloud implementation, the cloud-based system 100 can scale this monitoring with near-zero latency on the users 102. The cloud application security can include CASB functionality to discover and control user access to known and unknown cloud services 106. The file type controls enable true file type control by the user, location, destination, etc. to determine which files are allowed or not.
For illustration purposes, the users 102 of the cloud-based system 100 can include a mobile device 110, a headquarters (HQ) 112 which can include or connect to a data center (DC) 114, Internet of Things (IoT) devices 116, a branch office/remote location 118, etc., and each includes one or more user devices (an example user device 300 is illustrated in
Further, the cloud-based system 100 can be multi-tenant, with each tenant having its own users 102 and configuration, policy, rules, etc. One advantage of the multi-tenancy and a large volume of users is the zero-day/zero-hour protection in that a new vulnerability can be detected and then instantly remediated across the entire cloud-based system 100. The same applies to policy, rule, configuration, etc. changes; they are instantly applied across the entire cloud-based system 100. As well, new features in the cloud-based system 100 can also be rolled out simultaneously across the user base, as opposed to selective and time-consuming upgrades on every device at the locations 112, 114, 118, and the devices 110, 116.
Logically, the cloud-based system 100 can be viewed as an overlay network between users (at the locations 112, 114, 118, and the devices 110, 116) and the Internet 104 and the cloud services 106. Previously, the IT deployment model included enterprise resources and applications stored within the data center 114 (i.e., physical devices) behind a firewall (perimeter), accessible by employees, partners, contractors, etc. on-site or remote via Virtual Private Networks (VPNs), etc. The cloud-based system 100 is replacing the conventional deployment model. The cloud-based system 100 can be used to implement these services in the cloud without requiring the physical devices and management thereof by enterprise IT administrators. As an ever-present overlay network, the cloud-based system 100 can provide the same functions as the physical devices and/or appliances regardless of geography or location of the users 102, as well as independent of platform, operating system, network access technique, network access provider, etc.
There are various techniques to forward traffic between the users 102 at the locations 112, 114, 118, and via the devices 110, 116, and the cloud-based system 100. Typically, the locations 112, 114, 118 can use tunneling where all traffic is forwarded through the cloud-based system 100. For example, various tunneling protocols are contemplated, such as Generic Routing Encapsulation (GRE), Layer Two Tunneling Protocol (L2TP), Internet Protocol (IP) Security (IPsec), customized tunneling protocols, etc. The devices 110, 116, when not at one of the locations 112, 114, 118, can use a local application that forwards traffic, a proxy such as via a Proxy Auto-Config (PAC) file, and the like. An example of the local application is the application 350 described in detail herein as a connector application. A key aspect of the cloud-based system 100 is all traffic between the users 102 and the Internet 104 or the cloud services 106 is via the cloud-based system 100. As such, the cloud-based system 100 has visibility to enable various functions, all of which are performed off the user device in the cloud.
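The forwarding choices above can be summarized in a small selection sketch. This is purely illustrative; the function and return values are assumptions for this sketch, not actual configuration options.

```python
# Illustrative sketch of traffic-forwarding selection: fixed locations
# tunnel all traffic (GRE, L2TP, IPsec, etc.), while roaming devices use
# a local connector application or a PAC-configured proxy.
def forwarding_method(at_fixed_location: bool, has_connector_app: bool) -> str:
    if at_fixed_location:
        return "tunnel"            # e.g., GRE, L2TP, or IPsec to the cloud
    if has_connector_app:
        return "connector-app"     # local application forwards traffic
    return "pac-proxy"             # proxy settings via a PAC file
```

Whichever method is chosen, all traffic transits the cloud-based system, which is what gives it the visibility described above.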
The cloud-based system 100 can also include a management system 120 for tenant access to provide global policy and configuration as well as real-time analytics. This enables IT administrators to have a unified view of user activity, threat intelligence, application usage, etc. For example, IT administrators can drill-down to a per-user level to understand events and correlate threats, to identify compromised devices, to have application visibility, and the like. The cloud-based system 100 can further include connectivity to an Identity Provider (IDP) 122 for authentication of the users 102 and to a Security Information and Event Management (SIEM) system 124 for event logging. The system 124 can provide alert and activity logs on a per-user 102 basis.
Establishing a zero trust architecture requires visibility and control over the environment's users and traffic, including that which is encrypted; monitoring and verification of traffic between parts of the environment; and strong multifactor authentication (MFA) methods beyond passwords, such as biometrics or one-time codes. This is performed via the cloud-based system 100. Critically, in a zero trust architecture, a resource's network location is not the biggest factor in its security posture anymore. Instead of rigid network segmentation, your data, workflows, services, and such are protected by software-defined microsegmentation, enabling you to keep them secure anywhere, whether in your data center or in distributed hybrid and multicloud environments.
The core concept of zero trust is simple: assume everything is hostile by default. It is a major departure from the network security model built on the centralized data center and secure network perimeter. These network architectures rely on approved IP addresses, ports, and protocols to establish access controls and validate what's trusted inside the network, generally including anybody connecting via remote access VPN. In contrast, a zero trust approach treats all traffic, even if it is already inside the perimeter, as hostile. For example, workloads are blocked from communicating until they are validated by a set of attributes, such as a fingerprint or identity. Identity-based validation policies result in stronger security that travels with the workload wherever it communicates—in a public cloud, a hybrid environment, a container, or an on-premises network architecture.
Because protection is environment-agnostic, zero trust secures applications and services even if they communicate across network environments, requiring no architectural changes or policy updates. Zero trust securely connects users, devices, and applications using business policies over any network, enabling safe digital transformation. Zero trust is about more than user identity, segmentation, and secure access. It is a strategy upon which to build a cybersecurity ecosystem.
Terminate every connection: Technologies like firewalls use a “passthrough” approach, inspecting files as they are delivered. If a malicious file is detected, alerts are often too late. An effective zero trust solution terminates every connection to allow an inline proxy architecture to inspect all traffic, including encrypted traffic, in real time—before it reaches its destination—to prevent ransomware, malware, and more.
Protect data using granular context-based policies: Zero trust policies verify access requests and rights based on context, including user identity, device, location, type of content, and the application being requested. Policies are adaptive, so user access privileges are continually reassessed as context changes.
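The context-based verification above can be sketched as a policy check re-run on every request against the current context. The attribute and policy names are hypothetical, chosen only to mirror the factors listed above.

```python
# Illustrative sketch of context-based policy verification: access is
# re-evaluated whenever context (identity, device, location, application)
# changes, rather than granted once and kept.
def verify_request(context: dict, policy: dict) -> bool:
    """Allow only when every contextual attribute satisfies the policy."""
    checks = [
        context.get("user") in policy.get("allowed_users", set()),
        context.get("device_managed", False)
            or policy.get("allow_unmanaged", False),
        context.get("location") not in policy.get("blocked_locations", set()),
        context.get("app") in policy.get("allowed_apps", set()),
    ]
    return all(checks)
```

Because the check is stateless with respect to prior grants, a user who moves to a blocked location or switches to an unmanaged device loses access on the next request, which is the adaptive behavior described above.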
Reduce risk by eliminating the attack surface: With a zero trust approach, users connect directly to the apps and resources they need, never to networks (see ZTNA). Direct user-to-app and app-to-app connections eliminate the risk of lateral movement and prevent compromised devices from infecting other resources. Plus, users and apps are invisible to the internet, so they cannot be discovered or attacked.
Of note, the cloud-based system 100 is an external system meaning it is separate from tenant's private networks (enterprise networks) as well as from networks associated with the devices 110, 116, and locations 112, 118. Also, of note, the present disclosure describes a private node 150P that is both part of the cloud-based system 100 and part of a private network. Further, the term nodes as used herein with respect to the cloud-based system 100 (including enforcement nodes, service edge nodes, etc.) can be one or more servers, including physical servers, virtual machines (VM) executed on physical hardware, appliances, custom hardware, compute resources, clusters, etc., as described above, i.e., the nodes 150 contemplate any physical implementation of computer resources. In some embodiments, the nodes 150 can be Secure Web Gateways (SWGs), proxies, Secure Access Service Edge (SASE), etc.
The nodes 150 are full-featured secure internet gateways that provide integrated internet security. They inspect all web traffic bi-directionally for malware and enforce security, compliance, and firewall policies, as described herein, as well as various additional functionality. In an embodiment, each node 150 has two main modules for inspecting traffic and applying policies: a web module and a firewall module. The nodes 150 are deployed around the world and can handle hundreds of thousands of concurrent users with millions of concurrent sessions. Because of this, regardless of where the users 102 are, they can access the Internet 104 from any device, and the nodes 150 protect the traffic and apply corporate policies. The nodes 150 can implement various inspection engines therein, and optionally, send sandboxing to another system. The nodes 150 include significant fault tolerance capabilities, such as deployment in active-active mode to ensure availability and redundancy as well as continuous monitoring.
In an embodiment, customer traffic is not passed to any other component within the cloud-based system 100, and the nodes 150 can be configured never to store any data to disk. Packet data is held in memory for inspection and then, based on policy, is either forwarded or dropped. Log data generated for every transaction is compressed, tokenized, and exported over secure Transport Layer Security (TLS) connections to the log routers 154 that direct the logs to the storage cluster 156, hosted in the appropriate geographical region, for each organization. In an embodiment, all data destined for or received from the Internet is processed through one of the nodes 150. In another embodiment, specific data specified by each tenant, e.g., only email, only executable files, etc., is processed through one of the nodes 150.
Each of the nodes 150 may generate a decision vector D=[d1, d2, . . . , dn] for a content item of one or more parts C=[c1, c2, . . . , cm]. Each decision vector may identify a threat classification, e.g., clean, spyware, malware, undesirable content, innocuous, spam email, unknown, etc. For example, the output of each element of the decision vector D may be based on the output of one or more data inspection engines. In an embodiment, the threat classification may be reduced to a subset of categories, e.g., violating, non-violating, neutral, unknown. Based on the subset classification, the node 150 may allow the distribution of the content item, preclude distribution of the content item, allow distribution of the content item after a cleaning process, or perform threat detection on the content item. In an embodiment, the actions taken by one of the nodes 150 may be determined by the threat classification of the content item and by a security policy of the tenant to which the content item is being sent or from which the content item is being requested. A content item is violating if, for any part C=[c1, c2, . . . , cm] of the content item, at any of the nodes 150, any one of the data inspection engines generates an output that results in a classification of “violating.”
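The decision-vector logic above can be sketched as follows. The subset mapping and engine interface are illustrative assumptions; actual engines and categories may differ.

```python
# Illustrative sketch of the decision vector D: each inspection engine
# classifies each part of a content item, classifications are reduced to a
# subset, and the item is "violating" if ANY engine flags ANY part.
SUBSET = {
    "clean": "non-violating",
    "innocuous": "non-violating",
    "unknown": "unknown",
    "spyware": "violating",
    "malware": "violating",
    "undesirable content": "violating",
    "spam email": "violating",
}

def decision_vector(parts, engines):
    """D = [d1, ..., dn]: one reduced classification per (part, engine)."""
    return [SUBSET.get(engine(part), "unknown")
            for part in parts for engine in engines]

def is_violating(parts, engines) -> bool:
    """A single 'violating' element anywhere makes the whole item violating."""
    return "violating" in decision_vector(parts, engines)
```

The any-part, any-engine rule is deliberately conservative: one suspicious fragment is enough to preclude distribution or trigger further threat detection, per the tenant's policy.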
The central authority 152 hosts all customer (tenant) policy and configuration settings. It monitors the cloud and provides a central location for software and database updates and threat intelligence. Given the multi-tenant architecture, the central authority 152 is redundant and backed up in multiple different data centers. The nodes 150 establish persistent connections to the central authority 152 to download all policy configurations. When a new user connects to a node 150, a policy request is sent to the central authority 152 through this connection. The central authority 152 then calculates the policies that apply to that user 102 and sends the policy to the node 150 as a highly compressed bitmap.
The policy can be tenant-specific and can include access privileges for users, websites and/or content that is disallowed, restricted domains, DLP dictionaries, etc. Once downloaded, a tenant's policy is cached until a policy change is made in the management system 120. When this happens, all of the cached policies are purged, and the nodes 150 request the new policy when the user 102 next makes a request. In an embodiment, the nodes 150 exchange “heartbeats” periodically, so all nodes 150 are informed when there is a policy change. Any node 150 can then pull the change in policy when it sees a new request.
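The caching behavior above can be sketched as a purge-on-change cache. The class and callable here are hypothetical simplifications, not the actual node or central-authority interfaces.

```python
# Illustrative sketch of node-side policy caching: policies are cached
# until a change is signaled (e.g., via heartbeat), then purged and
# lazily re-fetched from the central authority on the next request.
class PolicyCache:
    def __init__(self, central_authority):
        self.ca = central_authority   # callable: tenant -> policy
        self.cache = {}

    def on_policy_change(self):
        # A heartbeat signaled a change: purge all cached policies.
        self.cache.clear()

    def get_policy(self, tenant):
        # Pull fresh policy from the central authority on a cache miss.
        if tenant not in self.cache:
            self.cache[tenant] = self.ca(tenant)
        return self.cache[tenant]
```

Purging rather than pushing updated policies keeps the heartbeat small; each node re-fetches only the tenants for which it actually sees new requests.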
The cloud-based system 100 can be a private cloud, a public cloud, a combination of a private cloud and a public cloud (hybrid cloud), or the like. Cloud computing systems and methods abstract away physical servers, storage, networking, etc., and instead offer these as on-demand and elastic resources. The National Institute of Standards and Technology (NIST) provides a concise and specific definition which states cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing differs from the classic client-server model by providing applications from a server that are executed and managed by a client's web browser or the like, with no installed client version of an application required. Centralization gives cloud service providers complete control over the versions of the browser-based and other applications provided to clients, which removes the need for version upgrades or license management on individual client computing devices. The phrase “Software as a Service” (SaaS) is sometimes used to describe application programs offered through cloud computing. A common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “the cloud.” The cloud-based system 100 is illustrated herein as an example embodiment of a cloud-based system, and other implementations are also contemplated.
As described herein, the terms cloud services and cloud applications may be used interchangeably. The cloud service 106 is any service made available to users on-demand via the Internet, as opposed to being provided from a company's on-premises servers. A cloud application, or cloud app, is a software program where cloud-based and local components work together. The cloud-based system 100 can be utilized to provide example cloud services, including Zscaler Internet Access (ZIA), Zscaler Private Access (ZPA), and Zscaler Digital Experience (ZDX), all from Zscaler, Inc. (the assignee and applicant of the present application). Also, there can be multiple different cloud-based systems 100, including ones with different architectures and multiple cloud services. The ZIA service can provide the access control, threat prevention, and data protection described above with reference to the cloud-based system 100. ZPA can include access control, microservice segmentation, etc. The ZDX service can provide monitoring of user experience, e.g., Quality of Experience (QoE), Quality of Service (QoS), etc., in a manner that can gain insights based on continuous, inline monitoring. For example, the ZIA service can provide a user with Internet Access, and the ZPA service can provide a user with access to enterprise resources instead of traditional Virtual Private Networks (VPNs), namely ZPA provides Zero Trust Network Access (ZTNA). Those of ordinary skill in the art will recognize various other types of cloud services 106 are also contemplated. Also, other types of cloud architectures are also contemplated, with the cloud-based system 100 presented for illustration purposes.
The application 350 is configured to auto-route traffic for seamless user experience. This can be protocol as well as application-specific, and the application 350 can route traffic with a nearest or best fit node 150. Further, the application 350 can detect trusted networks, allowed applications, etc. and support secure network access. The application 350 can also support the enrollment of the user device 300 prior to accessing applications. The application 350 can uniquely detect the users 102 based on fingerprinting the user device 300, using criteria like device model, platform, operating system, etc. The application 350 can support Mobile Device Management (MDM) functions, allowing IT personnel to deploy and manage the user devices 300 seamlessly. This can also include the automatic installation of client and SSL certificates during enrollment. Finally, the application 350 provides visibility into device and app usage of the user 102 of the user device 300.
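The fingerprinting step above can be sketched as hashing a canonical ordering of device attributes into a stable identifier. The attribute names are assumptions for illustration; the actual criteria used by the application 350 may differ.

```python
# Illustrative sketch of device fingerprinting: derive a stable identifier
# from device attributes (model, platform, operating system, etc.) so the
# same device yields the same fingerprint regardless of attribute order.
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical (sorted) rendering of device attributes."""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Sorting the keys before hashing is what makes the identifier stable: two enumerations of the same attributes produce the same fingerprint, while any attribute change produces a different one.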
The application 350 supports a secure, lightweight tunnel between the user device 300 and the cloud-based system 100. For example, the lightweight tunnel can be HTTP-based. With the application 350, there is no requirement for PAC files, an IPSec VPN, authentication cookies, or user 102 setup.
The processor 202 is a hardware device for executing software instructions. The processor 202 may be any custom made or commercially available processor, a Central Processing Unit (CPU), an auxiliary processor among several processors associated with the server 200, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the server 200 is in operation, the processor 202 is configured to execute software stored within the memory 210, to communicate data to and from the memory 210, and to generally control operations of the server 200 pursuant to the software instructions. The I/O interfaces 204 may be used to receive user input from and/or for providing system output to one or more devices or components.
The network interface 206 may be used to enable the server 200 to communicate on a network, such as the Internet 104. The network interface 206 may include, for example, an Ethernet card or adapter or a Wireless Local Area Network (WLAN) card or adapter. The network interface 206 may include address, control, and/or data connections to enable appropriate communications on the network. A data store 208 may be used to store data. The data store 208 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof.
Moreover, the data store 208 may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store 208 may be located internal to the server 200, such as, for example, an internal hard drive connected to the local interface 212 in the server 200. Additionally, in another embodiment, the data store 208 may be located external to the server 200 such as, for example, an external hard drive connected to the I/O interfaces 204 (e.g., SCSI or USB connection). In a further embodiment, the data store 208 may be connected to the server 200 through a network, such as, for example, a network-attached file server.
The memory 210 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory 210 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 210 may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor 202. The software in memory 210 may include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory 210 includes a suitable Operating System (O/S) 214 and one or more programs 216. The operating system 214 essentially controls the execution of other computer programs, such as the one or more programs 216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs 216 may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein.
The processor 302 is a hardware device for executing software instructions. The processor 302 can be any custom made or commercially available processor, a CPU, an auxiliary processor among several processors associated with the user device 300, a semiconductor-based microprocessor (in the form of a microchip or chipset), or generally any device for executing software instructions. When the user device 300 is in operation, the processor 302 is configured to execute software stored within the memory 310, to communicate data to and from the memory 310, and to generally control operations of the user device 300 pursuant to the software instructions. In an embodiment, the processor 302 may include a mobile optimized processor such as optimized for power consumption and mobile applications. The I/O interfaces 304 can be used to receive user input from and/or for providing system output. User input can be provided via, for example, a keypad, a touch screen, a scroll ball, a scroll bar, buttons, a barcode scanner, and the like. System output can be provided via a display device such as a Liquid Crystal Display (LCD), touch screen, and the like.
The network interface 306 enables wireless communication to an external access device or network. Any number of suitable wireless data communication protocols, techniques, or methodologies can be supported by the network interface 306, including any protocols for wireless communication. The data store 308 may be used to store data. The data store 308 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store 308 may incorporate electronic, magnetic, optical, and/or other types of storage media.
The memory 310 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, etc.), and combinations thereof. Moreover, the memory 310 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 310 may have a distributed architecture, where various components are situated remotely from one another but can be accessed by the processor 302. The software in memory 310 can include one or more software programs, each of which includes an ordered listing of executable instructions for implementing logical functions. In the example of
The paradigm of virtual private access systems and methods is to give users access to a specific application and/or file share, not to the entire network. If a user is not authorized to access an application, the user should not even be able to see that it exists, much less access it. The virtual private access systems and methods deliver secure access by decoupling the applications 402, 404 from the network, instead providing access with a connector 400 in front of the applications 402, 404, an application on the user device 300, a central authority 152 to push policy, and the cloud-based system 100 to stitch the applications 402, 404 and the software connectors 400 together, on a per-user, per-application basis.
With the virtual private access, users can only see the specific applications 402, 404 allowed by the central authority 152. Everything else is “invisible” or “dark” to them. Because the virtual private access separates the application from the network, the physical location of the application 402, 404 becomes irrelevant: if applications 402, 404 are located in more than one place, the user is automatically directed to the instance that will give them the best performance. The virtual private access also dramatically reduces configuration complexity, such as policies/firewalls in the data centers. Enterprises can, for example, move applications to Amazon Web Services or Microsoft Azure, and take advantage of the elasticity of the cloud, making private, internal applications behave just like market-leading enterprise applications. Advantageously, there is no hardware to buy or deploy because the virtual private access is a service offering to end-users and enterprises.
The user 102 needs to access the Internet 104, the SaaS/public cloud systems for the applications 402, and the enterprise network 410. Again, conventionally, the solution for secure communication is for the user 102 to have a VPN connection through the firewall 412, where all data is sent to the enterprise network 410, including data destined for the Internet 104 or the SaaS/public cloud systems for the applications 402. Furthermore, this VPN connection dials into the enterprise network 410. The systems and methods described herein provide the VPN architecture 405, which provides a secure connection to the enterprise network 410 without bringing all traffic, e.g., traffic for the Internet 104 or the SaaS/public cloud systems, into the enterprise network 410, as well as removing the requirement for the user 102 to dial into the enterprise network 410.
Instead of the user 102 creating a secure connection through the firewall 412, the user 102 connects securely to a VPN device 420 located in the cloud-based system 100 through a secure connection 422. Note, the cloud-based system 100 can include a plurality of VPN devices 420. The VPN architecture 405 dynamically routes traffic between the user 102 and the Internet 104, the SaaS/public cloud systems for the applications 402, and, securely, the enterprise network 410. For secure access to the enterprise network 410, the VPN architecture 405 includes dynamically creating connections through secure tunnels between three entities: the VPN device 420, the cloud, and an on-premises redirection proxy 430. The connection between the cloud-based system 100 and the on-premises redirection proxy 430 is dynamic, on-demand, and orchestrated by the cloud-based system 100. A key feature of the systems and methods is security at the edge of the cloud-based system 100: there is no need to punch any holes in the existing on-premises firewall 412. The on-premises redirection proxy 430 inside the enterprise network 410 “dials out” and connects to the cloud-based system 100, as if it too were an endpoint, via secure connections 440, 442. This on-demand dial-out capability and tunneling of authenticated traffic back to the enterprise network 410 is a key differentiator.
The VPN architecture 405 includes the VPN devices 420, the on-premises redirection proxy 430, a topology controller 450, and an intelligent DNS proxy 460. The VPN devices 420 can be Traffic (VPN) distribution servers and can be part of the cloud-based system 100. In an embodiment, the cloud-based system 100 can be a security cloud such as available from Zscaler, Inc. (www.zscaler.com) performing functions on behalf of every client that connects to it: a) allowing/denying access to specific Internet sites/apps based on security policy and the absence/presence of malware in those sites, and b) setting policies on specific SaaS apps and allowing/denying access to specific employees or groups.
The on-premises redirection proxy 430 is located inside a perimeter of the enterprise network 410 (inside the private cloud or inside the corporate data center—depending on the deployment topology). It is connected to a local network and acts as a “bridge” between the users 102 outside the perimeter and apps that are inside the perimeter through the secure connections 440, 442. But, this “bridge” is always closed—it is only open to the users 102 that pass two criteria: a) they must be authenticated by an enterprise authentication service 470, and b) the security policy in effect allows them access to “cross the bridge.”
When the on-premises redirection proxy 430 starts, it establishes a persistent, long-lived connection 472 to the topology controller 450. The topology controller 450 connects to the on-premises redirection proxy 430 through a secure connection 472 and to the cloud-based system 100 through a secure connection 480. The on-premises redirection proxy 430 waits for instruction from the topology controller 450 to establish tunnels to specific VPN termination nodes, i.e., the VPN devices 420, in the cloud-based system 100. The on-premises redirection proxy 430 is most expediently realized as custom software running inside a virtual machine (VM). The topology controller 450, as part of the non-volatile data for each enterprise, stores the network topology of a private network of the enterprise network 410, including, but not limited to, the internal domain name(s), subnet(s) and other routing information.
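The control flow above can be illustrated with a short, hypothetical sketch. The function name, message shape, and callback are assumptions for illustration only; the point is that the proxy processes instructions arriving over its persistent outbound control channel and only ever opens outbound tunnels, so no inbound listener or firewall hole is required.

```python
# Hypothetical sketch of the on-premises redirection proxy control loop:
# it dials OUT to the topology controller, then opens outbound tunnels to
# whichever VPN termination nodes the controller names.

def redirection_proxy_loop(control_messages, open_outbound_tunnel):
    """Process controller instructions; return the set of tunnels opened."""
    tunnels = set()
    for msg in control_messages:          # persistent, long-lived channel
        if msg.get("cmd") == "establish_tunnel":
            endpoint = msg["vpn_device"]  # specific VPN termination node
            open_outbound_tunnel(endpoint)
            tunnels.add(endpoint)
    return tunnels

opened = []
msgs = [{"cmd": "establish_tunnel", "vpn_device": "vpn-node-1.cloud.example"},
        {"cmd": "noop"}]
print(redirection_proxy_loop(msgs, opened.append))
```

Because the tunnel set is driven entirely by controller instructions, the “bridge” described above stays closed until the controller explicitly opens it.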
The DNS proxy 460 handles all domain names to Internet Protocol (IP) Address resolution on behalf of endpoints (clients). These endpoints are user computing devices—such as mobile devices, laptops, tablets, etc. The DNS proxy 460 consults the topology controller 450 to discern packets that must be sent to the Internet 104, the SaaS/public cloud systems, vs. the enterprise network 410 private network. This decision is made by consulting the topology controller 450 for information about a company's private network and domains. The DNS proxy 460 is connected to the user 102 through a connection 482 and to the cloud-based system 100 through a connection 484.
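The resolution decision described above can be sketched as follows. The class and function names (PrivateTopology, classify) are illustrative assumptions, not the actual implementation; the sketch only shows the core test of whether a name falls under an enterprise's internal domains.

```python
# Hypothetical sketch: how a DNS proxy might classify lookups as
# enterprise-private vs. Internet/SaaS by consulting a topology record.
from dataclasses import dataclass, field

@dataclass
class PrivateTopology:
    """Per-enterprise topology as stored by the topology controller."""
    internal_domains: set = field(default_factory=set)

    def is_internal(self, hostname: str) -> bool:
        # A name is internal if it equals or is a subdomain of any
        # registered internal domain, e.g. crm.company.com -> company.com.
        host = hostname.lower().rstrip(".")
        return any(host == d or host.endswith("." + d)
                   for d in self.internal_domains)

def classify(hostname: str, topology: PrivateTopology) -> str:
    """Return 'enterprise' for private names, 'internet' otherwise."""
    return "enterprise" if topology.is_internal(hostname) else "internet"

topo = PrivateTopology(internal_domains={"company.com"})
print(classify("intranet.company.com", topo))  # enterprise
print(classify("www.example.org", topo))       # internet
```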
The VPN device 420 is located in the cloud-based system 100 and can have multiple points-of-presence around the world. If the cloud-based system 100 is a distributed security cloud, the VPN device 420 can be located with nodes 150. In general, the VPN device 420 can be implemented as software instances on the nodes 150, as a separate virtual machine on the same physical hardware as the nodes 150, or as a separate hardware device such as the server 200, but part of the cloud-based system 100. The VPN device 420 is the first point of entry for any client wishing to connect to the Internet 104, SaaS apps, or the enterprise private network. In addition to performing traditional functions of a VPN server, the VPN device 420 works in concert with the topology controller 450 to establish on-demand routes to the on-premises redirection proxy 430. These routes are set up for each user on demand. When the VPN device 420 determines that a packet from the user 102 is destined for the enterprise private network, it encapsulates the packet and sends it via a tunnel between the VPN device 420 and the on-premises redirection proxy 430. For packets meant for the Internet 104 or SaaS clouds, the VPN device 420 can forward them to the nodes 150 to continue processing as before, or send them directly to the Internet 104 or SaaS clouds.
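The per-packet decision described above can be sketched as a simple routing check. The class and method names are assumptions for illustration; the decision key is whether the destination address falls within the enterprise's private subnets as known from the topology controller.

```python
# Illustrative sketch of the per-packet decision the VPN device makes:
# enterprise-bound traffic is encapsulated into the proxy tunnel, while
# Internet/SaaS traffic is handed to a cloud node or sent out directly.
import ipaddress

class VpnDevice:
    def __init__(self, private_subnets):
        self.private_subnets = private_subnets  # e.g. ["10.0.0.0/8"]

    def _is_private(self, dst_ip: str) -> bool:
        addr = ipaddress.ip_address(dst_ip)
        return any(addr in ipaddress.ip_network(net)
                   for net in self.private_subnets)

    def route(self, dst_ip: str) -> str:
        if self._is_private(dst_ip):
            return "tunnel-to-redirection-proxy"  # encapsulated, on-demand tunnel
        return "forward-to-cloud-node"            # normal cloud processing path

dev = VpnDevice(private_subnets=["10.0.0.0/8"])
print(dev.route("10.1.2.3"))       # tunnel-to-redirection-proxy
print(dev.route("93.184.216.34"))  # forward-to-cloud-node
```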
For non-enterprise requests, the cloud-based system 100 forwards the request per policy (step 550). Here, the cloud-based system 100 can forward the request based on the policy associated with the enterprise network 410 and the user 102. With the identity of the user and the enterprise to which they belong, the VPN server contacts the topology controller 450 and pre-fetches the enterprise private topology. For enterprise requests, the topology controller 450 fetches a private topology of the enterprise network 410 and instructs the redirection proxy 430 to establish an outbound tunnel to the VPN server; the redirection proxy 430 establishes the outbound tunnel, and requests are forwarded between the user 102 and the enterprise network 410 securely (step 560). Here, the DNS proxy 460 works with the topology controller 450 to determine local access in the enterprise network 410, and the topology controller 450 works with the redirection proxy 430 to dial out a secure connection to the VPN server. The redirection proxy 430 establishes an on-demand tunnel to the specific VPN server so that it can receive packets meant for its internal network.
Advantageously, the systems and methods avoid the conventional requirement of VPN tunneling all data into the enterprise network 410 and hair-pinning non-enterprise data back out. The systems and methods also allow the enterprise network 410 to have remote offices, etc. without requiring large hardware infrastructures—the cloud-based system 100 bridges the users 102, remote offices, etc. to the enterprise network 410 in a seamless manner while removing the requirement to bring non-enterprise data through the enterprise network 410. This recognizes the shift to mobility in enterprise applications. Also, the VPN tunnel on the user 102 can leverage and use existing VPN clients available on the user devices 300. The cloud-based system 100, through the VPN architecture 405, determines how to route traffic for the user 102 efficiently: only enterprise traffic is routed securely to the enterprise network 410. Additionally, the VPN architecture 405 removes the conventional requirement of tunneling into the enterprise network 410, which can be an opportunity for security vulnerabilities. Instead, the redirection proxy 430 dials out of the enterprise network 410.
The systems and methods provide, to the user (enterprise user), a single, seamless way to connect to public and private clouds, with no special steps needed to access one vs. the other. To the IT admin, the systems and methods provide a single point of control and access for all users: security policies and rules are enforced at a single global cloud chokepoint, without impacting user convenience/performance or weakening security.
The virtual private access is a new technique for the users 102 to access the file shares and applications 402, 404, without the cost, hassle or security risk of VPNs, which extend network access to deliver app access. The virtual private access decouples private internal applications from the physical network to enable authorized user access to the file shares and applications 402, 404, without the security risk or complexity of VPNs. That is, virtual private access takes the “Network” out of VPNs.
In the virtual private access, the users 102, the file shares and applications 402, 404, are communicatively coupled to the cloud-based system 100, such as via the Internet 104 or the like. On the client-side, at the users 102, the applications 402, 404 provision both secure remote access and optionally accessibility to the cloud-based system 100. The application 402, 404 establishes a connection to the closest node 150 in the cloud-based system 100 at startup and may not accept incoming requests.
At the file shares and applications 402, 404, the lightweight connectors 400 sit in front of the applications 402, 404. The lightweight connectors 400 become the path to the file shares and applications 402, 404 behind them, and connect only to the cloud-based system 100. The lightweight connectors 400 can be lightweight, ephemeral binaries, such as deployed as virtual machines, to establish a connection between the file shares and applications 402, 404 and the cloud-based system 100, such as via the closest node 150. The lightweight connectors 400 do not accept inbound connections of any kind, dramatically reducing the overall threat surface. The lightweight connectors 400 can be enabled on a standard VMware platform; additional lightweight connectors 400 can be created in less than 5 seconds to handle additional application instances. By not accepting inbound connections, the lightweight connectors 400 make the file shares and applications 402, 404 “dark,” removing a significant threat vector.
The policy can be established and pushed by policy engines in the central authority 152, such as via a distributed cluster of multi-tenant policy engines that provide a single interface for all policy creation. Also, no data of any kind transits the policy engines. The nodes 150 in the security cloud stitch connections together, between the users 102 and the file shares and applications 402, 404, without processing traffic of any kind. When the user 102 requests an application in the file shares and applications 402, 404, the policy engine delivers connection information to the application 350 and the app-side nodes 150, which includes the location of a single node 150 to provision the client/app connection. The connection is established through the nodes 150, and is encrypted with a combination of the customer's client and server-side certificates. While the nodes 150 provision the connection, they do not participate in the key exchange, nor do they have visibility into the traffic flows.
Advantageously, the virtual private access provides increased security in that the file shares and applications 402, 404 are visible only to the users 102 that are authorized to access them; unauthorized users are not able to even see them. Because application access is provisioned through the cloud-based system 100, rather than via a network connection, the virtual private access makes it impossible to route back to applications. The virtual private access is enabled using the application 350, without the need to launch or exit VPN clients. The application access just works in the background enabling application-specific access to individual contractors, business partners or other companies, i.e., the users 102.
The virtual private access provides capital expense (CAPEX) and operating expense (OPEX) reductions as there is no hardware to deploy, configure, or maintain. Legacy VPNs can be phased out. Internal IT can be devoted to enabling business strategy, rather than maintaining network “plumbing.” Enterprises can move apps to the cloud on their schedule, without the need to re-architect, set up site-to-site VPNs or deliver a substandard user experience.
The virtual private access provides easy deployment, i.e., put lightweight connectors 400 in front of the file shares and applications 402, 404, wherever they are. The virtual private access will automatically route to the location that delivers the best performance. Wildcard app deployment will discover applications upon request, regardless of their location, then build granular user access policies around them. There is no need for complex firewall rules, Network Address Translation issues or policy juggling to deliver application access. Further, the virtual private access provides seamless integration with existing Single Sign-On (SSO) infrastructure.
The virtual private access process 750 is described with reference to both the user 102, the cloud-based system 100, and the enterprise file share and application 402, 404. First, the user 102 is executing the application 350 on the user device 300, in the background. The user 102 launches the application 350 and can be redirected to an enterprise ID provider or the like to sign on, i.e., a single sign on, without setting up new accounts. Once authenticated, Public Key Infrastructure (PKI) certificate 720 enrollment occurs, between the user 102 and the node 150A. With the application 350 executing on the user device, the user 102 makes a request to the enterprise file share and application 402, 404, e.g., intranet.company.com, crm.company.com, etc. (step 752). Note, the request is not limited to web applications and can include anything such as a remote desktop or anything handling any static Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) applications.
This request is intercepted by the node 150A and redirected to the central authority 152, which performs a policy lookup for the user 102 and the user device 300 (step 754), transparent to the user 102. The central authority 152 determines if the user 102 and the user device 300 are authorized for the enterprise file share and application 402, 404. Once authorization is determined, the central authority 152 provides information to the nodes 150A, 150B, 150C, the application 350, and the lightweight connectors 400 at the enterprise file share and application 402, 404, and the information can include the certificates 720 and other details necessary to stitch secure connections between the various devices. Specifically, the central authority 152 can create connection information with the best nodes 150 for joint connections, from the user 102 to the enterprise file share and application 402, 404, and the unique tokens (step 756). With the connection information, the node 150A connects to the user 102, presenting a token, and the node 150C connects to the lightweight connector 400, presenting a token (step 758). Now, a connection is stitched between the user 102 to the enterprise file share and application 402, 404, through the application 350, the nodes 150A, 150B, 150C, and the lightweight connector 400.
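The token-based stitching step above can be sketched in miniature. The function names, field names, and node identifiers here are illustrative assumptions; the sketch shows only the core idea that the central authority mints pairing information and the two legs of the connection are joined only when matching tokens are presented.

```python
# A simplified sketch of token-based connection stitching: the central
# authority issues a pairing token to the client-side and app-side legs,
# and a connection is "stitched" only when both present the same token.
import secrets

def issue_connection(user, app):
    """Central authority: pick nodes and mint a one-time pairing token."""
    token = secrets.token_hex(8)
    return {"user": user, "app": app, "token": token,
            "client_node": "node-150A", "app_node": "node-150C"}

def stitch(client_token, app_token):
    """Join the two legs only if both sides presented the same token."""
    return client_token == app_token

info = issue_connection("user-102", "crm.company.com")
print(stitch(info["token"], info["token"]))   # True: legs are joined
print(stitch(info["token"], "forged-token"))  # False: refused
```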
Comparison—VPN with Virtual Private Access
In an embodiment, a virtual private access method implemented by a cloud-based system, includes receiving a request to access resources from a user device, wherein the resources are located in one of a public cloud and an enterprise network and the user device is remote therefrom on the Internet; forwarding the request to a central authority for a policy look up and for a determination of connection information to make an associated secure connection through the cloud-based system to the resources; receiving the connection information from the central authority responsive to an authorized policy look up; and creating secure tunnels between the user device and the resources based on the connection information. Prior to the receiving, a user executes an application on the user device, provides authentication, and provides the request with the application operating on the user device. The application can be configured to connect the user device to the cloud-based system, via an optimized cloud node based on a location of the user device. The resources can be communicatively coupled to a lightweight connector operating on a computer and communicatively coupled between the resources and the cloud-based system. The virtual private access method can further include detecting the resources based on a query to the lightweight connector. The lightweight connector can be prevented from accepting inbound connections, thereby preventing access of the resources external from the public cloud or the enterprise network. The creating secure tunnels can include creating connections between one or more cloud nodes in the cloud-based system, wherein the one or more cloud nodes do not participate in a key exchange, and the one or more cloud nodes do not have data access to traffic on the secure tunnels. 
The creating secure tunnels can include creating connections between one or more cloud nodes in the cloud-based system, wherein the one or more cloud nodes create the secure tunnels based on a combination of a client-side certificate and a server-side certificate. The secure tunnels can be created through software on the user device, the cloud-based system, and a lightweight connector operating on a computer associated with the resources, thereby eliminating dedicated hardware for virtual private network connections.
In another embodiment, a cloud-based system adapted to implement virtual private access includes one or more cloud nodes communicatively coupled to one another; wherein each of the one or more cloud nodes includes one or more processors and memory storing instructions that, when executed, cause the one or more processors to receive a request to access resources from a user device, wherein the resources are located in one of a public cloud and an enterprise network and the user device is remote therefrom on the Internet; forward the request to a central authority for a policy look up and for a determination of connection information to make an associated secure connection through the cloud-based system to the resources; receive the connection information from the central authority responsive to an authorized policy look up; and create secure tunnels between the user device and the resources based on the connection information. Prior to reception of the request, a user executes an application on the user device, provides authentication, and provides the request with the application operating on the user device. The application can be configured to connect the user device to the cloud-based system, via an optimized cloud node based on a location of the user device. The resources can be communicatively coupled to a lightweight connector operating on a computer and communicatively coupled between the resources and the cloud-based system. The memory storing instructions that, when executed, can further cause the one or more processors to detect the resources based on a query to the lightweight connector. The lightweight connector can be prevented from accepting inbound connections, thereby preventing access of the resources external from the public cloud or the enterprise network. 
The secure tunnels can be created through connections between one or more cloud nodes in the cloud-based system, wherein the one or more cloud nodes do not participate in a key exchange, and the one or more cloud nodes do not have data access to traffic on the secure tunnels. The secure tunnels can be created through connections between one or more cloud nodes in the cloud-based system, wherein the one or more cloud nodes create the secure tunnels based on a combination of a client-side certificate and a server-side certificate. The secure tunnels can be created through software on the user device, the cloud-based system, and a lightweight connector operating on a computer associated with the resources, thereby eliminating dedicated hardware for virtual private network connections.
Software stored in a non-transitory computer readable medium including instructions executable by a system, which in response to such execution causes the system to perform operations including receiving a request to access resources from a user device, wherein the resources are located in one of a public cloud and an enterprise network and the user device is remote therefrom on the Internet; forwarding the request to a central authority for a policy look up and for a determination of connection information to make an associated secure connection through the cloud-based system to the resources; receiving the connection information from the central authority responsive to an authorized policy look up; and creating secure tunnels between the user device and the resources based on the connection information. The resources can be communicatively coupled to a lightweight connector operating on a computer and communicatively coupled between the resources and the cloud-based system, and wherein the instructions executable by the system, which in response to such execution can further cause the system to perform operations including detecting the resources based on a query to the lightweight connector.
In an embodiment, a method includes connecting to a client at a Virtual Private Network (VPN) device in a cloud-based system; forwarding requests from the client for the Internet or public clouds accordingly; and for requests for an enterprise associated with the client, contacting a topology controller to fetch a topology of the enterprise, causing a tunnel to be established from the enterprise to the VPN device, and forwarding the requests for the enterprise through the tunnel to the cloud-based system for proactive monitoring; and providing a secure connection from the cloud-based system back to the enterprise, including internal domain and subnets associated with the enterprise. The method can further include authenticating, via an authentication server, the client prior to the connecting, and associating the client with the enterprise. The method can further include, subsequent to the connecting, setting a Domain Name Server (DNS) associated with the cloud-based system to provide DNS lookups for the client. The method can further include utilizing the DNS to determine a destination of the requests; and, for the requests for the enterprise, contacting the topology controller to pre-fetch the topology of the enterprise. The method can further include operating an on-premises redirection proxy within the enterprise, wherein the on-premises redirection proxy is configured to establish the tunnel from the enterprise to the VPN device. Secure tunnels to the enterprise are dialed out from the enterprise by the on-premises redirection proxy. The on-premises redirection proxy is a virtual machine operating behind a firewall associated with the enterprise. The on-premises redirection proxy is configured as a bridge between the client and applications inside the enterprise. The VPN device operates on a cloud node in the cloud-based system, and wherein the cloud-based system includes a distributed security cloud.
The VPN device can include one of a software instance on a cloud node or a virtual machine on the cloud node. The topology controller includes a network topology of the enterprise, including internal domain names and subnets.
In another embodiment, a cloud-based system includes one or more Virtual Private Network (VPN) servers, wherein one or more clients connect securely to the one or more VPN servers; a topology controller communicatively coupled to the one or more VPN servers; a Domain Name Server (DNS) communicatively coupled to the topology controller and the one or more VPN servers; and a redirection proxy located in a private network and communicatively coupled to the one or more VPN servers and the topology controller; wherein requests from the one or more clients to the private network cause on demand secure connections being established by the redirection proxy to associated VPN servers in a cloud-based system, wherein the on demand secure connections provide connectivity to the private network including internal domain and subnets associated with the private network, and wherein the cloud-based system performs proactive monitoring. Requests from the one or more clients outside of the private network are forwarded without traversing the private network. The redirection proxy maintains a persistent connection to the topology controller and establishes secure tunnels to the one or more VPN servers based on direction from the topology controller. The topology controller includes a network topology of the private network, including internal domain names and subnets. The VPN servers operate on cloud nodes in a distributed security cloud.
In yet another embodiment, a VPN system includes a network interface, a data store, and a processor, each communicatively coupled together; and memory storing instructions that, when executed, cause the processor to establish a secure tunnel with a client; forward requests from the client to the Internet accordingly; and for requests to an enterprise, contact a topology controller to fetch a topology of the enterprise, cause a tunnel to be established from the enterprise to the VPN system, and forwarding the requests for the enterprise through the tunnel and the secure tunnel, wherein the secure tunnel is achieved by using an on-demand dial-out and tunneling traffic authentication. The memory storing instructions that, when executed, further cause the processor to cause the tunnel to be established from the enterprise to the VPN system through an on premises redirection proxy located within the enterprise.
Browser (web) isolation is a technique where a user's browser or apps are physically isolated away from the user device, the local network, etc. thereby removing the risks of malicious code, malware, cyberattacks, etc. This has been shown to be an effective technique for enterprises to reduce attacks. Techniques for browser isolation are described in commonly-assigned U.S. patent application Ser. No. 16/702,889, filed Dec. 4, 2019, and entitled “Cloud-based web content processing system providing client threat isolation and data integrity,” the contents of which are incorporated by reference herein. Traditionally browser isolation was focused on removing the risks of malicious code, malware, cyberattacks, etc. U.S. patent application Ser. No. 16/702,889 describes an additional use case of preventing data exfiltration. That is, because no data is delivered to the local system (e.g., to be processed by web content through the local web browser), none of the confidential or otherwise sensitive data can be retained on the local system.
The secure access can interoperate with browser isolation through the cloud-based system 100 to prevent data exfiltration. This is extremely critical because the data is customer-facing, which adds to its sensitivity and liability, and it is also accessible to external users (customers). This functionality forces customers to interact with the B2B applications via an isolated, contained environment.
When a user 102 with the user device 300 is located on the enterprise network 410, the traffic between the user 102 and the applications 404 stays on the enterprise network 410, and consistent policies are applied for on-premises and remote users. The private service edge node 150P can be located in a branch office, in a central office with tunnels to branch offices, etc. Of note, the private service edge node 150P is co-located with the applications 404 and the connector 400, and this proximity reduces latency.
The private service edge node 150P can be hosted in a public cloud, on-site as a Virtual Machine (VM), in a container, on physical servers, etc. The private service edge node 150P is publicly accessible such as via an IP address; the connector 400 is not publicly accessible—it dials out. The private service edge node 150P can include listen IP addresses and publish IP addresses or domains. The listen IP addresses are a set of IP addresses that the private service edge node 150P uses for accepting incoming connections, and this can be specified or all IP addresses. The publish IP addresses or domains, if specified, are required for connection to the private service edge node 150P. If these are specified, one of the entries is provided to the applications 350, e.g., randomly selected.
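The listen/publish address behavior above can be sketched briefly. The function name and the random selection of a publish entry are assumptions consistent with the text ("one of the entries is provided to the applications 350, e.g., randomly selected"); everything else is illustrative.

```python
# Hypothetical sketch of private service edge address selection:
# 'listen' addresses accept incoming connections; if 'publish' entries are
# configured, one of them (chosen here at random) is handed to the client.
import random

def pick_publish_address(listen_ips, publish_entries):
    """Return the address the client application should be told to use."""
    if publish_entries:                 # publish list, when set, is required
        return random.choice(publish_entries)
    return random.choice(listen_ips)    # otherwise any listen address

edge = {"listen": ["10.0.0.5"], "publish": ["edge.branch.example"]}
print(pick_publish_address(edge["listen"], edge["publish"]))
```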
The following table illustrates example user 102 and user device 300 scenarios.
With private application access, only an authenticated user can access the applications 402, 404; to unauthenticated users, the applications 402, 404 appear not to exist. However, an authenticated user can be an untrusted user or on an untrusted device. The security concerns with an untrusted user include access to sensitive information by query manipulation via a web form; performing function elevation by URL manipulation; gaining access to internal resources via the web server; etc. For example, an untrusted user may successfully guess the passwords of various accounts, such as via default/empty usernames and passwords (password spraying), stolen credentials for internal apps (credential stuffing), testing default service account credentials, scripted login attempts (BOTs), etc.
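The "dark application" behavior described above can be sketched as an access check that never distinguishes "forbidden" from "nonexistent." The function, status codes, and access-control structure are illustrative assumptions only.

```python
# Hedged sketch of the "dark application" behavior: an unauthenticated or
# unauthorized request is answered as if the application did not exist
# (e.g. 404) rather than revealing its presence with a 403.

def respond(user, app, acl):
    """Return (status, body); unauthorized users cannot tell the app exists."""
    authorized = user is not None and app in acl.get(user, set())
    if authorized:
        return 200, f"welcome to {app}"
    return 404, "not found"   # same answer as for a truly absent app

acl = {"alice": {"crm.company.com"}}
print(respond("alice", "crm.company.com", acl))  # (200, 'welcome to crm.company.com')
print(respond(None, "crm.company.com", acl))     # (404, 'not found')
print(respond("bob", "crm.company.com", acl))    # (404, 'not found')
```

Answering identically for unauthorized users and nonexistent applications removes the reconnaissance signal an attacker would otherwise gain from probing.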
The security concerns with an untrusted device include the user's browser executing scripts and sending the user's cookie to an attacker's server, e.g., Cross-Site Scripting (XSS), cookie stealing; causing a Denial of Service (DoS) on a target application (not DDoS), e.g., the user's browser initiates a large number of connection requests to the target application, or scripted traffic overwhelms applications (BOT); and copying sensitive data onto a non-corporate device.
The core functionality of the WAAP 600 includes OWASP rule coverage, custom and standard HTTP header inspection, and multiple operation modes. The HTTP header inspection includes write-your-own signatures, and both regular expressions and logical operations are supported. The multiple modes of operation can include monitor-only, block mode, and redirect. The objective of the WAAP 600 is to protect the applications 402, 404 from compromised user devices 300 as well as from untrusted users 102.
The establishing of security controls can be performed via a dashboard presented to an admin, via the cloud-based system, where there is a repository of predefined controls as well as opportunities to write custom controls. The predefined controls can be OWASP rules.
The building of a security profile can also be via the dashboard. There can be inspection controls and inspection profiles. The inspection controls are the rules, either custom or predefined. The inspection profiles are collections of the rules, with an order or rank of rule importance, common or control-specific actions, overrides, etc. That is, the inspection controls are general rules, while the inspection profiles are applications of specific rules, granular on a per-application 402, 404 basis as well as on a per-tenant and per-user basis.
Finally, the WAAP 600 implements policy driven inspection and action. This includes granular, criteria-based inspection, adding a policy model to private application access and applying a security profile based on criteria.
UC1: OWASP Top-10 Inspection and Visibility—Provide visibility into user traffic going to internal applications. What types of attacks are targeted at internal web applications? OWASP Top-10 coverage is the most basic. Show how apps are evaluated against the OWASP Top-10.
UC2: Prevent malicious data upload to internal applications—Prevent malware upload to applications behind the connector 400. Monitor whether an untrusted user is performing sensitive data downloads and block such attempts by users.
UC3: Ease of configuration for native private application controls—Reduce the burden on admins to configure application security rules.
UC4: Monitor for potentially malicious application and user behavior—Provide visibility into unexpected application or user behavior, including APIs. Too many errors, too many open connections, unexpected crashes, unexpected resource requests, etc. Anything unusual that can potentially indicate it is not a typical user-application interaction.
UC5: HTTP header and content rewrite—Rewrite content. Applications and access are built assuming a reverse-proxy solution. Rewrite headers to make sure that applications do not break with native security controls and that apps do not see unexpected out-of-bound values.
UC6: SQL Injection/signature-based attacks—Web applications sending untrusted data to an interpreter in the construction of SQL calls can be exploited by modifying parameter values in the browser to execute commands such as fetching additional data, invoking Stored Procedures (SPs), deleting records, etc. This is prevalent in legacy code. Untrusted users can access potentially sensitive data by exploiting such vulnerabilities.
UC7: Broken Authentication/Session Management—The session ID or token binds the user authentication credentials (in the form of a user session) to the user HTTP traffic and the appropriate access controls enforced by the web application. Typical session hijacking involves brute force, non-random session ID calculation, and cookie hijacking.
UC8: External Entity Processing (XXE)—A weakly configured XML parser can process XML input containing a reference to an external entity. Attackers can execute DoS attacks, cause exposure of confidential data, disclosure of local files, etc. An attacker may pivot to other internal systems since XXE occurs relative to the application processing the XML document. This can lead to a Server-Side Request Forgery (SSRF) attack.
UC9: Application Configuration Vulnerabilities—Unnecessary ports, services, accounts, and privilege configurations have the potential to increase the attack surface. Also, default accounts and passwords make applications more susceptible to attacks. Detection of common application misconfigurations is a must-have capability of a WAF.
UC10: User gains access to privileged resources—A user gains access to sensitive information by query manipulation via web form (*.*/empty parameters) or performs function elevation by URL manipulation, e.g., app1.mycompany.com/order/home.jsp?role=3.
UC11: Malicious script stored on web server and executed on every user call (Stored XSS)—A typical precursor to this is the malicious script being sent through unvalidated vulnerable input. Once saved in the database, the script will be executed on functions such as page load. This is also used as one of the common ways to steal user cookies.
UC12: Custom HTTP Headers & Response—Custom HTTP headers are sometimes used to implement particular logic on the server side. It is important to inspect custom headers to make sure that the values are within acceptable bounds. Even if an application throws errors or causes unexpected behavior, do not communicate the error codes back to the user, as this might help an untrusted user cause more unintended behavior in the application. Customize the responses being sent.
UC13: Insecure Deserialization—A common attack vector for APIs, microservices, and client-side MVC, causing arbitrary remote code execution. Attackers exploited this in a vulnerable Equifax web app during the 2017 data breach.
UC14: ZeroAccess Reporting—In a ZeroAccess attack, a single attacker must normally establish hundreds of RPC connections. The number of attackers may be unknown, since a single IP address can aggregate a large number of systems.
UC15: Brute force, credential stuffing, and overwhelming the application—A user may be able to brute force values for hidden fields or preset query string parameters. Lack of access control over privileged functions within an internal web application is common and may allow privilege escalation once a user is authenticated.
The following tables illustrate features and functions of the WAAP 600.
A firewall policy (or rule) is an exact description of what the firewall is supposed to do with particular traffic. When enabled, the firewall always has at least one active rule, although usually multiple rules are employed to differentiate traffic varieties by {source, destination, and application} and treat them differently. In general, a firewall policy consists of matching criteria, an action, and some attributes:
The firewall supports a policy construct, to determine where firewall policy is enforced during an overall order of operation of packet flow through the cloud node 502. In an embodiment, there are three types of policy, namely, firewall policy, NAT policy, and DNS policy.
The firewall policy construct supports a rule order, status, criteria, and action. Policies are matched in the rule order in which they were defined. The status is enabled or disabled.
All components of the matching criteria are optional and if skipped imply “any.” A session matches a rule when all matching criteria components of the rule are satisfied (TRUE) by the session. If a session matches any element of a component (i.e., one of the IPs in a group), then the entire component is matched.
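The matching semantics above can be sketched as follows. This is an illustrative sketch only, not the actual firewall implementation; the component names (src_ips, dst_ips, apps) and the rule representation are hypothetical.

```python
# Illustrative sketch of the rule-matching semantics: a session matches a
# rule when every specified component matches, a component matches if any
# of its elements matches, and a skipped (None) component implies "any."

def rule_matches(rule: dict, session: dict) -> bool:
    for component, allowed in rule.items():
        if allowed is None:                      # skipped component implies "any"
            continue
        if session.get(component) not in allowed:
            return False                         # every specified component must match
    return True

def first_matching_rule(rules: list, session: dict):
    # Policies are matched in the rule order in which they were defined.
    for rule in rules:
        if rule_matches(rule, session):
            return rule
    return None
```

A catch-all rule (all components None) placed last would match any remaining session, mirroring a default action.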
The present disclosure provides systems and methods for utilizing user risk analytics gathered from various cloud security systems for enforcing private access policies and rules. Zscaler Internet Access (ZIA) is a service that generates user risk information by tracking user behavior and analyzing traffic patterns within the cloud-based system 100. As part of the present systems and methods, various embodiments are adapted to utilize the risk information generated by ZIA, or other cloud security systems, to enforce policies.
In various embodiments, an S3 object store is used for saving risk files at preconfigured intervals, such as on a daily basis. In the context of the present disclosure, the private access systems are responsible for consuming the risk file and storing it in a user database for future use by policy rules in a private access broker component. Additionally, administrators can have the option to override user risk assessments, either to address false positives or to unblock user access.
An admin User Interface (UI) 614 is used to create new policies, to enforce risk policies, and to override existing risk for single users, subsets of users, or user groups. The actions performed at the admin UI 614 are facilitated via the management API 616. In various embodiments, the override functionality is implemented in the sync 610 API. The override value is saved directly in the user DB 612, and audit requests are sent to the management API 616. The broker 618 is adapted to receive policies and user risk information for enforcement. The enforcement can include any of the policy enforcement processes described herein for allowing and blocking access to resources on a per-user or per-tenant basis. In the case of General Data Protection Regulation (GDPR) compliance requirements, the storage service 602, risk parser 604, queue 606, and risk consumer 608 can additionally be deployed in EU zones.
In various embodiments, the one or more cloud security systems, such as ZIA, upload files with user risk information to a risk hub 600. The risk hub 600 includes the storage service 602, risk parser 604, and queue 606 components. Responsive to receiving risk information, files with risk information are stored in the storage service 602. The risk parser 604 is adapted to download the files from the storage service 602, transform the results into a risk message, and send it to the queue 606. The risk consumer 608 receives the message/messages from the queue 606 and sends them to the sync 610 service. The risk consumer 608 will choose the sync 610 service based on customer ID and the region associated with its user DB 612. The sync 610 stores the user risk information in the user DB 612. The broker 618 is adapted to receive the user risk information from the user DB 612 and use it to apply policies for the users to allow or deny access. For example, access can be denied if a user or group of users has a high risk score, whereas access can be allowed based on lower risk scores. Risk scores can be contemplated as a value between 0 and 100, and can be calculated at the one or more cloud security systems via one or more machine learning models based on user behavior. It will be appreciated that the broker can be adapted to enforce policies for the private access systems described herein in combination with the user risk policy enforcement described. Additionally, such processes can be facilitated by one or more nodes 150 of the cloud-based system. That is, the functions of the components described herein can be performed by one or more nodes 150 of the cloud-based system 100. Further, the decision to allow or deny access to a resource such as an application, via a connector 400, can be based on a combination of policies.
Various cloud security systems such as ZIA can upload files with user risk information to the storage service 602 with write access. In embodiments, the storage service will have restricted write access for ZIA scripts and read/write access for the risk parser 604. The systems analyze user activity data for a preconfigured time period, for example, a previous 2-week window, and come up with a risk score at preconfigured intervals, for example, every day. This information is then received by the storage service 602 and stored in the required format. Additionally, the systems can identify a set of preconfigured user behaviors which are deemed risky and send an alert to administrators stating the user risk. Based on this, the user's risk level can be changed to critical via the override process. When this alert is received, administrators can post a file to the storage service 602 containing only that particular user with a risk score of 100 (the maximum score). If the user exists in the full file the next day, the risk of this user can be reset to whatever the new value is.
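The alert-driven override and next-day reset behavior described above can be sketched as follows. This is a hypothetical illustration; the store shape and function names are assumptions, not the actual service interface.

```python
# Illustrative sketch: an admin-posted single-user file pins the user at
# the maximum score (100); the next full daily file resets any user it
# contains to the newly computed value.

MAX_SCORE = 100

def post_override_file(store: dict, user_id: str) -> None:
    # Admin posts a file containing only this user with a score of 100.
    store[user_id] = MAX_SCORE

def apply_daily_file(store: dict, entries: dict) -> None:
    # A daily file replaces the stored score for every user it mentions;
    # users absent from the file keep their previous score.
    store.update(entries)
```

Under these assumptions, a user pinned at 100 on day one reverts to the freshly computed score as soon as the next full file mentions them.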
The risk parser 604 service is adapted to read the file into a buffer and, in parallel, consume that buffer line by line. The risk parser 604 is further adapted to maintain optimal buffer capacity and implement resume and retry logic for storage service connectors and queue connectors. The risk parser 604 can poll the storage service 602 for newly uploaded risk files at each configured period of time. Once a new file is found, it processes the file and moves it to a processed location in the storage service 602. In various embodiments, there are three locations within the storage service 602 that store processed files. These locations include processed-success, processed-failed, and processed-outdated. The processed-outdated location holds files that are beyond expiration and will not be published, in order to minimize false positives/negatives due to expired risk data. Such an expiration can be based on a preconfigured time span, such as marking risk files as expired once a file is above a certain age.
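The routing of a processed file to one of the three locations above can be sketched as follows. This is a minimal illustration; the one-week expiration window is an assumed example of a "preconfigured time span," not a value stated by the disclosure.

```python
# Illustrative sketch: classify a processed risk file into one of the
# three storage locations named above. Files older than the assumed
# expiration window are marked outdated and are not published.

MAX_AGE_SECONDS = 7 * 24 * 3600   # assumed preconfigured expiration window

def classify_processed_file(age_seconds: float, parse_ok: bool) -> str:
    if age_seconds > MAX_AGE_SECONDS:
        # Expired risk data is withheld to minimize false positives/negatives.
        return "processed-outdated"
    return "processed-success" if parse_ok else "processed-failed"
```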
The queue 606 can be adapted to utilize Apache Kafka, or any other distributed data store service. The risk files can include the following:
The risk parser 604 processes each line of a risk file individually and creates a risk block based on the information contained. The risk block is generated using a configuration, where csv_values is a custom object generated from mappings. An example risk block is shown below.
It can be seen that each risk block contains information related to the user, their risk score, and the source of the risk information. Once files are processed, they can be deleted, stored at the storage service 602, or transferred to a secondary storage location. In relation to utilizing S3 buckets as the storage service 602, the files can be transferred to S3 Glacier for cost optimization. Additionally, the risk parser 604 is adapted to produce and report metrics to a risk hub central metric store or the user DB 612 for storage. These metrics can include file size in bytes, user risk message count, successfully parsed counter, file download error encountered counter, file parse error encountered counter, queue error encountered counter, etc.
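The per-line risk block generation described above can be sketched as follows. The column mapping (user, score, source) follows the description of a risk block's contents, but the exact field names and file format are assumptions, not the actual csv_values configuration.

```python
# Illustrative sketch: turn one CSV line of a risk file into a risk block
# using an assumed column mapping. The real mapping is driven by a
# configuration that produces the custom csv_values object.
import csv
import io

FIELDS = ["user", "score", "source"]   # hypothetical column order

def parse_risk_line(line: str) -> dict:
    csv_values = next(csv.reader(io.StringIO(line)))
    block = dict(zip(FIELDS, csv_values))
    block["score"] = int(block["score"])   # scores are numeric, 0-100
    return block
```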
In various embodiments, the purpose of the risk consumer 608 is to consume a message/block containing user information and its associated risk from a queue (risk block), process the message, and then post the results to the destination service. This service resides in the private access control plane within the logging zone along with the queue. The user database serves as the destination for storing user risks. The broker 618 will retrieve the associated user risk from the user database, in some embodiments using Wally. The intermediate ingestion service that will be utilized is called sync 610. The risk consumer 608 will batch the risk messages and call an update user risk score API, implemented by sync 610.
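The batching behavior of the risk consumer 608 can be sketched as follows. The batch size and the shape of the update call are assumptions; the actual API is the update user risk score API implemented by sync 610, whose signature is not specified here.

```python
# Illustrative sketch: the risk consumer accumulates risk messages from
# the queue and forwards them in fixed-size batches to an update call.

def batch_messages(messages: list, batch_size: int = 100):
    # Yield successive batches; the final batch may be smaller.
    for i in range(0, len(messages), batch_size):
        yield messages[i:i + batch_size]
```

For example, 250 queued risk messages with a batch size of 100 would produce three update calls of 100, 100, and 50 messages.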
In some embodiments, the one or more user DBs 612 include tables for persisting user risk. User risk tables contain entries for each user and what the associated risk is for that user. Again, in various embodiments, this associated risk is persisted as a score. Both original and override scores can exist in the user risk table. That is, every user can have two risk score entries including original and override. Separate entries are used so that the act of changing the override entries does not impact the original entries, the original entries being the risk score assigned to a user by the one or more cloud security systems. Additionally, the risk tables can further include the risk parser 604 metrics.
As described, administrators can create access policies that are based on risk scores assigned to users, either original scores or override scores. In some embodiments, policies can take into consideration both original scores and override scores. That is, a policy can determine an action to be performed based on the relationship between the original and override scores. For example, based on the policy, the systems can allow a user access based on an override risk score being low even if the original risk score is high. Similarly, the systems can block access based on an override risk score being high even if the original risk score is low. The alternative can also be contemplated, where policy can give priority to an override score or an original score in order to make a decision. It will be appreciated that the terms low risk score and high risk score are in relation to scores being between 0 and 100, where 100 is high and 0 is low. More particularly, policy decisions can be based on whether the risk score, original or override, is one of low, medium, high, or critical. In an embodiment, the following score ranges are associated with each level: LOW: 0-29; MEDIUM: 30-59; HIGH: 60-79; CRITICAL: 80-100. Further, both original scores (i.e., scores assigned by the security systems) and override scores have expirations.
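The score-to-level mapping above, together with the override-takes-precedence behavior described earlier in this disclosure, can be sketched as follows. Only the numeric ranges come from the embodiment above; the function names are illustrative.

```python
# Illustrative sketch of the embodiment's score-to-level mapping
# (LOW 0-29, MEDIUM 30-59, HIGH 60-79, CRITICAL 80-100) and of the
# rule that an override score, when present, takes precedence.

def risk_level(score: int) -> str:
    if score <= 29:
        return "LOW"
    if score <= 59:
        return "MEDIUM"
    if score <= 79:
        return "HIGH"
    return "CRITICAL"

def effective_score(original: int, override=None) -> int:
    # The override score always takes precedence over the original.
    return override if override is not None else original
```

For example, a user with an original score of 85 (CRITICAL) and an admin override of 10 would be evaluated as LOW.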
Additionally, in an exemplary use case, a user has created a tunnel connection while the user is assigned a low risk score. If, for any reason, the user's risk score is updated to a higher risk score, the tunnel can be terminated, based on the configured policy, if the updated higher risk score is above the policy's threshold. This can be triggered by an updated original risk score or an override score assigned to the user during their session.
The term risk entry refers to the data that is persisted in the user DB 612 after risk information is processed. That is, a risk entry associated with a user includes the information of the risk table, the metrics from the risk parser 604, and any override scores assigned to the user via the admin UI 614. An unknown risk score can be assigned to a user where the original and override risk scores do not exist, or where only the original score exists and has expired (based on its expiry). Customers can have policies to allow access to the resource where a risk score does not yet exist for the user.
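The resolution of a risk entry to an effective score (or to an unknown score) described above can be sketched as follows. The entry shape is hypothetical; for simplicity this sketch applies an expiry only to the original score, the case the paragraph above calls out, though the disclosure notes override scores also carry expirations.

```python
# Illustrative sketch: resolve a persisted risk entry to an effective
# score. None models the "unknown risk score" case: no scores exist,
# or only the original exists and it has expired.

def resolve_risk(entry: dict, now: float):
    override = entry.get("override")
    original = entry.get("original")
    if override is not None:
        return override["score"]          # override takes precedence
    if original is not None and original["expires"] > now:
        return original["score"]
    return None                           # unknown risk score
```

A policy could then explicitly allow access when resolve_risk returns None, matching customers who permit users with no risk score yet.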
In a use case, when the client authenticates as a SCIM user, all registration for the SCIM table occurs. If the risk score is available for the SCIM user in the user DB 612, it is associated with the client connection. When a tunnel connection occurs for the SCIM user, the policy evaluation engine verifies the risk factor mapping from the rule criteria from the UI against the risk factor associated with the client connector from the user DB. In various embodiments, this policy evaluation happens for every tunnel connection. If the risk score changes, because the registration to the table was done previously, a callback updates the current value for the risk score, so that tunnel policy evaluation happens correctly for the next tunnel connection.
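The per-tunnel evaluation described above can be sketched as follows. This is an assumed simplification in which a rule specifies a maximum permitted risk level compared against the level cached on the client connection; the actual rule criteria format is not specified by the disclosure.

```python
# Illustrative sketch: evaluate the cached connection risk level against
# a rule's maximum permitted level on every tunnel connection. The
# ordering of levels follows the disclosure's LOW/MEDIUM/HIGH/CRITICAL.

LEVEL_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def allow_tunnel(connection_level: str, max_allowed_level: str) -> bool:
    # A callback keeps connection_level current when the score changes,
    # so the next evaluation reflects the updated risk.
    return LEVEL_ORDER[connection_level] <= LEVEL_ORDER[max_allowed_level]
```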
The present systems can be utilized for populating a connection state field responsive to a user login. When a user logs in, the client connection will have the corresponding LOW, MEDIUM, HIGH, or CRITICAL risk field populated based on information received from the user DB 612. The systems reference the risk entry associated with the user and either populate the risk field or not, based on whether the original or override score is expired. Again, an unknown risk score can be given to a user where the original and override risk scores do not exist, or only the original score exists and has expired. Customers can have policies to allow access to the resource where a risk score does not yet exist for the user.
Again, the present disclosure provides systems and methods for utilizing user risk signals gathered from various cloud security systems for enforcing private access policies and rules. The systems are adapted to identify risky users in real time and deny access to customer resources. The systems further provide administrators the ability to override risk scores if user risk analytics creates a false positive/negative value. Various embodiments leverage user risk analytics from one or more cloud security systems.
The process 650 can further include receiving the risk score from a security system associated with the cloud-based system; storing the risk score in a user database; and retrieving the risk score from the user database prior to the determining. The determining can be based on any of an original risk score and an override risk score. The steps can include receiving the override risk score from an admin User Interface (UI) prior to the determining. The steps can include receiving a policy configuration from an admin User Interface (UI) prior to the determining, and determining whether the user is allowed to access the resource based on the policy and the risk score. The stitching together the connections can include the device creating a connection to the cloud-based system and a connector associated with the resource creating a connection to the cloud-based system, to enable the device and the resource to communicate. The steps can include determining, based on the risk score, the user is not allowed to access the resource; and notifying the user that the resource does not exist. The steps can include identifying the user as belonging to one of a plurality of risk levels, wherein the risk levels include any of low, medium, high, and critical based on the risk score; and one of allowing or blocking the user from accessing the resource based on the user's risk level. The resource can be located in one of a public cloud, a private cloud, and an enterprise network, and wherein the request originates from a device that is remote over the Internet.
It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device such as hardware, software, firmware, and a combination thereof can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc. perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, and the like. When stored in the non-transitory computer readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims. Moreover, it is noted that the various elements, operations, steps, methods, processes, algorithms, functions, techniques, etc., described herein can be used in any and all combinations with each other.
Number | Date | Country | Kind |
---|---|---|---|
202441000954 | Jan 2024 | IN | national |