ZERO-TRUST IP ADDRESS RESOLUTION IN CLOUD SERVICES

Information

  • Patent Application
  • Publication Number
    20250193152
  • Date Filed
    April 26, 2024
  • Date Published
    June 12, 2025
Abstract
Disclosed are systems, apparatuses, methods, and computer-readable media for zero-trust IP address resolution in cloud services. A method includes: receiving, at an egress node of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolving a second IP address based on the first domain name; and inserting the second IP address into the network request in place of the first IP address.
Description
TECHNICAL FIELD

The disclosure relates generally to cloud networking and, more specifically but not exclusively, to systems and techniques for zero-trust IP address resolution in cloud services.


BACKGROUND

Public clouds are third-party, off-premises cloud platforms that deliver computing resources, such as virtual machines, storage, and applications, over the Internet. Services provided by public cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, are shared among multiple customers. Public clouds offer scalability, cost efficiency, and flexibility as organizations can access and pay for resources on a pay-as-you-go model. Pay-as-you-go is particularly beneficial for customers with fluctuating workloads and enables enterprises to scale resources up or down based on demand. However, the shared nature of public clouds raises considerations regarding security, compliance, and data privacy, and customers need to carefully evaluate their specific requirements and choose appropriate providers.


Many customers also have private clouds, which are dedicated infrastructure that is either on-premises or hosted by a third-party. Private clouds are designed exclusively for a single customer, providing greater control over resources and data. Private clouds are suitable for entities with stringent security and compliance requirements, allowing the entities to customize and manage the infrastructure according to specific needs. Entities use private clouds to retain control over critical business applications, sensitive data, or when regulatory compliance mandates demand a higher level of data governance.


Hybrid and multi-cloud approaches have become popular as a way to capture the benefits of both public and private clouds. Hybrid clouds allow organizations to enjoy the scalability of public clouds while retaining certain workloads in a private, more controlled environment. Multi-cloud strategies involve using services from multiple public cloud providers, offering redundancy, flexibility, and the ability to choose the best-suited services for specific tasks.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the disclosure may be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a conceptual diagram of a networking environment associated with a cloud security platform that integrates into different cloud providers in accordance with some aspects of the disclosure;



FIG. 2 is a conceptual diagram of a cloud security platform that integrates into different cloud service providers in accordance with some aspects of the disclosure;



FIG. 3 illustrates a block diagram of a data path pipeline 300 and integration with hardware in accordance with some aspects of the disclosure;



FIG. 4 illustrates a data path pipeline for forward packet flows and proxy packet flow of a cloud security platform in accordance with some aspects of the disclosure;



FIG. 5 is a conceptual diagram illustrating a cloud security platform integrated into a multi-cloud service in accordance with some aspects of the disclosure;



FIG. 6 is a conceptual diagram of a system that includes a compromised client causing exfiltration of sensitive data from within a private or secured network in accordance with some aspects of the disclosure;



FIG. 7 is a sequence diagram of a system for zero-trust IP address resolution in cloud services in accordance with some aspects of the disclosure;



FIG. 8 is a conceptual diagram of a system illustrating prevention of data exfiltration in accordance with some aspects of the disclosure;



FIG. 9 illustrates an example method for zero-trust IP address resolution in cloud services in accordance with some aspects of the disclosure;



FIG. 10 illustrates a block diagram of a system-on-chip (SoC) used in a network device to perform various functions in hardware in accordance with various aspects of the disclosure; and



FIG. 11 shows an example of a computing system, which may be for example any computing device that may implement components of the system.





DESCRIPTION

Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described to avoid obscuring the description. References to one or an embodiment in the present disclosure may be references to the same embodiment or any embodiment; and, such references mean at least one of the embodiments.


Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.


The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.


Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods, and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.


Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the herein disclosed principles. The features and advantages of the disclosure may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or may be learned by the practice of the principles set forth herein.


Overview

Cloud network providers include various companies such as Google, Apple, Amazon, Microsoft, DigitalOcean, Vercel, Alibaba, Netlify, Redhat OpenShift, Oracle, and many other entities. Each cloud provider offers a range of services, from foundational infrastructure, which is referred to as Infrastructure as a Service (IaaS), to platforms for application development and deployment, which is referred to as Platform as a Service (PaaS), to fully managed software applications, which is referred to as Software as a Service (SaaS). Cloud providers maintain a network of geographically distributed data centers that host servers, storage, and networking equipment and allow customers to deploy resources in proximity to their target audience for improved performance and redundancy, including content delivery networks (CDN) and edge compute services.


Virtualization technology is a foundational aspect of cloud providers and enables the creation of virtual instances of servers, storage, and network resources within a geographic region. Cloud providers also deploy resource orchestration tools to manage the dynamic allocation and scaling of these virtual resources based on demand. Fundamentally, cloud providers establish robust, high-speed connections between their data centers and form a global network backbone. This backbone ensures low-latency communication and facilitates data transfer between different regions.


Cloud providers conventionally deploy a range of security measures, including encryption, firewalls, identity and access management, and compliance certifications, to safeguard customer data and ensure the integrity of their services. Cloud services are designed to be elastic, allowing customers to dynamically scale resources up or down based on demand to handle varying workloads efficiently.


Cloud providers offer various managed services, such as databases, machine learning, analytics, runtimes, and other aspects that allow customers to leverage advanced functionalities without the need for deep expertise in those domains. Various application programming interfaces (APIs) can be exposed by a cloud provider that enables users to programmatically interact with and manage their resources and allow integration with third-party tools and the automation of various tasks.


Fundamentally, in past server architectures, a server was defined with a fixed internet protocol (IP) address. In cloud-based computing, IP addresses are dynamic, which enables the resources within cloud providers to be allocated flexibly. Cloud environments require dynamic scaling to accommodate varying workloads, and dynamic IP addresses allow for the automatic allocation and release of addresses as resources are provisioned or de-provisioned. Dynamic addresses also support service elasticity in response to increasing or decreasing resources, cost efficiency, automation and orchestration of tools within the cloud integration and deployment environment, load balancing, high availability and failover, adaptable network topology, and increased resource utilization.


Cloud security is a fundamental issue because customers typically deploy resources in, and integrate with resources of, different cloud providers. While clouds share a generic infrastructure configuration with a spine network topology that routes traffic to a top-of-rack (TOR) switch and servers within the racks, clouds are still configured differently and have different requirements. For example, some cloud providers emphasize different geographical markets; cloud providers can emphasize different business segments (e.g., healthcare, government, etc.) and configure services according to their intended market.


Cloud security has become an important aspect of networking today because there are significant challenges. For example, data breaches are a significant concern in the cloud because unauthorized access to sensitive data, either through misconfigurations or cyberattacks, can lead to data exposure and compromise the confidentiality of information. Misconfigurations of cloud services, such as incorrectly configured access controls or insecure storage settings, can create vulnerabilities and may expose data to unauthorized users or attackers.


Another important aspect of cloud security is identity management. Improper management of user identities and access privileges can result in unauthorized access. Inadequate or improperly implemented encryption can lead to data exposure. This includes data in transit, data at rest, and data during processing. Ensuring end-to-end encryption is crucial for maintaining data confidentiality.


Cloud providers use shared infrastructure and technologies. If a vulnerability is discovered in a shared component, multiple clients could be affected simultaneously. Regular security updates and patches are essential to mitigate this risk, and there is an increased market for third-party services that integrate into cloud provider services.


Organizations may fail to conduct thorough due diligence when selecting a cloud service provider (CSP). Inadequate assessment of a provider's security measures, compliance standards, and data protection practices can result in security gaps.


The evolving landscape of cybersecurity introduces new threats and attack vectors. Cloud security solutions must continuously adapt to address emerging threats, such as zero-day vulnerabilities and advanced persistent threats (APTs). These attacks can come from many different sources, and monitoring these threats can be too difficult for entities.


The cloud is dynamic, connected, and encrypted. Customers of cloud providers primarily care about their business operations and not the infrastructure behind the business operations. In the current environment, customers of CSPs need to implement intrusion prevention systems (IPS), intrusion detection systems (IDS), and web application firewalls (WAF), as well as provide egress security. Customers may also need to implement data loss prevention (DLP) services to comply with sensitive information requirements.


Dynamic IP addresses are necessary for cloud-based services and provide the flexibility and scalability to meet the dynamic demands of modern applications. In cloud environments, resources are often provisioned and de-provisioned dynamically based on workload requirements. Dynamic IP addresses allow cloud service providers to efficiently allocate and manage IP addresses at runtime to optimize resource utilization and reduce operational overhead. This dynamic allocation ensures that resources are not tied to specific IP addresses permanently, enabling efficient resource allocation and reallocation as needed.


However, dynamic IP addresses can present challenges, particularly in maintaining consistent network access and security. Since IP addresses can change frequently, managing access controls and maintaining secure communication between services becomes more complex. Applications relying on static IP addresses for configuration or security purposes may encounter disruptions or vulnerabilities when IP addresses are dynamically reassigned. In many cases, clients resolve the address of a target service due to the dynamic nature of resources in the cloud.


In the cloud, data exfiltration, which is the unauthorized and deliberate extraction or transfer of sensitive or confidential data from a network or system by an attacker, is a persistent concern. Data exfiltration often originates from within an organization's internal network, often through covert means, with the intent of transferring the data to an external location under the control of the malicious actor. Data exfiltration can occur through various methods, including exploiting vulnerabilities in network infrastructure, utilizing malware or malicious software, leveraging insider threats, or employing social engineering tactics to trick authorized users into divulging sensitive information.


Disclosed are systems, apparatuses, methods, computer-readable media, and circuits for zero-trust IP address resolution in cloud services. The disclosed systems and techniques can prevent data exfiltration by resolving addresses for all egress traffic from within a private network.


According to at least one example, a method for zero-trust IP address resolution in cloud services is disclosed. The method includes: receiving, at an egress gateway of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolving a second IP address based on the first domain name; and inserting the second IP address into the network request in place of the first IP address.


In another example, a firewall of a cloud security platform for configuring resources of a service distributed across multiple cloud service providers is provided that includes a storage (e.g., a memory configured to store data, such as virtual content data, one or more images, etc.) and one or more processors (e.g., implemented in circuitry) coupled to the memory and configured to execute instructions and, in conjunction with various components (e.g., a network interface, a display, an output device, etc.), cause the one or more processors to: receive, at an egress gateway of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolve a second IP address based on the first domain name; and insert the second IP address into the network request in place of the first IP address.


Example Embodiments


FIG. 1 is a conceptual diagram of a networking environment 100 associated with a cloud security platform that integrates into different cloud providers according to some aspects of the disclosure. In some aspects, the networking environment 100 includes a plurality of applications 102 that are connected to a cloud security platform 104 that is configured for various aspects of cloud security. The cloud security platform 104 comprises a compute layer that is configured to discover applications and network resources, deploy cloud-based firewalls and management, and provide multi-cloud policy and control from a single endpoint.


The applications 102 include various forms, such as distributed cloud-based applications, edge-based applications (e.g., webapps), desktop-based applications, mobile phone applications, and so forth. The third-party services 106 include various services, such as cloud service providers and other services that are integrated into the cloud security platform 104. For example, the cloud security platform 104 may be configured to use different services for specialty functions that are consistent for each customer of the cloud security platform 104. Non-limiting examples of different services include various types of communication services (e.g., mail servers, communication platforms, etc.), security-oriented services (e.g., monitoring services such as Splunk), search services, storage services (e.g., relational databases, document databases, time-series databases, graph databases, etc.), authentication services, and so forth.


The cloud security platform 104 is configured to be deployed within various infrastructure environments in a PaaS manner. The cloud security platform 104 includes networking infrastructure 108 for connecting the application 102 to the cloud security platform 104. The cloud security platform 104 includes a plurality of servers 110 that are geographically distributed, with each server 110 being managed with various operating systems (OS) 112, runtimes 114, middleware 116, virtual machines (VM) 118, APIs 120, and management services 122. In some aspects, the runtime 114 of the cloud security platform 104 refers to the environment within which the middleware 116 executes to control various aspects of the cloud security platform 104. For example, the VMs 118 may be Kubernetes containers and the middleware 116 may be configured to add or remove hardware resources within cloud providers dynamically.


The cloud security platform 104 also exposes one or more APIs 120 for allowing the applications 102 to interact with the cloud security platform 104. The APIs 120 enable a customer to surface information, interact with information within the cloud security platform 104, and perform other low-level functions to supplement the security services of the cloud security platform 104. The API 120 is also configured to integrate with other third-party services (e.g., the third-party service 106) to perform various functions. For example, the API 120 may access a customer's resources in a cloud service provider (e.g., a third-party service 106) to monitor for threats, analyze configurations, retrieve logs, monitor communications, and so forth. In one aspect, the API 120 integrates with third-party cloud providers in an agnostic manner and allows the cloud security platform 104 to perform functions dynamically across cloud providers. For example, the API 120 may dynamically scale resources, allow resources to join a cluster (e.g., a cluster of controller instances), implement security rules from the cloud security platform 104 into the corresponding cloud provider, and other functions that enable a cloud-agnostic and service-agnostic integrated platform. For example, in some cases, the API 120 is configured to integrate with other security services to retrieve alerts of specific assets to reduce exposure to malicious actors.
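To illustrate the cloud-agnostic integration described above, the following Python sketch shows how a single scaling interface might wrap provider-specific calls. The CloudAdapter class, the adapter names, and the scale_instances method are hypothetical and are not part of the disclosed API 120.

```python
from abc import ABC, abstractmethod


class CloudAdapter(ABC):
    """Hypothetical provider-agnostic interface used by a platform API."""

    @abstractmethod
    def scale_instances(self, cluster_id: str, desired_count: int) -> None:
        ...


class AwsAdapter(CloudAdapter):
    def scale_instances(self, cluster_id: str, desired_count: int) -> None:
        # A real deployment would call the provider's auto-scaling API here.
        print(f"[aws] scaling {cluster_id} to {desired_count} instances")


class AzureAdapter(CloudAdapter):
    def scale_instances(self, cluster_id: str, desired_count: int) -> None:
        print(f"[azure] scaling {cluster_id} to {desired_count} instances")


def scale_everywhere(adapters: list[CloudAdapter], cluster_id: str, count: int) -> None:
    """Apply one scaling decision uniformly across providers."""
    for adapter in adapters:
        adapter.scale_instances(cluster_id, count)


scale_everywhere([AwsAdapter(), AzureAdapter()], "edge-cluster-1", 4)
```

In this sketch, a single call fans out to each provider adapter, which is one way a platform could present a cloud-agnostic and service-agnostic interface.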


The cloud security platform 104 also includes management services 122 for managing various resources of a customer. In some aspects, the management services 122 can manage resources including a controller (e.g., the controller 210 in FIG. 2), data resources (e.g., a data plane 270 in FIG. 2), and various integrations (e.g., a gateway 250, third-party services 252, cloud providers 254 in FIG. 2). For example, the management services 122 may allow the customer to manage various third-party resources such as a cloud-based relational database, a cloud-based document database, a cloud-based storage service (e.g., various implementations of the S3 API) and so forth.


In one aspect, the management services 122 include an onboarding user experience that connects to various cloud providers (e.g., using the API 120) and allows onboarding of different cloud resources. The management services 122 also provides a cloud-agnostic approach to managing resources across different cloud providers, such as scaling up identical resources in different regions using different cloud providers. As an example, some cloud providers do not have a significant presence in the Far East, and the management services 122 are configured to activate similar resources in a first geographical region (e.g., in Europe) and a second geographical region (e.g., Asia) with similar configurations in different cloud providers.


The cloud security platform 104 is configured to provide security across and within cloud providers in different contexts. For example, the cloud security platform 104 provides protection and security mechanisms in different flows. The cloud security platform 104 is configured to provide varying levels of protection based on flow, packet, encryption and other mechanisms. In one aspect, the cloud security platform 104 is configured to protect forwarding flows and packet flows.


Forwarding flow refers to the set of rules and decisions that determine how network devices handle incoming packets without inspecting packet and traffic contents. A forwarding flow involves making decisions based on information such as destination IP address, media access control (MAC) address, and routing tables to determine the outgoing interface for the packet and typically includes actions like address resolution (e.g., an address resolution protocol (ARP) for IP to MAC address mapping), updating MAC tables, and forwarding the packet to the appropriate interface, and various rules to apply based on configuration and policies.


A proxy flow comprises both forward proxy and reverse proxy functions and inspects the content of encrypted flows and access control. In some aspects, the cloud security platform 104 decrypts encrypted traffic to ensure malicious actors are not exploiting vulnerabilities in TLS-encrypted applications, and prevents data exfiltration (e.g., DLP) or connection to inappropriate uniform resource identifiers (URIs).


The cloud security platform 104 is also configured to handle packets differently based on security policies, such as policies related to IPS and a WAF. A WAF protects various web applications from online threats, such as SQL injection, cross-site scripting (XSS), authentication spoofing, and other potential security risks. For example, a WAF filters and monitors traffic by inspecting headers (e.g., a JSON-encoded object in a hypertext transfer protocol (HTTP) header).


The cloud security platform 104 provides real-time discovery of multi-cloud workloads, at scale, for virtual private clouds (VPCs) and cloud accounts. Real-time discovery also enables finding security gaps and improving defensive posture. The cloud security platform 104 also provides dataplane management using gateways (e.g., the gateway 250 in FIG. 2) that provide self-healing via in-band/transparent diagnostics, seamless upgrade (e.g., no downtime or user intervention), and auto-scaling. The cloud security platform 104 may implement a containerized service (e.g., Kubernetes) to enable scale-out deployments with a high service level agreement (SLA) without having to maintain network security infrastructure, and may integrate with cloud-native networking to enable automation of distributed and centralized (hub-and-spoke) architectures for ingress, egress, east-west (including microsegmentation), and hybrid cloud configurations. The cloud security platform 104 maintains traffic within cloud account boundaries, and customers retain control of their private encryption keys without needing to share encryption keys with the control plane (e.g., the controller 210 of FIG. 2).



FIG. 2 is a conceptual diagram of a cloud security platform that integrates into different cloud service providers in accordance with some aspects of the disclosure.


In some aspects, the cloud security platform 200 separates compute and data storage functions and enables multi-tenancy to support different customers while maintaining data separation when needed. For example, the compute components are separated into a controller 210 and data storage components are implemented in a data plane 270. The controller 210 may be a collection of Kubernetes-based services that deploy a low-latency connection (e.g., a remote procedure call (RPC) such as gRPC, WebSockets, WebTransport, etc.) to connect various endpoints and enable bidirectional streaming, avoiding repeated connection setup and teardown. Each service within the controller 210 scales up or down horizontally based on load.


The controller 210 includes a configuration engine 212, an analytics engine 214, and a resources engine 216. The configuration engine 212 configures the various components and provides various services such as webhooks 218, a dashboard 220, an API 222, and a workflow 224.


In one aspect, the webhooks 218 module configures an asynchronous method of communication between different applications or services in real-time. In a webhook configuration, one application can register an endpoint URI with another, specifying where the other application should send data when a particular event occurs. When the event triggers, the originating system automatically pushes data to the registered URI, allowing the receiving application to process and act upon the information immediately. In some aspects, the webhooks 218 module implements an observer pattern, with a dependent component providing a URI to the observed data source.
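As a minimal sketch of the observer pattern described for the webhooks 218 module, the following Python example registers a callback URI and pushes event data to it when an event triggers. The in-memory registry, event name, and subscriber URI are illustrative assumptions.

```python
import json
import urllib.request

# Hypothetical in-memory registry mapping event types to subscriber URIs.
registry: dict[str, list[str]] = {}


def register_webhook(event_type: str, callback_uri: str) -> None:
    """A dependent component registers the URI it wants notified at."""
    registry.setdefault(event_type, []).append(callback_uri)


def publish_event(event_type: str, payload: dict) -> None:
    """When the event triggers, push the payload to every registered URI."""
    body = json.dumps(payload).encode("utf-8")
    for uri in registry.get(event_type, []):
        request = urllib.request.Request(
            uri, data=body, headers={"Content-Type": "application/json"}
        )
        # urllib.request.urlopen(request) would deliver the notification; it
        # is left out here because the subscriber URI is only illustrative.
        print(f"would POST {request.data!r} to {request.full_url}")


register_webhook("gateway.scale", "https://observer.example.com/hooks/scale")
publish_event("gateway.scale", {"gateway": "gw-1", "replicas": 3})
```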


The dashboard 220 provides a user experience to a customer of the cloud security platform 104 and provides various integration modules, onboarding platforms, monitoring tools, and other functions for customers to access.


In some aspects, the APIs 222 can be various libraries to interact with various services, either through a dashboard 220 interface, a command line interface (not shown), or other tooling (not shown). The APIs 222 can also be API endpoints of the cloud security platform 104 or an API library associated with a third-party service (e.g., third-party services 252), or APIs associated with the cloud providers 254. In one aspect, the APIs 222 can include an agnostic API library that is configured to interact with the cloud providers 254 using a single API interface to scale resources, respond to security incidents, or other functions. This API 222 can be accessed via a command line interface or may be distributed to customers via various package management services.


The workflow 224 module can be various components that enable a customer to perform various tasks, such as managing specific resources, deploying services, communicating with team members regarding issues, and so forth. For example, the workflow 224 module can interact with the gateways 250 and an administration engine 248 to manage resources, access to resources, and deployment of various resources (e.g., deploy infrastructure with Terraform).


The analytics engine 214 is configured to integrate with gateways 250 and various third-party services 252 to monitor various events, services, and other operations. The analytics engine 214 includes a watch server 226 that is configured to disambiguate information from multiple sources of information (e.g., the gateway 250, the third-party services 252, etc.) to provide a holistic view of cloud networking operations. The analytics engine 214 may also be configured to interact with various components of the data plane 270 such as a metrics controller 242 and a data lake controller 246.


In some aspects, the resources engine 216 receives resources from cloud providers 254 and includes various components to route information and store information. The resources engine 216 includes an inventory router 228, logs 230 (e.g., a cache of logs for various functions), an inventory server 232, and a logs server 234. The components of the resources engine 216 are configured to disambiguate and combine information in an agnostic and standardized manner and store various resources in the data plane 270. For example, the resources engine 216 stores and receives events from an events controller 244 and also stores and receives logs in the data lake controller 246. In some aspects, the inventory router 228 and the inventory server 232 build an evergreen model of the customer's cloud accounts and subscriptions and create an address object for security policy management for the cloud security platform 200. The address object represents a segment of the customer's subscription based on cloud-native attributes (e.g., Security Group, ASG, customer-defined tags) and maps to a collection of IP addresses, which is automatically refreshed and synchronized with the gateway 250.


The data plane 270 includes various components to separate various types of information associated with the control plane and interconnected third-party services 252 and cloud providers 254. For example, the data plane 270 includes a configuration controller 240 that stores inventory information of a customer and various configuration information. In one example, the cloud providers 254 use different metrics for decisions pertaining to scaling deployed resources, and the configuration controller 240 stores information that allows the controller 210 to scale resources within the cloud providers 254 in a standardized manner. In some aspects, the configuration controller 240 may include storage mechanisms such as a relational database, a document database, and other high-availability storage mediums. The storage mechanisms can be on-premises resources or off-premises or cloud-based solutions such as various cloud-based relational or document databases (e.g., Redis, MySQL, MongoDB, etc.).


The data plane 270 also includes a metrics controller 242 that is configured to interact with custom metrics data or a third-party service for metrics analysis (e.g., Amazon CloudWatch). The events controller 244 is configured to handle and store events and various queues. For example, the events controller can include a Kafka server for handling real-time data feeds and event-driven applications. The metrics controller 242 may use a publish-subscribe model in which producers (e.g., a third-party service, internal components of the controller 210, a gateway 250, etc.) publish data streams and a consumer subscribes to receive and process these streams in a fault-tolerant and distributed manner. The metrics controller 242 may handle massive amounts of data with low latency and high throughput.
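The publish-subscribe flow described for the metrics controller 242 might be sketched as follows, with a small in-process broker standing in for a distributed system such as Kafka. The topic name and handlers are illustrative assumptions.

```python
from collections import defaultdict
from typing import Callable

# Hypothetical in-process broker standing in for a distributed event bus.
subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)


def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    """A consumer subscribes to a topic to receive published events."""
    subscribers[topic].append(handler)


def publish(topic: str, event: dict) -> None:
    """A producer publishes an event; every subscriber handler receives it."""
    for handler in subscribers[topic]:
        handler(event)


# A consumer (e.g., the analytics engine) subscribes to gateway metrics.
subscribe("gateway.metrics", lambda event: print("consumed:", event))

# A producer (e.g., a gateway) publishes a metrics sample.
publish("gateway.metrics", {"gateway": "gw-1", "throughput_gbps": 4.2})
```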


The data lake controller 246 provides a long-term and scalable storage mechanism and associated services. For example, the data lake controller 246 may include a cloud-based S3 API for storing data to various cloud services (e.g., amazon web services (AWS), DigitalOcean, OpenShift) or on-premises services (e.g., MinIO, etc.). The data lake controller 246 may also include a search-based mechanism such as ElasticSearch for large-scale and efficient search of contents within the non-volatile cloud storage mechanisms. In some aspects, the data lake controller 246 stores network logs and implements search functionality (e.g., Snowflake) for large-scale ad hoc queries for security research and analysis.


The cloud security platform 200 also includes an administration engine 248, a gateway 250, and integrations into various third-party services 252. The administration engine 248 may include authentication services (e.g., Auth0, Okta) to verify identity and provide authentication mechanisms (e.g., access tokens), and may include infrastructure as code (IaC) tools such as Terraform to automate the process of creating, updating, and managing the specified infrastructure across various cloud providers or on-premises environments.


The cloud security platform 200 includes gateways 250 that are deployed into various integration points, such as cloud providers. The gateways 250 are ingress and egress points of the cloud security platform 200 and are configured to monitor traffic, provide information to the controller 210, dynamically scale based on the cloud security platform 200, and provide security to a customer's cloud infrastructure. For example, gateways 250 may implement a transparent forward and reverse proxy to manage traffic. The gateways 250 may also include a cloud-based firewall that is configured to filter malicious traffic using various dynamic detection policies.


The cloud security platform 200 also integrates into various third-party services 252 for various purposes, such as receiving threat-related intelligence (e.g., Splunk, Talos, etc.). The third-party services 252 also include different types of infrastructure components, such as managing mobile devices, implementing cloud-based multimedia communication services, business analytics, network analytics (e.g., reverse address lookup), certificate services, security information and event management (SIEM), and so forth.



FIG. 3 illustrates a block diagram of a data path pipeline 300 and integration with hardware in accordance with some aspects of the disclosure.


In some aspects, the data path pipeline 300 comprises a single-pass firewall architecture that uses a single-pass flow without expensive context switches and memory copy operations. In a single-pass flow, processing is not duplicated multiple times on a packet. For example, TCP/IP receive and transmission operations are only performed a single time. This is different from existing next-generation firewalls (NGFW). The data path pipeline 300 uses fibers with flexible stages completely running in user-space and, therefore, does not incur a penalty for kernel-user context switches, which are expensive in high bandwidth and low latency operations. The data path pipeline 300 provides advanced web traffic inspection comparable to WAFs to secure all traffic flows and break the attack kill chain in multiple places, raising the economic costs for attackers. The data path pipeline 300 also captures packets of live attacks into a cloud storage bucket without significant performance degradation and enables a rule-based capture on a per-session and attack basis.


The data path pipeline 300 is also configured to be flexible, and stages of processing are determined on a per-flow basis. For example, application 1 to application 2 may implement an L4 firewall and IPS inspection, application 3 to application 4 may implement an L4 firewall, a transport layer security (TLS) proxy, and IPS, and an internet client to web application 5 may implement an L4 firewall, TLS proxy, IPS, and WAF.
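As an illustration of per-flow stage selection, a minimal Python sketch follows. The flow classifications and the policy table are assumptions for illustration only.

```python
# Hypothetical policy table mapping flow classifications to ordered stages.
POLICIES = {
    "app-to-app": ["l4_firewall", "ips"],
    "app-to-app-tls": ["l4_firewall", "tls_proxy", "ips"],
    "internet-to-webapp": ["l4_firewall", "tls_proxy", "ips", "waf"],
}


def stages_for_flow(flow_type: str) -> list[str]:
    """Return the ordered processing stages for a given flow classification."""
    return POLICIES.get(flow_type, ["l4_firewall"])


print(stages_for_flow("internet-to-webapp"))
# ['l4_firewall', 'tls_proxy', 'ips', 'waf']
```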


In some aspects, the data path pipeline 300 includes various filters (e.g., a malicious IP filter, a geographic IP filter, and a fully qualified domain name (FQDN) filter) to filter both forwarding flows and proxy flows, as well as an L4 firewall to restrict traffic based on conventional techniques.


The data path pipeline 300 may also be integrated with a hardware offload 302 (e.g., a field programmable gate array (FPGA) of a cloud provider, an application specific integrated circuit (ASIC), etc.) that includes additional functionality that does not impact throughput. In one aspect, a cloud provider may offer a hardware offload or an accelerator function to implement a specialized function. For example, the hardware offload 302 includes a cryptographic engine 304, an API detection engine 306, a decompression engine 308, a regex engine 310, and a fast pattern engine 312 to offload operations into hardware.


In one aspect, the data path pipeline 300 includes high-throughput decryption and re-encryption to enable inspection of all encrypted flows using the cryptographic engine 304. By contrast, traditional NGFWs provide a throughput of around 10% for inspecting encrypted flows. The data path pipeline 300 may use the decompression engine 308 to decompress compressed traffic and perform deep packet inspection. For example, the data path pipeline 300 also uses a userspace Linux TCP/IP driver, in addition to network address translation (NAT), in conjunction with the API detection engine 306 and the decompression engine 308 to eliminate problematic and malicious flows.


The data path pipeline 300 includes a transparent reverse and forward proxy to isolate clients and servers without exposing internal details, a layer 7 firewall to rate limit and protect applications and APIs, and secure user access that looks up end-user-specific identity from an identity provider (IDP) to provide zero trust network access (ZTNA). The data path pipeline 300 includes a WAF pipeline and an IPS pipeline to detect malicious and problematic flows in conjunction with the regex engine 310 and the fast pattern engine 312. For example, the WAF pipeline may implement protection for web applications, including the OWASP Top 10, using a core ruleset and application-specific rules for frameworks and common content management tools like PHP, Joomla, and WordPress. The data path pipeline 300 includes IDS and IPS to block known vulnerabilities and provide virtual patching until the applications can be patched with updated security fixes, application identification to block traffic based on client, server, or application payload, DLP and URI filtering, and antivirus and anti-malware features to prevent malware files from being transferred for ingress (malicious file uploads), east-west lateral attacks (moving toolkits), and egress flows (e.g., botnets).



FIG. 4 illustrates a data path pipeline 400 for forward packet flows and proxy packet flow of a cloud security platform in accordance with some aspects of the disclosure.


The data path pipeline 400 comprises an L4 firewall 402, a user space receive TCP/IP stack 404, a TLS receive proxy 406, a WAF 408, an IPS 410, a TLS transmit proxy 412, and a user space transmit TCP/IP stack 414, and illustrates the flow of forwarding flows and proxy flows and the points at which packets may be dropped or accepted using an L4 firewall, a WAF, and/or an IPS.


For example, the data path pipeline 400 may be implemented as a user-space driver (e.g., a Data Plane Development Kit (DPDK) driver) that receives forwarding and proxy flows, computes hashes, and provides the packet to a worker. In this case, a worker is part of a distributed instance of a gateway and provides the flows to the L4 firewall 402. For example, the L4 firewall 402, or a transport layer firewall, may inspect traffic and filter traffic based on source and destination IP/port.
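A simplified sketch of the hash-based worker selection and transport-layer filtering described above is shown below. The 5-tuple representation and the rule format are illustrative assumptions, and a real data path would typically use a deterministic hash such as a Toeplitz hash rather than Python's built-in hash.

```python
import ipaddress
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowKey:
    """Illustrative 5-tuple identifying a flow."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str


def select_worker(key: FlowKey, worker_count: int) -> int:
    """Hash the 5-tuple so all packets of a flow land on the same worker."""
    return hash(key) % worker_count


def l4_permit(key: FlowKey, blocked_networks: list[str], blocked_ports: set[int]) -> bool:
    """Transport-layer filter based only on destination address and port."""
    dst = ipaddress.ip_address(key.dst_ip)
    if key.dst_port in blocked_ports:
        return False
    return not any(dst in ipaddress.ip_network(net) for net in blocked_networks)


flow = FlowKey("10.0.0.5", "93.184.216.34", 49152, 443, "tcp")
print(select_worker(flow, worker_count=8))
print(l4_permit(flow, blocked_networks=["198.51.100.0/24"], blocked_ports={23}))
```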


The user space receive TCP/IP stack 404 is configured to perform the receive processing of forwarding and proxy flows. For example, the user space receive TCP/IP stack 404 handles framing, addressing, and error detection within TCP/IP and further identifies per-flow processing based on policies and rules of the cloud security platform. For example, some forwarding flows are provided to the user space transmit TCP/IP stack 414, some forwarding flows are provided to the IPS 410, and proxy flows are forwarded to the TLS receive proxy 406. The TLS receive proxy 406 manages the TLS decryption process in the event further inspection is warranted based on the policies and rules, and then provides the proxy flows to either the IPS 410 or the WAF 408 based on a policy.


The IPS 410 examines a packet's content, headers, and contextual information. Deep packet inspection involves analyzing the payload and looking for patterns, signatures, or anomalies that may indicate malicious activity. The IPS compares the packet against a database of known attack signatures and employs heuristic analysis to detect deviations from expected behavior. Additionally, it assesses factors such as source and destination addresses, ports, and protocol compliance. If the IPS identifies a packet as potentially malicious, it can take proactive measures, such as blocking the packet, alerting administrators, or initiating predefined security policies to prevent the exploitation of vulnerabilities and safeguard the network from intrusion attempts.


The WAF 408 monitors, filters, and analyzes HTTP traffic in real-time and actively looks for and blocks common web vulnerabilities such as SQL injection, XSS, and other application-layer attacks. By examining and validating HTTP requests and responses, the WAF can detect and block malicious traffic, ensuring that only legitimate requests reach the web application. WAFs often employ rule-based policies, signature-based detection, and behavioral analysis to identify and mitigate potential security risks.


The TLS transmit proxy 412 reassembles the proxy flows and contextual information and provides the proxy flows to the user space transmit TCP/IP stack 414, which reassembles the packet and forwards the traffic. As shown in FIG. 4, flows can be dropped at different points: in the user space receive TCP/IP stack 404, after the WAF 408 or the IPS 410, or in the user space transmit TCP/IP stack 414.



FIG. 5 is a conceptual diagram 500 illustrating a cloud security platform integrated into a multi-cloud service in accordance with some aspects of the disclosure. In some aspects, a service 505 may be geographically distributed and implemented using multiple CSPs. The CSPs can be distributed for many different reasons; for example, a particular CSP may not have service available in particular countries. In other aspects, a single CSP may not be distributed geographically to a sufficient extent. Pricing can also be a concern, with lower-cost alternatives being available in high-bandwidth regions. For at least these reasons, the service 505 may be distributed across CSP 510, CSP 520, and CSP 530 in this example.


Each service distributed across CSP 510, the CSP 520, and the CSP 530 may include an ingress gateway 511, a load balancer 512, a frontend 513, a backend 514, and an egress gateway 515. In some aspects, the ingress gateway 511 and the egress gateway 515 may be provided by the cloud security platform and provide an agnostic interface to control flows into each different CSP. For example, the cloud security platform can scale resources in a consistent manner, provide malicious content filtering, attack denial, rate limiting, and other services to protect the service 505 at the corresponding CSP. The cloud security platform can also perform the corresponding services to a recipient of the service (not shown).


The cloud security platform abstracts specific details associated with each CSP into a single interface and allows an administrator of the service 505 a common control plane for controlling resources within each CSP and protecting services within each CSP and the service 505.



FIG. 6 is a conceptual diagram of a system 600 that includes a compromised client that causes exfiltration of sensitive data from within a private or secured network. For example, the system 600 includes a client 605 having a malicious component 608 (e.g., malware, a trojan, spyware, etc.) that is configured to forward data to a malicious server 625. The malicious component 608 may covertly acquire information (e.g., documents), probe networks, and perform other unauthorized transactions based on the client 605 having trust within the network, and may send this data to the malicious server 625 to identify additional weaknesses, provide sensitive data to allow social engineering or other penetration testing, and so forth.


In some cases, data exfiltration can occur by sending the acquired information from the client 605 through a firewall 610. In some cases, the firewall 610 may include other network components (e.g., routers, egress firewall, front ends, back ends, load balancers, etc.) and is illustrated as an ingress or egress firewall for simplicity. The packets transmitted from the client 605 based on the malicious component 608 may be structured to evade detection by mimicking a destination. For example, a packet is transmitted with a destination of acme.com:443:1.1.3.4, with acme.com being an FQDN, 443 corresponding to a logical port associated with secure HTTP traffic, and 1.1.3.4 being an IPv4 address.


The firewall 610 is configured to send the egress packet from the internal network based on implicit or explicit trust between the client 605 and the firewall 610. As described above, client devices may resolve the IP address of the FQDN due to the dynamic nature of cloud services and then transmit packets to the external network via the firewall 610. For example, the firewall 610 is configured to use the uniform resource locator (URL) (e.g., the FQDN plus a query string) to determine whether the request is valid and then transmits the packet to an external router 620.


In this case, the packet traversed a boundary 615 between an internal private network (e.g., a network of a business, government entity, etc.) and an external public network. The external router 620 receives the packet and then routes the packet based on the destination IPv4 address. In this case, the packet has a destination IPv4 address corresponding to the malicious server 625, and the packet is not forwarded to the acme.com server 630. This is a current issue in existing cloud service providers and allows exfiltration of data if any device internal to the entity becomes compromised with malicious instructions.


In some cases, the firewall 610 can be integrated into a CSP (e.g., AWS) as illustrated in FIG. 5. The CSPs similarly trust egress network data provided within the private network, which can result in exfiltration of sensitive data from private networks (e.g., private clouds).



FIG. 7 is a sequence diagram of a system 700 for zero-trust IP address resolution in cloud services in accordance with some aspects of the disclosure. In some aspects, the system 700 includes a client 702 that is compromised with malicious instructions, such as malware 704. The client 702 is trusted within a network. For example, the client 702 is connected to a private cloud via a secured connection. In some cases, the client 702 may have received a malicious package via a secured channel within the private network. For example, the malware 704 may be deployed via email, an external source, a malicious payload that exploits a protocol or communication mechanism or an application, and so forth. The client 702 may be connected to a firewall 706 (e.g., through a private network, which is not shown to avoid obscuring the disclosure with unnecessary components). As will be described below, the private network may also have a domain name system (DNS) resolver 708 and is configured to modify packets and provide packets to an external router 710 based on modifications to the packet.


In one aspect, the malware 704 may cause the client 702 to send a packet 712. For example, the packet 712 includes a destination address of www.acme.com:555:1.2.3.4. The firewall 706 receives the packet 712 and then sends a DNS request 714 to the DNS resolver 708 to resolve the IP address based on the FQDN in the destination address (e.g., acme.com). The DNS resolver 708 resolves the IP address of the service at the FQDN (e.g., 10.1.1.3) and sends the IP address 716 associated with the FQDN to the firewall 706.


The firewall 706 then modifies the packet 712 at block 718 by replacing the IP address of the original packet 712 with the IP address 716. The firewall 706 thereby provides a zero-trust policy to packets coming from within the network and substitutes a resolved IP address based on the domain name in the packet 712.
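As a minimal sketch of the resolution-and-rewrite step at block 718, the following Python example resolves the FQDN with the gateway's own lookup and replaces the client-supplied destination address. The dictionary packet representation and the helper names are illustrative assumptions, and the system resolver is used via the standard socket module as a stand-in for the DNS resolver 708.

```python
import socket


def resolve_destination(domain: str) -> str:
    """Resolve the domain with the gateway's own DNS lookup (zero trust in
    the client-supplied address)."""
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples.
    infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    return infos[0][4][0]


def rewrite_packet(packet: dict) -> dict:
    """Replace the client-supplied destination IP with the resolved one."""
    resolved_ip = resolve_destination(packet["domain"])
    rewritten = dict(packet)
    rewritten["dst_ip"] = resolved_ip
    return rewritten


# Packet as sent by the compromised client in FIG. 7 (values illustrative).
packet = {"domain": "www.acme.com", "dst_port": 555, "dst_ip": "1.2.3.4"}
print(rewrite_packet(packet))
```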


In some aspects, the firewall 706 may also modify the port in the destination address of the packet 712. For example, the firewall 706 may identify an application associated with the packet (e.g., an HTTP request, etc.). As described above, the hardware accelerators can be used to perform a deep packet inspection and may identify metadata corresponding to various protocols. The firewall 706 may determine that the port number does not correspond to a standard port number or an industry-specific port number associated with the protocol, and may substitute the port in the packet 712 with a standardized port. Table 1 below identifies various ports used for different common services.













TABLE 1

Port      Service
20, 21    File Transfer Protocol (FTP)
22        Secure Shell (SSH)
25        Simple Mail Transfer Protocol (SMTP)
53        DNS
80        HTTP
123       Network Time Protocol (NTP)
179       Border Gateway Protocol (BGP)
443       HTTPS
500       Internet Key Exchange (IKE) for VPN tunnels
583       encrypted SMTP
3389      Remote Desktop Protocol (RDP)
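As an illustration of the port substitution described above, a minimal Python sketch follows. The service-to-port mapping reflects a subset of Table 1, and the detected-service input is assumed to come from deep packet inspection as described earlier.

```python
# Well-known ports from Table 1 (subset), keyed by the detected service.
STANDARD_PORTS = {
    "ftp": 21,
    "ssh": 22,
    "smtp": 25,
    "dns": 53,
    "http": 80,
    "ntp": 123,
    "bgp": 179,
    "https": 443,
    "ike": 500,
    "rdp": 3389,
}


def normalize_port(detected_service: str, requested_port: int) -> int:
    """Remap a non-standard port to the conventional port for the detected
    service; leave the port unchanged when no convention is known."""
    return STANDARD_PORTS.get(detected_service, requested_port)


# A TLS-encrypted HTTP request addressed to port 555 is remapped to 443.
print(normalize_port("https", 555))  # 443
```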









At block 720, the firewall 706 may then process the packet under normal processing rules and may determine to drop the packet (e.g., block the packet) or transmit the packet based on firewall 706 rules. In this manner, the firewall 706 provides a dual layer of security by not trusting client-resolved IP addresses and may prevent data loss and exfiltration by also inspecting contents within the packet based on the normal policies of the firewall 706.


In this case, the firewall 706 determines that the acme.com domain name is trusted and the modified IP packet 722 can be sent to the external router 710. However, as the modified IP packet 722 has been modified to send information to the IP address of a trusted server and at a trusted port, the external server can reject the packet as it does not comport with an expected service. For example, the acme.com server that receives the packet cannot decode the packet because the data is in a format intended for a different destination identified in the packet 712.



FIG. 8 is a conceptual diagram of a system 800 illustrating preventing exfiltration in accordance with some aspects of the disclosure. The system includes a client 805 that includes malware 808, and the malware 808 causes the client 805 to send a packet that is addressed with acme.com, port 555, and IP address 1.2.3.4. A firewall 810 receives the packet and requests a DNS resolver 812 to resolve the IP address of acme.com. The DNS resolver returns the IPv4 address of the acme.com server, which is 10.1.1.3. The firewall 810 may also inspect the packet to identify that the request is a TLS-encrypted HTTP request. In this case, the firewall 810 may determine that the normal port is 443 for TLS-encrypted HTTP requests.


The firewall 810 then replaces the destination IP address and the port with the resolved information (e.g., 10.1.1.3 and port 443). After modifying the destination address and port of the packet, the firewall 810 can perform processing on the packet to ensure that the content within the packet corresponds to policies outlined within the firewall. These policies can be applied against malicious actors at both the state level and the corporate level. For example, in the event the malware 808 is designed to evade DLP techniques, the firewall 810 may ensure that data within the packet is not prohibited based on the destination domain name.


In some aspects, the firewall 810 may also compare the resolved IP address and the IP address within the packet. The firewall 810 may elect to deny processing of the packet based on clear differences between the IP addresses. For example, if the resolved IP address is in a different country than the IP address identified in the packet, the firewall 810 may drop the packet.


After the modified packet passes all firewall 810 policies, the packet is sent and traverses a boundary 815 between the private network (e.g., as noted above, the firewall 810 may be part of a VPC of a CSP) and a public network. The router 820 receives and forwards the modified packet to the acme.com server 830 at 10.1.1.3 on port 443.



FIG. 9 illustrates an example method for zero-trust IP address resolution in cloud services. Although the example method 900 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the method 900. In other examples, different components of an example device or system that implements the method 900 may perform functions at substantially the same time or in a specific sequence. Although a computing device (e.g., using the system-on-chip (SoC) 1000, etc.) is described as performing the method, this example is for descriptive purposes. The method may be performed in a distributed manner using cloud computing, various containers, microservices, and other techniques. In some aspects, the computing system described in FIG. 11 comprises functionality associated with a controller of the cloud security platform.


In some aspects, the computing system may be a controller (e.g., the controller 210) configured to deploy services and firewalls in a multi-cloud application. The computing system may store single firewall configuration information that can be applied to a plurality of CSPs. As further described below, the computing system can generate derived firewall information for a particular CSP for ingress or egress of network traffic.


At block 902, the computing system may receive, at an egress gateway of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address. For example, the egress gateway may be configured for forward and reverse proxy operations and may also filter network traffic leaving a private cloud (e.g., a firewall).


At block 904, the computing system may, in response to receiving the network request, resolve a second IP address based on the first domain name. For example, the first domain name may be www.acme.com, but the destination IP address in the packet may not correspond to acme.com. The cloud is dynamic, and IP addresses are continually changing. To that end, the egress gateway may resolve an IP address of the first domain name (e.g., 10.1.1.3).


At block 906, the computing system may insert the second IP address into the network request in place of the first IP address. For example, the initial network request may be addressed to 1.2.3.4, which is under the control of a malicious actor within a geographic region. The first IP address may belong to a residential gateway that has been compromised and is forwarding information to a command and control server. By replacing the first IP address with the second IP address, the egress gateway is configured to prevent IP address spoofing and ensure that the packet is sent to the identified domain name. That is, the egress gateway does not trust IP addresses resolved by clients; it performs its own DNS lookup and then modifies the network request.


At block 908, the computing system may, after inserting the second IP address into the network request, process the network request based on the second IP address and the first domain name. For example, the egress gateway may apply different policies to ensure that the network request corresponds to normal activity. In the event that the packet does not correspond to normal activity, for example by identifying sensitive information within the packet, the egress gateway can block the network request (e.g., drop the network request without transmission).
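
By way of a non-limiting illustration, the policy processing at block 908 can be sketched as follows. The regular expression standing in for a data loss prevention rule and the per-domain policy table are purely illustrative assumptions.

import re

# Toy pattern standing in for a DLP rule (SSN-like identifiers), and an
# illustrative set of destinations subject to that rule.
SENSITIVE_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")
RESTRICTED_DOMAINS = {"acme.com"}

def process_request(domain, payload):
    # Returns True when the request may be forwarded, False when blocked.
    if domain in RESTRICTED_DOMAINS and SENSITIVE_PATTERN.search(payload):
        return False  # sensitive data bound for a restricted destination
    return True

print(process_request("acme.com", b"report: 123-45-6789"))  # False (blocked)
print(process_request("acme.com", b"hello world"))          # True (allowed)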


In some aspects, the computing system may block the network request before processing the network request at block 908, or during that processing. In one aspect, the computing system may block the network request based on a comparison of the first IP address to the second IP address. For example, the computing system may determine a country associated with the second IP address and a country associated with the first IP address. In the event the countries are different, the computing system may block the request. Spoofing the destination is a strong indicator of a malicious attack, and blocking the request may be warranted. In another aspect, the computing system may block the network request based solely on an analysis of the second IP address. For example, the computing system may maintain a block list, and IP addresses on the block list are always blocked.
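
A minimal sketch of the comparison-based blocking described above follows. The geolocation lookup is a stub dictionary standing in for a geo-IP database, and the block list entries are placeholders; a deployment would query an actual geolocation service and maintained threat feeds.

# Stub geolocation data and block list (illustrative placeholders only).
GEO_DB = {"1.2.3.4": "CountryA", "10.1.1.3": "CountryB"}
BLOCK_LIST = {"203.0.113.7"}

def country_of(ip):
    return GEO_DB.get(ip, "unknown")

def should_block(first_ip, second_ip):
    # Block when the resolved address is block-listed or when the two
    # addresses geolocate to different countries.
    if second_ip in BLOCK_LIST:
        return True
    return country_of(first_ip) != country_of(second_ip)

print(should_block("1.2.3.4", "10.1.1.3"))  # True: the countries differ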


In some aspects, the computing system may use the port identified in the network request. In some cases, malicious actors place services on uncommon ports to avoid detection of compromised devices. The computing system may determine a service associated with the network request and modify the network request based on a comparison of a port identified in the network request to a conventional port associated with the service. For example, if the packet is deemed a TLS-encrypted packet, which is conventionally carried on port 443, and the packet is addressed to port 555, the computing system may remap the port to 443 in the network request.
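
A minimal sketch of the port remapping follows, assuming the service is inferred from the first payload byte (a TLS record begins with content type 0x16). The classifier is deliberately simplified for illustration.

CONVENTIONAL_PORTS = {"tls": 443}

def detect_service(payload):
    # Simplified classifier: 0x16 marks a TLS handshake record.
    return "tls" if payload[:1] == b"\x16" else None

def remap_port(payload, requested_port):
    expected = CONVENTIONAL_PORTS.get(detect_service(payload))
    if expected is not None and requested_port != expected:
        return expected  # e.g., remap 555 -> 443 for TLS traffic
    return requested_port

print(remap_port(b"\x16\x03\x01", 555))  # 443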


In some aspects, the computing system may use a reverse lookup based on the first IP address. For example, the computing system may obtain a second domain name based on a reverse lookup of the first IP address at the egress node.
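
A minimal sketch of the reverse lookup follows, using the standard library's socket.gethostbyaddr to perform a PTR (reverse DNS) query; returning None on failure is a simplifying assumption.

import socket

def reverse_lookup(ip):
    # PTR lookup of the client-supplied destination IP address.
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except OSError:
        return None

second_domain = reverse_lookup("1.2.3.4")
print(second_domain)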


After performing the analyses described above, the computing device may send the network request to the second IP address, provided the network request has not been blocked. In this manner, the computing system prevents IP address spoofing by sources within the network and reduces data exfiltration from devices within the private cloud network.



FIG. 10 illustrates a block diagram of an SoC 1000 in a network device to perform various functions in hardware in accordance with various aspects of the disclosure. For example, the SoC 1000 may include fixed hardware components and programmable hardware components to perform various network tasks. In one illustrative aspect, the SoC 1000 includes a programmable network processor 1002 (e.g., a network processing unit (NPU), etc.), a programmable NPU host 1004, counters and meters 1006, telemetry 1010, an NPU database 1012, a shared buffer 1014, a web scale circuit 1016, a time stamper 1018, a synchronous Ethernet (SyncE) circuit 1020, and a serializer/deserializer 1022. In some cases, the SoC 1000 may be configured to execute bytecode instructions to be supplemented with additional functions. For example, the SoC 1000 may be configured to execute various functions described above.


The programmable network processor 1002 may be programmed to perform functions that are conventionally performed by integrated circuits (ICs) that are specific to switching, routing line cards, and routing fabrics. The programmable network processor 1002 may be programmable using the programming protocol-independent packet processors (P4) language, which is a domain-specific programming language for network devices for processing packets. The programmable network processor 1002 may have a distributed P4 NPU architecture that may execute at a line rate for small packets with complex processing. The programmable network processor 1002 may also include optimized and shared NPU fungible tables. In some aspects, the programmable network processor 1002 supports a unified software development kit (SDK) to provide consistent integrations across different network infrastructures and simplify networking deployments. The SoC 1000 may also include embedded processors to offload various processes, such as asynchronous computations.


The programmable network processor 1002 includes a programmable NPU host 1004 that may be configured to perform various management tasks, such as exception processing and control-plane functionality. In one aspect, the programmable NPU host 1004 may be configured to perform high-bandwidth offline packet processing such as, for example, operations, administration, and management (OAM) processing and MAC learning.


The SoC 1000 includes counters and meters 1006 for traffic policing, coloring, and monitoring. As an example, the counters and meters 1006 include programmable counters used for flow statistics and OAM loss measurements. The programmable counters may also be used for port utilization, microburst detection, delay measurements, flow tracking, elephant flow detection, congestion tracking, etc.


The telemetry 1010 is configured to provide in-band telemetry information such as per-hop granular data in the forwarding plane. The telemetry 1010 may observe changes in flow patterns caused by microbursts, packet transmission delay, latency per node, and new ports in flow paths. The NPU database 1012 provides data storage for one or more devices, for example, the programmable network processor 1002 and the programmable NPU host 1004. The NPU database 1012 may include different types of storage, such as key-value pair, block storage, etc.


In some aspects, the SoC 1000 includes a shared buffer 1014 that may be configured to buffer data, configurations, packets, and other content. The shared buffer 1014 may be utilized by various components such as the programmable network processor 1002 and the programmable NPU host 1004. A web scale circuit 1016 may be configured to dynamically allocate resources within the SoC 1000 for scale, reliability, consistency, fault tolerance, etc.


In some aspects, the SoC 1000 may also include a time of day (ToD) time stamper 1018 and a SyncE circuit 1020 for distributing a timing reference to subordinate devices. For example, the time stamper 1018 may support IEEE-1588 for ToD functions. In some aspects, the time stamper 1018 includes support for a precision time protocol (PTP) for distributing frequency and/or phase to enable subordinate devices to synchronize with the SoC 1000 with nanosecond-level accuracy.


The serializer/deserializer 1022 is configured to serialize and deserialize packets into electrical signals and data. In one aspect, the serializer/deserializer 1022 supports sending and receiving data using non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4-level (PAM4) modulation. In one illustrative aspect, the hardware components of the SoC 1000 provide features for terabit-level performance based on flexible port configuration, nanosecond-level timing, and programmable features. Non-limiting examples of hardware functions that the SoC 1000 may support include IP tunneling, multicast, NAT, port address translation (PAT), security and quality of service (QoS) access control lists (ACLs), equal cost multiple path (ECMP), congestion management, distributed denial of service (DDoS) mitigation using control plane policing, telemetry, timing and frequency synchronization, and so forth.



FIG. 11 is a diagram illustrating an example of a system for implementing certain aspects of the present technology. In particular, FIG. 11 illustrates an example of computing system 1100, which can be, for example, any computing device making up an internal computing system, a remote computing system, a camera, or any component thereof in which the components of the system are in communication with each other using connection 1105. Connection 1105 can be a physical connection using a bus, or a direct connection into processor 1110, such as in a chipset architecture. Connection 1105 can also be a virtual connection, networked connection, or logical connection.


In some aspects, computing system 1100 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some aspects, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some aspects, the components can be physical or virtual devices.


Example computing system 1100 includes at least one processing unit (a central processing unit (CPU) or processor) 1110 and connection 1105 that couples various system components including system memory 1115, such as ROM 1120 and RAM 1125 to processor 1110. Computing system 1100 can include a cache 1112 of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1110.


Processor 1110 can include any general purpose processor and a hardware service or software service, such as services 1132, 1134, and 1136 stored in storage device 1130, configured to control processor 1110 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 1110 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.


To enable user interaction, computing system 1100 includes an input device 1145, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1100 can also include output device 1135, which can be one or more of a number of output mechanisms. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1100. Computing system 1100 can include communications interface 1140, which can generally govern and manage the user input and system output. The communication interface may perform or facilitate receipt and/or transmission of wired or wireless communications using wired and/or wireless transceivers, including those making use of an audio jack/plug, a microphone jack/plug, a universal serial bus (USB) port/plug, an Apple® Lightning® port/plug, an Ethernet port/plug, a fiber optic port/plug, a proprietary wired port/plug, a Bluetooth® wireless signal transfer, a BLE wireless signal transfer, an IBEACON® wireless signal transfer, an RFID wireless signal transfer, near-field communications (NFC) wireless signal transfer, dedicated short range communication (DSRC) wireless signal transfer, 802.11 WiFi wireless signal transfer, WLAN signal transfer, Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), IR communication wireless signal transfer, Public Switched Telephone Network (PSTN) signal transfer, Integrated Services Digital Network (ISDN) signal transfer, 3G/4G/5G/LTE cellular data network wireless signal transfer, ad-hoc network signal transfer, radio wave signal transfer, microwave signal transfer, infrared signal transfer, visible light signal transfer, ultraviolet light signal transfer, wireless signal transfer along the electromagnetic spectrum, or some combination thereof. The communications interface 1140 may also include one or more Global Navigation Satellite System (GNSS) receivers or transceivers that are used to determine a location of the computing system 1100 based on receipt of one or more signals from one or more satellites associated with one or more GNSS systems. GNSS systems include, but are not limited to, the US-based GPS, the Russia-based Global Navigation Satellite System (GLONASS), the China-based BeiDou Navigation Satellite System (BDS), and the Europe-based Galileo GNSS. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.


Storage device 1130 can be a non-volatile and/or non-transitory and/or computer-readable memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, a floppy disk, a flexible disk, a hard disk, magnetic tape, a magnetic strip/stripe, any other magnetic storage medium, flash memory, memristor memory, any other solid-state memory, a compact disc read only memory (CD-ROM) optical disc, a rewritable compact disc (CD) optical disc, digital video disk (DVD) optical disc, a blu-ray disc (BDD) optical disc, a holographic optical disk, another optical medium, a secure digital (SD) card, a micro secure digital (microSD) card, a Memory Stick® card, a smartcard chip, an EMV chip, a subscriber identity module (SIM) card, a mini/micro/nano/pico SIM card, another IC chip/card, RAM, static RAM (SRAM), dynamic RAM (DRAM), ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash EPROM (FLASHEPROM), cache memory (L1/L2/L3/L4/L5/L #), resistive random-access memory (RRAM/ReRAM), phase change memory (PCM), spin transfer torque RAM (STT-RAM), another memory chip or cartridge, and/or a combination thereof.


The storage device 1130 can include software services, servers, services, etc., that when the code that defines such software is executed by the processor 1110, it causes the system to perform a function. In some aspects, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1110, connection 1105, output device 1135, etc., to carry out the function. The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as CD or DVD, flash memory, memory or memory devices. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.


In some examples, the processes described herein (e.g., method 900, and/or other process described herein) may be performed by a computing device or apparatus. In one example, the method 900 can be performed by a computing device having a computing architecture of the computing system 1100 shown in FIG. 11.


In some cases, the computing device or apparatus may include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component(s) that are configured to carry out the steps of processes described herein. In some examples, the computing device may include a display, one or more network interfaces configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The one or more network interfaces can be configured to communicate and/or receive wired and/or wireless data, including data according to the 3G, 4G, 5G, and/or other cellular standard, data according to the Wi-Fi (802.11x) standards, data according to the Bluetooth™ standard, data according to the IP standard, and/or other types of data.


The components of the computing device can be implemented in circuitry. For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphical processing units (GPUs), digital signal processors (DSPs), CPUs, and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.


In some aspects the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.


Specific details are provided in the description above to provide a thorough understanding of the aspects and examples provided herein. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the aspects in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the aspects.


Individual aspects may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but may have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.


Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.


Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.


The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.


In the foregoing description, aspects of the application are described with reference to specific aspects thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative aspects of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate aspects, the methods may be performed in a different order than that described.


One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description.


Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.


The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.


Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.


The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.


The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as RAM such as synchronous dynamic random access memory (SDRAM), ROM, non-volatile random access memory (NVRAM), EEPROM, flash memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.


The program code may be executed by a processor, which may include one or more processors, such as one or more DSPs, general purpose microprocessors, an ASIC, FPGAs, or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein.

Claims
  • 1. A method of a firewall for preventing data exfiltration, comprising: receiving, at an egress node of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolving a second IP address based on the first domain name; and inserting the second IP address into the network request in place of the first IP address.
  • 2. The method of claim 1, further comprising: after inserting the second IP address into the network request, processing the network request based on the second IP address and the first domain name.
  • 3. The method of claim 1, further comprising: blocking the network request based on a comparison of the first IP address to the second IP address.
  • 4. The method of claim 1, further comprising: blocking the network request based on the second IP address.
  • 5. The method of claim 1, further comprising: determining a service associated with the network request; and modifying the network request based on a comparison of a port identified in the network request to a conventional port associated with the service.
  • 6. The method of claim 1, further comprising: obtaining a second domain name based on a reverse lookup of the first IP address at the firewall.
  • 7. The method of claim 1, further comprising: sending the network request to the second IP address.
  • 8. A computing device for performing a function, comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to: receive, at an egress node of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolve a second IP address based on the first domain name; and insert the second IP address into the network request in place of the first IP address.
  • 9. The computing device of claim 8, wherein the at least one processor is configured to: after inserting the second IP address into the network request, process the network request based on the second IP address and the first domain name.
  • 10. The computing device of claim 8, wherein the at least one processor is configured to: block the network request based on a comparison of the first IP address to the second IP address.
  • 11. The computing device of claim 8, wherein the at least one processor is configured to: block the network request based on the second IP address.
  • 12. The computing device of claim 8, wherein the at least one processor is configured to: determine a service associated with the network request; and modify the network request based on a comparison of a port identified in the network request to a conventional port associated with the service.
  • 13. The computing device of claim 8, wherein the at least one processor is configured to: obtain a second domain name based on a reverse lookup of the first IP address at a firewall.
  • 14. The computing device of claim 8, wherein the at least one processor is configured to: send the network request to the second IP address.
  • 15. A non-transitory computer readable medium for zero-trust IP address resolution in cloud services, comprising: a storage configured to store instructions; and a processor configured to execute the instructions and cause the processor to: receive, at an egress node of a network, a network request from a client device within the network, the network request including a first domain name and a first IP address; in response to receiving the network request, resolve a second IP address based on the first domain name; and insert the second IP address into the network request in place of the first IP address.
  • 16. The non-transitory computer readable medium of claim 15, wherein the processor is configured to execute the instructions and cause the processor to: after inserting the second IP address into the network request, process the network request based on the second IP address and the first domain name.
  • 17. The non-transitory computer readable medium of claim 15, wherein the processor is configured to execute the instructions and cause the processor to: block the network request based on a comparison of the first IP address to the second IP address.
  • 18. The non-transitory computer readable medium of claim 15, wherein the processor is configured to execute the instructions and cause the processor to: block the network request based on the second IP address.
  • 19. The non-transitory computer readable medium of claim 15, wherein the processor is configured to execute the instructions and cause the processor to: determine a service associated with the network request; and modify the network request based on a comparison of a port identified in the network request to a conventional port associated with the service.
  • 20. The non-transitory computer readable medium of claim 15, wherein the processor is configured to execute the instructions and cause the processor to: obtain a second domain name based on a reverse lookup of the first IP address at a firewall.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. provisional application No. 63/609,196, titled “Fiber-Based Acceleration of Data Path Execution,” and filed on Dec. 12, 2023, which is expressly incorporated by reference herein in its entirety.

Provisional Applications (1)
Number: 63/609,196; Date: Dec. 12, 2023; Country: US