The disclosure relates generally to cloud networking and, more specifically but not exclusively, to systems and techniques for machine learning of data path rules for gateway services.
Public clouds are third-party, off-premises cloud platforms that deliver computing resources, such as virtual machines, storage, and applications, over the Internet. Services provided by public cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, are shared among multiple customers. Public clouds offer scalability, cost efficiency, and flexibility, as organizations can access and pay for resources under a pay-as-you-go model. Pay-as-you-go is particularly beneficial for customers with fluctuating workloads and enables enterprises to scale resources up or down based on demand. However, the shared nature of public clouds raises considerations regarding security, compliance, and data privacy, and customers need to carefully evaluate their specific requirements and choose appropriate providers.
Many customers also have private clouds, which are dedicated infrastructure that is either on-premises or hosted by a third-party. Private clouds are designed exclusively for a single customer, providing greater control over resources and data. Private clouds are suitable for entities with stringent security and compliance requirements, allowing the entities to customize and manage the infrastructure according to specific needs. Entities use private clouds to retain control over critical business applications, sensitive data, or when regulatory compliance mandates demand a higher level of data governance.
Hybrid and multi-cloud approaches have become popular as organizations seek to combine the benefits of public and private clouds. Hybrid clouds allow organizations to enjoy the scalability of public clouds while retaining certain workloads in a private, more controlled environment. Multi-cloud strategies involve using services from multiple public cloud providers, offering redundancy, flexibility, and the ability to choose the best-suited services for specific tasks.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure may be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described to avoid obscuring the description. References to one embodiment or an embodiment in the present disclosure may be references to the same embodiment or any embodiment; such references mean at least one of the embodiments.
Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods, and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for the convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meanings commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the herein disclosed principles. The features and advantages of the disclosure may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or may be learned by the practice of the principles set forth herein.
Cloud network providers include various companies such as Google, Apple, Amazon, Microsoft, DigitalOcean, Vercel, Alibaba, Netlify, Redhat OpenShift, Oracle, and many other entities. Each cloud provider offers a range of services, from foundational infrastructure, which is referred to as Infrastructure as a Service (IaaS), to platforms for application development and deployment, which are referred to as Platform as a Service (PaaS), to fully managed software applications, which are referred to as Software as a Service (SaaS). Cloud providers maintain a network of geographically distributed data centers that host servers, storage, and networking equipment and allow customers to deploy resources in proximity to their target audience for improved performance and redundancy, including content delivery networks (CDN) and edge compute services.
Virtualization technology is a foundational aspect of cloud providers and enables the creation of virtual instances of servers, storage, and network resources within a geographic region. Cloud providers also deploy resource orchestration tools to manage the dynamic allocation and scaling of these virtual resources based on demand. Fundamentally, cloud providers establish robust, high-speed connections between their data centers and form a global network backbone. This backbone ensures low-latency communication and facilitates data transfer between different regions.
Cloud providers conventionally deploy a range of security measures, including encryption, firewalls, identity and access management, and compliance certifications, to safeguard customer data and ensure the integrity of their services. Cloud services are also designed to be elastic, allowing customers to dynamically scale resources up or down based on demand to handle varying workloads efficiently.
Cloud providers offer various managed services, such as databases, machine learning, analytics, runtimes, and other aspects that allow customers to leverage advanced functionalities without the need for deep expertise in those domains. A cloud provider can also expose various application programming interfaces (APIs) that enable users to programmatically interact with and manage their resources, allow integration with third-party tools, and enable the automation of various tasks.
Fundamentally, in past server architectures, a server was defined with a fixed internet protocol (IP) address. In cloud-based computing, IP addresses are dynamic and enable flexible allocation of the resources within the cloud providers. Cloud environments require dynamic scaling to accommodate varying workloads, and dynamic IP addresses allow for the automatic allocation and release of addresses as resources are provisioned or de-provisioned. Dynamic addresses also allow for service elasticity in response to increasing or decreasing resource demand, cost efficiency, automation and orchestration of tools within the cloud integration and deployment environment, load balancing, high availability and failover, adaptable network topology, and increased resource utilization.
Cloud security is a fundamental issue because customers typically deploy resources in, and integrate with, multiple cloud providers. While clouds share a generic infrastructure configuration with a spine network topology that routes traffic to a top-of-rack (TOR) switch and servers within the racks, clouds are still configured differently and have different requirements. For example, some cloud providers emphasize different geographical markets; cloud providers can also emphasize different business segments (e.g., healthcare, government, etc.) and configure services according to their intended market.
Cloud security has become an important aspect of networking today because there are significant challenges. For example, data breaches are a significant concern in the cloud because unauthorized access to sensitive data, either through misconfigurations or cyberattacks, can lead to data exposure and compromise the confidentiality of information. Misconfigurations of cloud services, such as incorrectly configured access controls or insecure storage settings, can create vulnerabilities and may expose data to unauthorized users or attackers.
Another important aspect of cloud security is identity management. Improper management of user identities and access privileges can result in unauthorized access. Inadequate or improperly implemented encryption can lead to data exposure. This includes data in transit, data at rest, and data during processing. Ensuring end-to-end encryption is crucial for maintaining data confidentiality.
Cloud providers use shared infrastructure and technologies. If a vulnerability is discovered in a shared component, multiple clients could be affected simultaneously. Regular security updates and patches are essential to mitigate this risk, and there is an increased market for third-party services that integrate into cloud provider services.
Organizations may fail to conduct thorough due diligence when selecting a cloud service provider (CSP). Inadequate assessment of a provider's security measures, compliance standards, and data protection practices can result in security gaps.
The evolving landscape of cybersecurity introduces new threats and attack vectors. Cloud security solutions must continuously adapt to address emerging threats, such as zero-day vulnerabilities and advanced persistent threats (APTs). These attacks can come from many different sources, and monitoring these threats can be impractical for individual entities.
The cloud is dynamic, connected, and encrypted. Customers of cloud providers primarily care about their business operations and not the infrastructure behind the business operations. In the current environment, customers of CSPs need to implement intrusion prevention systems (IPS), intrusion detection systems (IDS), and web application firewalls (WAF), as well as provide egress security. Customers may also need to implement data loss prevention (DLP) services to comply with sensitive information requirements.
A gateway uses data path rules to provide network traffic management, security enforcement, and access control. Data path rules are meticulously crafted to regulate the flow of data entering and leaving the network, ensuring that data adheres to established policies and meets security standards. With thousands of rules potentially in place, the granularity and specificity of these regulations can vary widely, encompassing factors such as IP addresses, protocols, ports, content inspection criteria, and quality of service parameters. Each rule defines a set of conditions and corresponding actions, dictating how incoming and outgoing packets should be handled by the gateway. For instance, rules may allow or deny access based on source and destination addresses, prioritize critical applications over non-essential traffic, or inspect payloads for signs of malicious activity.
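By way of illustration only, the following Python sketch shows one way a data path rule and its first-match evaluation could be represented; the field names, the first-match semantics, and the default-deny fallback are assumptions of this example rather than the actual rule schema of any particular gateway.

```python
# Minimal, illustrative sketch of a data path rule and its evaluation.
# Field names (src_cidr, dst_port, action) are hypothetical.
import ipaddress
from dataclasses import dataclass

@dataclass
class DataPathRule:
    src_cidr: str    # source prefix the rule applies to
    dst_port: int    # destination port to match
    protocol: str    # e.g., "tcp" or "udp"
    action: str      # "allow" or "deny"

    def matches(self, src_ip: str, dst_port: int, protocol: str) -> bool:
        return (ipaddress.ip_address(src_ip) in ipaddress.ip_network(self.src_cidr)
                and dst_port == self.dst_port
                and protocol == self.protocol)

def evaluate(rules: list[DataPathRule], src_ip: str, dst_port: int, protocol: str) -> str:
    # First matching rule wins; default-deny when nothing matches.
    for rule in rules:
        if rule.matches(src_ip, dst_port, protocol):
            return rule.action
    return "deny"

rules = [DataPathRule("10.0.0.0/8", 443, "tcp", "allow")]
print(evaluate(rules, "10.1.2.3", 443, "tcp"))  # allow
```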
The proliferation of rules within a gateway is a result of the complex and diverse requirements of modern networks. The gateway and other components of the network provide critical business functions such as protecting sensitive data, mitigating cyber threats, and optimizing network performance. Different departments or user groups may have distinct needs, necessitating tailored access controls and traffic management policies. Moreover, compliance mandates, industry regulations, and evolving threat landscapes contribute to the need for comprehensive rule sets that address various scenarios and contingencies. In this context, the sheer volume of rules reflects the multifaceted nature of network management and security, requiring a nuanced approach to rule creation and maintenance to effectively safeguard the integrity and functionality of the network infrastructure.
In the event that a gateway service is reconfigured, instances of the gateway have to completely restart because the gateway configuration can modify the parsing of data path rules. As indicated above, the number of data path rules can be very large and require the gateway to load and parse each data path rule. This can cause the gateway to be unavailable for several minutes because the gateway must load, parse, and cache the rules in volatile memory. This process can also incur multiple loads of the same data path rule. For example, a particular rule can be parsed and cached while other rules are being parsed, and then must be quickly loaded back into the gateway for traffic processing.
Disclosed are systems, apparatuses, methods, computer-readable media, and circuits for machine learning of data path rules for gateway services. The disclosed systems and techniques can reduce migrations between configurations and enable a multi-cloud service to implement changes without service interruption.
According to at least one example, a method includes: receiving a command related to a configuration of a firewall of a gateway; obtaining a subset of rules from a collection of rules to apply to a second instance of the firewall that replaces a first instance of the firewall, wherein the collection of rules are expressly associated with the firewall of the gateway; loading the subset of rules and the second instance of the firewall; and when the second instance is available after loading the subset of rules, processing network traffic of the gateway based on the subset of rules.
In another example, a controller of a cloud security platform is provided that includes a storage (e.g., a memory configured to store data, such as virtual content data, one or more images, etc.) and one or more processors (e.g., implemented in circuitry) coupled to the memory and configured to execute instructions and, in conjunction with various components (e.g., a network interface, a display, an output device, etc.), cause the one or more processors to: receive a command related to a configuration of a firewall of a gateway; obtain a subset of rules from a collection of rules to apply to a second instance of the firewall that replaces a first instance of the firewall, wherein the collection of rules are expressly associated with the firewall of the gateway; load the subset of rules and the second instance of the firewall; and when the second instance is available after loading the subset of rules, process network traffic of the gateway based on the subset of rules.
The applications 102 include various forms, such as distributed cloud-based applications, edge-based applications (e.g., webapps), desktop-based applications, mobile phone applications, and so forth. The third-party services 106 include various services, such as cloud service providers and other services that are integrated into the cloud security platform 104. For example, the cloud security platform 104 may be configured to use different services for specialty functions that are consistent for each customer of the cloud security platform 104. Non-limiting examples of different services include various types of communication services (e.g., mail servers, communication platforms, etc.), security-oriented services (e.g., monitoring services such as Splunk), search services, storage services (e.g., relational databases, document databases, time-series databases, graph databases, etc.), authentication services, and so forth.
The cloud security platform 104 is configured to be deployed within various infrastructure environments in a PaaS manner. The cloud security platform 104 includes networking infrastructure 108 for connecting the application 102 to the cloud security platform 104. The cloud security platform 104 includes a plurality of servers 110 that are geographically distributed, with each server 110 being managed with various operating systems (OS) 112, runtimes 114, middleware 116, virtual machines (VM) 118, APIs 120, and management services 122. In some aspects, a runtime 114 of the cloud security platform 104 refers to the environment within which the middleware 116 executes to control various aspects of the cloud security platform 104. For example, the VMs 118 may be Kubernetes containers and the middleware 116 may be configured to add or remove hardware resources within cloud providers dynamically.
The cloud security platform 104 also exposes one or more APIs 120 for allowing the applications 102 to interact with the cloud security platform 104. The APIs 120 enable a customer to surface information, interact with information within the cloud security platform 104, and perform other low-level functions to supplement the security services of the cloud security platform 104. The API 120 is also configured to integrate with other third-party services (e.g., the third-party service 106) to perform various functions. For example, the API 120 may access a customer's resources in a cloud service provider (e.g., a third-party service 106) to monitor for threats, analyze configurations, retrieve logs, monitor communications, and so forth. In one aspect, the API 120 integrates with third-party cloud providers in an agnostic manner and allows the cloud security platform 104 to perform functions dynamically across cloud providers. For example, the API 120 may dynamically scale resources, allow resources to join a cluster (e.g., a cluster of controller instances), implement security rules from the cloud security platform 104 into the corresponding cloud provider, and other functions that enable a cloud-agnostic and service-agnostic integrated platform. For example, in some cases, the API 120 is configured to integrate with other security services to retrieve alerts of specific assets to reduce exposure to malicious actors.
The cloud security platform 104 also includes management services 122 for managing various resources of a customer. In some aspects, the management services 122 can manage resources including a controller (e.g., the controller 210 in FIG. 2).
In one aspect, the management services 122 include an onboarding user experience that connects to various cloud providers (e.g., using the API 120) and allows onboarding of different cloud resources. The management services 122 also provides a cloud-agnostic approach to managing resources across different cloud providers, such as scaling up identical resources in different regions using different cloud providers. As an example, some cloud providers do not have a significant presence in the Far East, and the management services 122 are configured to activate similar resources in a first geographical region (e.g., in Europe) and a second geographical region (e.g., Asia) with similar configurations in different cloud providers.
The cloud security platform 104 is configured to provide security across and within cloud providers in different contexts. For example, the cloud security platform 104 provides protection and security mechanisms in different flows. The cloud security platform 104 is configured to provide varying levels of protection based on flow, packet, encryption and other mechanisms. In one aspect, the cloud security platform 104 is configured to protect forwarding flows and packet flows.
Forwarding flow refers to the set of rules and decisions that determine how network devices handle incoming packets without inspecting packet and traffic contents. A forwarding flow involves making decisions based on information such as the destination IP address, the media access control (MAC) address, and routing tables to determine the outgoing interface for the packet. A forwarding flow typically includes actions such as address resolution (e.g., an address resolution protocol (ARP) for IP-to-MAC address mapping), updating MAC tables, forwarding the packet to the appropriate interface, and applying various rules based on configuration and policies.
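As an illustration of such a forwarding decision, the sketch below performs a longest-prefix-match lookup against a toy routing table; the table entries and interface names are hypothetical.

```python
# Illustrative longest-prefix-match forwarding decision.
import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def forward(dst_ip: str) -> str:
    addr = ipaddress.ip_address(dst_ip)
    candidates = [net for net in ROUTING_TABLE if addr in net]
    best = max(candidates, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTING_TABLE[best]

print(forward("10.1.2.3"))  # eth1: the /16 is more specific than the /8
```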
A proxy flow comprises both forward proxy and reverse proxy functions and provides inspection of the content of encrypted flows as well as access control. In some aspects, the cloud security platform 104 decrypts encrypted traffic to ensure malicious actors are not exploiting vulnerabilities in TLS-encrypted applications, and prevents data exfiltration (e.g., DLP) or connection to inappropriate uniform resource identifiers (URIs).
The cloud security platform 104 is also configured to handle packets differently based on security, such as policies related to IPS and a WAF. A WAF protects various web applications from online threats, such as SQL injection, cross-site scripting (XSS), authentication spoofing, and other potential security threats. For example, a WAF filters and monitors traffic by inspecting headers (e.g., a JSON-encoded object in an HTTP header).
The cloud security platform 104 provides real-time discovery of multi-cloud workloads, at scale, for virtual private clouds (VPCs) and cloud accounts. Real-time discovery also enables finding security gaps and improving defensive posture. The cloud security platform 104 also provides dataplane management using gateways (e.g., the gateway 250 in FIG. 2).
In some aspects, the cloud security platform 200 separates compute and data storage functions and enables multi-tenancy to support different customers while maintaining data separation when needed. For example, the compute components are separated into a controller 210 and data storage components are implemented in a data plane 270. The controller 210 may be a collection of Kubernetes-based services that deploy a low latency connection (e.g., a remote procedure call (RPC) such as gRPC, WebSockets, WebTransport, etc.) to connect various endpoints and enable bidirectional streaming, avoiding repeated connection setup and teardown. Each service within the controller 210 scales up or down horizontally based on load.
The controller 210 includes a configuration engine 212, an analytics engine 214, and a resources engine 216. The configuration engine 212 configures the various components and provides various services such as webhooks 218, a dashboard 220, an API 222, and a workflow 224.
In one aspect, the webhooks 218 module configures an asynchronous method of communication between different applications or services in real time. In a webhook configuration, one application can register an endpoint URI with another, specifying where it should send data when a particular event occurs. When the event triggers, the originating system automatically pushes data to the registered URI, allowing the receiving application to process and act upon the information immediately. In some aspects, the webhooks 218 module implements an observer pattern, with a dependent component providing a URI to the observed data source.
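For illustration, a minimal sketch of this registration-and-push pattern follows; the endpoint URI, event name, and payload are hypothetical, and a production implementation would add retries, authentication, and signature verification.

```python
# Observer-pattern sketch behind webhooks: a source keeps a registry of
# endpoint URIs and POSTs event data to each one when the event fires.
import json
import urllib.request

class WebhookSource:
    def __init__(self):
        self.registrations: dict[str, list[str]] = {}  # event name -> URIs

    def register(self, event: str, uri: str) -> None:
        self.registrations.setdefault(event, []).append(uri)

    def trigger(self, event: str, payload: dict) -> None:
        body = json.dumps(payload).encode("utf-8")
        for uri in self.registrations.get(event, []):
            req = urllib.request.Request(
                uri, data=body, headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # push data to the registered endpoint

source = WebhookSource()
source.register("config.changed", "https://observer.example.com/hook")
# source.trigger("config.changed", {"gateway": "gw-1"})  # fires a real POST
```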
The dashboard 220 provides a user experience to a customer of the cloud security platform 104 and provides various integration modules, onboarding platforms, monitoring tools, and other functions for customers to access.
In some aspects, the APIs 222 can be various libraries to interact with various services, either through a dashboard 220 interface, a command line interface (not shown), or other tooling (not shown). The APIs 222 can also be API endpoints of the cloud security platform 104 or an API library associated with a third-party service (e.g., third-party services 252), or APIs associated with the cloud providers 254. In one aspect, the APIs 222 can include an agnostic API library that is configured to interact with the cloud providers 254 using a single API interface to scale resources, respond to security incidents, or other functions. This API 222 can be accessed via a command line interface or may be distributed to customers via various package management services.
The workflow 224 module can be various components that enable a customer to perform various tasks, such as managing specific resources, deploying services, communicating with team members regarding issues, and so forth. For example, the workflow 224 module can interact with the gateways 250 and an administration engine 248 to manage resources, access to resources, and deployment of various resources (e.g., deploy infrastructure with Terraform).
The analytics engine 214 is configured to integrate with gateways 250 and various third-party services 252 to monitor various events, services, and other operations. The analytics engine 214 includes a watch server 226 that is configured to disambiguate information from multiple sources of information (e.g., the gateway 250, the third-party services 252, etc.) to provide a holistic view of cloud networking operations. The analytics engine 214 may also be configured to interact with various components of the data plane 270 such as a metrics controller 242 and a data lake controller 246.
In some aspects, the resources engine 216 receives resources from cloud providers 254 and includes various components to route information and store information. The resources engine 216 includes an inventory router 228, logs 230 (e.g., a cache of logs for various functions), an inventory server 232, and a logs server 234. The components of the resources engine 216 are configured to disambiguate and combine information in an agnostic and standardized manner and store various resources in the data plane 270. For example, the resources engine 216 stores and receives events from an events controller 244 and also stores and receives logs in the data lake controller 246. In some aspects, the inventory router 228 and the inventory server 232 build an evergreen model of the customer's cloud accounts and subscriptions and create an address object for security policy management for the cloud security platform 200. The address object represents a segment of the customer's subscription based on cloud-native attributes (e.g., Security Group, ASG, customer-defined tags) and maps to a collection of IP addresses which is automatically refreshed and synchronized with the gateway 250.
The data plane 270 includes various components to separate various types of information associated with the control plane and interconnected third-party services 252 and cloud providers 254. For example, the data plane 270 includes a configuration controller 240 that stores inventory information of a customer and various configuration information. In one example, the cloud providers 254 use different metrics for decisions pertaining to scaling deployed resources, and the configuration controller 240 stores information that allows the controller 210 to scale resources within the cloud providers 254 in a standardized manner. In some aspects, the configuration controller 240 may include storage mechanisms such as a relational database, a document database, and other high-availability storage mediums. The storage mechanisms can be on-premises resources or off-premises or cloud-based solutions such as various cloud-based relational or document databases (e.g., Redis, MySQL, MongoDB, etc.).
The data plane 270 also includes a metrics controller 242 that is configured to interact with custom metrics data or a third-party service for metrics analysis (e.g., Amazon CloudWatch). The events controller 244 is configured to handle and store events and various queues. For example, the events controller 244 can include a Kafka server for handling real-time data feeds and event-driven applications. The events controller 244 may use a publish-subscribe model in which producers (e.g., a third-party service, an internal component of the controller 210, a gateway 250, etc.) publish data streams and a consumer subscribes to receive and process these streams in a fault-tolerant and distributed manner. The events controller 244 may handle massive amounts of data with low latency and high throughput.
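The following in-process Python sketch illustrates the publish-subscribe model described above; a deployed events controller would use a distributed broker such as Kafka rather than this single-process stand-in, and the topic and event fields here are illustrative.

```python
# Toy publish-subscribe bus: every subscriber of a topic receives every
# event published to that topic.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(event)  # deliver the stream to each consumer

bus = EventBus()
bus.subscribe("gateway.metrics", lambda e: print("consumed:", e))
bus.publish("gateway.metrics", {"gateway": "gw-1", "pps": 120000})
```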
The data lake controller 246 provides a long-term and scalable storage mechanism and associated services. For example, the data lake controller 246 may include a cloud-based S3 API for storing data to various cloud services (e.g., amazon web services (AWS), DigitalOcean, OpenShift) or on-premises services (e.g., MinIO, etc.). The data lake controller 246 may also include a search-based mechanism such as ElasticSearch for large-scale and efficient search of contents within the non-volatile cloud storage mechanisms. In some aspects, the data lake controller 246 stores network logs and implements search functionality (e.g., Snowflake) for large-scale ad hoc queries for security research and analysis.
The cloud security platform 200 also includes an administration engine 248, a gateway 250, and integrations into various third-party services 252. The administration engine 248 may include authentication services (e.g., Auth0, Okta) to verify identity and provide authentication mechanisms (e.g., access tokens), and may include infrastructure as code (IaC) tools such as Terraform to automate the process of creating, updating, and managing the specified infrastructure across various cloud providers or on-premises environments.
The cloud security platform 200 includes gateways 250 that are deployed into various integration points, such as cloud providers. The gateways 250 are ingress and egress points of the cloud security platform 200 and are configured to monitor traffic, provide information to the controller 210, dynamically scale based on the cloud security platform 200, and provide security to a customer's cloud infrastructure. For example, gateways 250 may implement a transparent forward and reverse proxy to manage traffic. The gateways 250 may also include a cloud-based firewall that is configured to filter malicious traffic using various dynamic detection policies.
The cloud security platform 200 also integrates into various third-party services 252 for various purposes such as receiving threat-related intelligence (e.g., Splunk, Talos, etc.). The third-party services 252 also include different types of infrastructure components such as managing mobile devices, implementing cloud-based multimedia communication services, business analytics, network analytics (e.g., reverse address lookup), certificate services, security information and event management (SIEM), and so forth.
In some aspects, the data path pipeline 300 comprises a single-pass firewall architecture that uses a single-pass flow without expensive context switches and memory copy operations. In a single-pass flow, processing is not duplicated multiple times on a packet. For example, TCP/IP receive and transmission operations are only performed a single time. This is different from existing next-generation firewalls (NGFW). The data path pipeline 300 uses fibers with flexible stages completely running in user-space and, therefore, does not incur a penalty for kernel-user context switches, which are expensive in high bandwidth and low latency operations. The data path pipeline 300 provides advanced web traffic inspection comparable to WAFs to secure all traffic flows and break the attack kill chain in multiple places, raising the economic costs for attackers. The data path pipeline 300 also captures packets of live attacks into a cloud storage bucket without significant performance degradation and enables a rule-based capture on a per-session and attack basis.
The data path pipeline 400 is also configured to be flexible, and stages of processing are determined on a per-flow basis. For example, application 1 to application 2 may implement an L4 firewall and IPS inspection; application 3 to application 4 may implement an L4 firewall, a transport layer security (TLS) proxy, and IPS; and an internet client to web application 5 may implement an L4 firewall, TLS proxy, IPS, and WAF.
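As a sketch of this per-flow stage selection, the pipeline for each flow can be derived from a policy table and applied once per packet, consistent with the single-pass design described above; the stage handlers and policy keys below are assumed stand-ins.

```python
# Per-flow stage selection sketch; stage names mirror the examples above.
STAGE_HANDLERS = {
    "l4_firewall": lambda pkt: pkt,   # stand-ins: real stages inspect and
    "tls_proxy":   lambda pkt: pkt,   # may drop or rewrite the packet
    "ips":         lambda pkt: pkt,
    "waf":         lambda pkt: pkt,
}

PIPELINES = {
    ("app1", "app2"): ["l4_firewall", "ips"],
    ("app3", "app4"): ["l4_firewall", "tls_proxy", "ips"],
    ("internet", "webapp5"): ["l4_firewall", "tls_proxy", "ips", "waf"],
}

def process(packet: bytes, src: str, dst: str) -> bytes:
    # Each selected stage runs exactly once on the packet (single pass);
    # flows without a specific policy fall back to the L4 firewall alone.
    for stage in PIPELINES.get((src, dst), ["l4_firewall"]):
        packet = STAGE_HANDLERS[stage](packet)
    return packet

print(process(b"payload", "internet", "webapp5"))
```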
In some aspects, the data path pipeline 300 includes various filters (e.g., a malicious IP filter, a geographic IP filter, a fully qualified domain name (FQDN) filter) to filter both forwarding flows and proxy flows, as well as an L4 firewall to restrict traffic based on conventional techniques.
The data path pipeline 300 may also be integrated with a hardware offload 302 (e.g., a field programmable gate array (FPGA) of a cloud provider, an application specific integrated circuit (ASIC), etc.) that includes additional functionality that does not impact throughput. In one aspect, a cloud provider may offer a hardware offload or an accelerator function to implement a specialized function. For example, the hardware offload 302 includes a cryptographic engine 304, an API detection engine 306, a decompression engine 308, a regex engine 310, and a fast pattern engine 312 to offload operations into hardware.
In one aspect, the data path pipeline 300 includes high throughput decryption and re-encryption to enable inspection of all encrypted flows using the cryptographic engine 304. By contrast, traditional NGFWs retain only around 10% of their throughput when inspecting encrypted flows. The data path pipeline 300 may use a decompression engine 308 to decompress compressed traffic and perform deep packet inspection. For example, the data path pipeline 300 also uses a userspace Linux TCP/IP driver, along with network address translation (NAT), in conjunction with the API detection engine 306 and the decompression engine 308 to eliminate problematic and malicious flows.
The data path pipeline 300 includes a transparent reverse and forward proxy to isolate clients and servers without exposing internal details, a layer 7 firewall to rate limit and protect applications and APIs, and secure user access that looks up end-user-specific identity from an identity provider (IDP) and provides zero trust network access (ZTNA). The data path pipeline 300 includes a WAF pipeline and an IPS pipeline to detect malicious and problematic flows in conjunction with a regex engine 310 and a fast pattern engine 312. For example, the WAF pipeline may implement protection for web applications, including the OWASP Top 10, using a core ruleset and application-specific rules for frameworks and common content management tools like PHP, Joomla, and WordPress. The data path pipeline 300 also includes IDS and IPS to block known vulnerabilities and provide virtual patching until the applications can be patched with updated security fixes; application identification to block traffic based on client, server, or application payload; data loss prevention (DLP) and filtering; URI filtering; and antivirus and anti-malware features to prevent malware files from being transferred for ingress (malicious file uploads), east-west lateral attacks (moving toolkits), and egress flows (e.g., botnets).
The data path pipeline 400 comprises an L4 firewall 402, a user space receive TCP/IP stack 404, a TLS receive proxy 406, a WAF 408, an IPS 410, a TLS transmit proxy 412, and a user space transmit TCP/IP stack 414 and illustrates the flow of forwarding flows and proxy flows, and the points at which packets may be dropped or accepted using an L4 firewall, a WAF, and/or IPS.
For example, the data path pipeline 400 may be implemented with a user-space driver (e.g., a Data Plane Development Kit (DPDK) driver) that receives forwarding and proxy flows, computes hashes, and provides each packet to a worker. In this case, a worker is part of a distributed instance of a gateway and provides the flows to the L4 firewall 402. For example, the L4 firewall 402, or a transport layer firewall, may inspect traffic and filter traffic based on source and destination IP/port.
The user space receive TCP/IP stack 404 is configured to perform the receive processing of forwarding and proxy flows. For example, the user space receive TCP/IP stack 404 handles framing, addressing, and error detection within TCP/IP and further identifies per-flow processing based on policies and rules of the cloud security platform. For example, some forwarding flows are provided to the user space transmit TCP/IP stack 414, some forwarding flows are provided to the IPS 410, and proxy flows are forwarded to the TLS receive proxy 406. The TLS receive proxy 406 manages the TLS decryption process in the event further inspection is warranted based on the policies and rules, and then provides the proxy flows to either the IPS 410 or the WAF 408 based on a policy.
The IPS 410 examines a packet's content, headers, and contextual information. Deep packet inspection involves analyzing the payload and looking for patterns, signatures, or anomalies that may indicate malicious activity. The IPS compares the packet against a database of known attack signatures and employs heuristic analysis to detect deviations from expected behavior. Additionally, it assesses factors such as source and destination addresses, ports, and protocol compliance. If the IPS identifies a packet as potentially malicious, it can take proactive measures, such as blocking the packet, alerting administrators, or initiating predefined security policies to prevent the exploitation of vulnerabilities and safeguard the network from intrusion attempts.
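For illustration, the sketch below applies toy signatures to a payload in the manner of signature-based deep packet inspection; the two patterns are illustrative stand-ins, not entries from a real signature database.

```python
# Illustrative signature check in the spirit of IPS deep packet inspection.
import re

SIGNATURES = [
    re.compile(rb"(?i)union\s+select"),   # crude SQL-injection marker
    re.compile(rb"\x90{16,}"),            # long NOP sled, a shellcode hint
]

def inspect(payload: bytes) -> str:
    for sig in SIGNATURES:
        if sig.search(payload):
            return "drop"    # block (and, in practice, alert) on a hit
    return "forward"

print(inspect(b"GET /?q=1 UNION SELECT password FROM users"))  # drop
```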
The WAF 408 monitors, filters, and analyzes HTTP traffic in real-time and actively looks for and blocks common web vulnerabilities such as SQL injection, XSS, and other application-layer attacks. By examining and validating HTTP requests and responses, the WAF can detect and block malicious traffic, ensuring that only legitimate requests reach the web application. WAFs often employ rule-based policies, signature-based detection, and behavioral analysis to identify and mitigate potential security risks.
The TLS transmit proxy 412 reassembles the proxy flows and contextual information and provides the proxy flows to the user space transmit TCP/IP stack 414, which reassembles the packet and forwards the traffic.
Each of the CSP 510, the CSP 520, and the CSP 530 across which the service 505 is distributed may include an ingress gateway 511, a load balancer 512, a frontend 513, a backend 514, and an egress gateway 515. In some aspects, the ingress gateway 511 and the egress gateway 515 may be provided by the cloud security platform and provide an agnostic interface to control flows into each different CSP. For example, the cloud security platform can scale resources in a consistent manner, provide malicious content filtering, attack denial, rate limiting, and other services to protect the service 505 at the corresponding CSP. The cloud security platform can also perform the corresponding services for a recipient of the service (not shown).
The cloud security platform abstracts specific details associated with each CSP into a single interface and allows an administrator of the service 505 a common control plane for controlling resources within each CSP and protecting services within each CSP and the service 505.
Initially, the gateway receives a reconfiguration notification at time t0 that indicates that the updated gateway configuration is available. In this case, the gateway spawns new worker instances.
At block 604, as part of starting the gateway instance, the processor reads a plurality of rules (e.g., firewall rules) from cold storage and begins to parse the rules. In this case, cold storage refers to a non-volatile memory (e.g., disk-based storage, etc.) or network-based storage (e.g., object storage) that is slower than hot storage (e.g., a volatile memory in an SoC, etc.). In some aspects, the gateway instance may have thousands of firewall rules that are separately stored in the cold storage, which consumes an amount of time to retrieve the rules from cold storage. While parsing the rules at block 604, the parsed rules are converted into corresponding data structures and instructions understood by the gateway service to implement particular rules based on the type of traffic. The processor may then cache the parsed rules at block 606 by moving the parsed rules into a swap memory (e.g., a non-volatile memory configured as an extension of volatile memory due to limited volatile memory), for example. In some cases, the parsed rule can be entirely unloaded because of limited available memory, and the parsing only verifies successful parsing. For example, some virtual machines in CSPs have limited memory to reduce cost, and a swap memory may be used to functionally extend direct memory access with slower storage mechanisms.
In this case, the parsing of the rules at block 604 and the caching of the parsed rules at block 606 occur as a monolithic function that is synchronous and prevents the gateway instance from processing traffic. Once the parsing of the rules is completed at time t1, the gateway instance can start traffic processing (e.g., receiving packet flows from a load balancer) at block 608. Although the timeline 600 appears to illustrate that block 604 and block 606 end concurrently, this is for illustrative purposes because the caching of parsed rules can continue.
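A minimal sketch of this monolithic restart path follows, with a simulated cold-storage delay standing in for real I/O; the function names and the per-rule latency are assumptions chosen only to show why the instance stays unavailable until every rule is parsed.

```python
# Monolithic, synchronous rule load: no traffic is processed until the loop
# over every rule finishes (blocks 604/606 before block 608).
import time

def read_from_cold_storage(rule_id: str) -> str:
    time.sleep(0.001)                  # stand-in for slow cold-storage I/O
    return f"allow {rule_id}"

def parse_rule(raw: str) -> dict:
    action, rule_id = raw.split()
    return {"id": rule_id, "action": action}

def restart_gateway(rule_ids: list[str]) -> dict:
    start = time.monotonic()
    parsed = {}
    for rule_id in rule_ids:           # block 604: load and parse every rule
        parsed[rule_id] = parse_rule(read_from_cold_storage(rule_id))
    # Traffic processing (block 608) cannot begin until the loop finishes.
    print(f"gateway unavailable for {time.monotonic() - start:.2f}s")
    return parsed

restart_gateway([f"rule-{i}" for i in range(1000)])  # over a second, even as a toy
```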
The gateway instance begins processing network flows at block 608 at time t1, and the gateway may analyze the packet flows. At time t2, the gateway instance determines that a corresponding parsed rule is not immediately available and may read the rule to enable processing of the flow at block 610. For example, the processor can determine that a corresponding rule of a packet flow is cached, and the gateway then retrieves the rule. For example, the rule can be cached in a swap partition (e.g., a swap memory configured in the cold storage), read from the swap partition, and then activated. At this point, the rule has been read into memory twice and stored to swap once, yet the rule was almost immediately required as shown in the timeline 600.
In this illustrated case, the updated gateway instance is unavailable for a significant period of time due to the number of rules parsed and other limitations on the virtual machine. The extra time can cause strain on other gateway instances that are presently active and can trigger load-balancing operations that can affect the flow of traffic.
The controller includes a configuration database 712, a ruleset engine 714, an ML engine 716, and an event engine 718. The configuration database 712 includes different configurations that include various rules related to different types of network flow (e.g., IPsec, malicious content detection, etc.). There may be thousands of different configurations based on an entity's specific requirements for applications, services, and other network traffic. The ruleset engine 714 is configured to read rules from the configuration database 712 and determine the rules to apply to the gateway 720 based on information received from the gateway 720.
In some aspects, the gateway 720 provides event data recorded at the gateway 720 via an agent 722. The agent 722 is configured to aggregate and synthesize content in a shared memory 724 that is populated by at least one worker 726. The worker 726 is a spawned process that is executed within the gateway. For example, the worker 726 may be a single thread that processes traffic and stores and receives information with the agent 722 via the shared memory 724. For example, a worker 726 may process a packet flow and determine the flow is malicious traffic and store information in the shared memory. The agent may also provide information to the shared memory that the worker 726 receives, such as a particular ruleset for the worker 726 to apply to a packet flow. For example, the shared memory can be multiple stacks (e.g., a stack for communication to each worker 726, a stack for communication to the agent 722) for efficient reading and writing. The shared memory 724 allows information to be written into by a source (e.g., the worker 726, the agent 722) and then later collected by the target (e.g., the worker 726, the agent 722). The sprayer 728 is configured to randomly spray traffic to the different workers 726 to process traffic as it is received.
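The toy model below illustrates the worker/agent exchange using one thread-safe queue per direction; an actual gateway would use shared memory (e.g., per-worker stacks) between separate processes, so the queues and field names here are stand-ins.

```python
# Worker/agent exchange sketch: the agent pushes a ruleset to the worker,
# and the worker pushes back a verdict on a processed flow.
import queue
import threading

to_agent: "queue.Queue[dict]" = queue.Queue()    # workers -> agent
to_worker: "queue.Queue[dict]" = queue.Queue()   # agent -> workers

def worker() -> None:
    ruleset = to_worker.get()                    # e.g., a ruleset to apply
    verdict = {"flow": "10.0.0.5:443", "malicious": True,
               "ruleset": ruleset["id"]}
    to_agent.put(verdict)                        # record the detection

threading.Thread(target=worker).start()
to_worker.put({"id": "rs-42"})
print(to_agent.get())  # the agent aggregates this and forwards it upstream
```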
The workers may log various data related to network flow and processing, such as actions taken based on a packet flow that triggers a particular ruleset. The logged data provides insights into network activity, helps monitor and analyze traffic patterns, detects anomalies, and identifies potential security threats such as intrusion attempts or unauthorized access. The logged data also aids in troubleshooting network issues by providing detailed information about packet flows and connection statuses to enable swift identification and resolution of problems. Additionally, the logged data can be used for compliance purposes, such as auditing network usage and ensuring adherence to regulatory requirements.
In some aspects, the agent 722 collects the information from the workers 726 and provides the collected information to the event engine 718. The event engine 718 may store this information (e.g., in the logs server 234 in FIG. 2).
The ML engine 716 can use various frameworks that are configured with a training regimen (e.g., instructions based on an API of the framework) to process the logged data and generate a corresponding ML model. The ML engine 716 can also be configured to autonomously store and update ML models in a repository (e.g., object storage). In some cases, the ML engine 716 may fine-tune existing models on a periodic basis or may generate new models on a periodic basis or in response to various events.
The gateway 720 may be configured to perform selective data path rule loading based on the ML engine 716. The gateway 720 may receive information pertaining to a reconfiguration of a firewall that the gateway 720 executes. In this case, the gateway 720 may request the ruleset engine 714 to provide a list of prioritized rules for the gateway 720 to load before beginning to process network flows. The ML engine 716 receives the request and an identifier of the gateway instance and generates a list of prioritized rules for the gateway 720 based on training of the model. The list of prioritized rules predicts the firewall rules in the configuration database that have the highest likelihood of being deployed in the gateway based on training (e.g., from the events in the event engine 718).
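The disclosure does not specify the model architecture, so the sketch below substitutes a simple frequency ranking over logged rule-hit events for the ML engine's prediction step; the event schema and the top-N cutoff are assumptions of this example.

```python
# Stand-in for the ML engine's prediction: rank rules by how often recent
# gateway events show them firing and return the top N as the prioritized
# subset to load first on restart.
from collections import Counter

def prioritize(events: list[dict], top_n: int) -> list[str]:
    hits = Counter(e["rule_id"] for e in events)   # rule usage from logs
    return [rule_id for rule_id, _ in hits.most_common(top_n)]

events = [{"rule_id": "rs-7"}, {"rule_id": "rs-7"}, {"rule_id": "rs-3"}]
print(prioritize(events, top_n=1))  # ['rs-7'] loads first on restart
```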
The gateway 720 receives the list of prioritized rules, which may be stored in cold storage (e.g., disk-based storage), loads and parses only each rule in the list of prioritized rules, and begins execution of the workers 726. The gateway 720 thereby loads a subset of all data path rules that the ML engine 716 determines are most pertinent and reduces the amount of time consumed by transporting, loading, parsing, caching, and then reloading all of the rules.
In some aspects, after the gateway 720 is processing data, the gateway 720 may be configured to retrieve and load the remaining rules. For example, the gateway 720 may execute a limited background process that parses and caches the remaining rules without affecting the execution of the prioritized rules.
Initially, the gateway receives a reconfiguration notification at time t0 that indicates that the gateway will restart the firewall (e.g., the workers 726 in FIG. 7).
Once the first set of rules is parsed in block 804 at time t2, the gateway instance becomes available to start processing traffic at block 806. In this case, because the ML model identifies pertinent firewall rules based on the current network environment (which is provided to the ML model), the gateway instance is running the most pertinent rules and can begin processing, thereby ensuring a minimal gap between the reconfiguration notice (e.g., at time t0) and the availability of processing new packet flows after loading of the prioritized rules (e.g., at time t2). At some point later, the gateway may determine a second set of rules based on various techniques. For example, the ML engine 716 may classify the firewall rules into a plurality of classifications (e.g., high, medium, low, etc.) when responding to the request from the gateway at time t1, and the gateway loads the rules in the order identified in the classification. In any event, at time t3, the gateway begins to read and parse the secondary set of rules while the gateway is processing packet flows and while the original workers continue to drain their flows in block 802.
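A minimal sketch of this two-phase load follows: the prioritized set is parsed synchronously before traffic processing starts, and the secondary set is parsed by a background thread; the helper names and rule format are hypothetical.

```python
# Two-phase load in the spirit of timeline 800: go live after the
# prioritized rules (block 804 -> block 806), parse the rest afterward.
import threading

def parse_rule(rule_id: str) -> dict:
    return {"id": rule_id}              # stand-in for real parsing

def start_traffic_processing(rules: dict) -> None:
    print(f"processing traffic with {len(rules)} rules")  # block 806

def load_rules(prioritized: list[str], remaining: list[str]) -> None:
    # Block 804: parse only the prioritized set before going live (time t2).
    active = {rule_id: parse_rule(rule_id) for rule_id in prioritized}
    start_traffic_processing(active)

    def load_rest() -> None:
        # Time t3: parse the secondary set while traffic is being processed.
        for rule_id in remaining:
            active[rule_id] = parse_rule(rule_id)

    threading.Thread(target=load_rest, daemon=True).start()

load_rules(["rs-7"], ["rs-3", "rs-9"])
```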
In the illustrated examples, the gateway in timeline 600 cannot accept new flows for 8 minutes and is completely unavailable for 5 minutes due to synchronous loading of the firewall rules. The gateway in timeline 800 cannot accept new flows for approximately 2 seconds based on loading only the highest priority rules, significantly reducing the time that the gateway is inactive. Maintaining a quick turnaround for a gateway can be important in quickly deploying changes within a gateway instance. For example, gateways may need to minimize downtime to maintain service level agreements and handle load balancing.
In some aspects, the computing system may be executing a first instance of a service (e.g., a gateway instance that is part of a cluster of gateway instances). The processor may obtain flow information pertaining to network traffic associated with the first instance and send a message including at least the flow information to a controller of the gateway. The controller is the control plane of the multi-cloud security platform and may be an entry point for data to be recorded. In some cases, this information can also be sent to a known device that is configured to store the data. In either case, the flow information is stored in a central repository for each tenant. As described above, the flow information can be used to train an ML model.
At block 902, the computing system may receive a command related to a configuration of a data path service of a gateway. An example of a data path service is a firewall. The subset of rules is identified based on a machine learning (ML) model trained to identify prioritized rules when instantiating the second instance. In some cases, the collection of rules is expressly associated with the data path service of the gateway. For example, an administrator explicitly identifies a complete set of rules that apply to individuals, groups, or all gateways. In some aspects, the size of the collection of rules is greater than the amount of memory available to the gateway in user space. For example, the gateway instance may run on a memory-limited processor and the number of rules exceeds the capacity to be stored within the memory space allocated to the user.
In some aspects, the command is triggered autonomously, without supervision, based on a third-party service. For example, a third-party service may issue a report regarding a vulnerability, and the multi-cloud security platform can autonomously deploy a gateway configuration to limit the vulnerability. In another aspect, the command is triggered based on the supervision of a user, for example, a configuration that is designed by a tenant administrator of the multi-cloud security platform or by an administrator of the multi-cloud security platform.
In some cases, at block 902, the computing system may drain the flows of the first instance and then terminate the first instance. The second instance becomes available after all flows in the first instance are drained.
At block 904, the computing system may obtain a subset of rules from a collection of rules to apply to a second instance of the data path service that replaces a first instance of the data path service.
At block 906, the computing system may load the subset of rules and the second instance of the data path service.
At block 908, the computing system may, when the second instance is available after loading the subset of rules, process network traffic of the gateway based on the subset of rules. In some aspects, the computing system may filter malicious traffic from the network traffic using the subset of rules.
After block 908, the computing system loads the remaining rules from the collection of rules after the subset of rules. After loading the remaining rules, the computing system may report the loading of a specific rule to a controller to create information that the ML model engine may use to further train various ML models. For example, the computing system may load a rule of the collection of rules that is not identified in the subset of rules. In this case, the collection of rules is stored on a local storage medium (e.g., in system memory, in cache, in a swap memory, etc.) associated with at least one processor executing the data path service. In some aspects, when a rule is loaded, the computing system may report this event to the controller. Events such as loading a particular ruleset can be used to train the ML model.
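As one sketch of this reporting step, each late-loaded rule produces an event that can later serve as training data; the event fields are assumptions, and the print call stands in for the actual transport back to the controller (e.g., an RPC).

```python
# Feedback-loop sketch: report a load event for each rule loaded after the
# prioritized subset so the controller can use it as training data.
import json
import time

def report(event: dict) -> None:
    print("report:", json.dumps(event))   # stand-in for sending to controller

def load_remaining(remaining: list[str], subset: set[str]) -> None:
    for rule_id in remaining:
        if rule_id in subset:
            continue                       # already loaded with the subset
        # ... parse and activate the rule here ...
        report({"type": "rule_loaded", "rule_id": rule_id, "ts": time.time()})

load_remaining(["rs-3", "rs-9"], subset={"rs-9"})
```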
The programmable network processor 1002 may be programmed to perform functions that are conventionally performed by integrated circuits (IC) that are specific to switching, routing line cards, and routing fabrics. The programmable network processor 1002 may be programmable using the programming protocol-independent packet processors (P4) language, which is a domain-specific programming language for network devices for processing packets. The programmable network processor 1002 may have a distributed P4 NPU architecture that may execute at a line rate for small packets with complex processing. The programmable network processor 1002 may also include optimized and shared NPU fungible tables. In some aspects, the programmable network processor 1002 supports a unified software development kit (SDK) to provide consistent integrations across different network infrastructures and simplify networking deployments. The SoC 1000 may also include embedded processors to offload various processes, such as asynchronous computations.
The programmable network processor 1002 includes a programmable NPU host 1004 that may be configured to perform various management tasks, such as exception processing and control-plane functionality. In one aspect, the programmable NPU host 1004 may be configured to perform high-bandwidth offline packet processing such as, for example, operations, administration, and management (OAM) processing and MAC learning.
The SoC 1000 includes counters and meters 1006 for traffic policing, coloring, and monitoring. As an example, the counters and meters 1006 include programmable counters used for flow statistics and OAM loss measurements. The programmable counters may also be used for port utilization, microburst detection, delay measurements, flow tracking, elephant flow detection, congestion tracking, etc.
The telemetry 1010 is configured to provide in-band telemetry information such as per-hop granular data in the forwarding plane. The telemetry 1010 may observe changes in flow patterns caused by microbursts, packet transmission delay, latency per node, and new ports in flow paths. The NPU database 1012 provides data storage for one or more devices, for example, the programmable network processor 1002 and the programmable NPU host 1004. The NPU database 1012 may include different types of storage, such as key-value pair, block storage, etc.
In some aspects, the SoC 1000 includes a shared buffer 1014 that may be configured to buffer data, configurations, packets, and other content. The shared buffer 1014 may be utilized by various components such as the programmable network processor 1002 and the programmable NPU host 1004. A web scale circuit 1016 may be configured to dynamically allocate resources within the SoC 1000 for scale, reliability, consistency, fault tolerance, etc.
In some aspects, the SoC 1000 may also include a time of day (ToD) time stamper 1018 and a SyncE circuit 1020 for distributing a frequency reference to subordinate devices. For example, the time stamper 1018 may support IEEE-1588 for ToD functions. In some aspects, the time stamper 1018 includes support for a precision time protocol (PTP) for distributing frequency and/or phase to enable subordinate devices to synchronize with the SoC 1000 with nanosecond-level accuracy.
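For background, the standard IEEE-1588 two-way timestamp exchange computes the subordinate's clock offset and the path delay from four timestamps: the master sends a Sync message at t1, the subordinate receives it at t2, the subordinate sends a Delay_Req at t3, and the master receives it at t4. This is the textbook formulation, not a description of the time stamper 1018's internals.

```python
# Generic IEEE-1588 two-way exchange (textbook formulation).
# t1: master sends Sync; t2: subordinate receives it;
# t3: subordinate sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # subordinate clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way delay (assumed symmetric)
    return offset, delay


# Subordinate running 50 ns fast over a 200 ns symmetric path:
print(ptp_offset_and_delay(t1=0.0, t2=250e-9, t3=400e-9, t4=550e-9))
# -> (5e-08, 2e-07), i.e., 50 ns offset and 200 ns delay
```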
The serializer/deserializer 1022 is configured to serialize packets into electrical signals and deserialize electrical signals into packet data. In one aspect, the serializer/deserializer 1022 supports sending and receiving data using non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4-level (PAM4) modulation. In one illustrative aspect, the hardware components of the SoC 1000 provide features for terabit-level performance based on flexible port configuration, nanosecond-level timing, and programmable features. Non-limiting examples of hardware functions that the SoC 1000 may support include IP tunneling, multicast, NAT, port address translation (PAT), security and quality of service (QoS) access control lists (ACLs), equal cost multiple path (ECMP) routing, congestion management, distributed denial of service (DDoS) mitigation using control plane policing, telemetry, timing and frequency synchronization, and so forth.
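To illustrate the difference between the two line codes, the Python sketch below maps a bit stream to NRZ symbols (one bit per symbol) and to Gray-coded PAM4 symbols (two bits per symbol, four amplitude levels). The normalized level values are illustrative, not the serializer/deserializer 1022's implementation.

```python
# Illustrative bit-to-symbol mapping for the two line codes (a textbook
# view, not the serializer's implementation). NRZ carries 1 bit per
# symbol; PAM4 carries 2 bits per symbol across four amplitude levels.
NRZ_LEVELS = {0: -1, 1: +1}
PAM4_GRAY = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded


def to_pam4(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_GRAY[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]


bits = [0, 1, 1, 1, 1, 0, 0, 0]
print([NRZ_LEVELS[b] for b in bits])  # 8 NRZ symbols
print(to_pam4(bits))                  # 4 PAM4 symbols: same bits, half the symbols
```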
In some aspects, computing system 1100 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some aspects, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some aspects, the components can be physical or virtual devices.
Example computing system 1100 includes at least one processing unit (a central processing unit (CPU) or processor) 1110 and connection 1105 that couples various system components, including system memory 1115 such as ROM 1120 and RAM 1125, to processor 1110. Computing system 1100 can include a cache 1112 of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1110.
Processor 1110 can include any general purpose processor and a hardware service or software service, such as services 1132, 1134, and 1136 stored in storage device 1130, configured to control processor 1110 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 1110 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 1100 includes an input device 1145, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1100 can also include output device 1135, which can be one or more of a number of output mechanisms. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1100. Computing system 1100 can include communications interface 1140, which can generally govern and manage the user input and system output. The communications interface 1140 may perform or facilitate receipt and/or transmission of wired or wireless communications using wired and/or wireless transceivers, including those making use of an audio jack/plug, a microphone jack/plug, a universal serial bus (USB) port/plug, an Apple® Lightning® port/plug, an Ethernet port/plug, a fiber optic port/plug, a proprietary wired port/plug, a Bluetooth® wireless signal transfer, a BLE wireless signal transfer, an IBEACON® wireless signal transfer, an RFID wireless signal transfer, near-field communications (NFC) wireless signal transfer, dedicated short range communication (DSRC) wireless signal transfer, 802.11 WiFi wireless signal transfer, WLAN signal transfer, Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), IR communication wireless signal transfer, Public Switched Telephone Network (PSTN) signal transfer, Integrated Services Digital Network (ISDN) signal transfer, 3G/4G/5G/LTE cellular data network wireless signal transfer, ad-hoc network signal transfer, radio wave signal transfer, microwave signal transfer, infrared signal transfer, visible light signal transfer, ultraviolet light signal transfer, wireless signal transfer along the electromagnetic spectrum, or some combination thereof. The communications interface 1140 may also include one or more Global Navigation Satellite System (GNSS) receivers or transceivers that are used to determine a location of the computing system 1100 based on receipt of one or more signals from one or more satellites associated with one or more GNSS systems. GNSS systems include, but are not limited to, the US-based GPS, the Russia-based Global Navigation Satellite System (GLONASS), the China-based BeiDou Navigation Satellite System (BDS), and the Europe-based Galileo GNSS. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1130 can be a non-volatile and/or non-transitory and/or computer-readable memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, a floppy disk, a flexible disk, a hard disk, magnetic tape, a magnetic strip/stripe, any other magnetic storage medium, flash memory, memristor memory, any other solid-state memory, a compact disc read only memory (CD-ROM) optical disc, a rewritable compact disc (CD) optical disc, digital video disk (DVD) optical disc, a Blu-ray disc (BDD) optical disc, a holographic optical disk, another optical medium, a secure digital (SD) card, a micro secure digital (microSD) card, a Memory Stick® card, a smartcard chip, an EMV chip, a subscriber identity module (SIM) card, a mini/micro/nano/pico SIM card, another IC chip/card, RAM, static RAM (SRAM), dynamic RAM (DRAM), ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash EPROM (FLASHEPROM), cache memory (L1/L2/L3/L4/L5/L#), resistive random-access memory (RRAM/ReRAM), phase change memory (PCM), spin transfer torque RAM (STT-RAM), another memory chip or cartridge, and/or a combination thereof.
The storage device 1130 can include software services, servers, services, etc., such that, when the code that defines such software is executed by the processor 1110, the code causes the system to perform a function. In some aspects, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1110, connection 1105, output device 1135, etc., to carry out the function. The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as CD or DVD, flash memory, memory or memory devices. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.
In some examples, the processes described herein (e.g., method 900 and/or other processes described herein) may be performed by a computing device or apparatus. In one example, the method 900 can be performed by a computing device having a computing architecture of the computing system 1100 shown in FIG. 11.
In some cases, the computing device or apparatus may include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component(s) that are configured to carry out the steps of processes described herein. In some examples, the computing device may include a display, one or more network interfaces configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The one or more network interfaces can be configured to communicate and/or receive wired and/or wireless data, including data according to the 3G, 4G, 5G, and/or other cellular standard, data according to the Wi-Fi (802.11x) standards, data according to the Bluetooth™ standard, data according to the IP standard, and/or other types of data.
The components of the computing device can be implemented in circuitry. For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphical processing units (GPUs), digital signal processors (DSPs), CPUs, and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.
In some aspects, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Specific details are provided in the description above to provide a thorough understanding of the aspects and examples provided herein. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the aspects in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the aspects.
Individual aspects may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but may have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.
In the foregoing description, aspects of the application are described with reference to specific aspects thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative aspects of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate aspects, the methods may be performed in a different order than that described.
One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description.
Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.
The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.
Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as RAM such as synchronous dynamic random access memory (SDRAM), ROM, non-volatile random access memory (NVRAM), EEPROM, flash memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium, such as a medium that carries propagated signals or waves, that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The program code may be executed by a processor, which may include one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein.
This application claims benefit of and priority to U.S. Provisional Patent Application No. 63/609,196, filed Dec. 12, 2023, entitled “FIBER-BASED ACCELERATION OF DATA PATH EXECUTION”, the entire contents of which are incorporated herein by reference.