The disclosure relates generally to cloud networking and, more specifically but not exclusively, to systems and techniques for secure cluster membership for a multi-cloud security platform.
Public clouds are third-party, off-premises cloud platforms that deliver computing resources, such as virtual machines, storage, and applications, over the Internet. Services provided by public cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, are shared among multiple customers. Public clouds offer scalability, cost efficiency, and flexibility as organizations can access and pay for resources on a pay-as-you-go model. Pay-as-you-go is particularly beneficial for customers with fluctuating workloads and enables enterprises to scale resources up or down based on demand. However, the shared nature of public clouds raises considerations regarding security, compliance, and data privacy, and customers need to carefully evaluate their specific requirements and choose appropriate providers.
Many customers also have private clouds, which are dedicated infrastructure that is either on-premises or hosted by a third-party. Private clouds are designed exclusively for a single customer, providing greater control over resources and data. Private clouds are suitable for entities with stringent security and compliance requirements, allowing the entities to customize and manage the infrastructure according to specific needs. Entities use private clouds to retain control over critical business applications, sensitive data, or when regulatory compliance mandates demand a higher level of data governance.
Hybrid and multi-cloud approaches have become popular as ways to capture the benefits of both public and private clouds. Hybrid clouds allow organizations to enjoy the scalability of public clouds while retaining certain workloads in a private, more controlled environment. Multi-cloud strategies involve using services from multiple public cloud providers, offering redundancy, flexibility, and the ability to choose the best-suited services for specific tasks.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure may be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described to avoid obscuring the description. References to one embodiment or an embodiment in the present disclosure may be references to the same embodiment or any embodiment, and such references mean at least one of the embodiments.
Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods, and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the herein disclosed principles. The features and advantages of the disclosure may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or may be learned by the practice of the principles set forth herein.
Cloud network providers include various companies such as Google, Apple, Amazon, Microsoft, DigitalOcean, Vercel, Alibaba, Netlify, Redhat OpenShift, Oracle, and many other entities. Each cloud provider offers a range of services, from foundational infrastructure, referred to as Infrastructure as a Service (IaaS), to platforms for application development and deployment, referred to as platform as a service (PaaS), to fully managed software applications, referred to as software as a service (SaaS). Cloud providers maintain a network of geographically distributed data centers that host servers, storage, and networking equipment, allowing customers to deploy resources in proximity to their target audience for improved performance and redundancy, including content delivery networks (CDN) and edge compute services.
Virtualization technology is a foundational aspect of cloud providers and enables the creation of virtual instances of servers, storage, and network resources within a geographic region. Cloud providers also deploy resource orchestration tools that manage the dynamic allocation and scaling of these virtual resources based on demand. Fundamentally, cloud providers establish robust, high-speed connections between their data centers, forming a global network backbone. This backbone ensures low-latency communication and facilitates data transfer between different regions.
Conventionally, cloud providers deploy a range of security measures, including encryption, firewalls, identity and access management, and compliance certifications, to safeguard customer data and ensure the integrity of their services. Cloud services are designed to be elastic, allowing customers to dynamically scale resources up or down based on demand to handle varying workloads efficiently.
Cloud providers offer various managed services, such as databases, machine learning, analytics, runtimes, and other aspects that allow customers to leverage advanced functionalities without the need for deep expertise in those domains. Various application programming interfaces (APIs) can be exposed by a cloud provider that enable users to programmatically interact with and manage their resources and allow integration with third-party tools and the automation of various tasks.
Fundamentally, in past server architectures, a server was defined with a fixed internet protocol (IP) address. In cloud-based computing, IP addresses are dynamic, which enables flexible use of the resources within the cloud providers. Cloud environments require dynamic scaling to accommodate varying workloads, and dynamic IP addresses allow for the automatic allocation and release of addresses as resources are provisioned or de-provisioned. Dynamic addresses also enable service elasticity in response to increasing or decreasing resources, cost efficiency, automation and orchestration of tools within the cloud integration and deployment environment, load balancing, high availability and failover, adaptable network topology, and increased resource utilization.
Cloud security is a fundamental issue because customers typically deploy resources in, and integrate with, multiple cloud providers. While clouds share a generic infrastructure configuration with a spine network topology that routes traffic to a top-of-rack (TOR) switch and to servers within the racks, clouds are still configured differently and have different requirements. For example, some cloud providers emphasize different geographical markets; cloud providers can also emphasize different business segments (e.g., healthcare, government, etc.) and configure services according to their intended market.
Cloud security has become an important aspect of networking today because there are significant challenges. For example, data breaches are a significant concern in the cloud because unauthorized access to sensitive data, either through misconfigurations or cyberattacks, can lead to data exposure and compromise the confidentiality of information. Misconfigurations of cloud services, such as incorrectly configured access controls or insecure storage settings, can create vulnerabilities and may expose data to unauthorized users or attackers.
Another important aspect of cloud security is identity management. Improper management of user identities and access privileges can result in unauthorized access. Inadequate or improperly implemented encryption can lead to data exposure. This includes data in transit, data at rest, and data during processing. Ensuring end-to-end encryption is crucial for maintaining data confidentiality.
Cloud providers use shared infrastructure and technologies. If a vulnerability is discovered in a shared component, multiple clients could be affected simultaneously. Regular security updates and patches are essential to mitigate this risk, and there is an increased market for third-party services that integrate into cloud provider services.
Organizations may fail to conduct thorough due diligence when selecting a cloud service provider (CSP). Inadequate assessment of a provider's security measures, compliance standards, and data protection practices can result in security gaps.
The evolving landscape of cybersecurity introduces new threats and attack vectors. Cloud security solutions must continuously adapt to address emerging threats, such as zero-day vulnerabilities and advanced persistent threats (APTs). These attacks can come from many different sources, and monitoring these threats can be too difficult for entities.
The cloud is dynamic, connected, and encrypted. Customers of cloud providers primarily care about their business operations and not the infrastructure behind the business operations. In the current environment, customers of CSPs need to implement intrusion prevention systems (IPS), intrusion detection systems (IDS), and web application firewalls (WAF), as well as provide egress security. Customers may also need to implement data loss prevention (DLP) services to comply with sensitive information requirements.
In some cases, malicious actors may be able to obtain access to a subset of resources via a command line interface or may provide instructions in an attempt to bypass authentication mechanisms. One technique is booting an instance of a resource in a distributed environment to bypass authentication mechanisms.
Disclosed are systems, apparatuses, methods, computer-readable media, and circuits for secure cluster membership for a multi-cloud security platform. The disclosed systems and techniques map encrypted information to each instance that forms the cluster and separate boot information from the image. This provides a dual security layer by removing some information from the boot process and requiring the instance to connect to a controller for authentication based on the encrypted information.
According to at least one example, a method includes: in response to receiving a start message to start a first instance of a gateway service, transmitting a boot message to a cloud service provider to cause the cloud service provider to boot an image corresponding to the gateway service, the boot message including boot script information; receiving a join message from the first instance to join a gateway service cluster; and configuring the first instance with the gateway service cluster based on credentials included in the join message. For example, a computing system of a cloud security platform may, in response to receiving a start message to start a first instance of a gateway service, transmit a boot message to a cloud service provider to cause the cloud service provider to boot an image corresponding to the gateway service, the boot message including boot script information; receive a join message from the first instance to join a gateway service cluster; and configure the first instance with the gateway service cluster based on credentials included in the join message.
In another example, a controller of a cloud security platform is provided that includes a storage (e.g., a memory configured to store data, such as virtual content data, one or more images, etc.) and one or more processors (e.g., implemented in circuitry) coupled to the memory and configured to execute instructions and, in conjunction with various components (e.g., a network interface, a display, an output device, etc.), cause the controller to: in response to receiving a start message to start a first instance of a gateway service, transmit a boot message to a cloud service provider to cause the cloud service provider to boot an image corresponding to the gateway service, the boot message including boot script information; receive a join message from the first instance to join a gateway service cluster; and configure the first instance with the gateway service cluster based on credentials included in the join message.
The applications 102 include various forms, such as distributed cloud-based applications, edge-based applications (e.g., webapps), desktop-based applications, mobile phone applications, and so forth. The third-party services 106 include various services, such as cloud service providers and other services that are integrated into the cloud security platform 104. For example, the cloud security platform 104 may be configured to use different services for specialty functions that are consistent for each customer of the cloud security platform 104. Non-limiting examples of different services include various types of communication services (e.g., mail servers, communication platforms, etc.), security-oriented services (e.g., monitoring services such as Splunk), search services, storage services (e.g., relational databases, document databases, time-series databases, graph databases, etc.), authentication services, and so forth.
The cloud security platform 104 is configured to be deployed within various infrastructure environments in a PaaS manner. The cloud security platform 104 includes networking infrastructure 108 for connecting the application 102 to the cloud security platform 104. The cloud security platform 104 includes a plurality of servers 110 that are geographically distributed, with each server 110 being managed with various operating systems (OS) 112, runtimes 114, middleware 116, virtual machines (VM) 118, APIs 120, and management services 122. In some aspects, the cloud security platform 104 includes a runtime 114, which refers to the environment in which the middleware 116 executes to control various aspects of the cloud security platform 104. For example, the VMs 118 may be Kubernetes containers and the middleware 116 may be configured to add or remove hardware resources within cloud providers dynamically.
The cloud security platform 104 also exposes one or more APIs 120 for allowing the applications 102 to interact with the cloud security platform 104. The APIs 120 enable a customer to surface information, interact with information within the cloud security platform 104, and perform other low-level functions to supplement the security services of the cloud security platform 104. The API 120 is also configured to integrate with other third-party services (e.g., the third-party service 106) to perform various functions. For example, the API 120 may access a customer's resources in a cloud service provider (e.g., a third-party service 106) to monitor for threats, analyze configurations, retrieve logs, monitor communications, and so forth. In one aspect, the API 120 integrates with third-party cloud providers in an agnostic manner and allows the cloud security platform 104 to perform functions dynamically across cloud providers. For example, the API 120 may dynamically scale resources, allow resources to join a cluster (e.g., a cluster of controller instances), implement security rules from the cloud security platform 104 into the corresponding cloud provider, and other functions that enable a cloud-agnostic and service-agnostic integrated platform. For example, in some cases, the API 120 is configured to integrate with other security services to retrieve alerts of specific assets to reduce exposure to malicious actors.
The cloud security platform 104 also includes management services 122 for managing various resources of a customer. In some aspects, the management services 122 can manage resources including a controller (e.g., the controller 210 in
In one aspect, the management services 122 include an onboarding user experience that connects to various cloud providers (e.g., using the API 120) and allows onboarding of different cloud resources. The management services 122 also provide a cloud-agnostic approach to managing resources across different cloud providers, such as scaling up identical resources in different regions using different cloud providers. As an example, some cloud providers do not have a significant presence in the Far East, and the management services 122 are configured to activate similar resources in a first geographical region (e.g., Europe) and a second geographical region (e.g., Asia) with similar configurations in different cloud providers.
The cloud security platform 104 is configured to provide security across and within cloud providers in different contexts. For example, the cloud security platform 104 provides protection and security mechanisms in different flows. The cloud security platform 104 is configured to provide varying levels of protection based on flow, packet, encryption and other mechanisms. In one aspect, the cloud security platform 104 is configured to protect forwarding flows and packet flows.
Forwarding flow refers to the set of rules and decisions that determine how network devices handle incoming packets without inspecting packet and traffic contents. A forwarding flow involves making decisions based on information such as destination IP address, media access control (MAC) address, and routing tables to determine the outgoing interface for the packet and typically includes actions like address resolution (e.g., an address resolution protocol (ARP) for IP to MAC address mapping), updating MAC tables, and forwarding the packet to the appropriate interface, and various rules to apply based on configuration and policies.
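The following is a simplified, illustrative sketch (in Python, with a hypothetical routing table) of the forwarding-flow decision described above: the outgoing interface is chosen by a longest-prefix match on the destination IP address, without inspecting packet or traffic contents. It is a minimal sketch, not the platform's forwarding implementation.

```python
# Minimal sketch of a forwarding-flow decision: select an egress interface
# by longest-prefix match on the destination IP address only.
import ipaddress

# Hypothetical routing table: prefix -> (next hop, egress interface)
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): ("10.0.0.1", "eth1"),
    ipaddress.ip_network("10.1.2.0/24"): ("10.1.2.1", "eth2"),
    ipaddress.ip_network("0.0.0.0/0"): ("192.168.1.1", "eth0"),  # default route
}

def forward(dst_ip: str):
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match: the most specific matching route wins.
    matches = [net for net in ROUTES if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(forward("10.1.2.55"))   # ('10.1.2.1', 'eth2')
print(forward("8.8.8.8"))     # ('192.168.1.1', 'eth0')
```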
A proxy flow comprises both forward proxy and reverse proxy functions and involves inspecting the content of encrypted flows and access control. In some aspects, the cloud security platform 104 decrypts encrypted traffic to ensure malicious actors are not exploiting vulnerabilities in TLS-encrypted applications, and prevents data exfiltration (e.g., DLP) or connection to inappropriate uniform resource identifiers (URIs).
The cloud security platform 104 is also configured to handle packets differently based on security, such as policies related to IPS and a WAF. A WAF protects various web applications from online threats, such as SQL injection, cross-site scripting (XSS), authentication spoofing, and other potential security threats. For example, a WAF filters and monitors traffic by inspecting headers (e.g., a JSON-encoded object in a hypertext transfer protocol (HTTP) header).
The cloud security platform 104 provides real-time discovery of multi-cloud workloads, at scale, for virtual private clouds (VPCs) and cloud accounts. Real-time discovery also enables finding security gaps and improving defensive posture. The cloud security platform 104 also provides dataplane management using gateways (e.g., the gateway 250 in
In some aspects, the cloud security platform 200 separates compute and data storage functions and enables multi-tenancy to support different customers while maintaining data separation when needed. For example, the compute components are separated into a controller 210 and data storage components are implemented in a data plane 270. The controller 210 may be a collection of Kubernetes-based services that deploy a low latency connection (e.g., a remote procedure call (RPC) framework such as gRPC, WebSockets, WebTransport, etc.) to connect various endpoints and enable bidirectional streaming while avoiding repeated connection setup and teardown. Each service within the controller 210 scales up or down horizontally based on load.
The controller 210 includes a configuration engine 212, an analytics engine 214, and a resources engine 216. The configuration engine 212 configures the various components and provides various services such as webhooks 218, a dashboard 220, an API 222, and a workflow 224.
In one aspect, the webhooks 218 module configures an asynchronous method of communication between different applications or services in real time. In a webhook configuration, one application can register an endpoint URI with another, specifying where it should send data when a particular event occurs. When the event triggers, the originating system automatically pushes data to the registered URI, allowing the receiving application to process and act upon the information immediately. In some aspects, the webhooks 218 module implements an observer pattern, with a dependent component providing a URI to the observed data source.
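The following is an illustrative sketch of the webhook pattern described above, with hypothetical endpoint names and event types; it is not the webhooks 218 module's actual interface. An observer registers a callback URI, and the data source pushes an event payload to every registered URI when the event fires.

```python
# Sketch of webhook registration and event delivery (observer pattern).
import json
import urllib.error
import urllib.request

# Hypothetical registry of observers keyed by event type.
subscriptions = {"instance.booted": ["https://observer.example.com/hooks/boot"]}

def register(event_type: str, callback_uri: str) -> None:
    subscriptions.setdefault(event_type, []).append(callback_uri)

def fire(event_type: str, payload: dict) -> None:
    body = json.dumps(payload).encode("utf-8")
    for uri in subscriptions.get(event_type, []):
        req = urllib.request.Request(
            uri, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            urllib.request.urlopen(req)  # push the event to the observer
        except urllib.error.URLError:
            pass  # the example URI is hypothetical and may be unreachable

fire("instance.booted", {"instance_id": "gw-123", "zone": "us-east-1a"})
```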
The dashboard 220 provides a user experience to a customer of the cloud security platform 104 and provides various integration modules, onboarding platforms, monitoring tools, and other functions for customers to access.
In some aspects, the APIs 222 can be various libraries to interact with various services, either through a dashboard 220 interface, a command line interface (not shown), or other tooling (not shown). The APIs 222 can also be API endpoints of the cloud security platform 104 or an API library associated with a third-party service (e.g., third-party services 252), or APIs associated with the cloud providers 254. In one aspect, the APIs 222 can include an agnostic API library that is configured to interact with the cloud providers 254 using a single API interface to scale resources, respond to security incidents, or other functions. This API 222 can be accessed via a command line interface or may be distributed to customers via various package management services.
The workflow 224 module can be various components that enable a customer to perform various tasks, such as manage specific resources, deploy services, communicate with team members regarding issues, and so forth. For example, the workflow 224 module can interact with the gateways 250 and an administration engine 248 to manage resources, access to resources, and deployment of various resources (e.g., deploy infrastructure with Terraform).
The analytics engine 214 is configured to integrate with gateways 250 and various third-party services 252 to monitor various events, services, and other operations. The analytics engine 214 includes a watch server 226 that is configured to disambiguate information from multiple sources of information (e.g., the gateway 250, the third-party services 252, etc.) to provide a holistic view of cloud networking operations. The analytics engine 214 may also be configured to interact with various components of the data plane 270 such as a metrics controller 242 and a data lake controller 246.
In some aspects, the resources engine 216 receives resources from cloud providers 254 and includes various components to route information and store information. The resources engine 216 includes an inventory router 228, logs 230 (e.g., a cache of logs for various functions), an inventory server 232, and a logs server 234. The components of the resources engine 216 are configured to disambiguate and combine information in an agnostic and standardized manner and store various resources in the data plane 270. For example, the resources engine 216 stores and receives events from an events controller 244 and also stores and receives logs in the data lake controller 246. In some aspects, the inventory router 228 and the inventory server 232 build an evergreen model of the customer's cloud accounts and subscriptions and create an address object for security policy management for the cloud security platform 200. The address object represents a segment of the customer's subscription based on cloud-native attributes (e.g., Security Group, ASG, customer-defined tags) and maps to a collection of IP addresses, which is automatically refreshed and synchronized with the gateway 250.
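The following is a hypothetical sketch of the address-object concept described above: a named segment of a customer subscription, selected by cloud-native attributes, that resolves to a set of IP addresses kept in sync with the gateway. The class name, fields, and inventory shape are assumptions for illustration only.

```python
# Sketch of an address object that maps cloud-native attributes to IPs.
from dataclasses import dataclass, field

@dataclass
class AddressObject:
    name: str
    selector: dict = field(default_factory=dict)   # e.g., {"tag:env": "prod"}
    addresses: set = field(default_factory=set)    # resolved IP addresses

    def refresh(self, inventory: list) -> None:
        """Re-resolve the selector against the latest cloud inventory."""
        self.addresses = {
            item["ip"]
            for item in inventory
            if all(item.get("attrs", {}).get(k) == v for k, v in self.selector.items())
        }

inventory = [
    {"ip": "10.0.1.5", "attrs": {"tag:env": "prod"}},
    {"ip": "10.0.2.9", "attrs": {"tag:env": "dev"}},
]
prod_workloads = AddressObject("prod-workloads", selector={"tag:env": "prod"})
prod_workloads.refresh(inventory)      # resolves to {'10.0.1.5'}
# A security policy can now reference "prod-workloads" rather than raw IPs.
```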
The data plane 270 includes various components to separate various types of information associated with the control plane and interconnected third-party services 252 and cloud providers 254. For example, the data plane 270 includes a configuration controller 240 that stores inventory information of a customer and various configuration information. In one example, the cloud providers 254 use different metrics for decisions pertaining to scaling deployed resources, and the configuration controller 240 stores information that allows the controller 210 to scale resources within the cloud providers 254 in a standardized manner. In some aspects, the configuration controller 240 may include storage mechanisms such as a relational database, a document database, and other high-availability storage mediums. The storage mechanisms can be on-premises resources or off-premises or cloud-based solutions such as various cloud-based relational or document databases (e.g., Redis, MySQL, MongoDB, etc.).
The data plane 270 also includes a metrics controller 242 that is configured to interact with custom metrics data or a third-party service for metrics analysis (e.g., Amazon CloudWatch). The events controller 244 is configured to handle and store events and various queues. For example, the events controller 244 can include a Kafka server for handling real-time data feeds and event-driven applications. The metrics controller 242 may use a publish-subscribe model in which producers (e.g., a third-party service, internal components of the controller 210, a gateway 250, etc.) publish data streams and a consumer subscribes to receive and process these streams in a fault-tolerant and distributed manner. The metrics controller 242 may handle massive amounts of data with low latency and high throughput.
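The following is a minimal publish/subscribe sketch using the kafka-python client against a Kafka broker like the one the events controller 244 is described as including; the broker address, topic name, and metric fields are assumptions, and this is not presented as the platform's actual event pipeline.

```python
# Publish/subscribe sketch with kafka-python (broker and topic are hypothetical).
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.internal:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# A gateway or internal component publishes a metrics event.
producer.send("gateway-metrics", {"gateway": "gw-123", "rx_bytes": 1048576})
producer.flush()

# A consumer subscribes and processes the stream.
consumer = KafkaConsumer(
    "gateway-metrics",
    bootstrap_servers="kafka.internal:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'gateway': 'gw-123', 'rx_bytes': 1048576}
    break
```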
The data lake controller 246 provides a long-term and scalable storage mechanism and associated services. For example, the data lake controller 246 may include a cloud-based S3 API for storing data to various cloud services (e.g., AWS, DigitalOcean, OpenShift) or on-premises services (e.g., MinIO, etc.). The data lake controller 246 may also include a search-based mechanism such as ElasticSearch for large-scale and efficient search of contents within the non-volatile cloud storage mechanisms. In some aspects, the data lake controller 246 stores network logs and implements search functionality (e.g., Snowflake) for large-scale ad hoc queries for security research and analysis.
The cloud security platform 200 also includes an administration engine 248, a gateway 250, and integrations into various third-party services 106. The administration engine 248 may include authentication services (e.g., Auth0, Okta) to verify identity and provide authentication mechanisms (e.g., access tokens), and may include infrastructure as code (IaC) tools such as Terraform to automate the process of creating, updating, and managing the specified infrastructure across various cloud providers or on-premises environments.
The cloud security platform 200 includes gateways 250 that are deployed into various integration points, such as cloud providers. The gateways 250 are ingress and egress points of the cloud security platform 200 and are configured to monitor traffic, provide information to the controller 210, dynamically scale based on the cloud security platform 200, and provide security to a customer's cloud infrastructure. For example, gateways 250 may implement a transparent forward and reverse proxy to manage traffic. The gateways 250 may also include a cloud-based firewall that is configured to filter malicious traffic using various dynamic detection policies.
The cloud security platform 200 also integrates into various third-party services 252 for various purposes such as receiving threat-related intelligence (e.g., Splunk, Talos, etc.). The third-party services 252 also include different types of infrastructure components such as managing mobile devices, implementing cloud-based multimedia communication services, business analytics, network analytics (e.g., reverse address lookup), certificate services, security information and event management (SIEM), and so forth.
In some aspects, the data path pipeline 300 comprises a single-pass firewall architecture that uses a single-pass flow without expensive context switches and memory copy operations. In a single-pass flow, processing is not duplicated multiple times on a packet. For example, TCP/IP receive and transmission operations are only performed a single time. This is different from existing next-generation firewalls (NGFW). The data path pipeline 300 uses fibers with flexible stages completely running in user-space and, therefore, does not incur a penalty for kernel-user context switches, which are expensive in high bandwidth and low latency operations. The data path pipeline 300 provides advanced web traffic inspection comparable to WAFs to secure all traffic flows and break the attack kill chain in multiple places, raising the economic costs for attackers. The data path pipeline 300 also captures packets of live attacks into a cloud storage bucket without significant performance degradation and enables a rule-based capture on a per-session and attack basis.
The data path pipeline 400 is also configured to be flexible, and stages of processing are determined on a per-flow basis. For example, application 1 to application 2 may implement an L4 firewall and IPS inspection, application 3 to application 4 may implement an L4 firewall, a transport layer security (TLS) proxy, and IPS, and an internet client to web application 5 may implement an L4 firewall, TLS proxy, IPS, and WAF.
In some aspects, the data path pipeline 300 includes various filters (e.g., a malicious IP filter, a geographic IP filter, a fully qualified domain name (FQDN) filter) to filter both forwarding flows and proxy flows, as well as an L4 firewall to restrict traffic based on conventional techniques.
The data path pipeline 300 may also be integrated with a hardware offload 302 (e.g., a field-programmable gate array (FPGA) of a cloud provider, an application-specific integrated circuit (ASIC), etc.) that includes additional functionality that does not impact throughput. In one aspect, a cloud provider may offer a hardware offload or an accelerator function to implement a specialized function. For example, the hardware offload 302 includes a cryptographic engine 304, an API detection engine 306, a decompression engine 308, a regex engine 310, and a fast pattern engine 312 to offload operations into hardware.
In one aspect, the data path pipeline 300 includes high-throughput decryption and re-encryption to enable inspection of all encrypted flows using the cryptographic engine 304. By contrast, traditional NGFWs provide a throughput of around 10% when inspecting encrypted flows. The data path pipeline 300 may use the decompression engine 308 to decompress compressed traffic and perform deep packet inspection. For example, the data path pipeline 300 also uses a userspace Linux TCP/IP driver, along with network address translation (NAT), in conjunction with the API detection engine 306 and the decompression engine 308 to eliminate problematic and malicious flows.
The data path pipeline 300 includes a transparent reverse and forward proxy to isolate clients and servers without exposing internal details, a layer 7 firewall to rate limit and protect applications and APIs, and secure user access that looks up end-user-specific identity from an identity provider (IDP) and provides zero trust network access (ZTNA). The data path pipeline 300 includes a WAF pipeline and an IPS pipeline to detect malicious and problematic flows in conjunction with a regex engine 310 and a fast pattern engine 312. For example, the WAF pipeline may implement protection for web applications, including the OWASP Top 10, using a core ruleset and application-specific rules for frameworks and common content management tools like PHP, Joomla, and WordPress. The data path pipeline 300 includes IDS and IPS to block known vulnerabilities and provide virtual patching until the applications can be patched with updated security fixes, application identification to block traffic based on client, server, or application payload, DLP and filtering, URI filtering, and antivirus and anti-malware features to prevent malware files from being transferred for ingress (malicious file uploads), east-west lateral attacks (moving toolkits), and egress flows (e.g., botnets).
The data path pipeline 400 comprises an L4 firewall 402, a user space receive TCP/IP stack 404, a TLS receive proxy 406, a WAF 408, an IPS 410, a TLS transmit proxy 412, and a user space transmit TCP/IP stack 414 and illustrates the flow of forwarding flows and proxy flows, and points at which packets may be dropped or accepted using an L4 firewall, a WAF, and/or an IPS.
For example, the data path pipeline 400 may be implemented with a user-space driver (e.g., a Data Plane Development Kit (DPDK) driver) that receives forwarding and proxy flows, computes hashes, and provides the packet to a worker. In this case, a worker is part of a distributed instance of a gateway and provides the flows to the L4 firewall 402. For example, the L4 firewall 402, or a transport layer firewall, may inspect traffic and filter traffic based on source and destination IP/port.
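The following is an illustrative sketch, not the L4 firewall 402 itself, of transport-layer filtering: a flow is accepted or dropped based only on source/destination IP and port, without inspecting the payload. The rule set is hypothetical.

```python
# Sketch of L4 (transport-layer) rule evaluation over a flow's 5-tuple subset.
import ipaddress

RULES = [
    # (src network, dst network, dst port, action) -- evaluated in order.
    ("10.0.0.0/8", "10.1.0.0/16", 443, "accept"),
    ("0.0.0.0/0", "10.1.0.0/16", 22, "drop"),
    ("0.0.0.0/0", "0.0.0.0/0", None, "drop"),  # default deny
]

def evaluate(src_ip: str, dst_ip: str, dst_port: int) -> str:
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for src_net, dst_net, port, action in RULES:
        if (src in ipaddress.ip_network(src_net)
                and dst in ipaddress.ip_network(dst_net)
                and (port is None or port == dst_port)):
            return action
    return "drop"

print(evaluate("10.2.3.4", "10.1.5.6", 443))     # accept
print(evaluate("203.0.113.7", "10.1.5.6", 22))   # drop
```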
The user space receive TCP/IP stack 404 is configured to perform the receive processing of forwarding and proxy flows. For example, the user space receive TCP/IP stack 404 handles framing, addressing, and error detection within TCP/IP and further identifies per-flow processing based on policies and rules of the cloud security platform. For example, some forwarding flows are provided to the user space transmit TCP/IP stack 414, some forwarding flows are provided to the IPS 410, and proxy flows are forwarded to the TLS receive proxy 406. The TLS receive proxy 406 manages the TLS decryption process in the event further inspection is warranted based on the policies and rules, and then provides the proxy flows to either the IPS 410 or the WAF 408 based on a policy.
The IPS 410 examines a packet's content, headers, and contextual information. Deep packet inspection involves analyzing the payload and looking for patterns, signatures, or anomalies that may indicate malicious activity. The IPS compares the packet against a database of known attack signatures and employs heuristic analysis to detect deviations from expected behavior. Additionally, it assesses factors such as source and destination addresses, ports, and protocol compliance. If the IPS identifies a packet as potentially malicious, it can take proactive measures, such as blocking the packet, alerting administrators, or initiating predefined security policies to prevent the exploitation of vulnerabilities and safeguard the network from intrusion attempts.
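The following is a deliberately simplified sketch of the signature-matching step described above; the signatures shown are illustrative stand-ins, and a real IPS uses far richer rule sets, heuristics, and contextual analysis.

```python
# Sketch of payload signature matching as one step of deep packet inspection.
import re

SIGNATURES = {
    "sql-injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes):
    hits = [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]
    # A match could trigger a drop, an administrator alert, or a predefined policy.
    return ("drop", hits) if hits else ("forward", [])

print(inspect(b"GET /search?q=1 UNION SELECT password FROM users"))
# ('drop', ['sql-injection'])
```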
The WAF 408 monitors, filters, and analyzes HTTP traffic in real-time and actively looks for and blocks common web vulnerabilities such as SQL injection, XSS, and other application-layer attacks. By examining and validating HTTP requests and responses, the WAF can detect and block malicious traffic, ensuring that only legitimate requests reach the web application. WAFs often employ rule-based policies, signature-based detection, and behavioral analysis to identify and mitigate potential security risks.
The TLS transmit proxy 412 reassembles the proxy flows and contextual information and provides the proxy flows to the user space transmit TCP/IP stack 414, which reassembles the packet and forwards any traffic. As shown in
Each of the CSP 510, the CSP 520, and the CSP 530, across which the service 505 is distributed, may include an ingress gateway 511, a load balancer 512, a frontend 513, a backend 514, and an egress gateway 515. In some aspects, the ingress gateway 511 and the egress gateway 515 may be provided by the cloud security platform and provide an agnostic interface to control flows into each different CSP. For example, the cloud security platform can scale resources in a consistent manner, provide malicious content filtering, attack denial, rate limiting, and other services to protect the service 505 at the corresponding CSP. The cloud security platform can also perform the corresponding services for a recipient of the service (not shown).
The cloud security platform abstracts specific details associated with each CSP into a single interface and allows an administrator of the service 505 a common control plane for controlling resources within each CSP and protecting services within each CSP and the service 505.
In some aspects, the cluster membership system 600 can be connected to one or more virtual networks, such as virtual network 602 and virtual network 604. In one aspect, the virtual network 602 can be for staging, and the virtual network 604 can be for production. That is, the cluster membership system 600 can be configured into a continuous integration/continuous deployment (CI/CD) pipeline. Both the virtual network 602 and the virtual network 604 are configured to provide traffic to a load balancer 606. In some aspects, the load balancer 606 distributes the network traffic across different availability zones, such as availability zone 608 and availability zone 610. The availability zones can be different geographic regions, different CSPs, different physical data centers, etc. That is, the availability zones represent different levels of abstraction based on the implementation. Each availability zone (e.g., the availability zone 608 and the availability zone 610) includes management services 612 for managing the corresponding availability zone and data path services 614.
In some aspects, the data path services 614 comprise a cluster of services within the data path between the load balancer 606 and the network 620. The data path services 614 include various functions such as cloud-based firewalls (e.g., stateful firewalls), line rate API inspection, gateway functions, content filtering, and so forth. In one aspect, the data path services are at least partially implemented as virtual machines or other similar containerized implementations (e.g., Kubernetes) that dynamically scale resources. The data path services 614 can also include accelerator-based functionality using CSP assets. For example, an accelerator can use very high-speed integrated circuit hardware description language (VHDL) to provision hardware-based functionality for the virtual machines to offload operations that are expensive in software to hardware. As an example, an accelerator can implement at least a portion of regular expression (e.g., regex) processing, which is significantly faster in hardware. Offloading operations into hardware (e.g., using VHDL to create a circuit in an FPGA) enables various operations to be performed at line rate.
The cluster membership system 600 may also include a key vault 622 and object storage 624. The key vault 622 is a secure repository that stores and manages cryptographic keys used in various security protocols and applications. Encryption is the backbone of modern cybersecurity and is crucial for safeguarding sensitive data from unauthorized access. The key vault 622 serves as a centralized and highly secure solution for storing encryption keys and ensures the confidentiality and integrity of the keys. The key vault 622 uses advanced encryption algorithms and access controls to protect the keys from potential threats. The key vault 622 streamlines key management processes and provides a centralized location for generating, storing, and distributing keys, simplifying the complex task of managing cryptographic assets across an organization. The key vault 622 also ensures that key rotation and updates are handled efficiently within the key vault 622, ensuring that cryptographic keys remain resilient against evolving security threats.
The object storage 624 is a highly scalable and durable object storage service that separates data into different buckets and stores each piece of data as an object. As an example, simple storage service (S3) is an API that enables access to the objects using a key-value pair, with the key being a unique identifier and the value corresponding to the data. The value can also include other metadata. The S3 API is an industry-standard API based on high durability, availability, and replication of data across multiple geographically dispersed data centers. S3 also provides a versatile set of features, including versioning, access control, and lifecycle management, allowing users to tailor their storage solutions based on specific requirements.
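The following is a hedged example of object storage access through the S3 API using boto3; the bucket and object key names are hypothetical, and any S3-compatible service (e.g., MinIO) could be targeted by supplying an endpoint URL.

```python
# Sketch of storing and retrieving an object via the S3 API (boto3).
import boto3

s3 = boto3.client("s3")  # or boto3.client("s3", endpoint_url="https://minio.internal")

# The key uniquely identifies the object; the body is the value.
s3.put_object(
    Bucket="gateway-boot-artifacts",
    Key="boot/gw-image-v2.sh",
    Body=b"#!/bin/sh\n# boot script contents\n",
)

obj = s3.get_object(Bucket="gateway-boot-artifacts", Key="boot/gw-image-v2.sh")
print(obj["Body"].read())
```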
In some aspects, key vault 622 is configured to store asymmetric encryption keys associated with the data path services 614. For example, each instance of an image (e.g., a virtual machine, a container, etc.) may be associated with an asymmetric key pair and, while the instance of an image is booting, a private key is retrieved by the image and is used to authenticate the validity of the instance. For example, to initiate the instance, an API request is formed with a payload including the private key and the request is provided to an API associated with the CSP. Based on the payload, the CSP retrieves the image (e.g., from an image storage service) and modifies the boot process based on the private key. In one aspect, the API retrieves a boot script from object storage 624 and modifies the boot process of the image based on the boot script. In this case, the payload including the private key is not stored, ensuring no leakage of the private key.
When the instance of the image is booted, the instance is configured to request access to a controller (e.g., the controller 210 in
In some aspects, a cloud security platform includes a platform engine 702 for scaling resources and a controller 710 that manages at least one gateway 720 across different CSPs in an agnostic manner. The platform engine 702 allows a user to define and provision infrastructure in a declarative manner. For example, the platform engine 702 allows for the creation, modification, and versioning of infrastructure components across various cloud providers and on-premises environments. The platform engine 702 uses a configuration that describes the desired state of the infrastructure using a domain-specific provisioning language, and the platform engine 702 manages and deploys the resources according to the configuration.
In some aspects, the platform engine 702 includes an API storage 704 that stores sensitive information such as API keys. For example, the API storage can include keys related to the cloud security platform, such as various keys associated with different customers.
Each tenant of the cloud security platform is allocated at least one controller 710, and the controller 710 includes various services (e.g., service 712 and service 714). An example of a service 712 deployed in the cloud security platform includes one or more gateways 720. A single controller 710 is illustrated to simplify the illustration and the explanation of the cloud security platform. In this case, a tenant can be an organization or other type of legal entity, such as a business, a non-profit, an educational institution, etc. Each tenant includes a plurality of resources that can be managed by the controller 710, which can deploy various gateways (e.g., gateway 720) in different zones and/or geographic regions. Each tenant is associated with one or more tables in a database for storing tenant-specific information. These tables can only be accessed based on the credentials of the tenant. For example, each tenant includes gateway data 716, which is a table that stores information to authenticate the gateways. In one aspect, the gateway data 716 stores the public encryption keys associated with each instance of the gateway 720. For example, the public encryption key of an instance can be used to authenticate that instance of the gateway 720.
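The following is an illustrative sketch of a per-tenant table like the gateway data 716 described above; the schema and column names are assumptions for illustration, not the platform's actual schema.

```python
# Sketch of a tenant-scoped table that maps gateway instances to public keys.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE gateway_data (
        tenant_id   TEXT NOT NULL,
        instance_id TEXT NOT NULL,
        public_key  TEXT NOT NULL,   -- PEM-encoded public key for the instance
        PRIMARY KEY (tenant_id, instance_id)
    )
""")

def store_public_key(tenant_id, instance_id, public_key_pem):
    db.execute("INSERT INTO gateway_data VALUES (?, ?, ?)",
               (tenant_id, instance_id, public_key_pem))

def lookup_public_key(tenant_id, instance_id):
    row = db.execute(
        "SELECT public_key FROM gateway_data WHERE tenant_id = ? AND instance_id = ?",
        (tenant_id, instance_id)).fetchone()
    return row[0] if row else None

store_public_key("tenant-a", "gw-123", "-----BEGIN PUBLIC KEY-----...")
print(lookup_public_key("tenant-a", "gw-123"))
```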
The controller 710 may also be configured to access a CSP storage source 718 associated with the various CSPs. The CSP storage source 718 can store secret keys associated with the CSPs in which the gateways 720 are deployed, allowing the cloud security platform to access APIs and other interfaces to control the various resources. The CSP storage source 718 may also be a certificate authority (CA) root server for validating certificates and other security functions.
In some aspects, the gateways 720 include an agent 722 that connects to and joins the service 712, which is a cluster of gateways 720 that are distributed physically and logically. As shown in
In some aspects, the controller 710 is configured to insert the user data 724 during a boot process. For example, the controller 710 may request, via an API request (e.g., a POST request in HTTP), a CSP to boot an instance of the gateway 720. The API request can include information such as a private key and boot information. The CSP can retrieve a specific image (e.g., an image of the gateway 720) based on the API request and then boot the specific image on the hardware of the CSP. The additional information in the API request, such as the private key and the boot information, can be used during the boot process to retrieve a boot script. For example, the CSP may use secured information to retrieve the boot script from object storage and then insert the boot script into the image. In this case, if a malicious actor can access the boot image using unauthorized techniques, the boot image will not have the necessary boot script to complete boot operations.
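One possible realization of the CSP API request described above is sketched below against AWS EC2, purely as an example; the image ID, instance type, bucket path, and user-data layout are assumptions. The user data carries the boot information, here a pointer to the externally stored boot script plus the per-instance private key, so that the published image itself never contains them.

```python
# Sketch of a boot request to a CSP API that passes boot information as user data.
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = json.dumps({
    "boot_script_uri": "s3://gateway-boot-artifacts/boot/gw-image-v2.sh",  # hypothetical
    "instance_private_key": "-----BEGIN PRIVATE KEY-----...",              # generated by the controller
})

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # image of the gateway (hypothetical ID)
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # consumed during boot, not baked into the image
)
print(response["Instances"][0]["InstanceId"])
```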
Once the instance of the specific image has completed the boot operation with the CSP, the image can then connect to the service 712 of the controller 710 to join as a member of a cluster associated with an availability zone (e.g., the availability zone 608 or the availability zone 610, etc.). In one aspect, the instance can use the private key to authenticate with the gateway 720. The gateway 720 may also send a token that is signed based on the private key. In some aspects, the token provides information to the service 712 that indicates successful authentication and credentials, and reduces repeated authentication processes.
At block 812, the controller 802 or a component thereof (e.g., a load balancer) may determine to create a new instance. In response to the determination to create a new instance of a cloud service provider service (e.g., a gateway, etc.) at block 812, the controller 802 may generate an asymmetric key-pair (e.g., a private key and a public key) and store the public key. The public key may be stored in a tenant-specific storage mechanism. For example, the controller 802 can include a table in a database that identifies each instance of a gateway and may include various information that is relevant to that instance, such as a unique identifier that identifies the instance, the public key, etc.
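The following is a hedged sketch of the key-generation step at block 812: the controller generates an asymmetric key pair for the new instance and records only the public key against the instance's identifier. The library, key size, and the in-memory table are assumptions for illustration.

```python
# Sketch of generating a per-instance key pair and persisting only the public key.
import uuid
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

public_key_table = {}   # stand-in for the tenant-specific gateway data table

instance_id = f"gw-{uuid.uuid4()}"
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
).decode()

private_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
).decode()

# Only the public key is persisted; the private key travels to the CSP in the
# boot request and is not stored by the controller.
public_key_table[("tenant-a", instance_id)] = public_pem
```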
The controller 802 may send an API request 814 (e.g., a POST request) to the API 806 requesting the CSP 804 activate a new instance of an image (e.g., an image associated with the gateway). The API request 814 includes information identifying the image, the private key generated by the controller 802, and boot information. In response, the API 806 provides an instantiate request 816 to the CSP 804 to boot an instance at block 818. For example, the API 806 causes an image service (e.g., Amazon Machine Image (AMI), etc.) to boot the corresponding image based on additional parameters. The API request 814 may also include an instruction or parameter that causes the CSP 804 to retrieve a boot script for the image. As an example, the API request 814 may include a token that indicates authentication to the CSP 804 and permits the CSP 804 to get the boot script from storage 810, which can be within the CSP 804 or may be external to the CSP 804.
While booting the instance at block 818, the boot script retrieved from storage 810 is inserted into the boot process of the instance 808. For example, the boot script stored in the initial image is replaced with the boot script 820 retrieved from the storage 810. The separation of the boot script provides an additional layer of security and prevents malicious actors from being able to successfully boot the image without the necessary information.
Once the instance 808 has booted with the boot script from the storage 810, the instance sends a membership request 822 to the controller 802. In some aspects, the membership request 822 is part of the boot script retrieved from the storage 810. The boot script itself is only stored in non-volatile memory or, in some cases, a secure enclave to prevent the boot script from being exposed. The membership request 822 may include information from the API request 814, such as a value signed by a private key. For example, the API request 814 may include a random value or a unique identifier provided to the instance 808, and the instance 808 may sign the random value or unique identifier with the private key and return the signed value in the membership request 822.
The controller 802 receives the membership request 822 and authenticates the instance 808 at block 824. In one aspect, the controller 802 uses the public key associated with the instance 808 (e.g., stored in a database associated with the tenant corresponding to the controller 802) and validates the signed value. In this case, the controller 802 authenticates the instance 808 and allows the instance to join the cluster of instances. For example, the controller 802 registers the instance with a load balancer (not shown) to provide network traffic in the corresponding availability zone.
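The following is a sketch of the challenge step just described: the instance signs the value it was handed at boot with its private key, and the controller verifies the signature against the stored public key. The library, padding scheme, and nonce are assumptions for illustration, not the platform's mandated cryptography.

```python
# Sketch of signing a challenge value (instance side) and verifying it (controller side).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-ins for the per-instance key pair generated at block 812.
instance_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stored_public_key = instance_private_key.public_key()   # what the controller persisted

nonce = b"7f3d2c1b-challenge-value"   # random value supplied in the API request 814

# Instance side: sign the nonce and include the signature in the membership request 822.
signature = instance_private_key.sign(
    nonce,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Controller side: validate the signed value with the stored public key.
try:
    stored_public_key.verify(
        signature,
        nonce,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("instance authenticated; register with the load balancer")
except InvalidSignature:
    print("authentication failed; do not register the instance")
```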
If the controller 802 does not authenticate the instance, the controller does not register the instance with the load balancer. In this case, when the load balancer performs a maintenance operation that evaluates the amount of traffic each instance is processing, the load balancer will identify an instance that is not handling traffic. For example, a gateway instance (e.g., one that has been validated by the controller 802) may be interrupted and become unavailable to process traffic. When the load balancer identifies a gateway instance that is not providing sufficient throughput, the load balancer removes the gateway instance (e.g., garbage collection). In the event of an instance that is not authenticated, the load balancer will identify that the gateway instance is not processing traffic and will then remove the invalid gateway instance.
The controller 802 then sends a token 826 to the instance 808, which the instance 808 can present for validation on subsequent communications. For example, the token 826 validates the identity of the instance 808 and may grant access to specific resources or functionalities within a system or application. The token 826 acts as a digital credential and represents information about the permissions and roles and streamlines authentication and authorization for each interaction of the instance 808.
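The following is an illustrative sketch of issuing such a token using PyJWT; the signing key, claims, algorithm, and lifetime are assumptions, and the actual token format used by the platform is not specified here.

```python
# Sketch of issuing and later validating a token for an authenticated instance.
import datetime
import jwt  # PyJWT

CONTROLLER_SECRET = "controller-signing-secret"   # hypothetical signing key
instance_id = "gw-123"                            # the authenticated instance

token = jwt.encode(
    {
        "sub": instance_id,
        "scope": "gateway-cluster:member",
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=12),
    },
    CONTROLLER_SECRET,
    algorithm="HS256",
)

# On subsequent requests the instance presents the token, and the controller
# validates it instead of re-running the key-based authentication exchange.
claims = jwt.decode(token, CONTROLLER_SECRET, algorithms=["HS256"])
print(claims["sub"], claims["scope"])
```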
At block 828, the controller 802 may determine to rotate the asymmetric keys of the instance 808 after a period of time. Cycling encryption keys, also known as key rotation, mitigates the risks associated with compromised keys by regularly replacing them and reduces the window of opportunity for potential attackers. Key rotation is a best practice for cryptographic security and ensures that cryptographic material remains fresh and up-to-date. Cycling encryption keys may also comply with industry regulations and standards, which often mandate regular key updates to safeguard sensitive information.
At block 828, the controller generates a new asymmetric key-pair and sends the private key 830 to the instance 808. In some aspects, the instance 808 may return a response in connection with generating a new token for the instance (not shown).
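The following is a sketch of the rotation step at block 828: generate a fresh key pair, replace the stored public key, and deliver the new private key to the instance over the already-authenticated channel. The helper names and in-memory table are assumptions for illustration.

```python
# Sketch of rotating an instance's asymmetric key pair.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def rotate_keys(public_key_table: dict, tenant_id: str, instance_id: str) -> bytes:
    """Generate a fresh key pair, replace the stored public key, and return the
    new private key (PEM) for delivery to the instance."""
    new_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key_table[(tenant_id, instance_id)] = new_private.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return new_private.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )

table = {}
new_private_pem = rotate_keys(table, "tenant-a", "gw-123")
# new_private_pem is delivered to the instance as the private key 830; once the
# table entry is replaced, the previous key pair is no longer accepted.
```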
At block 905, the computing system may receive a start message to start a first instance of a gateway service.
At block 910, in response to receiving a start message to start a first instance of a gateway service, the computing system may generate a private key and a public key associated with the first instance based on the start message. The computing system may store the public key and information needed to map the first instance to the public key. For example, the public key may be stored in a database based on a unique identifier generated for the first instance.
At block 915, the computing system may transmit a boot message to a cloud service provider to cause the cloud service provider to boot an image corresponding to the gateway service. In one aspect, the computing system may send a POST request and include various information to boot the instance within the POST request. The information in the boot message may include boot script information, which may identify a boot script to inject into the image. The cloud service provider may already have necessary credentials to obtain the boot script identified in the boot script information. For example, the boot script may be stored in a provisioned object storage service within the cloud service provider. The object storage may also be external (e.g., object storage of a different cloud service provider). The POST request may also include the private key and instructions within the POST request to cause the cloud service provider to utilize the private key with the boot script.
In some aspects, the POST request may identify the image corresponding to the gateway service. For example, the cloud service provider may retrieve the image of the gateway and boot the image to create the instance. While booting the image, the request may cause the cloud service provider to retrieve the boot script and insert the boot script into the boot sequence of the instance. For example, the cloud service provider may replace an internal boot script within the image, or may supplement the internal boot script with the retrieved boot script to provide additional instructions. In some aspects, the retrieved boot script may include confirmation information that is provided based on instructions from an administrator of the computing system (e.g., the controller 210 in
At block 920, the computing system may receive the join message from the first instance to join a gateway service cluster. At block 925, the computing system may determine an authenticity of the join message based on information within the join message that is signed with the private key. For example, as noted above, the join message may include information signed with the private key. The computing system may use information to retrieve the public key (which can be stored within a database as noted above) and authenticate the veracity of the join message. The authentication validates that the first instance was requested using authentic and trusted mechanisms. For example, the computing system may determine that the gateway service cluster has too high of a processing load and may itself request the instance. In other cases, other components of the system may determine the instance is needed (e.g., the load balancer).
At block 930, the computing system may configure the first instance with the gateway service cluster based on credentials included in the join message. If the join message is not authentic, the first instance is not permitted to join the gateway service cluster. In the event the first instance does not join due to invalid credentials, the load balancer will destroy the first instance because the volume of traffic processed by the first instance indicates unused or invalid resources. For example, gateway instances can stop processing, and the load balancer automatically cleans up and destroys the corresponding resources (e.g., deletes any key-pair values).
The computing system may also rotate the public and private keys at various intervals (e.g., a day, a month, etc.). The computing system can also generate a token based on authenticating that the first instance has the private key, to reduce subsequent authentication flows.
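As a non-limiting illustration, the following Python sketch shows one way such a token could be issued and checked once the instance has proven possession of its private key. The HMAC construction, claim names, and default expiry are illustrative assumptions; the disclosure does not prescribe a particular token format.

```python
import base64
import hashlib
import hmac
import json
import time


def issue_cluster_token(instance_id: str, secret: bytes, ttl_seconds: int = 24 * 3600) -> str:
    """Issue a short-lived token for an instance that has already proven possession
    of its private key, so later requests can skip the full signature-based
    authentication flow."""
    claims = {"instance_id": instance_id, "expires_at": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    tag = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{tag}"


def verify_cluster_token(token: str, secret: bytes) -> str | None:
    """Return the instance identifier if the token is authentic and unexpired."""
    body, tag = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    if claims["expires_at"] < time.time():
        return None
    return claims["instance_id"]
```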
The programmable network processor 1002 may be programmed to perform functions that are conventionally performed by integrated circuits (ICs) that are specific to switching, routing line cards, and routing fabrics. The programmable network processor 1002 may be programmable using the programming protocol-independent packet processors (P4) language, which is a domain-specific programming language for network devices for processing packets. The programmable network processor 1002 may have a distributed P4 NPU architecture that may execute at a line rate for small packets with complex processing. The programmable network processor 1002 may also include optimized and shared NPU fungible tables. In some aspects, the programmable network processor 1002 supports a unified software development kit (SDK) to provide consistent integrations across different network infrastructures and simplify networking deployments. The SoC 1000 may also include embedded processors to offload various processes, such as asynchronous computations.
The programmable network processor 1002 includes a programmable NPU host 1004 that may be configured to perform various management tasks, such as exception processing and control-plane functionality. In one aspect, the programmable NPU host 1004 may be configured to perform high-bandwidth offline packet processing such as, for example, operations, administration, and management (OAM) processing and media access control (MAC) learning.
The SoC 1000 includes counters and meters 1006 for traffic policing, coloring, and monitoring. As an example, the counters and meters 1006 include programmable counters used for flow statistics and OAM loss measurements. The programmable counters may also be used for port utilization, microburst detection, delay measurements, flow tracking, elephant flow detection, congestion tracking, etc.
The telemetry 1010 is configured to provide in-band telemetry information such as per-hop granular data in the forwarding plane. The telemetry 1010 may observe changes in flow patterns caused by microbursts, packet transmission delay, latency per node, and new ports in flow paths. The NPU database 1012 provides data storage for one or more devices, for example, the programmable network processor 1002 and the programmable NPU host 1004. The NPU database 1012 may include different types of storage, such as key-value pair, block storage, etc.
In some aspects, the SoC 1000 includes a shared buffer 1014 that may be configured to buffer data, configurations, packets, and other content. The shared buffer 1014 may be utilized by various components such as the programmable network processor 1002 and the programmable NPU host 1004. A web scale circuit 1016 may be configured to dynamically allocate resources within the SoC 1000 for scale, reliability, consistency, fault tolerance, etc.
In some aspects, the SoC 1000 may also include a time of day (ToD) time stamper 1018 and a SyncE circuit 1020 for distributing a timing reference to subordinate devices. For example, the time stamper 1018 may support IEEE-1588 for ToD functions. In some aspects, the time stamper 1018 includes support for a precision time protocol (PTP) for distributing frequency and/or phase to enable subordinate devices to synchronize with the SoC 1000 with nanosecond-level accuracy.
The serializer/deserializer 1022 is configured to serialize and deserialize packets into electrical signals and data. In one aspect, the serializer/deserializer 1022 supports sending and receiving data using non-return-to-zero (NRZ) modulation or pulse amplitude modulation 4-level (PAM4) modulation. In one illustrative aspect, the hardware components of the SoC 1000 provide features for terabit-level performance based on flexible port configuration, nanosecond-level timing, and programmable features. Non-limiting examples of hardware functions that the SoC 1000 may support include internet protocol (IP) tunneling, multicast, network address translation (NAT), port address translation (PAT), security and quality of service (QoS) access control lists (ACLs), equal cost multiple path (ECMP), congestion management, distributed denial of service (DDoS) mitigation using control plane policing, telemetry, timing and frequency synchronization, and so forth.
In some aspects, computing system 1100 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple data centers, a peer network, etc. In some aspects, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some aspects, the components can be physical or virtual devices.
Example computing system 1100 includes at least one processing unit (a central processing unit (CPU) or processor) 1110 and connection 1105 that couples various system components including system memory 1115, such as ROM 1120 and RAM 1125 to processor 1110. Computing system 1100 can include a cache 1112 of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1110.
Processor 1110 can include any general purpose processor and a hardware service or software service, such as services 1132, 1134, and 1136 stored in storage device 1130, configured to control processor 1110 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 1110 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 1100 includes an input device 1145, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1100 can also include output device 1135, which can be one or more of a number of output mechanisms. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1100. Computing system 1100 can include communications interface 1140, which can generally govern and manage the user input and system output. The communication interface may perform or facilitate receipt and/or transmission of wired or wireless communications using wired and/or wireless transceivers, including those making use of an audio jack/plug, a microphone jack/plug, a universal serial bus (USB) port/plug, an Apple® Lightning® port/plug, an Ethernet port/plug, a fiber optic port/plug, a proprietary wired port/plug, a Bluetooth® wireless signal transfer, a BLE wireless signal transfer, an IBEACON® wireless signal transfer, an RFID wireless signal transfer, near-field communications (NFC) wireless signal transfer, dedicated short range communication (DSRC) wireless signal transfer, 802.11 WiFi wireless signal transfer, WLAN signal transfer, Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), IR communication wireless signal transfer, Public Switched Telephone Network (PSTN) signal transfer, Integrated Services Digital Network (ISDN) signal transfer, 3G/4G/5G/LTE cellular data network wireless signal transfer, ad-hoc network signal transfer, radio wave signal transfer, microwave signal transfer, infrared signal transfer, visible light signal transfer, ultraviolet light signal transfer, wireless signal transfer along the electromagnetic spectrum, or some combination thereof. The communications interface 1140 may also include one or more Global Navigation Satellite System (GNSS) receivers or transceivers that are used to determine a location of the computing system 1100 based on receipt of one or more signals from one or more satellites associated with one or more GNSS systems. GNSS systems include, but are not limited to, the US-based GPS, the Russia-based Global Navigation Satellite System (GLONASS), the China-based BeiDou Navigation Satellite System (BDS), and the Europe-based Galileo GNSS. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1130 can be a non-volatile and/or non-transitory and/or computer-readable memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, a floppy disk, a flexible disk, a hard disk, magnetic tape, a magnetic strip/stripe, any other magnetic storage medium, flash memory, memristor memory, any other solid-state memory, a compact disc read only memory (CD-ROM) optical disc, a rewritable compact disc (CD) optical disc, digital video disk (DVD) optical disc, a blu-ray disc (BDD) optical disc, a holographic optical disk, another optical medium, a secure digital (SD) card, a micro secure digital (microSD) card, a Memory Stick® card, a smartcard chip, an EMV chip, a subscriber identity module (SIM) card, a mini/micro/nano/pico SIM card, another integrated circuit (IC) chip/card, RAM, static RAM (SRAM), dynamic RAM (DRAM), ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash EPROM (FLASHEPROM), cache memory (L1/L2/L3/L4/L5/L#), resistive random-access memory (RRAM/ReRAM), phase change memory (PCM), spin transfer torque RAM (STT-RAM), another memory chip or cartridge, and/or a combination thereof.
The storage device 1130 can include software services, servers, services, etc., such that, when the code that defines such software is executed by the processor 1110, the code causes the system to perform a function. In some aspects, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1110, connection 1105, output device 1135, etc., to carry out the function. The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as CD or DVD, flash memory, memory or memory devices. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.
In some examples, the processes described herein (e.g., method 900, and/or other process described herein) may be performed by a computing device or apparatus. In one example, the method 900 can be performed by a computing device having a computing architecture of the computing system 1100.
In some cases, the computing device or apparatus may include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component(s) that are configured to carry out the steps of processes described herein. In some examples, the computing device may include a display, one or more network interfaces configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The one or more network interfaces can be configured to communicate and/or receive wired and/or wireless data, including data according to the 3G, 4G, 5G, and/or other cellular standard, data according to the Wi-Fi (802.11x) standards, data according to the Bluetooth™ standard, data according to the IP standard, and/or other types of data.
The components of the computing device can be implemented in circuitry. For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, GPUs, DSPs, CPUs, and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.
In some aspects, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Specific details are provided in the description above to provide a thorough understanding of the aspects and examples provided herein. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the aspects in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the aspects.
Individual aspects may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but may have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.
In the foregoing description, aspects of the application are described with reference to specific aspects thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative aspects of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate aspects, the methods may be performed in a different order than that described.
One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description.
Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.
The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.
Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as RAM such as synchronous dynamic random access memory (SDRAM), ROM, non-volatile random access memory (NVRAM), EEPROM, flash memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.
The program code may be executed by a processor, which may include one or more processors, such as one or more DSPs, general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein.
This application claims priority to U.S. provisional application No. 63/609,196, titled “Fiber-Based Acceleration of Data Path Execution,” and filed on Dec. 12, 2023, which is expressly incorporated by reference herein in its entirety.
Number | Date | Country
63609196 | Dec 2023 | US