Embodiments of the subject matter described herein relate generally to data ingestion to cloud-based data storage, and more particularly to systems and methods for data ingestion of log data to cloud-based data storage.
A log is a record of the events occurring within an organization's systems and networks. Logs are composed of log entries; each entry contains information related to a specific event that has occurred within a system or network. Many logs within an organization contain records related to computer security. These computer security logs are generated by many sources, including security software, such as antivirus software, firewalls, and intrusion detection and prevention systems; operating systems on servers, workstations, and networking equipment; and applications.
The number, volume, and variety of computer security logs have increased greatly, which has led organizations to develop computer security log management systems, which aim at ensuring that computer security records are stored in sufficient detail for an appropriate period of time. Routine log analysis is beneficial for identifying security incidents, policy violations, fraudulent activity, and operational problems. Logs are also useful when performing auditing and forensic analysis, supporting internal investigations, establishing baselines, and identifying operational trends and long-term problems. Organizations also may store and analyze certain logs to comply with Federal legislation and regulations. Log storage can be complicated by several factors, including a high number of log sources; inconsistent log content, formats, and timestamps among sources; and increasingly large volumes of log data.
This summary is provided to describe select concepts in a simplified form that are further described in the Detailed Description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one embodiment, a cloud-based data repository system is provided. The data repository system includes internal partitioned data storage for storing multiple petabytes of log data; a data ingestion pipeline for use in ingesting multiple gigabytes of raw log data per second for storage in the internal partitioned data storage; a configuration service for providing configuration metadata regarding source data buckets from which the raw log data are retrieved; and a log partitioner service including a controller. The controller is configured to: deploy (e.g., instantiate) a log partitioner cluster including a plurality of log partitioner service instances for storing the raw log data in a partitioned manner for improved defensive security; and associate one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, wherein to associate a log partitioner service instance the controller is configured to provide associated configuration metadata from the configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; wherein each log partitioner service instance is configured to fetch raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instance; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and wherein each log partitioner service instance is configured to place fetched raw log data in the internal partitioned log storage in accordance with the instructions provided in its associated configuration metadata.
In another embodiment, a processor-implemented method for storing multiple petabytes of raw log data from cloud-based source data buckets into internal partitioned data storage in a data lake is provided. The method includes: deploying a log partitioner cluster including a plurality of log partitioner service instances for storing the raw log data in a partitioned manner; associating one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, the associating including providing associated configuration metadata from a configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; fetching, via the log partitioner cluster, raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instances; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and placing fetched raw log data, via the log partitioner cluster, in the internal partitioned log storage in accordance with the instructions provided in associated configuration metadata.
In another embodiment, a non-transitory computer readable medium encoded with programming instructions configurable to cause a processor to perform a method for storing multiple petabytes of raw log data from cloud-based source data buckets into internal partitioned data storage in a data lake is provided. The method includes: deploying a log partitioner cluster including a plurality of log partitioner service instances for storing the raw log data in a partitioned manner; associating one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, the associating including providing associated configuration metadata from a configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; fetching, via the log partitioner cluster, raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instances; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and placing fetched raw log data, via the log partitioner cluster, in the internal partitioned log storage in accordance with the instructions provided in associated configuration metadata.
Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the preceding background.
A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
The example cloud-based data repository system 100 provides for streaming-based data ingestion via a streaming-based data ingestion pipeline and storage-based data ingestion via a storage-based data ingestion pipeline. The streaming-based data ingestion pipeline may include a streaming data service 111 (such as Apache Kafka) to stream raw log data (e.g., from a log publisher service 112) into the internal raw log storage 102. Also, a third party security service 114 (such as a CrowdStrike service) may be used to store raw log data into the internal raw log storage 102. In either case, when raw log data is stored in the internal raw log storage 102, a messaging service (such as SQS) generates a messaging notification 103 (e.g., a SQS message or notification) that informs the log partitioner service 106 that data is available in the internal raw log storage 102 for ingestion. The log partitioner service 106 is configured to fetch the raw data from the internal raw log storage 102 based on information contained in the messaging notification 103 (e.g., SQS notification) and configuration metadata 107 received from a configuration service 110.
When raw log data is available for ingestion from external raw log storage (e.g., storage-based data ingestion via a data pipeline between the external raw log storage and the log partitioner service 106), a messaging service (such as SQS) generates a messaging notification 105 (e.g., a SQS notification) that informs the log partitioner service 106 that data is available in the external raw log storage 108 for ingestion. The log partitioner service 106 is configured to fetch the raw data from the external raw log data storage 108 based on information contained in the messaging notification 105 and configuration metadata 107 received from a configuration service 110.
The configuration metadata 107 instructs the log partitioner service 106 on where/when/how to onboard raw log data and place the raw log data in the internal partitioned log storage 104. In particular, the configuration metadata instructs the log partitioner service 106 on what to fetch from the external raw log storage 108 and how to format the data to store it in the partitioned log storage 104. Also, the configuration metadata 107 instructs the log partitioner service 106 on what to fetch from the internal raw log storage 102 and how to format the data to store it in the partitioned log storage 104.
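As a rough illustration of what such configuration metadata could carry (the field names below are assumptions for illustration, not taken from configuration service 110), a Python sketch of a per-bucket record might look like the following:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceBucketConfig:
    """Hypothetical configuration metadata for one source bucket."""
    source_bucket: str                 # bucket to fetch raw log data from
    source_prefix: str                 # prefix under which new objects arrive
    destination_root: str              # root of the internal partitioned log storage
    source_type: str                   # type of log data (e.g., an event source identifier)
    business_unit: str                 # BU that owns the log data
    environment: str                   # environment (e.g., cloud substrate) label
    time_key_source: str = "storage"   # "event" or "storage" time for the time key
    hash_groups: Optional[int] = None  # number of hash groups, if subgrouping is used

# Example record a configuration service might return for one source bucket.
example = SourceBucketConfig(
    source_bucket="external-raw-logs",
    source_prefix="incoming/",
    destination_root="s3://internal-partitioned-logs",
    source_type="edr_events",
    business_unit="bu-001",
    environment="prod",
)
```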
The example log partitioner service 106 is implemented using a controller comprising at least one processor and a computer-readable storage device or media encoded with programming instructions for configuring the controller. The processor may be any custom-made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), an auxiliary processor among several processors associated with the controller, a semiconductor-based microprocessor (in the form of a microchip or chip set), any combination thereof, or generally any device for executing instructions.
The computer readable storage device or media may include volatile and nonvolatile storage in read-only memory (ROM), random-access memory (RAM), and keep-alive memory (KAM), for example. KAM is a persistent or non-volatile memory that may be used to store various operating variables while the processor is powered down. The computer-readable storage device or media may be implemented using any of a number of known memory devices such as PROMs (programmable read-only memory), EPROMs (erasable PROM), EEPROMs (electrically erasable PROM), flash memory, or any other electric, magnetic, optical, or combination memory devices capable of storing data, some of which represent executable programming instructions, used by the controller.
In an example operating scenario, the example log partitioner service 106 attaches to a source log storage 102 or 108 (e.g., AWS S3 bucket) and listens for messaging notifications for incoming new data. For example, a third party security service 114 may drop a new batch of raw log data in a current source bucket in internal raw log storage 102. Then a messaging notification 103 (e.g., SQS notification) will be sent to the log partitioner service 106, which will handle the batch by downloading all *.gz files for the batch and outputting the log files under a defined prefix in the internal partitioned log storage 104.
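A minimal Python sketch of this scenario, assuming an S3-compatible store accessed through boto3 and a notification that has already been parsed into a bucket name and batch prefix (the service's real interfaces are not specified here), could look like this:

```python
import gzip
import io
import boto3

s3 = boto3.client("s3")

def handle_batch_notification(source_bucket: str, batch_prefix: str,
                              dest_bucket: str, dest_prefix: str) -> None:
    """Download all *.gz objects of a batch and re-upload them under a defined
    destination prefix (the actual partitioning logic is omitted for brevity)."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket, Prefix=batch_prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if not key.endswith(".gz"):
                continue
            body = s3.get_object(Bucket=source_bucket, Key=key)["Body"].read()
            # Decompress only to peek at the batch; a real partitioner would derive
            # a partition key from the log lines or the object metadata here.
            with gzip.GzipFile(fileobj=io.BytesIO(body)) as gz:
                _ = gz.read(1024)
            dest_key = f"{dest_prefix.rstrip('/')}/{key.split('/')[-1]}"
            s3.put_object(Bucket=dest_bucket, Key=dest_key, Body=body)
```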
The example log partitioner service 106 is implemented by bootstrapping one or more instances of the log partitioner service software. An instance of the log partitioner service 106 is bootstrapped with configuration metadata 107 stored in Configuration Service 110 for the source bucket (e.g., in internal raw log storage 102 or external raw log storage 108). This allows different log partitioner instances to handle different cloud storage elements (e.g., source buckets). This also allows the resources for implementing the log partitioner service 106 to be scaled up as needed. As used herein, a bucket is a container for storing objects.
Bootstrapping an instance of the log partitioner service 106 involves associating one or more source buckets (e.g., S3 buckets) with a deployed log partitioner service. The association of a source bucket is accomplished by a piece of metadata that is used to initiate a log partitioner. A messaging notification is enabled for every source bucket and triggers a notification to the log partitioner service when a new log object is uploaded and present for consumption. In an example, SNS (Simple Notification Service) is a managed service that provides message delivery from publishers to subscribers (also known as producers and consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Clients can subscribe to the SNS topic and receive published messages using a supported endpoint type, such as SQS.
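A simplified sketch of the bootstrapping step, with hypothetical class and function names, might associate source buckets with a deployed instance by reading each bucket's metadata from the configuration service:

```python
from typing import Callable, Dict, List

# Hypothetical shapes; the configuration service API is not specified in this description.
StorageSpec = Dict[str, object]          # metadata describing one source bucket
ReadSpecFn = Callable[[str], StorageSpec]

class LogPartitionerInstance:
    """Sketch of bootstrapping: one instance is initialized with the storage
    specifications of the source buckets it is associated with."""

    def __init__(self, instance_id: str) -> None:
        self.instance_id = instance_id
        self.specs: Dict[str, StorageSpec] = {}

    def bootstrap(self, source_buckets: List[str], read_spec: ReadSpecFn) -> None:
        # Pull per-bucket configuration metadata from the configuration service and
        # remember it; notifications for these buckets will then be handled here.
        for bucket in source_buckets:
            self.specs[bucket] = read_spec(bucket)

    def handles(self, bucket: str) -> bool:
        return bucket in self.specs

# Usage: associate two source buckets with one deployed instance.
instance = LogPartitionerInstance("partitioner-0")
instance.bootstrap(["raw-logs-a", "raw-logs-b"],
                   read_spec=lambda b: {"source_bucket": b, "destination_root": "partitioned/"})
```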
The configuration metadata is dynamic and stored with the configuration service 110. The configuration service 110 provides a read API for log partitioner service instances to use to retrieve corresponding metadata. The configuration service 110 also provides other APIs, like Create/Update/Delete, that provide a mechanism to modify source bucket association information (also known as a storage specification) to dynamically control the behavior of log partitioner service instances. Additionally, the configuration service 110 provides a mechanism for recording metadata on where/when/how raw log data is onboarded and placed in a destination location (e.g., partitioned log storage 104).
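The following in-memory sketch, with assumed method names, illustrates the kind of read and Create/Update/Delete APIs the configuration service 110 might expose for storage specifications:

```python
from typing import Dict, Optional

class ConfigurationService:
    """Minimal in-memory sketch of a configuration service for storage
    specifications (method and field names are assumptions)."""

    def __init__(self) -> None:
        self._specs: Dict[str, dict] = {}   # keyed by source bucket name

    def create(self, bucket: str, spec: dict) -> None:
        self._specs[bucket] = dict(spec)

    def read(self, bucket: str) -> Optional[dict]:
        # Read API used by log partitioner instances to retrieve their metadata.
        return self._specs.get(bucket)

    def update(self, bucket: str, changes: dict) -> None:
        # Modifying the association dynamically controls partitioner behavior.
        self._specs.setdefault(bucket, {}).update(changes)

    def delete(self, bucket: str) -> None:
        self._specs.pop(bucket, None)

svc = ConfigurationService()
svc.create("raw-logs-a", {"destination_root": "partitioned/", "source_type": "osquery"})
svc.update("raw-logs-a", {"hash_groups": 8})
```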
The example cloud-based data repository system 100 includes two modes in which log partitioner service instances are associated with raw log storage. In the first mode, isolated mode, each log partitioner service instance attaches to one or more unique raw log storage buckets and partitions retrieved log data in a distinct manner. In the second mode, shared mode, multiple log partitioner instances are peers and collaborate to handle log partitioning work for a plurality of shared log storage buckets.
With reference to
The first log partitioner service instance 204 receives messaging notifications from the messaging service 213, fetches data from the internal raw log storage element 212 in response to the received messaging notifications, and executes a partitioning task to store the fetched data in internal partitioned log data storage (e.g., internal partitioned raw log storage 104). The second log partitioner service instance 206 receives messaging notifications from the messaging services 215, 217, fetches data from the external raw log storage elements 214, 216 in response to the received messaging notifications, and executes a partitioning task to store the fetched data in the internal partitioned log data storage (e.g., internal partitioned raw log storage 104).
In this example, the single source log partitioner (first log partitioner service instance 204) is reserved to process dedicated log data and it utilizes isolated computation resources to execute the partitioning work. The multi-source log partitioner (second log partitioner service instance 206) provides a cost-effective solution for performing partitioning work when a dedicated instance is not needed for each log data source. The multi-source log partitioner utilizes multiplexing to perform partitioning work for multiple different log data sources. The multi-source log partitioner can be useful for processing multiple low-throughput and low-volume log data sources in parallel, thereby decreasing the cost-to-serve (CTS).
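A rough sketch of the multiplexing idea (the polling and handler interfaces are assumptions) shows one worker servicing several low-volume notification sources in round-robin fashion:

```python
from typing import Callable, List, Optional

Notification = dict
PollFn = Callable[[], Optional[Notification]]   # returns one pending notification, or None

def multiplex(sources: List[PollFn], handle: Callable[[Notification], None],
              max_rounds: int = 100) -> None:
    """Round-robin over several low-volume notification sources with one worker,
    so a single multi-source partitioner instance can serve them all."""
    for _ in range(max_rounds):
        idle = True
        for poll in sources:
            note = poll()
            if note is not None:
                handle(note)
                idle = False
        if idle:
            break  # nothing pending anywhere; a real service would sleep or long-poll
```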
With reference to
The log partitioner service (e.g., example log partitioner service 106, log partitioner instances 204, 206, 208, 210, and log partitioner instances 306, 308, 310, 312) stores log data in the destination storage (e.g., internal partitioned raw log storage 104) based on a partition key. The example partition key is devised based on multiple dimensions such as business unit (BU), source type, time, environment, etc. The partition key allows the log partitioner service to place log data on the correct path in the destination storage. Laying out data under the right path allows downstream services to pick up the data and apply corresponding logic to process it further. An example partition key for the log partitioner service may include the dimensions listed below:
The example DestinationStorageRoot dimension identifies the destination location at which the log data is to be stored. The example BU dimension identifies the business unit that owns the log data. The example SourceType dimension differentiates between types of log data, including event types from log data sources, such as CrowdStrike, osquery, etc., that have hundreds of event types.
The example Environment dimension describes where a BU collects a source type. A BU could collect the same source type across multiple environments. To differentiate different environments (e.g., cloud substrate), the environment dimension can be used in the partition key.
The example HashGroup dimension allows partitioned log data to be searched within smaller-sized storage units. The HashGroup dimension can be used to cause log data to be stored in smaller-sized storage units. For cloud storage, like S3, putting all hourly log data under the same prefix may hit limits that slow down downstream services when they consume the data by running an expensive distributed query. To keep the amount of data under each prefix reasonable, the HashGroup dimension can be used to subgroup log data and improve the consumption experience. The HashGroup dimension can be made optional for logs that do not need it. The HashGroup dimension includes hash values computed via a hash function to partition a set of log files. The hash function can be defined per source type according to business requirements.
The example time key dimension may be represented as YYYY/MM/DD/HH and provides up to hourly granularity for grouping log data. There are a number of time choices that can be used for the time key dimension. The lifecycle of raw log data includes (i) the event time—the moment when the log data is created and recorded in log lines, (ii) the storage time—the moment when log data is uploaded to cloud storage and persisted, (iii) the ingestion time—the moment when the log partitioner service ingests the log data; and (iv) the processing time—the moment when downstream services consume the partitioned log data for post processing. At each phase, there is a timestamp that accompanies the log data.
The example log partitioner service uses the event time for the time key dimension when retrieving the event time is computationally cheap and there is no significant performance impact. Unless the raw log storage has a well-partitioned layout with a timestamp, retrieving the event time can be computationally expensive because of the computation spent on data scanning and inspection. If the event time is not accessible or is computationally expensive to retrieve, the example log partitioner service may use the storage time for the time key dimension because it can serve as a rough estimate of the event time.
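A hypothetical sketch of building a destination path from the partition key dimensions, including HashGroup subgrouping and the fallback from event time to storage time (the hash function and path layout below are assumptions), follows:

```python
import hashlib
from datetime import datetime, timezone
from typing import Optional

def hash_group(log_file_name: str, groups: int) -> str:
    """Assign a log file to one of `groups` hash groups (the hash function is an
    assumption; the description notes it can be defined per source type)."""
    digest = hashlib.sha256(log_file_name.encode("utf-8")).hexdigest()
    return f"hg={int(digest, 16) % groups:02d}"

def partition_path(destination_root: str, bu: str, source_type: str, environment: str,
                   log_file_name: str, event_time: Optional[datetime],
                   storage_time: datetime, groups: Optional[int] = None) -> str:
    """Build a destination path from the partition key dimensions; fall back to
    storage time when event time is unavailable or too expensive to extract."""
    t = event_time or storage_time
    parts = [destination_root.rstrip("/"), bu, source_type, environment]
    if groups:
        parts.append(hash_group(log_file_name, groups))
    parts.append(t.strftime("%Y/%m/%d/%H"))
    return "/".join(parts)

# Example: no event time available, so the storage time supplies the time key.
print(partition_path("s3://partitioned-logs", "bu-001", "edr_events", "prod",
                     "batch-0001.gz", event_time=None,
                     storage_time=datetime(2023, 3, 21, 14, tzinfo=timezone.utc),
                     groups=8))
```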
The workload on the example log partitioner service is affected by the raw data layout and organization in the source log data storage. When the raw log data in a source bucket follows an expected partition key or uses a similar partition approach, the log partitioner service will have little work to do other than to pass the log data into a predefined destination bucket at a specific prefix/location. In contrast, when the source data is not partitioned or uses a special classification, the log partitioner service will have more complex work to do.
The log partitioner service can partition at least three types of raw log data—well-partitioned, semi-partitioned, and non-partitioned log data, which are illustrated in Table 1 below.
To process semi-partitioned and non-partitioned data, the log partitioner service 106 is configured to extract partition keys from the log entries themselves. The raw log data can have different formats and may or may not contain a schema, which can make information extraction challenging. The source bucket's partition degree and embedded schema are contained in the storage specification. The specification is a piece of metadata stored in the configuration service 110 that is used to bootstrap a log partitioner instance and to instruct the log partitioner instance on how to process data from the source bucket with the expected behavior. An example storage specification is described below.
The storage specification describes the log data source and destination storage locations. For the source storage, there is a series of partition rules.
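As a purely hypothetical illustration (all field names are assumptions rather than the specification format actually used by the configuration service 110), such a storage specification might be expressed as:

```python
# Hypothetical storage specification (all field names are assumptions).
storage_spec = {
    "source": {
        "bucket": "external-raw-logs",
        "prefix": "vendor/incoming/",
        "partition_degree": "semi-partitioned",   # well- / semi- / non-partitioned
        "schema": "jsonl",                        # embedded schema, if any
        "partition_rules": [
            {"dimension": "SourceType", "from": "object_key", "pattern": r"type=([a-z_]+)/"},
            {"dimension": "TimeKey", "from": "log_field", "field": "timestamp"},
        ],
    },
    "destination": {
        "root": "s3://internal-partitioned-logs",
        "path_template": "{BU}/{SourceType}/{Environment}/{HashGroup}/{YYYY}/{MM}/{DD}/{HH}",
        "hash_groups": 8,
    },
    "business_unit": "bu-001",
    "environment": "prod",
}
```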
In addition, the example log partitioner service 106 archives processed or failed messaging notifications 408 into internal storage 410 (e.g., S3, GCS, etc.). The archives can be replayed for multiple purposes, such as: (i) backfilling—the failed raw log data can be re-imported by a new log partitioner instance so that data ingestion is complete; (ii) regrouping—the data can be re-imported to a new destination storage according to an updated partition strategy to satisfy detection and investigation needs; and (iii) auditing—imported data can be monitored up front so that the amount of log data that is ingested can be transparently understood.
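A minimal sketch of archiving and replaying messaging notifications, with an in-memory stand-in for internal storage 410 and assumed method names, might look like the following:

```python
import json
import time
from typing import Callable, List

class NotificationArchive:
    """Sketch of archiving processed/failed notifications so they can be replayed
    later for backfilling, regrouping, or auditing (storage backend is assumed)."""

    def __init__(self) -> None:
        self._records: List[dict] = []

    def archive(self, notification: dict, status: str) -> None:
        self._records.append({
            "archived_at": time.time(),
            "status": status,                 # e.g. "processed" or "failed"
            "notification": json.dumps(notification),
        })

    def replay(self, handle: Callable[[dict], None], status: str = "failed") -> int:
        """Re-import archived notifications, e.g. to backfill failed raw log data."""
        replayed = 0
        for record in self._records:
            if record["status"] == status:
                handle(json.loads(record["notification"]))
                replayed += 1
        return replayed
```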
The example process 500 includes deploying a log partitioner cluster comprising a plurality of log partitioner service instances for storing the raw log data in a partitioned manner (operation 502). The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in an isolated mode wherein each of the plurality of log partitioner service instances in the log partitioner cluster is associated with a source data bucket that is different from any source data bucket associated with another of the log partitioner service instances in the log partitioner cluster. At least one of the log partitioner service instances may comprise a single source log partitioner. At least one of the log partitioner service instances may comprise a multi-source log partitioner.
The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in a shared mode to work collaboratively to handle log partitioning work for a plurality of shared raw log storage elements, and deploying a log partitioner cluster may comprise: deploying a log partitioner routing service; receiving, via the log partitioner routing service, a notification that data is available in a source bucket; selecting, via the log partitioner routing service, one of the log partitioner service instances in the log partitioner cluster to service the notification; dispatching, via the log partitioner routing service, the notification to the selected log partitioner service instance; and fetching data from the appropriate source data bucket and executing a partitioning task to store the fetched data in the internal partitioned log storage, by the selected log partitioner instance after receiving the notification.
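A simplified sketch of shared-mode dispatching, assuming a least-loaded selection policy (the actual selection strategy is not specified here), could look like this:

```python
from typing import Callable, Dict

Notification = dict
Handler = Callable[[Notification], None]

class LogPartitionerRoutingService:
    """Sketch of shared-mode routing: peer instances register as workers, and each
    incoming notification is dispatched to one of them."""

    def __init__(self) -> None:
        self._workers: Dict[str, Handler] = {}
        self._in_flight: Dict[str, int] = {}

    def register(self, worker_id: str, handler: Handler) -> None:
        self._workers[worker_id] = handler
        self._in_flight[worker_id] = 0

    def dispatch(self, notification: Notification) -> str:
        if not self._workers:
            raise RuntimeError("no log partitioner instances registered")
        # Select the least-loaded log partitioner instance and hand it the notification;
        # the selected instance then fetches data and executes the partitioning task.
        worker_id = min(self._in_flight, key=self._in_flight.get)
        self._in_flight[worker_id] += 1
        try:
            self._workers[worker_id](notification)
        finally:
            self._in_flight[worker_id] -= 1
        return worker_id
```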
The example process 500 includes associating one or more of the source data buckets to each of a plurality of deployed log partitioner service instances (operation 504). The associating comprises providing associated configuration metadata from a configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance. The associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths.
The example process 500 includes fetching, via the log partitioner cluster, raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instances (operation 506) and placing fetched raw log data, via the log partitioner cluster, in the internal partitioned log storage in accordance with the instructions provided in associated configuration metadata (operation 508).
The placing fetched raw log data in the internal partitioned log storage may comprise placing, via the log partitioner cluster, the fetched raw log data in the internal partitioned log storage based on a partition key having a plurality of dimensions. The dimensions may comprise a destination storage root dimension, a source type dimension, a time key dimension, an environment dimension, and a hash group dimension. The time key dimension may be based on event time. The time key dimension may be based on storage time.
In one embodiment, a cloud-based data repository system is provided. The data repository system comprises internal partitioned data storage for storing multiple petabytes of log data; a data ingestion pipeline for use in ingesting multiple gigabytes of raw log data per second for storage in the internal partitioned data storage; a configuration service for providing configuration metadata regarding source data buckets from which the raw log data are retrieved; and a log partitioner service comprising a controller. The controller is configured to: deploy (e.g., instantiate) a log partitioner cluster comprising a plurality of log partitioner service instances for storing the raw log data in a partitioned manner for improved defensive security; and associate one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, wherein to associate a log partitioner service instance the controller is configured to provide associated configuration metadata from the configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; wherein each log partitioner service instance is configured to fetch raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instance; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and wherein each log partitioner service instance is configured to place fetched raw log data in the internal partitioned log storage in accordance with the instructions provided in its associated configuration metadata.
These aspects and other embodiments include one or more of the following features. The data repository system may comprise a data lake. The data ingestion pipeline may comprise storage-based ingestion from source data buckets in external raw log storage. The data ingestion pipeline may comprise streaming-based ingestion into one or more source data buckets in internal raw log storage using a real-time streaming data pipeline (e.g., Kafka). The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in an isolated mode wherein each of the plurality of log partitioner service instances in the log partitioner cluster is associated with a source data bucket that is different from any source data bucket associated with another of the log partitioner service instances in the log partitioner cluster. At least one of the log partitioner service instances may comprise a single source log partitioner. At least one of the log partitioner service instances may comprise a multi-source log partitioner. The controller may be further configured to deploy a log partitioner routing service; the log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in a shared mode to work collaboratively to handle log partitioning work for a plurality of shared raw log storage elements; the log partitioner routing service may be configured to receive a notification that data is available in a source bucket, select one of the log partitioner service instances in the log partitioner cluster to service the notification, and dispatch the notification to the selected log partitioner service instance; and when a selected log partitioner instance receives a notification, the selected log partitioner instance may be configured to fetch data from the appropriate source data bucket and execute a partitioning task to store the fetched data in the internal partitioned log storage. The cloud-based data repository system may further comprise internal message service storage and the log partitioner service is further configured to archive processed or failed messaging notifications into the internal message service storage. The log partitioner service may be configured to place the fetched raw log data in the internal partitioned log storage based on a partition key having a plurality of dimensions, wherein the dimensions comprise a destination storage root dimension, a source type dimension, a time key dimension, an environment dimension, and a hash group dimension. The time key dimension may be based on event time. The time key dimension may be based on storage time. Associated configuration metadata may comprise source data bucket specific values for a partition key for fetched raw log data from a specific source data bucket.
In another embodiment, a processor-implemented method for storing multiple petabytes of raw log data from cloud-based source data buckets into internal partitioned data storage in a data lake is provided. The method comprises: deploying a log partitioner cluster comprising a plurality of log partitioner service instances for storing the raw log data in a partitioned manner; associating one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, the associating comprising providing associated configuration metadata from a configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; fetching, via the log partitioner cluster, raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instances; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and placing fetched raw log data, via the log partitioner cluster, in the internal partitioned log storage in accordance with the instructions provided in associated configuration metadata.
These aspects and other embodiments may include one or more of the following features. The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in an isolated mode wherein each of the plurality of log partitioner service instances in the log partitioner cluster is associated with a source data bucket that is different from any source data bucket associated with another of the log partitioner service instances in the log partitioner cluster. At least one of the log partitioner service instances may comprise a single source log partitioner. At least one of the log partitioner service instances may comprise a multi-source log partitioner. The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in a shared mode to work collaboratively to handle log partitioning work for a plurality of shared raw log storage elements, wherein the method may further comprise: deploying a log partitioner routing service; receiving, via the log partitioner routing service, a notification that data is available in a source bucket; selecting, via the log partitioner routing service, one of the log partitioner service instances in the log partitioner cluster to service the notification; dispatching, via the log partitioner routing service, the notification to the selected log partitioner service instance; and fetching data from the appropriate source data bucket and executing a partitioning task to store the fetched data in the internal partitioned log storage, by the selected log partitioner instance after receiving the notification. The method may further comprise placing, via the log partitioner cluster, the fetched raw log data in the internal partitioned log storage based on a partition key having a plurality of dimensions, wherein the dimensions comprise a destination storage root dimension, a source type dimension, a time key dimension, an environment dimension, and a hash group dimension.
In another embodiment, a non-transitory computer readable medium encoded with programming instructions configurable to cause a processor to perform a method for storing multiple petabytes of raw log data from cloud-based source data buckets into internal partitioned data storage in a data lake is provided. The method comprises: deploying a log partitioner cluster comprising a plurality of log partitioner service instances for storing the raw log data in a partitioned manner; associating one or more of the source data buckets to each of a plurality of deployed log partitioner service instances, the associating comprising providing associated configuration metadata from a configuration service to a deployed log partitioner service instance to initiate the log partitioner service instance; fetching, via the log partitioner cluster, raw log data from associated source data buckets based on the associated configuration metadata provided to the log partitioner service instances; wherein the associated configuration metadata provides instructions for use by a log partitioner service instance to onboard raw log data and place log data to destination storage on defined paths; and placing fetched raw log data, via the log partitioner cluster, in the internal partitioned log storage in accordance with the instructions provided in associated configuration metadata.
These aspects and other embodiments may include one or more of the following features. The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in an isolated mode wherein each of the plurality of log partitioner service instances in the log partitioner cluster is associated with a source data bucket that is different from any source data bucket associated with another of the log partitioner service instances in the log partitioner cluster. At least one of the log partitioner service instances may comprise a single source log partitioner. At least one of the log partitioner service instances may comprise a multi-source log partitioner. The log partitioner cluster may comprise a plurality of log partitioner service instances that are bootstrapped in a shared mode to work collaboratively to handle log partitioning work for a plurality of shared raw log storage elements, wherein the method may further comprise: deploying a log partitioner routing service; receiving, via the log partitioner routing service, a notification that data is available in a source bucket; selecting, via the log partitioner routing service, one of the log partitioner service instances in the log partitioner cluster to service the notification; dispatching, via the log partitioner routing service, the notification to the selected log partitioner service instance; and fetching data from the appropriate source data bucket and executing a partitioning task to store the fetched data in the internal partitioned log storage, by the selected log partitioner instance after receiving the notification. The method may further comprise placing, via the log partitioner cluster, the fetched raw log data in the internal partitioned log storage based on a partition key having a plurality of dimensions, wherein the dimensions comprise a destination storage root dimension, a source type dimension, a time key dimension, an environment dimension, and a hash group dimension.
The foregoing description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the technical field, background, or the detailed description. As used herein, the word “exemplary” or “example” means “serving as an example, instance, or illustration.” Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations, and the exemplary embodiments described herein are not intended to limit the scope or applicability of the subject matter in any way.
For the sake of brevity, conventional techniques related to object models, web pages, cloud computing, on-demand applications, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. In addition, those skilled in the art will appreciate that embodiments may be practiced in conjunction with any number of system and/or network architectures, data transmission protocols, and device configurations, and that the system described herein is merely one suitable example. Furthermore, certain terminology may be used herein for the purpose of reference only, and thus is not intended to be limiting. For example, the terms “first,” “second” and other such numerical terms do not imply a sequence or order unless clearly indicated by the context.
Embodiments of the subject matter may be described herein in terms of functional and/or logical block components, and with reference to symbolic representations of operations, processing tasks, and functions that may be performed by various computing components or devices. Such operations, tasks, and functions are sometimes referred to as being computer-executed, computerized, software-implemented, or computer-implemented. In practice, one or more processing systems or devices can carry out the described operations, tasks, and functions by manipulating electrical signals representing data bits at accessible memory locations, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to the data bits. It should be appreciated that the various block components shown in the figures may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of a system or a component may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. When implemented in software or firmware, various elements of the systems described herein are essentially the code segments or instructions that perform the various tasks. The program or code segments can be stored in a processor-readable medium or transmitted by a computer data signal embodied in a carrier wave over a transmission medium or communication path. The “processor-readable medium” or “machine-readable medium” may include any non-transitory medium that can store or transfer information. Examples of the processor-readable medium include an electronic circuit, a semiconductor memory device, a ROM, a flash memory, an erasable ROM (EROM), a floppy diskette, a CD-ROM, an optical disk, a hard disk, a fiber optic medium, a radio frequency (RF) link, or the like. The computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic paths, or RF links. The code segments may be downloaded via computer networks such as the Internet, an intranet, a LAN, or the like. In this regard, the subject matter described herein can be implemented in the context of any computer-implemented system and/or in connection with two or more separate and distinct computer-implemented systems that cooperate and communicate with one another.
As used herein, the term “module” refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: application specific integrated circuit (ASIC), a field-programmable gate-array (FPGA), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
While at least one exemplary embodiment has been presented, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application. Accordingly, details of the exemplary embodiments or other limitations described above should not be read into the claims absent a clear intention to the contrary.