A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document and/or the patent disclosure as it appears in the United States Patent and Trademark Office patent file and/or records, but otherwise reserves all copyrights whatsoever.
A distributed data storage system comprises features for integration with application orchestrators such as Kubernetes, and includes a proprietary Container Storage Interface (CSI) driver. A number of custom resources are designed and defined to be consumed natively by the application orchestrator environment, e.g., Kubernetes and/or containerized applications. Features include setting snapshot scheduling and retention policies, and a “container data mover” that replicates data from a source to a distinct destination distributed data storage system. In the distributed data storage system, data is stored on virtual disks that are partitioned into distinct portions called storage containers. The storage containers may be replicated on a plurality of storage service nodes across the storage system. The illustrative container data mover enables data in these storage containers to migrate efficiently between distinct distributed data storage systems. The migration may be between on-premises and/or public cloud environments, without limitation. The migration may be configured one-to-one, one-to-many, unidirectional, and/or bi-directional. Metadata-based snapshots and metadata-based changed block tracking identify payload data that needs to move from source to destination within the application orchestrator frameworks at both ends. Payload data migrates from source to destination using different techniques than those used for migrating metadata, e.g., kernel-to-kernel for copying payload data versus ordinary writes for metadata. An illustrative barrier logic ensures that the migration follows a controlled progression of operations. Thus, the container data mover feature represents a technological improvement that offers streamlined migration between storage systems.
To enhance the reader's understanding of the present disclosure, the term “metadata” is distinguished from the term “data”, even though both data and metadata comprise information stored on the illustrative distributed data storage system. Accordingly, “data” will refer herein to “payload” data, which is typically generated by an application or other data source that uses the distributed data storage system as a data storage resource, e.g., generated by a containerized application orchestrated by Kubernetes or another application orchestrator. Thus, the terms “data”, “payload”, and “payload data” will be used interchangeably herein. On the other hand, “metadata” will refer to other information in the distributed data storage system, e.g., information about the payload data, about the components hosting the payload data, about other metadata-hosting components, about other components of the distributed data storage system, and also meta-metadata. Finally, the invention is not limited to embodiments that operate within a Kubernetes framework, but most of the examples given herein are Kubernetes-based or Kubernetes-compatible in order to enhance the reader's understanding and appreciation of the present disclosure.
Detailed descriptions and examples of systems and methods according to one or more illustrative embodiments of the present invention may be found in the section entitled CONTAINER DATA MOVER FOR MIGRATING DATA BETWEEN DISTRIBUTED DATA STORAGE SYSTEMS INTEGRATED WITH APPLICATION ORCHESTRATORS, as well as in the section entitled Example Embodiments, and also in
Various embodiments described herein are intimately tied to, enabled by, and would not exist except for, computer technology. For example, data transfers from source to destination storage clusters described herein in reference to various embodiments cannot reasonably be performed by humans alone, without the computer technology upon which they are implemented.
Generally, the systems and associated components described herein may be compatible with and/or provide at least some of the functionality of the systems and corresponding components described in one or more of the following U.S. patents and patent applications assigned to Commvault Systems, Inc., each of which is hereby incorporated by reference in its entirety herein.
Distributed Data Storage System
An example embodiment of the disclosed distributed data storage system is the Hedvig Distributed Storage Platform now available from Commvault Systems, Inc. of Tinton Falls, New Jersey, USA, and thus some of the terminology herein originated with the Hedvig product line. The illustrative distributed data storage system comprises a plurality of storage service nodes that form one or more storage clusters. Data reads and writes originating from an application on an application host computing device are intercepted by a storage proxy, which is co-resident with the originating application. The storage proxy performs some pre-processing and analysis functions before making communicative contact with the storage cluster. The system ensures strong consistency of data and metadata written to the storage service nodes.
Terminology for the Distributed Data Storage System
Data and Metadata. To enhance the reader's understanding of the present disclosure, the term “metadata” is distinguished from the term “data” herein, even though both data and metadata comprise information stored on the illustrative distributed data storage system. Accordingly, “data” will refer to “payload” data, which is typically generated by an application or other data source that uses the distributed data storage system for data storage. Thus, the terms “data”, “payload”, and “payload data” will be used interchangeably herein. On the other hand, “metadata” will refer to other information in the distributed data storage system, e.g., information about the payload data, about the components hosting the payload data, about metadata-hosting components, about other components of the distributed data storage system, and also information about the metadata, i.e., “meta-metadata.”
Storage Service, e.g., Hedvig Storage Service. The storage service is a software component that installs on commodity x86 or ARM servers to transform existing server and storage assets into a fully-featured elastic storage cluster. The storage service may deploy to an on-premises infrastructure, to hosted clouds, and/or to public cloud computing environments to create a single system that is implicitly hybrid.
Storage Service Node (or storage node), e.g., Hedvig Storage Server (HSS), comprises both computing and storage resources that collectively provide storage service. The system's storage service nodes collectively form one or more storage clusters. Multiple groups of storage service nodes may be clustered in geographically and/or logically disparate groups, e.g., different cloud computing environments, different data centers, different usage or purpose of a storage cluster, etc., without limitation, and thus the present disclosure may refer to distinct storage clusters in that context. One or more of the following storage service subsystems of the storage service may be instantiated at and may operate on a storage service node: (i) distributed fault-tolerant metadata subsystem providing metadata service, e.g., “Hedvig Pages”; (ii) distributed fault-tolerant data subsystem (or data storage subsystem) providing payload data storage, e.g., “Hedvig HBlock”; and (iii) distributed fault-tolerant pod subsystem for generating and maintaining certain system-level information, e.g., “Hedvig HPod.” The system stores payload data on certain dedicated storage resources managed by the data storage subsystem, and stores metadata on other dedicated storage resources managed by the metadata subsystem. Thus, another way to distinguish payload data from metadata in the illustrative system is that payload data is stored in and maintained by the data storage subsystem and metadata is stored in and maintained by the metadata subsystem. The pod subsystem, the metadata subsystem, and the data storage subsystem are all partitioned and replicated across various storage service nodes. These subsystems operate as independent services, they need not be co-located on the same storage service node, and they may communicate with a subsystem on another storage service node as needed.
Replica. The distributed data storage system replicates data and metadata across multiple storage service nodes. A “replica” or “replica node” is a storage service node that hosts a replicated copy of data and/or metadata that is also stored on other replica nodes. Illustratively, metadata uses a replication factor of 3, though the invention is not so limited. Thus, with a replication factor of 3 (“RF3”), each portion of metadata is replicated on three distinct metadata nodes across the storage cluster.
Virtual Disk (“vdisk”) and Storage Containers. The virtual disk is the unit of storage made visible by system 100 to applications and/or application nodes. Every virtual disk provisioned on the system is partitioned into fixed size chunks, each of which is called a storage container. Different replicas are assigned for each storage container. Since replica assignment occurs at the storage container level—not at a virtual disk level—the data for a virtual disk is distributed across a plurality of storage service nodes, thus allowing increased parallelism during input/output (I/O) and/or disk rebuilds. Thus, virtual disks are distributed and fault-tolerant.
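The following is a minimal sketch, in Python, of how a virtual disk offset maps to a storage container and from there to that container's replica nodes. The 16 GB container size and the node names are hypothetical placeholders chosen for illustration, not values used by the illustrative system.

```python
# Hedged sketch only: the container size and replica assignments below are hypothetical.
CONTAINER_SIZE = 16 * 1024**3  # assumed fixed container size, in bytes

# Hypothetical per-container replica assignment (container index -> storage service nodes).
replica_map = {
    0: ["storage-node-1", "storage-node-4", "storage-node-8"],
    1: ["storage-node-2", "storage-node-5", "storage-node-9"],
}

def replicas_for_offset(offset: int) -> list[str]:
    """Return the storage service nodes hosting the container that holds `offset`."""
    container_index = offset // CONTAINER_SIZE  # assignment is per container, not per vdisk
    return replica_map[container_index]

# A write at 20 GB lands in container 1 and therefore on a different replica set than a
# write at 1 GB (container 0), which is what allows parallel I/O and parallel rebuilds.
print(replicas_for_offset(1 * 1024**3))   # -> container 0 replicas
print(replicas_for_offset(20 * 1024**3))  # -> container 1 replicas
```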
Storage Pools. Storage pools are logical groupings of physical disks/drives in a storage service node and are configured as the protection unit for disk/drive failures and rebuilds. Within a replica, one or more storage containers are assigned to a storage pool. A typical storage service node will host two to four storage pools.
Metadata Node. An instance of the metadata subsystem executing on a storage service node is referred to as a metadata node that provides “metadata service.” The metadata subsystem executing on a storage service node stores metadata at the storage service node. The metadata node communicates with one or more other metadata nodes to provide a system-wide metadata service. The metadata subsystem also communicates with pod and/or data storage subsystems at the same or other storage service nodes. Some metadata nodes are designated owners of certain virtual disks whereas others are replicas but not owners. Owner nodes are invested with certain functionality for managing the owned virtual disk.
Metadata Node Identifier or Storage Identifier (SID) is a unique identifier of the metadata service instance on a storage service node, i.e., the unique system-wide identifier of a metadata node.
Storage Proxy. Each storage proxy is a lightweight software component that deploys at the application tier, i.e., on application servers or hosts. A storage proxy may be implemented as a virtual machine (VM) or as a software container (e.g., Docker), or may run on bare metal to provide storage access to any physical host or VM in the application tier. As noted, the storage proxy intercepts reads and writes issued by applications and directs input/output (I/O) requests to the relevant storage service nodes.
Erasure Coding (EC). In some embodiments, the illustrative distributed data storage system employs erasure coding rather than or in addition to replication. EC is one of the administrable attributes for a virtual disk. The default EC policy is (4,2), but (8,2) and (8,4) are also supported if a sufficient number of storage service nodes are available. The invention is not limited to a particular EC policy unless otherwise noted herein.
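As a quick illustration of what these EC policies imply for raw capacity, the following sketch computes the storage overhead of each (data, parity) policy named above and the minimum number of storage service nodes needed to place each fragment on a distinct node. This is generic erasure-coding arithmetic, not a statement about the system's internal placement rules.

```python
# Generic erasure-coding arithmetic for the (data, parity) policies mentioned above.
def ec_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw bytes stored per byte of payload under an EC (data, parity) policy."""
    return (data_fragments + parity_fragments) / data_fragments

for data, parity in [(4, 2), (8, 2), (8, 4)]:
    print(f"EC ({data},{parity}): {ec_overhead(data, parity):.2f}x raw overhead, "
          f"needs at least {data + parity} storage service nodes")
# Compare with the 3.0x raw overhead of plain RF3 replication.
```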
Container Data Mover for Migrating Data Between Distributed Data Storage Systems Integrated with Application Orchestrators
The illustrative distributed data storage system comprises features for integration with application orchestrators (a/k/a “container orchestrators”) such as Kubernetes and Kubernetes-based technologies, and includes an enhanced and proprietary Container Storage Interface (CSI) driver. Payload data and corresponding metadata move efficiently from source to destination within application orchestrator frameworks (e.g., Kubernetes frameworks) at both ends. Application orchestrators such as Kubernetes enable users to build cloud-independent applications. To achieve cloud independence, it is necessary to have cloud-agnostic storage resources to increase availability not only within a single site but also across different physical locations, including the cloud. The illustrative distributed data storage system, using one or more of the capabilities described herein, provides such a cloud-agnostic storage system.
Software Container Ecosystem.
The illustrative distributed data storage system provides native integration with application orchestrators such as Kubernetes and Kubernetes-based technologies, and enables: simplifying workflows via a proprietary Container Storage Interface (CSI); facilitating data management with built-in data protection and cloud data mobility; and securing the data storage environment through automatic snapshotting of persistent volumes. Software containers (or “containerization”) are well known in the art, and can be defined as operating system (OS)-level virtualization in which an operating system kernel allows the existence of multiple isolated user space instances. Kubernetes has emerged as a popular standard for container orchestration, and is well known in the art. See, e.g., http://kubernetes.io/.
Storage Container Support.
There is a need for infrastructure that integrates across all types of application orchestrator deployments (e.g., Kubernetes), including cloud-managed and/or self-managed deployments, and delivers seamless migration, data protection, availability, and disaster recovery for the entirety of these containerized environments. Some of the key technological improvements enabled by the illustrative distributed data storage system include without limitation: integrated storage container snapshots that provide point in time protection for stateful container workloads; storage container migration that delivers an efficient and intelligent data movement of unique changes across distinct storage clusters; and integrated policy automation that enables granular control over the frequency of snapshot and migration operations and the targeted environment to which the data is intelligently sent.
Persistent Volumes for Containers.
An enhanced proprietary container storage interface (CSI) driver 201 (see
Policy Driven Data Placement.
As organizations migrate stateful applications to container ecosystems, it is necessary to effectively manage data owned by different groups within the organizations while adhering to security and compliance policies. Each group might have its preferred choice of container ecosystem as well as a preferred location (on-prem and/or in the cloud) for persistent application data. The self-service, API-driven programmable infrastructure of some application orchestrators such as Kubernetes allows for customization. The illustrative distributed data storage system enables users to specify where they want their persistent application data to reside. By providing data placement as a policy, different groups within an organization can continue to use their existing workflows.
Snapshots and Clones.
Snapshots and clones generated by the illustrative distributed data storage system are seamlessly integrated into application orchestrators through the illustrative proprietary CSI driver. When data is spread across multiple disparate sites, continuous data protection can pose a significant challenge without a uniform data protection scheme. With a single storage fabric that spans multiple sites, data placement policies that are declarative in nature coupled with built-in snapshot capabilities, the illustrative distributed data storage system provides a uniform location-transparent scheme for protecting data.
Continuous Data Protection Using Snapshots.
A snapshot can be defined as the state of a storage volume captured at a given point in time. Persisting point-in-time states of volumes provides a fast recovery mechanism in the event of failures, with the ability to restore to known working points in the past. In the distributed data storage system, volume snapshots are space-efficient metadata-based zero-copy snapshots. Every newly created volume (e.g., virtual disk) has a version number and a version tree associated with it. The version number starts with “1” and is incremented on every successful snapshot operation along with an update to the version tree. Every block of data written is versioned with the version number associated with the volume at the time of the corresponding write operation.
As an example to understand how snapshots provide data protection in the distributed data storage system, consider the following sequence of events: a Hedvig volume is provisioned for application data at time t1 (version number: 1); a periodic snapshot is triggered at time t2 (version number: 2); a periodic snapshot is triggered at time t3 (version number: 3); and a ransomware attack occurs at time t4, after time t3. At t4, any new writes that happen as a part of the ransomware attack are recorded with version number: 3, because that is the currently active version number. By reverting the volume back to the previous version (2), the application can be recovered instantly. The process of reverting a volume to an earlier version is not dependent on the size of the volume or the amount of data it contains. No data of the volume needs to be copied during the snapshot or the revert operation, resulting in a data protection scheme that is simple, fast, and operationally inexpensive.
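The sketch below illustrates the versioning mechanics just described, assuming a simplified in-memory model. The class and its fields are illustrative stand-ins (the real system also maintains a version tree, which this sketch omits), not the system's actual metadata structures.

```python
class VersionedVolume:
    """Illustrative stand-in for metadata-based, zero-copy snapshots."""

    def __init__(self):
        self.version = 1        # new volumes start at version 1
        self.blocks = {}        # block number -> list of (version, data) writes

    def write(self, block_no, data):
        # every block written is tagged with the currently active version number
        self.blocks.setdefault(block_no, []).append((self.version, data))

    def snapshot(self):
        self.version += 1       # no payload data is copied
        return self.version - 1 # the frozen point-in-time version

    def revert_to(self, version):
        self.version = version  # writes tagged with later versions are no longer visible

    def read(self, block_no):
        live = [(v, d) for v, d in self.blocks.get(block_no, []) if v <= self.version]
        return max(live)[1] if live else None

vol = VersionedVolume()
vol.write(0, "application data")            # t1: recorded under version 1
vol.snapshot()                              # t2: active version becomes 2
vol.snapshot()                              # t3: active version becomes 3
vol.write(0, "encrypted by ransomware")     # t4: recorded under version 3
vol.revert_to(2)                            # instant recovery, independent of volume size
print(vol.read(0))                          # -> "application data"
```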
Data Protection for Containerized Applications.
The illustrative proprietary CSI driver 201 (see
Container Data Mover.
The illustrative container data mover feature enables automated data migration of storage container data between storage clusters. The migration may be implemented across any kind of storage clusters, e.g., on-premises to any other, cloud to any other, public and/or private cloud, etc., without limitation. Thus, the container data mover is widely applicable to many and diverse environments. Even though the distributed data storage system provides a single distributed fabric that can span multiple on-prem and cloud sites, different groups might choose to isolate their data (for example, for compliance, risk mitigation, etc.) within different and distinct storage clusters. The container data mover enables organizations to isolate their application data in different storage clusters and to migrate between them as needed.
Changed block tracking is typically used as an incremental backup technology, but here it is used for efficiently migrating payload data between storage clusters. Because every block of payload data stored at the source storage cluster carries a version number, changed block tracking is native to the illustrative distributed data storage system. Accordingly, changed data can be identified by generation number/version and granularly migrated.
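A minimal sketch of that idea follows: because each block carries the version in effect when it was written, the blocks that must migrate are just those whose version falls between the last migrated snapshot and the current one. The dictionary below is an illustrative stand-in for the system's metadata, not its actual representation.

```python
def changed_blocks(block_versions: dict[int, int],
                   last_migrated_version: int,
                   snapshot_version: int) -> list[int]:
    """Blocks written after `last_migrated_version` and no later than `snapshot_version`."""
    return [blk for blk, ver in block_versions.items()
            if last_migrated_version < ver <= snapshot_version]

# block number -> version of its most recent write (illustrative values)
versions = {0: 1, 1: 2, 2: 3, 3: 3}
print(changed_blocks(versions, last_migrated_version=2, snapshot_version=3))  # -> [2, 3]
# Only blocks 2 and 3 move to the destination cluster; 0 and 1 were captured previously.
```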
The intelligence built into the disclosed Container Data Mover technology leverages the use of kernel-to-kernel copies of payload data between source and destination storage nodes, which provides a fast data transfer channel. Accordingly, changed payload data is moved en masse through kernel-to-kernel copying of payload data files from source to destination, without having to rely on block-by-block application-level reads and writes between storage clusters that are ordinarily performed by the data storage subsystems and/or metadata subsystems at the storage service nodes. Payload data migration is orchestrated through snapshots and versioned change block tracking, which is native to the distributed data storage system. More details are given in
Distributed Barrier.
The illustrative distributed data storage system leverages a novel distributed barrier logic to implement a state machine for data migration. This process involves the following example steps, without limitation:
See also
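The specific barrier steps are not enumerated in this excerpt; the following sketch is only an assumed illustration of the general idea of a barrier-gated state machine, in which no participant advances to the next migration phase until every participant has reported completion of the current phase. The phase names and the in-process Barrier class are assumptions for illustration, not the system's actual implementation.

```python
from enum import Enum, auto

class Phase(Enum):
    SNAPSHOT = auto()          # assumed: take metadata-based snapshot at the source
    TRACK_CHANGES = auto()     # assumed: identify changed blocks since the last migration
    TRANSFER_DATA = auto()     # assumed: kernel-to-kernel copy of changed payload data
    WRITE_METADATA = auto()    # assumed: ordinary writes of corresponding metadata
    FINALIZE = auto()          # assumed: mark the migration generation complete

class Barrier:
    """Toy in-process stand-in for a cluster-wide barrier service."""
    def __init__(self, participants):
        self.participants = set(participants)
        self.done = {}

    def report_done(self, node, phase):
        self.done.setdefault(phase, set()).add(node)

    def phase_complete(self, phase):
        return self.done.get(phase, set()) == self.participants

def run_migration(nodes, do_phase):
    barrier = Barrier(nodes)
    for phase in Phase:                       # phases execute strictly in order
        for node in nodes:
            do_phase(node, phase)             # each node performs its share of the phase
            barrier.report_done(node, phase)
        assert barrier.phase_complete(phase)  # nobody advances until everyone is done

run_migration(["node-a", "node-b"], lambda n, p: print(f"{n}: {p.name}"))
```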
Policy Driven Container Data Mover.
Data migration can be seamlessly enabled through policies assigned to application orchestrator (e.g., Kubernetes) constructs. Snapshot schedules provided through the proprietary CSI driver are enhanced to configure data migration based on the snapshot retention period. A data migration workflow example for CSI volumes is shown in
Distributed data storage system 100 (or system 100) comprises storage proxies 106 and storage cluster 110. System 100 flexibly leverages both hyperscale and hyperconverged deployment options, sometimes implemented in the same storage cluster 110 as depicted here. Hyperscale deployments scale storage resources independently from the application tier, as shown by storage service nodes 120 (e.g., 120-1 . . . 120-N). In such hyperscale deployments, storage capacity and performance scale out horizontally by adding commodity servers running the illustrative storage service; application nodes (or hosts) 102 scale separately along with storage proxy 106. On the other hand, hyperconverged deployments scale compute and storage in lockstep, with workloads and applications residing on the same physical nodes as payload data, as shown by compute hosts 121. In such hyperconverged deployments, storage proxy 106 and storage service software 122 are packaged and deployed as VMs on a compute host 121 with a hypervisor 103 installed. In some embodiments, system 100 provides plug-ins for hypervisor and virtualization tools, such as VMware vCenter, to provide a single management interface for a hyperconverged solution.
System 100 provides enterprise-grade storage services, including deduplication, compression, snapshots, clones, replication, auto-tiering, multitenancy, and self-healing of both silent corruption and/or disk/node failures to support production storage operations, enterprise service level agreements (SLAs), and/or robust storage for backed up data (secondary copies). Thus, system 100 eliminates the need for enterprises to deploy bolted-on or disparate solutions to deliver a complete set of data services. This simplifies infrastructure and further reduces overall Information Technology (IT) capital expenditures and operating expenses. Enterprise storage capabilities can be configured at the granularity of a virtual disk, providing each data originator, e.g., application, VM, and/or software container, with its own unique storage policy. Every storage feature can be switched on or off to fit the specific needs of any given workload. Thus, the granular provisioning of features empowers administrators to avoid the challenges and compromises of “one size fits all” storage and helps effectively support business SLAs, while decreasing operational costs.
System 100 inherently supports multi-site availability, which removes the need for additional costly disaster recovery solutions. The system provides native high availability storage for applications across geographically dispersed data centers by setting a unique replication policy and replication factor at the virtual disk level.
System 100 comprises a “shared-nothing” distributed computing architecture in which each storage service node is independent and self-sufficient. Thus, system 100 eliminates any single point of failure, allows for self-healing, provides non-disruptive upgrades, and scales indefinitely by adding more storage service nodes. Each storage service node stores and processes metadata and/or payload data, then communicates with other storage service nodes for data/metadata distribution according to the replication factor.
Storage efficiency in the storage cluster is characterized by a number of features, including: thin provisioning, deduplication, compression, compaction, and auto-tiering. Each virtual disk is thinly provisioned by default and does not consume capacity until data is written therein. This space-efficient dynamic storage allocation capability is especially useful in DevOps environments that use Docker, OpenStack, and other cloud platforms where volumes do not support thin provisioning inherently, but can support it using the virtual disks of system 100. System 100 provides inline global deduplication that delivers space savings across the entire storage cluster. Deduplication is administrable at the virtual disk level to optimize I/O and lower the cost of storing data. As writes occur, the system 100 calculates the unique fingerprint of data blocks and replaces redundant data with a small pointer. The deduplication process can be configured to begin at storage proxy 106, improving write performance and eliminating redundant data transfers over the network. System 100 provides inline compression administrable at the virtual disk level to optimize capacity usage. The system stores only compressed data on the storage service nodes. Illustratively, the Snappy compression library is used, but the invention is not limited to this implementation. To improve read performance and optimize storage space, the illustrative system periodically performs garbage collection to compact redundant blocks and generate large sequential chunks of data. The illustrative system balances performance and cost by supporting tiering of data among high-speed SSDs and lower-tier persistent storage technologies.
Application node (or host) 102 (e.g., 102-1, 102-2, 102-3) is any computing device, comprising one or more hardware processors and computer memory for executing computer programs, that generates and/or accesses data stored in storage cluster 110. Application(s) (not shown here but see, e.g., applications 132 in
Hypervisor 103 (e.g., 103A, 103B) is any hypervisor, virtual machine monitor, or virtualizer that creates and runs virtual machines on a virtual machine server or host. Software container 104A is any operating system virtualization software that shares the kernel of the host computing device (e.g., 102, 121) that it runs on and allows multiple isolated user space instances to co-exist. Docker is an example of software container 104A. Bare metal 105A refers to application node 102-3 running as a traditional computing device without virtualization features. Components 103, 104A, and 105A/B are well known in the art.
Storage proxy 106 (e.g., 106-1, 106-2, 106-3, 106-J . . . 106-K) is a lightweight software component that deploys at the application tier, i.e., on application nodes 102 and/or compute hosts 121. A storage proxy may be implemented as a virtual machine 106-1, as a software container (e.g., Docker) 106-2, and/or running on bare metal (e.g., 106-3) to provide storage access to any physical host or VM in the application tier. The storage proxy acts as a gatekeeper for all I/O requests to virtual disks configured at storage cluster 110. It acts as a storage protocol converter, load balances I/O requests to storage service nodes, caches data fingerprints, and performs certain deduplication functions. Storage protocols supported by storage proxy 106 include Internet Small Computer Systems Interface (iSCSI), Network File System (NFS), Server Message Block (SMB2) or Common Internet File System (CIFS), Amazon Simple Storage Service (S3), OpenStack Object Store (Swift), without limitation. The storage proxy runs in user space and can be managed by any virtualization management or orchestration tool. With storage proxies 106 that run in user space, the disclosed solution is compatible with any hypervisor, software container, operating system, or bare metal computing environment at the application node. In some virtualized embodiments where storage proxy 106 is deployed on a virtual machine, the storage proxy may be referred to as a “controller virtual machine” (CVM) in contrast to application-hosting virtual machines that generate data for and access data at the storage cluster.
Storage cluster 110 comprises the actual storage resources of system 100, such as storage service nodes 120 and storage services 122 running on compute hosts 121. In some embodiments, storage cluster 110 is said to comprise compute hosts 121 and/or storage service nodes 120.
Storage service node 120 (e.g., 120-1 . . . 120-N) is any commodity server configured with one or more x86 or ARM hardware processors and with computer memory for executing the illustrative storage service, which is described in more detail in
Compute host 121 (e.g., 121-1 . . . 121-M) is any computing device, comprising one or more hardware processors and computer memory for executing computer programs, that comprises the functional components of an application node 102 and of a storage service node 120 in a “hyperconverged” configuration. In some embodiments, compute hosts 121 are configured, sometimes in a group, within an appliance such as the Commvault Hyperscale™ X backup appliance from Commvault Systems Inc., of Tinton Falls, New Jersey, USA.
Application 132 (e.g., 132-1, 132-2, 132-4, etc.) is any software that executes on its underlying host (e.g., 102-1, 102-2, 102-4) and performs a function as a result. The application 132 may generate data and/or need to access data which is stored in system 100. Examples of application 132 include email applications, database management applications, office productivity software, backup software, etc., without limitation.
The bi-directional arrows between each storage proxy 106 and a storage service node 120 depict the fact that communications between applications 132 and storage cluster 110 pass through storage proxies 106, each of which identifies a proper storage service node 120 to communicate with for the present transaction, e.g., storage service node 120-2 for storage proxy 106-1, storage service node 120-4 for storage proxy 106-2, etc.
Application orchestrator node 102-4 is illustratively embodied as a Kubernetes node (a/k/a Kubernetes kubelet) that comprises or hosts one or more containerized applications 132-4 and containerized storage proxy 106-4. See also https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ for more details on the Kubernetes kubelet, which is the primary node agent that runs on each Kubernetes node. The Kubernetes kubelet is also known as an “agent” (or “Kubernetes agent”) that runs on each Kubernetes node in a Kubernetes cluster. See, e.g., https://kubernetes.io/docs/concepts/overview/components/. Node 102-4 additionally comprises a proprietary CSI driver 201, which is not shown in the present figure and is described in detail in
It is noted here that the term “Kubernetes cluster” has a different meaning than the illustrative storage cluster(s) 110 depicted herein. “When you deploy Kubernetes, you get a cluster. A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every [Kubernetes] cluster has at least one worker node.” https://kubernetes.io/docs/concepts/overview/components/. Thus, an application orchestrator node 102-4 (e.g., Kubernetes node, Kubernetes worker node), which is part of a Kubernetes cluster, is not to be confused with storage cluster 110, which comprises storage service nodes, such as storage service nodes 120.
Storage proxy 106 intercepts reads and writes issued by applications 132 that are targeted to particular virtual disks configured in storage cluster 110. Storage proxy 106 provides native block, file, and object storage protocol support, as follows:
Block storage—system 100 presents a block-based virtual disk through a storage proxy 106 as a logical unit number (LUN). Access to the LUN, with the properties applied during virtual disk provisioning, such as compression, deduplication and replication, is given to a host as an iSCSI target. After the virtual disk is in use, the storage proxy translates and relays all LUN operations to the underlying storage cluster.
File storage—system 100 presents a file-based virtual disk to one or more storage proxies 106 as an NFS export, which is then consumed by the hypervisor as an NFS datastore. Administrators can then provision VMs on that NFS datastore. The storage proxy acts as an NFS server that traps NFS requests and translates them into the appropriate remote procedure call (RPC) calls to the backend storage service node.
Object storage—buckets created via the Amazon S3 API, or storage containers created via the OpenStack Swift API, are translated via the storage proxies 106 and internally mapped to virtual disks 170. The storage cluster 110 acts as the object (S3/Swift) target, which client applications 132 can utilize to store and access objects.
Storage Proxy 106 comprises one or more caches that enable distributed operations and the performing of storage system operations locally at the application node 102 to accelerate read/write performance and efficiency. An illustrative metacache stores metadata locally at the storage proxy, preferably on SSDs. This cache eliminates the need to traverse the network for metadata lookups, leading to substantial read acceleration. For virtual disks provisioned with client-side caching, an illustrative block cache stores data blocks to local SSD drives to accelerate reads. By returning blocks directly from the storage proxy, read operations avoid network hops when accessing recently used data. For virtual disks provisioned with deduplication, an illustrative dedupe cache resides on local SSD media and stores fingerprint information of certain data blocks written to storage cluster 110. Based on this cache, the storage proxy determines whether data blocks have been previously written and if so, avoids re-writing these data blocks again. Storage proxy 106 first queries the dedupe cache and if the data block is a duplicate, storage proxy 106 updates the metadata subsystem 140 to map the new data block(s) and acknowledges the write to originating application 132. Otherwise, storage proxy 106 queries the metadata subsystem 140 and if the data block was previously written to storage cluster 110, the dedupe cache and the metadata subsystem 140 are updated accordingly, with an acknowledgement to originating application 132. Unique new data blocks are written to the storage cluster as new payload data. More details on reads and writes are given in
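The decision path just described can be summarized with the following sketch. The fingerprint function and the subsystem stubs are simplified stand-ins for the storage proxy's internals, shown only to make the branching explicit.

```python
import hashlib

def write_block(block: bytes, dedupe_cache: set, metadata_subsystem, data_subsystem) -> str:
    fp = hashlib.sha256(block).hexdigest()        # fingerprint of the data block
    if fp in dedupe_cache:
        metadata_subsystem.map_block(fp)          # duplicate: map only, no payload rewrite
        return "ack (dedupe-cache hit)"
    if metadata_subsystem.has_fingerprint(fp):    # previously written to the storage cluster
        dedupe_cache.add(fp)
        metadata_subsystem.map_block(fp)
        return "ack (metadata hit)"
    data_subsystem.write(fp, block)               # unique new block: write new payload data
    dedupe_cache.add(fp)
    metadata_subsystem.map_block(fp)
    return "ack (new payload written)"

class _StubMetadata:                              # toy stand-ins for the subsystems
    def __init__(self): self.fingerprints = set()
    def has_fingerprint(self, fp): return fp in self.fingerprints
    def map_block(self, fp): self.fingerprints.add(fp)

class _StubData:
    def write(self, fp, block): pass

meta, data, cache = _StubMetadata(), _StubData(), set()
print(write_block(b"hello", cache, meta, data))   # -> ack (new payload written)
print(write_block(b"hello", cache, meta, data))   # -> ack (dedupe-cache hit)
```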
A simplified use case workflow comprises:
1. A virtual disk 170 is administered with storage policies via a web-based user interface, a command line interface, and/or a RESTful API (representational state transfer application programming interface).
2. Block and file virtual disks are attached to a storage proxy 106, which presents the storage resource to application hosts, e.g., 102. For object storage, applications 132 directly interact with the virtual disk via Amazon S3 or OpenStack Swift protocols.
3. Storage proxy 106 intercepts application 132 I/O through the native storage protocol and communicates it to the underlying storage cluster 110 via remote procedure calls (RPCs).
4. The storage service distributes and replicates data throughout the storage cluster based on virtual disk policies.
5. The storage service conducts background processes to auto-tier and balance across racks, data centers, and/or public clouds based on virtual disk policies.
Pod subsystem 130 maintains certain system-wide information for synchronization purposes and comprises processing and tracking resources and locally stored information. A network of pods 130 throughout storage cluster 110, where each pod comprises three nodes, is used for managing transactions for metadata updates, distributed-atomic-counters as a service, tracking system-wide timeframes such as generations and epochs, etc. More details on the pod subsystem may be found in U.S. Pat. No. 9,483,205 B2, which is incorporated by reference in its entirety herein.
Metadata subsystem 140 comprises metadata processing resources and partitioned replicated metadata stored locally at the storage service node. Metadata subsystem 140 receives, processes, and generates metadata. Metadata in system 100 is partitioned and replicated across a plurality of metadata nodes. Typically, metadata subsystem 140 is configured with a replication factor of 3 (RF3), and therefore many of the examples herein will include 3-way replication scenarios, but the invention is not so limited. Each metadata subsystem 140 tracks the state of data storage subsystems 150 and of other metadata subsystems 140 in storage cluster 110 to form a global view of the cluster. Metadata subsystem 140 is responsible for optimal replica assignment and tracks writes in storage cluster 110.
Metadata synchronization logic (or “anti-entropy engine” (AE), not shown here) runs in the metadata subsystem 140. The metadata synchronization logic compares replicas of metadata across metadata nodes and ensures that the replicas agree on a superset of the metadata therein to avoid losing metadata. During storage and compaction of metadata-carrying string-sorted tables (SSTs), a consistent file identification scheme is used across all metadata nodes. When an application node writes to and reads from a virtual disk on the distributed data storage system, metadata is generated and stored in replicas on different metadata nodes. A modified log-structured merge tree is used to store and compact the metadata SST files. A fingerprint file is created for each metadata SST file that includes a start-length-hash value triple for each region of the metadata SST file. To synchronize, fingerprint files of two metadata SST files are compared, and if any hash values are missing from a fingerprint file then key-value-timestamp triples corresponding to these missing hash values are sent to the metadata SST file that is missing them. An example of metadata synchronization logic is described in U.S. Pat. No. 10,740,300, which is incorporated by reference in its entirety herein.
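A small sketch of the fingerprint comparison described above follows, assuming in-memory lists of triples; the (start, length, hash) tuples are illustrative and not the on-disk format of the metadata SST fingerprint files.

```python
def missing_regions(local_fingerprints, remote_fingerprints):
    """Return (start, length, hash) triples present locally but absent remotely; the
    key-value-timestamp triples for those regions are then sent to the node missing them."""
    remote_hashes = {h for _, _, h in remote_fingerprints}
    return [(start, length, h)
            for start, length, h in local_fingerprints
            if h not in remote_hashes]

local  = [(0, 4096, "a1"), (4096, 4096, "b2"), (8192, 4096, "c3")]
remote = [(0, 4096, "a1"), (4096, 4096, "b2")]
print(missing_regions(local, remote))   # -> [(8192, 4096, 'c3')]
```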
Data storage subsystem 150 receives, processes, and stores payload data written to storage cluster 110. Thus, data storage subsystem 150 is responsible for replicating data to other data storage subsystems 150 on other storage service nodes and striping data within and across storage pools. Data storage subsystem 150 comprises storage processing for payload data blocks (e.g., I/O, compaction, garbage collection, etc.) and stores partitioned replicated payload data at the storage service node.
The bold bi-directional arrows in the present figure show that metadata is communicated between storage proxy 106 and metadata subsystem 140, whereas data blocks are transmitted to/from data storage subsystem 150. Depending on the configuration, metadata subsystem 140 may operate on a first storage service node 120 or storage service 122 and data storage subsystem 150 may operate on another distinct storage service node 120 or storage service 122. See also
Each storage service node 120 (or compute host 121) is typically configured with computing resources (e.g., hardware processors and computer memory) for providing storage services and with a number of storage resources 160, e.g., hard disk drives (HDD) shown here as storage disk shapes, solid state storage drives (SSD) (e.g., flash memory technology) shown here as square shapes, etc. The illustrative system uses commit logs, which are preferably stored on SSD before they are flushed to another disk/drive for persistent storage. Metadata commit logs are stored on dedicated metadata-commit-log drives “MCL”, whereas payload-data commit logs are stored on distinct dedicated data-commit-log drives “DCL.” As an example depicted in the present figure, pod system information is stored in storage resource “P” which is preferably SSD technology for faster read/write performance; the metadata commit log is stored in storage resource “MCL” which is preferably SSD technology; metadata is then flushed from the commit log to persistent storage “M” (SSD and/or HDD); the data commit log is stored in storage resource “DCL” which is preferably SSD technology; payload data is then flushed from the data commit log to persistent storage “D” (typically HDD). The storage resources 160 depicted in the present figures are shown here as non-limiting examples to ease the reader's understanding; the numbers and types of storage technologies among storage resources 160 will vary according to different implementations.
To accelerate read operations, client-side caching of data is used on SSDs accessible by the storage proxy 106. Data is also cached on SSDs at storage service nodes. For caching, the system supports the use of Peripheral Component Interconnect Express (PCIe) and Non-Volatile Memory Express (NVMe) SSDs. All writes are executed in memory and flash (SSD/NVMe) and flushed sequentially to persistent storage. Persistent storage uses flash technology (e.g., multi-level cell (MLC) and/or 3D NAND SSD) and/or spinning disk technology (e.g., HDD). Options are administrable at the virtual disk level.
Virtual disk (“vdisk”) 170 is the data storage representation of system 100 that is visible to and accessible by applications 132 as data storage resources. In other words, each application 132 will use one or more virtual disks 170 for data storage without having knowledge of how system 100 as a whole is organized and configured. Every virtual disk 170 provisioned on the system is partitioned into fixed size chunks, each of which is called a storage container. Different replicas are assigned for each storage container. Since replica assignment occurs at the storage container level—not at a virtual disk level—the data for a virtual disk is distributed across a plurality of storage service nodes, thus allowing increased parallelism during I/Os and/or disk rebuilds. Thus, the virtual disks are distributed and fault-tolerant. Notably, the replication factor alone (e.g., RF3) does not limit how many storage service nodes 120 may comprise payload data of a given virtual disk 170. Thus, different containers of the virtual disk may be stored and replicated on different storage service nodes, adding up to more total storage service nodes associated with the virtual disk than the replication factor of the virtual disk.
Any number of virtual disks 170 may be spun up, each one thinly provisioned and instantly available. Illustrative user-configurable attributes for virtual disk 170 include without limitation:
Name—a unique name to identify the virtual disk.
Size—to set the desired virtual disk size. System 100 supports single block and NFS virtual disks of unlimited size.
Disk Type—to specify the type of storage protocol to use for the virtual disk: block or file (NFS). Object containers/buckets are provisioned directly from OpenStack via Swift, via the Amazon S3 API, etc.
Workload Type—for NFS disk type, options include default, proprietary, or object storage target (OST) workload types. For proprietary and OST, if Enable Deduplication is selected, a Retention Policy can be added as well. For block disk type, the only option is default.
Retention Policy—specifies a duration for proprietary and OST workloads, e.g., two weeks, one month, etc.
Encryption—to encrypt both data at rest and data in flight for the virtual disk.
Enable Deduplication—to enable inline global deduplication.
Clustered File System—to indicate that the virtual disk will be used with a clustered file system. When selected, system 100 enables concurrent read/write operations from multiple VMs or hosts.
Description—to provide an optional brief description of the virtual disk.
Compressed—to enable virtual disk compression to reduce data size.
Client-Side Caching—to cache data to local SSD or PCIe devices at the application tier to accelerate read performance.
CSV—to enable Cluster Shared Volumes for failover (or high availability) clustering. A CSV is a shared disk containing a Windows NT File System (NTFS) or Resilient File System (ReFS) volume that is made accessible for read and write operations by all nodes within a Windows Server failover cluster.
Replication Policy—to set the policy for how data will replicate across the storage cluster: Agnostic, Rack Aware, or Data Center Aware.
Replication Factor (RF)—to designate the number of replicas for each virtual disk. Replication factor is tunable, typically ranging from one to six, without limitation.
Block Size—to set the block size of a block virtual disk to 512 bytes, 4k, or 64k. File (NFS)-based virtual disks have a standard 512-byte block size, and object-based virtual disks have a standard 64k block size.
Residence—to select the type of media on which the data is to reside: HDD or SSD.
The present figure depicts only one virtual disk 170 for illustrative purposes, but system 100 has no limits on how many virtual disks it may support.
At step W, storage proxy 106 intercepts a write command issued by application 132, comprising one or more payload data blocks to be written to a virtual disk 170 in storage cluster 110. At step 1W, storage proxy 106 determines the replica nodes 120 for the data blocks to be written and transmits the data blocks to one of the replica nodes 120, e.g., 120-4. If the virtual disk is enabled for deduplication, the storage proxy 106 calculates a data block fingerprint, queries the dedupe cache and, if necessary, further queries metadata subsystem 140 (at the virtual disk's metadata owner node, e.g., 120-7), and either makes a metadata update or proceeds with a new write. At step 2W, the data storage subsystem 150 on replica node 120-4 receives and writes the data blocks locally and forwards them to other designated replica nodes, e.g., 120-1 and 120-8. At step 3W, storage proxy 106 sends a write acknowledgment back to the originating application 132 after a quorum of data storage subsystem 150 replicas have completed step 2W. For RF3, two acknowledged successful writes are needed from the three (RF3) replicas to satisfy the quorum (RF/2+1=3/2+1=2). Two of the three replicas are written synchronously, and one may be written asynchronously. At step 4W, storage proxy 106 causes an atomic write to be made into metadata subsystem 140 at metadata owner node 120-7, after which the write is deemed successful. At step 5W, the metadata subsystem 140 replicates the metadata from node 120-7 to designated metadata replica nodes, e.g., 120-8 and 120-9.
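The quorum arithmetic used at step 3W generalizes as shown in this small sketch (integer division). This is ordinary majority-quorum math rather than anything specific to the illustrative system beyond the RF3 example above.

```python
def write_quorum(replication_factor: int) -> int:
    """Acknowledgments required before the write is confirmed to the application."""
    return replication_factor // 2 + 1   # RF/2 + 1, using integer division

for rf in (3, 4, 6):
    print(f"RF{rf}: quorum of {write_quorum(rf)} successful replica writes")
# RF3 -> 2, matching the example above: two synchronous writes satisfy the quorum.
```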
At step R, storage proxy 106 intercepts a read request issued by application 132 for one or more data blocks from a virtual disk 170 in storage cluster 110. At step 1R, storage proxy 106 queries the local metacache for a particular data block to be read and if the information is not found in the local metacache, at step 1R′ storage proxy 106 consults metadata subsystem 140 (e.g., at the vdisk's designated metadata owner node 120-7). At step 2R, storage proxy 106 sends the data block details to one of the closest data storage subsystems 150, based on observed latency, e.g., storage service node 120-4. At step 3R, the data storage subsystem 150 reads the data block(s) and transmits the block(s) back, if found, to storage proxy 106. If the read operation fails due to any error, the read is attempted from another replica. At step 4R, storage proxy 106 serves the requested data block(s) to application 132. If client-side caching is enabled for the targeted virtual disk 170 during provisioning, the storage proxy 106 queries the local block cache at step 1R to fetch the data block(s), and if found therein serves the data block(s) to application 132 at step 4R, thereby bypassing the data storage subsystem 150 at the storage service nodes(s) and eliminating the need to traverse the network to reach storage cluster 110.
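The lookup order on the read path can be sketched as follows. The cache objects, owner table, and replica interface are simplified stand-ins used only to show the sequence of steps 1R through 4R, not the storage proxy's real interfaces.

```python
def read_block(block_id, block_cache, metacache, metadata_owner, replicas_by_latency):
    if block_id in block_cache:                 # step 1R with client-side caching enabled:
        return block_cache[block_id]            # serve locally, no network traversal
    location = metacache.get(block_id)          # step 1R: local metadata lookup
    if location is None:
        location = metadata_owner[block_id]     # step 1R': consult the vdisk's owner node
        metacache[block_id] = location
    for replica in replicas_by_latency[location]:   # step 2R: closest replica first
        data = replica.read(block_id)               # step 3R
        if data is not None:
            return data                             # step 4R: serve to the application
    raise IOError("read failed on all replicas")    # otherwise the read is retried elsewhere
```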
System Resiliency. System 100 is designed to survive disk, node, rack, and data center outages without application downtime and with minimal performance impact. These resiliency features include: high availability, non-disruptive upgrades (NDU), disk failures, replication, and snapshots and clones.
High availability. A preferable minimum of three storage service nodes should be provisioned for an implementation of the illustrative system. Redundancy can be set as agnostic, at the rack level, or at the data center level. The system initiates transparent failover in case of failure. During node, rack, or site failures, reads and writes continue as usual from/to remaining operational replicas. To protect against a single point of failure, storage proxies 106 install as a high availability active/passive pair (“HA pair,” not shown). A virtual IP address (VIP) assigned to the HA pair redirects traffic automatically to the active storage proxy 106 at any given time. If one storage proxy 106 instance is lost or interrupted, operations fail over seamlessly to the passive instance to maintain availability. This happens without requiring intervention by applications, administrators, or users. During provisioning, administrators can indicate that an application host 102/121 will use a clustered file system. This automatically sets internal configuration parameters to ensure seamless failover when using VM migration to a secondary physical host running its own storage proxy 106. During live VM migration, such as with VMware vMotion or Microsoft Hyper-V live migration, any necessary block and file storage “follows” guest VMs to another host.
Non-disruptive upgrades (NDUs). The illustrative system supports non-disruptive software upgrades by staging and rolling the upgrade across individual components using the highly available nature of the system to eliminate any downtime or data unavailability. Storage service nodes 120 and storage services 122 are upgraded first, one node at a time. Meanwhile, any I/O continues to be serviced from alternate available nodes, e.g., replicas. Storage proxies 106 are upgraded next, starting with the passive storage proxy in HA pairs. After the passive storage proxy upgrade is complete, it is made active, and the formerly active storage proxy 106 is upgraded and resumes service as the passive of the HA pair. This process eliminates any interruption to reads or writes during the upgrade procedure.
Disk Failures. The illustrative system supports efficient data and metadata rebuilds that are initiated automatically when there is a disk failure. Payload data is rebuilt from other data replicas and using information in the metadata subsystem. The metadata rebuild self-heals within the metadata service.
Replication. The illustrative system uses a combination of synchronous and asynchronous replication processes to distribute and protect data across the storage cluster and provide near-zero recovery point objectives (RPO) and recovery time objectives (RTO). For example, two of three replicas are written synchronously, and one is written asynchronously. The system supports any number of active data centers in a single storage cluster 110, using a tunable replication factor and replication policy options. The replication factor designates the number of replicas to create for each virtual disk, and the replication policy defines the destination for the replicas across the storage cluster. Replicas occur at the storage container level of a virtual disk 170. For example, if a 100 GB virtual disk with RF3 is created, the entire 100 GBs are not stored as contiguous chunks on three storage service nodes. Instead, the 100 GBs are divided among several storage containers, and replicas of each storage container are spread across different storage pools on different storage service nodes within the storage cluster. For additional disaster recovery protection against rack and data center failures, the illustrative system supports replication policies that span multiple racks or data centers using structured IP addressing, DNS naming/suffix, and/or customer-defined snitch endpoints. For “agnostic” replication policies, data is spread across the storage cluster using a best-effort approach to improve availability. For “rack aware” replication policies, data is spread across as many physically distinct racks as possible within a single data center. For “data center aware” replication policies, data replicates to additional physical sites, which can include private and/or hosted data centers and public clouds. In a disaster recovery example, where the Replication Policy=Data Center Aware and the Replication Factor=3, the illustrative system divides the data into storage containers and ensures that three copies (RF3) of each storage container are spread to geographically dispersed physical sites, e.g., Data Centers A, B, and C. At any time, if a data copy fails, re-replication is automatically initiated from replicas across the data centers.
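As a toy illustration of the “data center aware” case, the sketch below places the RF3 replicas of a single storage container so that each copy lands in a different site. The node inventory is hypothetical, and the real policy engine also considers racks, storage pools, and load, which this sketch ignores.

```python
import itertools

# Hypothetical inventory of storage service nodes per data center.
nodes_by_datacenter = {
    "DC-A": ["a1", "a2"],
    "DC-B": ["b1", "b2"],
    "DC-C": ["c1"],
}

def place_container(replication_factor: int) -> list[str]:
    """Pick one node per data center, round-robin, for each replica of one container."""
    per_dc = {dc: itertools.cycle(nodes) for dc, nodes in nodes_by_datacenter.items()}
    datacenters = itertools.cycle(nodes_by_datacenter)
    return [next(per_dc[next(datacenters)]) for _ in range(replication_factor)]

print(place_container(3))   # -> one replica in each of DC-A, DC-B, and DC-C
```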
Snapshots And Clones. In addition to replication policies, data management tasks include taking snapshots and making “zero-copy” clones of virtual disks. There is no limit to the number of snapshots or clones that can be created. Snapshots and clones are space-efficient, requiring capacity only for changed blocks.
Encryption. The illustrative system provides software-based encryption with the Encrypt360 feature. This enables encryption of data at the point of ingestion (at the storage proxy 106). Data encrypted in this way remains protected in flight between storage proxy 106 and storage service nodes 120/storage service 122, in flight among storage service nodes as part of replication, in-use at storage proxy 106, and at rest while in storage. Any encryption scheme may be implemented, preferably 256-bit AES. Additionally, any third-party key management system can be attached.
Ecosystem Integration. The illustrative system works with and provides a secure data storage system for a variety of data-generating platforms, including systems that generate primary (production) data and systems that generate backup data from primary sources.
VMware. The illustrative system features a vCenter plug-in that enables provisioning, management, snapshotting, and cloning of virtual disks 170 directly from the vSphere Web Client. Additionally, the system incorporates support for the VMware vSphere Storage APIs Array Integration (VAAI).
Docker. The illustrative system provides persistent storage for Docker software containers through a volume plugin. The volume plugin enables a user to create a persistent Docker volume backed by a virtual disk 170. Different options, such as deduplication, compression, replication factor, and/or block size, may be set for each Docker volume, using “volume options” in the Docker Universal Control Plane (UCP) or using the “docker volume” command line. The virtual disk can then be attached to any host. The volume plugin also creates a file system on this virtual disk and mounts it using the path provided by the user. The file system type can also be configured by the user. All I/O to the Docker volume goes to virtual disk 170. As the software container moves in the environment, virtual disk 170 will automatically be made available to any host, and data will be persisted using the policies chosen during volume creation. For container orchestration platforms (a/k/a application orchestrator environments), such as Kubernetes and OpenShift, the illustrative system 100 provides persistent storage for software containers through a proprietary dynamic provisioner and via other technologies that interoperate with the orchestration platform(s).
OpenStack. The illustrative system delivers block, file, and object storage for OpenStack all from a single platform via native Cinder and Swift integration. The system supports granular administration, per-volume (Cinder) or per-container (Swift), for capabilities such as compression, deduplication, snapshots, and/or clones. OpenStack administrators can provision the full set of storage capabilities of system 100 in OpenStack Horizon via OpenStack's QoS functionality. As with VMware, administrators need not use system 100's native web user interfaces and/or RESTful API, and storage can be managed from within the OpenStack interface.
Multitenancy. The illustrative system supports the use of rack-aware and data center-aware replication policies for customers who must satisfy regulatory compliance and restrict certain data by region or site. These capabilities provide the backbone of a multitenant architecture, which is supported with three forms of architectural isolation: LUN masking, dedicated storage proxies, and complete physical isolation. Using the LUN masking option, different tenants are hosted on a shared infrastructure with logical separation. Logical separation is achieved by presenting virtual disks only to a certain VM and/or physical application host (IP range). Quality of Service (QoS) is delivered at the VM level. Using the dedicated storage proxies option, storage access is provided with a dedicated storage proxy 106 per tenant. Storage proxies can be deployed on a dedicated physical host or a shared host. This provides storage as a shared infrastructure, while compute is dedicated to each tenant. Quality of Service (QoS) is at the VM level. Using the complete physical isolation option, different tenants are hosted on dedicated storage clusters (each running their own storage service and storage proxies) to provide complete logical and physical separation between tenants. For all of these multitenant architectures, each tenant can have unique virtual disks with tenant-specific storage policies, because the illustrative system configures policies at the virtual disk level. Policies can be grouped to create classes of service.
Thus, the illustrative distributed data storage system scales seamlessly and linearly from a few nodes to thousands of nodes using virtual disks as the user-visible storage resource provided by the system. Enterprise storage capabilities are configurable at the virtual disk level. The storage service nodes can be configured in a plurality of physical computing environments, e.g., data centers, private clouds, and/or public clouds without limitation. The embodiments and components thereof disclosed in
Payload data is stored in virtual disks 170 configured in the storage cluster, which are consumed as application orchestrator (e.g., Kubernetes) persistent volumes. Each virtual disk 170 is partitioned and replicated across a number of storage service nodes 120—the partitioning taking the form of storage containers. Usually, a certain metadata node is the assigned “owner” of the virtual disk and is therefore responsible for certain aspects of the disclosed container data mover feature.
Container Storage Interface (CSI).
CSI is a community-driven project for standardizing persistent volume workflows across different application orchestrators such as Kubernetes. In general, a CSI driver comprises a Controller Server and a Node Server, which are described in more detail below.
The proprietary CSI driver 201 is specifically designed by the present inventors for operating within the illustrative distributed data storage system. Furthermore, the proprietary CSI driver 201 enables data migration between distinct storage clusters as shown in
In an example Kubernetes configuration, a Controller Server is installed as a Deployment and is responsible for provisioning CSI volumes. It is also responsible for other operations, such as attaching and snapshotting volumes, which need not be executed on the node where the volume is consumed. The Node Server is installed as a DaemonSet and is responsible for mounting and unmounting CSI volumes on the Kubernetes nodes where the volumes will be consumed by applications. Storage proxy 106 is also deployed as a DaemonSet and is responsible for handling I/O requests for all CSI volumes attached locally. The following sequence of events occurs when a Kubernetes user issues a request to provision Hedvig storage using the proprietary CSI driver 201. These events explain how the illustrative distributed data storage system components interact with Kubernetes and use Kubernetes constructs to let end users seamlessly manage storage resources within a Kubernetes cluster: 1. The administrator creates one or more storage classes (StorageClass) for Hedvig. See
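For illustration, a StorageClass for the proprietary CSI driver might resemble the following sketch. The provisioner name (io.hedvig.csi) and the parameter keys are assumptions made for readability and do not represent the driver's actual interface.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hedvig-standard
provisioner: io.hedvig.csi        # hypothetical provisioner/driver name
parameters:
  replicationFactor: "3"          # illustrative parameter keys and values
  compressed: "true"
  dedup: "false"
reclaimPolicy: Delete
allowVolumeExpansion: true
A PersistentVolumeClaim that references such a StorageClass causes the Controller Server to provision a virtual disk 170 in the storage cluster, after which the Node Server mounts the volume on the node where the consuming pod is scheduled.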
Policy Driven Container Data Mover.
Data migration can be seamlessly enabled through policies assigned to application orchestrator constructs, such as Kubernetes constructs. Snapshot schedules provided through the proprietary CSI driver have been enhanced to allow users to configure data migration based on a snapshot retention period.
A list of steps for configuring data migration includes without limitation:
(1) Create a migration location. The migration location is implemented as a CustomResourceDefinition (CRD) that is cluster scoped and is managed by the proprietary CSI driver 201. A migration location can be created on the source application orchestration cluster by specifying the name of the destination storage cluster and the seeds. An example is shown in the bottom block of the present figure. After the CSI driver 201 has been deployed, verify the existence of the CRD by running the following command: #kubectl get crd migrationlocations.hedvig.io
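A hedged sketch of such a migration location custom resource follows. The API group (hedvig.io/v1) and the field names (clusterName, seeds) are illustrative assumptions that merely mirror the description above (the destination storage cluster name and its seeds); the actual schema is defined by the deployed CRD.
apiVersion: hedvig.io/v1                 # hypothetical API group served by CSI driver 201
kind: MigrationLocation
metadata:
  name: dr-site                          # cluster-scoped resource, so no namespace is given
spec:
  clusterName: destination-cluster-1     # name of the destination storage cluster
  seeds:                                 # seed nodes of the destination storage cluster
    - seed1.example.com
    - seed2.example.com
    - seed3.example.com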
(2) Create a snapshot schedule and snapshot class. This example, shown in
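Although the referenced figure is not reproduced here, the following hedged sketch suggests what a snapshot schedule (a custom resource managed by CSI driver 201) and a volume snapshot class might look like. The SnapshotSchedule kind, its API group, and its field names are assumptions; the VolumeSnapshotClass follows the standard Kubernetes snapshot API, with a hypothetical driver name.
apiVersion: hedvig.io/v1                 # hypothetical API group and schema
kind: SnapshotSchedule
metadata:
  name: daily-schedule
spec:
  schedule: "0 2 * * *"                  # take a metadata-based snapshot daily at 02:00
  retentionDays: 7                       # snapshots expire after 7 days; expiry can trigger migration
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: hedvig-snapclass
driver: io.hedvig.csi                    # hypothetical driver name matching the StorageClass sketch
deletionPolicy: Delete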
(3) Create a storage class with migration location and snapshot schedule. An example appears in
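As a hedged illustration of step (3), a migration-enabled StorageClass might reference the migration location and snapshot schedule created above. The parameter keys shown are assumptions, not the driver's actual parameter names.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hedvig-migrating
provisioner: io.hedvig.csi               # hypothetical provisioner/driver name
parameters:
  migrationEnable: "true"                # illustrative parameter keys
  migrationLocation: dr-site             # references the MigrationLocation from step (1)
  snapshotSchedule: daily-schedule       # references the SnapshotSchedule from step (2)
reclaimPolicy: Retain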
(4) Create a persistent volume claim using the storage class. An example appears in
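Step (4) uses a standard Kubernetes PersistentVolumeClaim; only the storageClassName (carried over from the sketch in step (3)) and the requested size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: hedvig-migrating     # the migration-enabled StorageClass from step (3)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
Provisioning this claim creates a virtual disk 170 whose metadata snapshots follow the configured schedule and whose data becomes eligible for migration when those snapshots expire, as described below.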
(5) Access the migrated persistent volume on the target (destination) storage cluster. See, e.g.,
(5A) Register the migrated virtual disk to the app-orchestrator cluster (e.g., Kubernetes cluster). See an example command in
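One way the migrated virtual disk might be made consumable on the destination Kubernetes cluster is sketched below as a statically provisioned PersistentVolume bound by a claim. This is a hypothetical illustration only: the driver name, volumeHandle, and storageClassName are placeholders, and the actual registration command provided by the proprietary tooling may differ.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-app-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hedvig-standard
  csi:
    driver: io.hedvig.csi                # hypothetical driver name
    volumeHandle: app-data-vdisk         # placeholder identifier of the migrated virtual disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-app-data
spec:
  storageClassName: hedvig-standard
  volumeName: migrated-app-data          # bind directly to the registered PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi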
In contrast to the approach taken for payload data migration, metadata is transferred to the destination using ordinary metadata write operations, not kernel-to-kernel, though the invention is not so limited. Thus, metadata subsystem 140, which runs in user space at the storage service node, analyzes metadata 449 at the source metadata node to identify the appropriate payload data SST files 459 that need to be migrated. The metadata subsystem 140 reads metadata blocks 449 and transmits them to the destination cluster after all the identified payload data SST files 459 have been successfully written at the destination. At the destination storage cluster, the metadata intake is an ordinary metadata write. Thus, even if entire metadata SST files are migrated to the destination, the migration takes the form of ordinary metadata write operations, in user space, at the appropriate storage service nodes. In contrast to the payload data transmitted in kernel-to-kernel copy operations as described above, the metadata “goes through” the metadata subsystem 140 at source and destination storage service nodes. See also
At block 2004, within the application orchestration environment (e.g., Kubernetes framework), using the source virtual disk (e.g., 170S) as a persistent volume, data is received and stored therein, e.g., from an application 132S. Snapshots of metadata associated with the virtual disk are taken at the source storage cluster 110S, typically on a schedule and having a pre-defined retention period. More details are given in
At block 2006, on receiving a call to delete an expired snapshot at the source, the metadata owner 140 of the virtual disk 170S determines whether the virtual disk 170S is provisioned with migration enabled. The illustrative method allows for conditional migration decision-making (block 2008) since it may be undesirable to migrate every virtual disk 170 of the storage cluster 110S. If the virtual disk 170S is not migration-enabled, the snapshot is deleted and control passes back to data intake at the virtual disk at block 2004. However, if the virtual disk 170S is migration-enabled, control passes to a migration operation at block 2010. The metadata node 140 at the source that is the designated owner of the virtual disk acts as coordinator of this migration operation. For other virtual disks 170, their migration is coordinated by their respective owner metadata nodes. Notably, the migration involves the illustrative barrier logic 432, which executes in the pod subsystem 130. More details are given in
At block 2012, after the migration has successfully completed, a persistent volume at the destination comprises the migrated payload data and accompanying metadata and is available for use within the destination's application orchestrator environment (e.g., destination Kubernetes framework). More details are given in
At the destination, in blocks 2112-2116, a volume snapshot class is created for the destination volume. After a migration cycle has delivered payload data to the destination volume, snapshots are taken of the destination volume based on the volume snapshot class, and afterwards these snapshots are cloned. The clone/PersistentVolumeClaim created here is presented to the application in the destination storage cluster to access/retrieve the payload data migrated over from the source storage cluster. See also
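Using the standard Kubernetes snapshot API, the destination-side snapshot and clone described above might be expressed as follows; the class, claim, and driver names continue the hypothetical naming used in the earlier sketches.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: dest-volume-snap
spec:
  volumeSnapshotClassName: hedvig-snapclass        # destination volume snapshot class
  source:
    persistentVolumeClaimName: migrated-app-data   # destination volume holding migrated payload data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-app-data-clone                    # clone presented to the destination application
spec:
  storageClassName: hedvig-standard
  dataSource:
    name: dest-volume-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi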
At block 1012, the owner metadata node 140 signals the data storage subsystems 150 hosting these data SST files 459S to send these files to their corresponding destination storage nodes via kernel-to-kernel file copy operations. See also
When operations are not successfully completed (block 2508), e.g., if a network failure prevents further data transfers, the barrier logic aborts the migration (block 2516). When completion criteria are met (block 2508) for a certain migration stage (e.g., all payload data SST files have been successfully received at the destination), the barrier logic permits the migration to proceed to the next stage (e.g., transmitting associated metadata at block 2510). After determining that all metadata has been successfully received at the destination (block 2512), the barrier logic 432 is de-activated (block 2514) and the migration cycle is considered complete. The barrier logic 432 is re-activated when the next migration cycle is triggered (block 2504). However, if the metadata is not successfully received at the destination (block 2512), the barrier logic causes the present migration to abort at block 2516. See also
As noted, the distributed barrier logic 432 operates at the pod subsystem 130 in the source storage cluster and acts as a controller and overseer over the migration of payload data and metadata from source to destination.
In regard to the figures described herein, other embodiments are possible within the scope of the present invention, such that the above-recited components, steps, blocks, operations, messages, requests, queries, and/or instructions are differently arranged, sequenced, sub-divided, organized, and/or combined. In some embodiments, a different component may initiate or execute a given operation.
Some example enumerated embodiments of the present invention are recited in this section in the form of methods, systems, and non-transitory computer-readable media, without limitation.
According to an example embodiment, a distributed data storage system for out-migrating data therefrom comprises: a first storage service node; a second storage service node executing a metadata subsystem that (i) is designated an owner of a first virtual disk configured as a persistent volume in a framework of an application orchestrator, and (ii) comprises metadata associated with the first virtual disk; third storage service nodes executing a data storage subsystem and comprising payload data of the first virtual disk, wherein one or more containerized applications of the application orchestrator generate the payload data. The above-recited embodiment wherein the second storage service node is configured to: take a first snapshot of at least part of the metadata associated with the first virtual disk, wherein a custom resource definition within the framework of the application orchestrator defines a schedule-and-retention policy applicable to the first snapshot; on taking the first snapshot, cause the first storage service node to increment a generation counter from a first value to a second value. The above-recited embodiment wherein the second storage service node is further configured to: based on determining that the first virtual disk is migration-enabled, identify at the third storage service nodes a first set of payload data files that are associated with the first value of the generation counter; cause the third storage service nodes comprising one or more payload data files of the first set to transmit respective payload data files, using kernel-to-kernel communications, to corresponding storage service nodes at an other distributed data storage system, which is distinct from the distributed data storage system comprising the first, second, and third storage service nodes, and wherein the other distributed data storage system comprises a second virtual disk that corresponds to the first virtual disk. The above-recited embodiment wherein the second storage service node is further configured to: based on receiving permission from the first storage service node, transmit metadata captured in the first snapshot to a storage service node at the other distributed data storage system using metadata-write operations, which are distinct from and exclusive of the kernel-to-kernel write operations; and wherein after the metadata captured in the first snapshot is successfully received at the other distributed data storage system, payload data associated with the first value of the generation counter has been successfully migrated from the first virtual disk to the second virtual disk at the other distributed data storage system.
The above-recited embodiment wherein each storage service node comprises one or more processors and data storage resources. The above-recited embodiment wherein the application orchestrator is based on Kubernetes technology. The above-recited embodiment wherein the second storage service node is configured to determine, on expiration of the first snapshot, whether the first virtual disk is migration-enabled. The above-recited embodiment wherein the first set of payload data files at the third storage service nodes also includes third payload data files associated with a third value of the generation counter that preceded the first value, and wherein a migration of the third payload data files to the other distributed data storage system previously failed. The above-recited embodiment wherein within the framework of the application orchestrator: a storage class is configured with migration enabled and makes reference to the schedule-and-retention policy, and a persistent volume claim makes reference to the storage class. The above-recited embodiment wherein a proprietary container storage interface (CSI) driver is used for provisioning a persistent volume claim that references the first virtual disk. The above-recited embodiment wherein a proprietary container storage interface (CSI) driver within the framework of the application orchestrator is used (a) for provisioning a persistent volume claim that references the first virtual disk, (b) for creating the custom resource definition that defines the schedule-and-retention policy for the first snapshot, and (c) for enabling payload data migration from the first virtual disk to the second virtual disk. The above-recited embodiment wherein the distributed data storage system is configured to migrate payload data from the first virtual disk to the second virtual disk at the other distributed data storage system. The above-recited embodiment wherein a data mover system comprises the distributed data storage system and the other distributed data storage system. The above-recited embodiment wherein a barrier logic executing at the first storage service node ensures that migration from the distributed data storage system to the other distributed data storage system follows a controlled progression of operations. The above-recited embodiment wherein a barrier logic executing at the first storage service node ensures that migration from the distributed data storage system to the other distributed data storage system follows a controlled progression of operations, and wherein metadata is migrated only after all payload data files are migrated. The above-recited embodiment wherein a barrier logic executing at the first storage service node ensures that migration from the distributed data storage system to the other distributed data storage system follows a controlled progression of operations, and wherein metadata is not migrated and the migration is aborted if some payload data files are not successfully received at the second virtual disk. The above-recited embodiment wherein the first and second storage service nodes are the same storage service node. The above-recited embodiment wherein payload data from the one or more containerized applications of the application orchestrator are written to the first virtual disk via commit logs before being persisted. The above-recited embodiment wherein at least one of the distributed data storage system and the other distributed data storage system operates in a cloud computing environment. 
The above-recited embodiment wherein at least one of the distributed data storage system and the other distributed data storage system operates in a non-cloud computing environment. The above-recited embodiment wherein the one or more containerized applications are cloud-native to a cloud computing environment that hosts the framework of the application orchestrator.
According to another example embodiment, a first cloud computing environment hosting a first distributed data storage system for out-migrating data therefrom, wherein the first distributed data storage system comprises: a first storage service node configured in the first cloud computing environment; a second storage service node, which is configured in the first cloud computing environment and comprises metadata associated with a first virtual disk, wherein the first virtual disk is configured as a persistent volume in a framework of an application orchestrator hosted by the first cloud computing environment; third storage service nodes, which are configured in the first cloud computing environment and comprise payload data of the first virtual disk, wherein one or more containerized applications of the application orchestrator generate the payload data. The above-recited embodiment wherein the second storage service node is configured to: take a first snapshot of at least part of the metadata associated with the first virtual disk, wherein a custom resource definition within the framework of the application orchestrator defines a schedule-and-retention policy applicable to the first snapshot; on taking the first snapshot, cause the first storage service node to increment a generation counter from a first value to a second value. The above-recited embodiment wherein the second storage service node is configured to: based on determining that the first virtual disk is migration-enabled, identify at the third storage service nodes a first set of payload data files that are associated with the first value of the generation counter. The above-recited embodiment wherein the second storage service node is configured to: migrate the first set of payload data files associated with the first value of the generation counter to a second virtual disk at a second distributed data storage system, which is distinct from the first distributed data storage system, wherein the second virtual disk is configured to correspond to the first virtual disk, comprising: (i) cause the third storage service nodes comprising the one or more payload data files of the first set to transmit, via kernel-to-kernel copy operations, respective payload data files to corresponding storage service nodes at a second distributed data storage system, which is distinct from the first distributed data storage system, and (ii) based on receiving permission from the first storage service node, transmit metadata captured in the first snapshot to a storage service node at the second distributed data storage system using metadata-write operations, which are distinct from and exclusive of the kernel-to-kernel write operations.
The above-recited embodiment wherein a proprietary container storage interface (CSI) driver within the framework of the application orchestrator is used (a) for provisioning a persistent volume claim that references the first virtual disk, (b) for creating the custom resource definition that defines the schedule-and-retention policy for the first snapshot, and (c) for enabling payload data migration from the first virtual disk to the second virtual disk.
In other embodiments according to the present invention, a system or systems operates according to one or more of the methods and/or computer-readable media recited in the preceding paragraphs. In yet other embodiments, a method or methods operates according to one or more of the systems and/or computer-readable media recited in the preceding paragraphs. In yet more embodiments, a non-transitory computer-readable medium or media causes one or more computing devices having one or more processors and computer-readable memory to operate according to one or more of the systems and/or methods recited in the preceding paragraphs.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense, i.e., in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words using the singular or plural number may also include the plural or singular number respectively. The word “or” in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list. Likewise the term “and/or” in reference to a list of two or more items, covers all of the following interpretations of the word: any one of the items in the list, all of the items in the list, and any combination of the items in the list.
In some embodiments, certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all are necessary for the practice of the algorithms). In certain embodiments, operations, acts, functions, or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described. Software and other modules may reside and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. Software and other modules may be accessible via local computer memory, via a network, via a browser, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, interactive voice response, command line interfaces, and other suitable interfaces.
Further, processing of the various components of the illustrated systems can be distributed across multiple machines, networks, and other computing resources. Two or more components of a system can be combined into fewer components. Various components of the illustrated systems can be implemented in one or more virtual machines, rather than in dedicated computer hardware systems and/or computing devices. Likewise, the data repositories shown can represent physical and/or logical data storage, including, e.g., storage area networks or other distributed storage systems. Moreover, in some embodiments the connections between the components shown represent possible paths of data flow, rather than actual connections between hardware. While some examples of possible connections are shown, any of the subset of the components shown can communicate with any other subset of components in various implementations.
Embodiments are also described above with reference to flow chart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of the flow chart illustrations and/or block diagrams, and combinations of blocks in the flow chart illustrations and/or block diagrams, may be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer, specially-equipped computer (e.g., comprising a high-performance database server, a graphics subsystem, etc.) or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor(s) of the computer or other programmable data processing apparatus, create means for implementing the acts specified in the flow chart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flow chart and/or block diagram block or blocks. The computer program instructions may also be loaded to a computing device or other programmable data processing apparatus to cause operations to be performed on the computing device or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the acts specified in the flow chart and/or block diagram block or blocks.
Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.
To reduce the number of claims, certain aspects of the invention are presented below in certain claim forms, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C sec. 112(f) (AIA), other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words “means for,” but use of the term “for” in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to pursue additional claims after filing this application, in either this application or in a continuing application.
This application is a Continuation of U.S. patent application Ser. No. 17/179,160 filed on Feb. 18, 2021, which claims priority to U.S. Provisional Patent Application No. 63/082,631 filed on Sep. 24, 2020, which is incorporated by reference in its entirety, including Appendices, herein. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet of the present application are hereby incorporated by reference in their entireties under 37 CFR 1.57.
Number | Name | Date | Kind |
---|---|---|---|
4084231 | Capozzi et al. | Apr 1978 | A |
4267568 | Dechant et al. | May 1981 | A |
4283787 | Chambers | Aug 1981 | A |
4417321 | Chang et al. | Nov 1983 | A |
4641274 | Swank | Feb 1987 | A |
4654819 | Stiffler et al. | Mar 1987 | A |
4686620 | Ng | Aug 1987 | A |
4912637 | Sheedy et al. | Mar 1990 | A |
4995035 | Cole | Feb 1991 | A |
5005122 | Griffin | Apr 1991 | A |
5093912 | Dong et al. | Mar 1992 | A |
5133065 | Cheffetz et al. | Jul 1992 | A |
5193154 | Kitajima et al. | Mar 1993 | A |
5212772 | Masters | May 1993 | A |
5226157 | Nakano et al. | Jul 1993 | A |
5239647 | Anglin et al. | Aug 1993 | A |
5241668 | Eastridge et al. | Aug 1993 | A |
5241670 | Eastridge et al. | Aug 1993 | A |
5253342 | Blount | Oct 1993 | A |
5276860 | Fortier et al. | Jan 1994 | A |
5276867 | Kenley et al. | Jan 1994 | A |
5287500 | Stoppani, Jr. | Feb 1994 | A |
5301286 | Rajani | Apr 1994 | A |
5321816 | Rogan et al. | Jun 1994 | A |
5333315 | Saether et al. | Jul 1994 | A |
5347653 | Flynn et al. | Sep 1994 | A |
5410700 | Fecteau et al. | Apr 1995 | A |
5420996 | Aoyagi | May 1995 | A |
5448724 | Hayashi et al. | Sep 1995 | A |
5454099 | Myers et al. | Sep 1995 | A |
5491810 | Allen | Feb 1996 | A |
5495607 | Pisello et al. | Feb 1996 | A |
5504873 | Martin et al. | Apr 1996 | A |
5544345 | Carpenter et al. | Aug 1996 | A |
5544347 | Yanai et al. | Aug 1996 | A |
5559957 | Balk | Sep 1996 | A |
5559991 | Kanfi | Sep 1996 | A |
5619644 | Crockett et al. | Apr 1997 | A |
5638509 | Dunphy et al. | Jun 1997 | A |
5642496 | Kanfi | Jun 1997 | A |
5664204 | Wang | Sep 1997 | A |
5673381 | Huai et al. | Sep 1997 | A |
5699361 | Ding et al. | Dec 1997 | A |
5729743 | Squibb | Mar 1998 | A |
5751997 | Kullick et al. | May 1998 | A |
5758359 | Saxon | May 1998 | A |
5761677 | Senator et al. | Jun 1998 | A |
5764972 | Crouse et al. | Jun 1998 | A |
5778395 | Whiting et al. | Jul 1998 | A |
5812398 | Nielsen | Sep 1998 | A |
5813009 | Johnson et al. | Sep 1998 | A |
5813017 | Morris | Sep 1998 | A |
5875478 | Blumenau | Feb 1999 | A |
5887134 | Ebrahim | Mar 1999 | A |
5901327 | Ofek | May 1999 | A |
5924102 | Perks | Jul 1999 | A |
5950205 | Aviani, Jr. | Sep 1999 | A |
5974563 | Beeler, Jr. | Oct 1999 | A |
6021415 | Cannon et al. | Feb 2000 | A |
6026414 | Anglin | Feb 2000 | A |
6052735 | Ulrich et al. | Apr 2000 | A |
6076148 | Kedem et al. | Jun 2000 | A |
6094416 | Ying | Jul 2000 | A |
6131095 | Low et al. | Oct 2000 | A |
6131190 | Sidwell | Oct 2000 | A |
6148412 | Cannon et al. | Nov 2000 | A |
6154787 | Urevig et al. | Nov 2000 | A |
6161111 | Mutalik et al. | Dec 2000 | A |
6167402 | Yeager | Dec 2000 | A |
6212512 | Barney et al. | Apr 2001 | B1 |
6260069 | Anglin | Jul 2001 | B1 |
6269431 | Dunham | Jul 2001 | B1 |
6275953 | Vahalia et al. | Aug 2001 | B1 |
6301592 | Aoyama et al. | Oct 2001 | B1 |
6324581 | Xu et al. | Nov 2001 | B1 |
6327590 | Chidlovskii et al. | Dec 2001 | B1 |
6328766 | Long | Dec 2001 | B1 |
6330570 | Crighton et al. | Dec 2001 | B1 |
6330642 | Carteau | Dec 2001 | B1 |
6343324 | Hubis et al. | Jan 2002 | B1 |
RE37601 | Eastridge et al. | Mar 2002 | E |
6356801 | Goodman et al. | Mar 2002 | B1 |
6389432 | Pothapragada et al. | May 2002 | B1 |
6418478 | Ignatius et al. | Jul 2002 | B1 |
6421711 | Blumenau et al. | Jul 2002 | B1 |
6487561 | Ofek et al. | Nov 2002 | B1 |
6519679 | Devireddy et al. | Feb 2003 | B2 |
6538669 | Lagueux, Jr. et al. | Mar 2003 | B1 |
6542972 | Ignatius et al. | Apr 2003 | B2 |
6564228 | O'Connor | May 2003 | B1 |
6658436 | Oshinsky et al. | Dec 2003 | B2 |
6658526 | Nguyen et al. | Dec 2003 | B2 |
6721767 | DeMeno et al. | Apr 2004 | B2 |
6760723 | Oshinsky et al. | Jul 2004 | B2 |
6941429 | Kamvyssells et al. | Sep 2005 | B1 |
6959327 | Vogl | Oct 2005 | B1 |
6973555 | Fujiwara | Dec 2005 | B2 |
7000238 | Nadler | Feb 2006 | B2 |
7003641 | Prahlad | Feb 2006 | B2 |
7035880 | Crescenti | Apr 2006 | B1 |
7079341 | Kistler et al. | Jul 2006 | B2 |
7096418 | Singhal | Aug 2006 | B1 |
7107298 | Prahlad | Sep 2006 | B2 |
7130272 | Gai et al. | Oct 2006 | B1 |
7130970 | Devassy | Oct 2006 | B2 |
7143203 | Altmejd | Nov 2006 | B1 |
7162496 | Amarendran et al. | Jan 2007 | B2 |
7174433 | Kottomtharayil et al. | Feb 2007 | B2 |
7225220 | Gonzalez et al. | May 2007 | B2 |
7246207 | Kottomtharayil | Jul 2007 | B2 |
7260633 | Lette | Aug 2007 | B2 |
7315923 | Retnamma et al. | Jan 2008 | B2 |
7334144 | Schlumberger | Feb 2008 | B1 |
7340616 | Rothman et al. | Mar 2008 | B2 |
7343356 | Prahlad | Mar 2008 | B2 |
7343453 | Prahlad | Mar 2008 | B2 |
7346623 | Prahlad et al. | Mar 2008 | B2 |
7346751 | Prahlad | Mar 2008 | B2 |
7366846 | Boyd et al. | Apr 2008 | B2 |
7386744 | Barr | Jun 2008 | B2 |
7389311 | Crescenti et al. | Jun 2008 | B1 |
7395282 | Crescenti | Jul 2008 | B1 |
7440982 | Lu | Oct 2008 | B2 |
7448079 | Tremain | Nov 2008 | B2 |
7454569 | Kavuri | Nov 2008 | B2 |
7472079 | Fellenstein | Dec 2008 | B2 |
7483895 | Hysom | Jan 2009 | B2 |
7490207 | Amarendran et al. | Feb 2009 | B2 |
7500053 | Kavuri | Mar 2009 | B1 |
7502820 | Manders | Mar 2009 | B2 |
7516346 | Pinheiro et al. | Apr 2009 | B2 |
7516348 | Ofer | Apr 2009 | B1 |
7526798 | Chao | Apr 2009 | B2 |
7529782 | Prahlad et al. | May 2009 | B2 |
7536291 | Vijayan Retnamma et al. | May 2009 | B1 |
7543125 | Gokhale | Jun 2009 | B2 |
7546324 | Prahlad et al. | Jun 2009 | B2 |
7546475 | Mayo et al. | Jun 2009 | B2 |
7568080 | Prahlad et al. | Jul 2009 | B2 |
7584227 | Gokhale et al. | Sep 2009 | B2 |
7587570 | Sarkar et al. | Sep 2009 | B2 |
7603386 | Amarendran et al. | Oct 2009 | B2 |
7606844 | Kottomtharayil | Oct 2009 | B2 |
7613752 | Prahlad | Nov 2009 | B2 |
7617191 | Wilbrink et al. | Nov 2009 | B2 |
7617253 | Prahlad et al. | Nov 2009 | B2 |
7617262 | Prahlad et al. | Nov 2009 | B2 |
7620710 | Kottomtharayil | Nov 2009 | B2 |
7627827 | Taylor et al. | Dec 2009 | B2 |
7631351 | Erofeev | Dec 2009 | B2 |
7636743 | Erofeev | Dec 2009 | B2 |
7651593 | Prahlad | Jan 2010 | B2 |
7653668 | Shelat | Jan 2010 | B1 |
7657550 | Prahlad | Feb 2010 | B2 |
7660807 | Prahlad | Feb 2010 | B2 |
7661028 | Erofeev | Feb 2010 | B2 |
7668884 | Prahlad | Feb 2010 | B2 |
7685269 | Thrasher et al. | Mar 2010 | B1 |
7694070 | Mogi | Apr 2010 | B2 |
7734669 | Kottomtharayil et al. | Jun 2010 | B2 |
7739541 | Rao | Jun 2010 | B1 |
7739548 | Goodrum et al. | Jun 2010 | B2 |
7747579 | Prahlad et al. | Jun 2010 | B2 |
7761736 | Nguyen et al. | Jul 2010 | B2 |
7765167 | Prahlad | Jul 2010 | B2 |
7769616 | Ollivier | Aug 2010 | B2 |
7778984 | Zhang | Aug 2010 | B2 |
7792789 | Prahlad | Sep 2010 | B2 |
7797453 | Meier et al. | Sep 2010 | B2 |
7801864 | Prahlad | Sep 2010 | B2 |
7809914 | Kottomtharayil | Oct 2010 | B2 |
7814149 | Stringham | Oct 2010 | B1 |
7814351 | Redlich et al. | Oct 2010 | B2 |
7818082 | Roumeliotis et al. | Oct 2010 | B2 |
7822967 | Fung | Oct 2010 | B2 |
7840537 | Gokhale | Nov 2010 | B2 |
7882077 | Gokhale | Feb 2011 | B2 |
7899788 | Chadhok et al. | Mar 2011 | B2 |
7917438 | Kenedy et al. | Mar 2011 | B2 |
7975061 | Gokhale et al. | Jul 2011 | B1 |
7979389 | Prahlad et al. | Jul 2011 | B2 |
7996270 | Sundaresan | Aug 2011 | B2 |
8001277 | Mega | Aug 2011 | B2 |
8037028 | Prahlad | Oct 2011 | B2 |
8065166 | Maresh | Nov 2011 | B2 |
8108427 | Prahlad | Jan 2012 | B2 |
8112605 | Kavuri | Feb 2012 | B2 |
8134727 | Shmunis | Mar 2012 | B1 |
8140786 | Bunte | Mar 2012 | B2 |
8140794 | Prahlad et al. | Mar 2012 | B2 |
8156086 | Lu et al. | Apr 2012 | B2 |
8170995 | Prahlad et al. | May 2012 | B2 |
8219524 | Gokhale | Jul 2012 | B2 |
8229954 | Kottomtharayil | Jul 2012 | B2 |
8230195 | Amarendran et al. | Jul 2012 | B2 |
8266406 | Kavuri | Sep 2012 | B2 |
8285681 | Prahlad | Oct 2012 | B2 |
8296534 | Gupta et al. | Oct 2012 | B1 |
8307177 | Prahlad et al. | Nov 2012 | B2 |
8316091 | Hirvela et al. | Nov 2012 | B2 |
8321688 | Auradkar | Nov 2012 | B2 |
8352608 | Keagy et al. | Jan 2013 | B1 |
8364652 | Vijayan et al. | Jan 2013 | B2 |
8364802 | Keagy et al. | Jan 2013 | B1 |
8370307 | Wolfe | Feb 2013 | B2 |
8370542 | Lu et al. | Feb 2013 | B2 |
8396838 | Brockway et al. | Mar 2013 | B2 |
8407190 | Prahlad | Mar 2013 | B2 |
8417697 | Ghemawat et al. | Apr 2013 | B2 |
8429630 | Nickolov | Apr 2013 | B2 |
8433682 | Ngo | Apr 2013 | B2 |
8434131 | Varadharajan et al. | Apr 2013 | B2 |
8510573 | Muller | Aug 2013 | B2 |
8527549 | Cidon | Sep 2013 | B2 |
8566362 | Mason et al. | Oct 2013 | B2 |
8578120 | Attarde et al. | Nov 2013 | B2 |
8595191 | Prahlad et al. | Nov 2013 | B2 |
8612439 | Prahlad et al. | Dec 2013 | B2 |
8626741 | Vijakumar et al. | Jan 2014 | B2 |
8635184 | Hsu et al. | Jan 2014 | B2 |
8660038 | Pascazio | Feb 2014 | B1 |
8674823 | Contrario et al. | Mar 2014 | B1 |
8683103 | Ripberger | Mar 2014 | B2 |
8700754 | Riley | Apr 2014 | B2 |
8706867 | Vijayan | Apr 2014 | B2 |
8707070 | Muller | Apr 2014 | B2 |
8719767 | Bansod | May 2014 | B2 |
8769048 | Kottomtharayil | Jul 2014 | B2 |
8780400 | Shmunis | Jul 2014 | B2 |
8799242 | Leonard et al. | Aug 2014 | B2 |
8805971 | Roth et al. | Aug 2014 | B1 |
8849761 | Prahlad | Sep 2014 | B2 |
8849955 | Prahlad | Sep 2014 | B2 |
8924511 | Brand | Dec 2014 | B2 |
8950009 | Vijayan et al. | Feb 2015 | B2 |
8954446 | Vijayan et al. | Feb 2015 | B2 |
8959299 | Ngo et al. | Feb 2015 | B2 |
9020900 | Vijayan et al. | Apr 2015 | B2 |
9021282 | Muller | Apr 2015 | B2 |
9021307 | Parameswaran et al. | Apr 2015 | B1 |
9098495 | Gokhale | Aug 2015 | B2 |
9116633 | Sancheti et al. | Aug 2015 | B2 |
9189170 | Kripalani et al. | Nov 2015 | B2 |
9195636 | Smith | Nov 2015 | B2 |
9239687 | Vijayan et al. | Jan 2016 | B2 |
9262496 | Kumarasamy et al. | Feb 2016 | B2 |
9286110 | Mitkar et al. | Mar 2016 | B2 |
9298715 | Kumarasamy et al. | Mar 2016 | B2 |
9311121 | Deshpande et al. | Apr 2016 | B2 |
9342537 | Kumarasamy et al. | May 2016 | B2 |
9378035 | Kripalani | Jun 2016 | B2 |
9411534 | Lakshman et al. | Aug 2016 | B2 |
9424151 | Lakshman et al. | Aug 2016 | B2 |
9448731 | Nallathambi et al. | Sep 2016 | B2 |
9451023 | Sancheti et al. | Sep 2016 | B2 |
9454537 | Prahlad et al. | Sep 2016 | B2 |
9461881 | Kumarasamy et al. | Oct 2016 | B2 |
9471578 | Nallathambi et al. | Oct 2016 | B2 |
9483205 | Lakshman et al. | Nov 2016 | B2 |
9495404 | Kumarasamy et al. | Nov 2016 | B2 |
9558085 | Lakshman | Jan 2017 | B2 |
9633033 | Vijayan et al. | Apr 2017 | B2 |
9639274 | Maranna | May 2017 | B2 |
9639426 | Pawar et al. | May 2017 | B2 |
9641388 | Kripalani et al. | May 2017 | B2 |
9710465 | Dornemann et al. | Jul 2017 | B2 |
9774672 | Nallathambi et al. | Sep 2017 | B2 |
9798489 | Lakshman et al. | Oct 2017 | B2 |
9864530 | Lakshman | Jan 2018 | B2 |
9875063 | Lakshman | Jan 2018 | B2 |
9886346 | Kumarasamy et al. | Feb 2018 | B2 |
9959333 | Kumarasamy | May 2018 | B2 |
9983936 | Dornemann et al. | May 2018 | B2 |
10042716 | Nallathambi et al. | Aug 2018 | B2 |
10067722 | Lakshman | Sep 2018 | B2 |
10084873 | Dornemann | Sep 2018 | B2 |
10162528 | Sancheti et al. | Dec 2018 | B2 |
10210048 | Sancheti et al. | Feb 2019 | B2 |
10228962 | Dornemann et al. | Mar 2019 | B2 |
10248174 | Lakshman et al. | Apr 2019 | B2 |
10248657 | Prahlad et al. | Apr 2019 | B2 |
10264074 | Vijayan et al. | Apr 2019 | B2 |
10296368 | Dornemann et al. | May 2019 | B2 |
10310953 | Vijayan et al. | Jun 2019 | B2 |
10311150 | Bansod et al. | Jun 2019 | B2 |
10346259 | Gokhale et al. | Jul 2019 | B2 |
10379598 | Muller | Aug 2019 | B2 |
10387266 | Kumarasamy et al. | Aug 2019 | B2 |
10417102 | Sanakkayala et al. | Sep 2019 | B2 |
10592153 | Subramaniam et al. | Mar 2020 | B1 |
10592350 | Dornemann | Mar 2020 | B2 |
10613939 | Mitkar et al. | Apr 2020 | B2 |
10664352 | Rana | May 2020 | B2 |
10678758 | Dornemann | Jun 2020 | B2 |
10684924 | Kilaru et al. | Jun 2020 | B2 |
10691187 | Lakshman et al. | Jun 2020 | B2 |
10691666 | McDowell et al. | Jun 2020 | B1 |
10732885 | Gutta | Aug 2020 | B2 |
10740193 | Dhatrak | Aug 2020 | B2 |
10740300 | Lakshman et al. | Aug 2020 | B1 |
10747630 | Sanakkayala et al. | Aug 2020 | B2 |
10768971 | Dornemann et al. | Sep 2020 | B2 |
10776209 | Pawar et al. | Sep 2020 | B2 |
10776329 | Rao et al. | Sep 2020 | B2 |
10795577 | Lakshman et al. | Oct 2020 | B2 |
10846024 | Lakshman et al. | Nov 2020 | B2 |
10848468 | Lakshman et al. | Nov 2020 | B1 |
10853195 | Ashraf et al. | Dec 2020 | B2 |
10877928 | Nagrale et al. | Dec 2020 | B2 |
10891198 | Nara et al. | Jan 2021 | B2 |
10917471 | Karumbunathan et al. | Feb 2021 | B1 |
10949308 | Lyer et al. | Mar 2021 | B2 |
11099956 | Polimera et al. | Aug 2021 | B1 |
11106632 | Bangalore et al. | Aug 2021 | B2 |
11113246 | Mitkar et al. | Sep 2021 | B2 |
11327663 | Bhagi | May 2022 | B2 |
11340672 | Lakshman | May 2022 | B2 |
11442896 | Agrawal | Sep 2022 | B2 |
11570243 | Camargos et al. | Jan 2023 | B2 |
11789830 | Jain | Oct 2023 | B2 |
20020035511 | Haji et al. | Mar 2002 | A1 |
20020083079 | Meier et al. | Jun 2002 | A1 |
20020095609 | Tokunaga | Jul 2002 | A1 |
20020129047 | Cane et al. | Sep 2002 | A1 |
20020129106 | Gutfreund | Sep 2002 | A1 |
20020194033 | Huff | Dec 2002 | A1 |
20020194511 | Swoboda | Dec 2002 | A1 |
20030140068 | Yeung | Jul 2003 | A1 |
20030200222 | Feinberg et al. | Oct 2003 | A1 |
20040210724 | Koning et al. | Oct 2004 | A1 |
20050076251 | Barr et al. | Apr 2005 | A1 |
20050251522 | Clark | Nov 2005 | A1 |
20050268121 | Rothman et al. | Dec 2005 | A1 |
20050289414 | Adya et al. | Dec 2005 | A1 |
20060058994 | Ravi et al. | Mar 2006 | A1 |
20060101174 | Kanamaru et al. | May 2006 | A1 |
20060190775 | Aggarwal et al. | Aug 2006 | A1 |
20060206507 | Dahbour | Sep 2006 | A1 |
20060224846 | Amarendran et al. | Oct 2006 | A1 |
20060236073 | Soules et al. | Oct 2006 | A1 |
20060242356 | Mogi et al. | Oct 2006 | A1 |
20060245411 | Chen et al. | Nov 2006 | A1 |
20060251067 | Desanti et al. | Nov 2006 | A1 |
20070073970 | Yamazaki et al. | Mar 2007 | A1 |
20070079156 | Fujimoto | Apr 2007 | A1 |
20070101173 | Fung | May 2007 | A1 |
20070168606 | Takai et al. | Jul 2007 | A1 |
20070234302 | Suzuki et al. | Oct 2007 | A1 |
20080005168 | Huff et al. | Jan 2008 | A1 |
20080010521 | Goodrum et al. | Jan 2008 | A1 |
20080147460 | Ollivier | Jun 2008 | A1 |
20080162592 | Huang et al. | Jul 2008 | A1 |
20080183891 | Ni et al. | Jul 2008 | A1 |
20080228771 | Prahlad et al. | Sep 2008 | A1 |
20080244032 | Gilson et al. | Oct 2008 | A1 |
20080244177 | Crescenti et al. | Oct 2008 | A1 |
20080256384 | Branson et al. | Oct 2008 | A1 |
20080270461 | Gordon et al. | Oct 2008 | A1 |
20080301479 | Wood | Dec 2008 | A1 |
20090077443 | Nguyen et al. | Mar 2009 | A1 |
20090198677 | Sheehy et al. | Aug 2009 | A1 |
20090198825 | Miller et al. | Aug 2009 | A1 |
20090210464 | Chiang-Lin | Aug 2009 | A1 |
20090268903 | Bojinov et al. | Oct 2009 | A1 |
20090282020 | McSheffrey et al. | Nov 2009 | A1 |
20090287665 | Prahlad et al. | Nov 2009 | A1 |
20090319534 | Gokhale | Dec 2009 | A1 |
20090327477 | Madison, Jr. et al. | Dec 2009 | A1 |
20100023722 | Tabbara et al. | Jan 2010 | A1 |
20100064033 | Travostino et al. | Mar 2010 | A1 |
20100070448 | Omoigui | Mar 2010 | A1 |
20100070466 | Prahlad et al. | Mar 2010 | A1 |
20100070474 | Lad | Mar 2010 | A1 |
20100070725 | Prahlad et al. | Mar 2010 | A1 |
20100082672 | Kottomtharayil et al. | Apr 2010 | A1 |
20100082700 | Parab | Apr 2010 | A1 |
20100082713 | Frid-Nielsen et al. | Apr 2010 | A1 |
20100162002 | Dodgson et al. | Jun 2010 | A1 |
20100190478 | Brewer et al. | Jul 2010 | A1 |
20100235333 | Bates et al. | Sep 2010 | A1 |
20100257403 | Virk et al. | Oct 2010 | A1 |
20100269164 | Sosnosky et al. | Oct 2010 | A1 |
20100274772 | Samuels | Oct 2010 | A1 |
20100318782 | Auradkar et al. | Dec 2010 | A1 |
20100332401 | Prahlad et al. | Dec 2010 | A1 |
20100333116 | Prahlad et al. | Dec 2010 | A1 |
20110010518 | Kavuri et al. | Jan 2011 | A1 |
20110022642 | DeMilo et al. | Jan 2011 | A1 |
20110040824 | Harm | Feb 2011 | A1 |
20110055161 | Wolfe | Mar 2011 | A1 |
20110191544 | Naga et al. | Aug 2011 | A1 |
20110276713 | Brand | Nov 2011 | A1 |
20110277027 | Hayton et al. | Nov 2011 | A1 |
20120054626 | Odenheimer | Mar 2012 | A1 |
20120084262 | Dwarampudi et al. | Apr 2012 | A1 |
20120110186 | Kapur et al. | May 2012 | A1 |
20120131645 | Harm | May 2012 | A1 |
20120150818 | Retnamma et al. | Jun 2012 | A1 |
20120240183 | Sinha | Sep 2012 | A1 |
20130007245 | Malik et al. | Jan 2013 | A1 |
20130035795 | Pfeiffer et al. | Feb 2013 | A1 |
20130036092 | Lafont et al. | Feb 2013 | A1 |
20130125198 | Ferguson et al. | May 2013 | A1 |
20130238572 | Prahlad et al. | Sep 2013 | A1 |
20130238969 | Smith et al. | Sep 2013 | A1 |
20130262385 | Vibhor et al. | Oct 2013 | A1 |
20130262615 | Ankireddypalle | Oct 2013 | A1 |
20130297902 | Collins et al. | Nov 2013 | A1 |
20130326279 | Chavda et al. | Dec 2013 | A1 |
20140052706 | Misra | Feb 2014 | A1 |
20140189432 | Gokhale et al. | Jul 2014 | A1 |
20140196038 | Kottomtharayil et al. | Jul 2014 | A1 |
20140201140 | Vibhor et al. | Jul 2014 | A1 |
20140201157 | Pawar et al. | Jul 2014 | A1 |
20140283010 | Rutkowski et al. | Sep 2014 | A1 |
20140310706 | Bruso et al. | Oct 2014 | A1 |
20140380014 | Moyer | Dec 2014 | A1 |
20150074536 | Varadharajan et al. | Mar 2015 | A1 |
20150113055 | Vijayan et al. | Apr 2015 | A1 |
20150127967 | Dutton et al. | May 2015 | A1 |
20150198995 | Muller et al. | Jul 2015 | A1 |
20160042090 | Mitkar et al. | Feb 2016 | A1 |
20160100013 | Vijayan et al. | Apr 2016 | A1 |
20160142485 | Mitkar et al. | May 2016 | A1 |
20160350302 | Lakshman | Dec 2016 | A1 |
20160350391 | Vijayan et al. | Dec 2016 | A1 |
20170039218 | Prahlad et al. | Feb 2017 | A1 |
20170126807 | Vijayan et al. | May 2017 | A1 |
20170168903 | Dornemann et al. | Jun 2017 | A1 |
20170185488 | Kumarasamy et al. | Jun 2017 | A1 |
20170193003 | Vijayan et al. | Jul 2017 | A1 |
20170235647 | Kilaru et al. | Aug 2017 | A1 |
20170242871 | Kilaru et al. | Aug 2017 | A1 |
20170302588 | Mihailovici et al. | Oct 2017 | A1 |
20180048556 | Chen | Feb 2018 | A1 |
20180276085 | Mitkar et al. | Sep 2018 | A1 |
20180285202 | Bhagi et al. | Oct 2018 | A1 |
20180285205 | Mehta et al. | Oct 2018 | A1 |
20180285383 | Nara et al. | Oct 2018 | A1 |
20180375938 | Vijayan et al. | Dec 2018 | A1 |
20190050421 | Saxena | Feb 2019 | A1 |
20190068464 | Bernat | Feb 2019 | A1 |
20190109713 | Clark | Apr 2019 | A1 |
20190179805 | Prahlad et al. | Jun 2019 | A1 |
20190182325 | Vijayan et al. | Jun 2019 | A1 |
20190278662 | Nagrale et al. | Sep 2019 | A1 |
20190303246 | Gokhale et al. | Oct 2019 | A1 |
20190332294 | Kilari | Oct 2019 | A1 |
20200025004 | Dunnigan et al. | Jan 2020 | A1 |
20200034248 | Nara et al. | Jan 2020 | A1 |
20200073574 | Pradhan | Mar 2020 | A1 |
20200233845 | Dornemann et al. | Jul 2020 | A1 |
20200241613 | Lakshman et al. | Jul 2020 | A1 |
20200278915 | Degaonkar et al. | Sep 2020 | A1 |
20200319694 | Mohanty et al. | Oct 2020 | A1 |
20200341871 | Shveidel | Oct 2020 | A1 |
20200349027 | Bansod et al. | Nov 2020 | A1 |
20200394110 | Rao et al. | Dec 2020 | A1 |
20200401489 | Mitkar et al. | Dec 2020 | A1 |
20210034468 | Patel | Feb 2021 | A1 |
20210049079 | Kumar et al. | Feb 2021 | A1 |
20210064484 | Dornemann et al. | Mar 2021 | A1 |
20210075768 | Polimera et al. | Mar 2021 | A1 |
20210089215 | Ashraf et al. | Mar 2021 | A1 |
20210173744 | Agrawal et al. | Jun 2021 | A1 |
20210200648 | Clark | Jul 2021 | A1 |
20210209060 | Kottomtharayil et al. | Jul 2021 | A1 |
20210218636 | Parvathamvenkatas et al. | Jul 2021 | A1 |
20210255771 | Kilaru et al. | Aug 2021 | A1 |
20210271564 | Mitkar et al. | Sep 2021 | A1 |
20210286639 | Kumar | Sep 2021 | A1 |
20210357246 | Kumar et al. | Nov 2021 | A1 |
20220103622 | Camargos et al. | Mar 2022 | A1 |
20220214997 | Kavaipatti Anantharamakrishnan et al. | Jul 2022 | A1 |
Number | Date | Country |
---|---|---|
0259912 | Mar 1988 | EP |
0405926 | Jan 1991 | EP |
0467546 | Jan 1992 | EP |
0541281 | May 1993 | EP |
0774715 | May 1997 | EP |
0809184 | Nov 1997 | EP |
0817040 | Jan 1998 | EP |
0899662 | Mar 1999 | EP |
0981090 | Feb 2000 | EP |
WO 9513580 | May 1995 | WO |
WO 9912098 | Mar 1999 | WO |
2006052872 | Nov 2005 | WO |
WO 2016004120 | Jan 2016 | WO |
Entry |
---|
PTAB-IPR2021-00674—('723) POPR Final, filed Jul. 8, 2021, in 70 pages. |
PTAB-IPR2021-00674—Mar. 31, 2021 723 Petition, filed Mar. 31, 2021, in 87 pages. |
PTAB-IPR2021-00674—Mar. 31, 2021 Explanation for Two Petitions, filed Mar. 31, 2021, in 9 pages. |
PTAB-IPR2021-00674—Exhibit 1001—U.S. Pat. No. 9,740,723, Issue Date Aug. 22, 2017, in 51 pages. |
PTAB-IPR2021-00674—Exhibit 1002—Jagadish Declaration, dated Mar. 31, 2021, in 200 pages. |
PTAB-IPR2021-00674—Exhibit 1003—U.S. Pat. No. 9,740,723 file history, Issue Date Aug. 22, 2017, in 594 pages. |
PTAB-IPR2021-00674—Exhibit 1004—Virtual Machine Monitors Current Technology and Future Trends, May 2005, in 9 pages. |
PTAB-IPR2021-00674—Exhibit 1005—Virtualization Overview, 2005, 11 pages. |
PTAB-IPR2021-00674—Exhibit 1006—Let's Get Virtual_Final Stamped, May 14, 2007, in 42 pages. |
PTAB-IPR2021-00674—Exhibit 1007—U.S. Pat. No. 8,458,419—Basler, Issue Date Jun. 4, 2013, in 14 pages. |
PTAB-IPR2021-00674—Exhibit 1008—US20080244028A1 (Le), Publication Date Oct. 2, 2008, in 22 pages. |
PTAB-IPR2021-00674—Exhibit 1009—U.S. Appl. No. 60/920,847 (Le Provisional), filed Mar. 29, 2007, in 70 pages. |
PTAB-IPR2021-00674—Exhibit 1010—Discovery Systems in Ubiquitous Computing (Edwards), 2006, in 8 pages. |
PTAB-IPR2021-00674—Exhibit 1011—HTTP The Definitive Guide excerpts (Gourley), 2002, in 77 pages. |
PTAB-IPR2021-00674—Exhibit 1012—VCB White Paper (Wayback Mar. 21, 2007), retrieved Mar. 21, 2007, Copyright Date 1998-2006, in 6 pages. |
PTAB-IPR2021-00674—Exhibit 1013—Scripting VMware excerpts (Muller), 2006, in 66 pages. |
PTAB-IPR2021-00674—Exhibit 1014—Rob's Guide to Using VMWare excerpts (Bastiaansen), Sep. 2005, in 178 pages. |
PTAB-IPR2021-00674—Exhibit 1015—Carrier, 2005 in 94 pages. |
PTAB-IPR2021-00674—Exhibit 1016—U.S. Pat. No. 7,716,171 (Kryger), Issue Date May 11, 2010, in 18 pages. |
PTAB-IPR2021-00674—Exhibit 1017—RFC2609, Jun. 1999, in 33 pages. |
PTAB-IPR2021-00674—Exhibit 1018—MS Dictionary excerpt, 2002, in 3 pages. |
PTAB-IPR2021-00674—Exhibit 1019—Commvault v. Rubrik Complaint, Filed Apr. 21, 2020, in 29 pages. |
PTAB-IPR2021-00674—Exhibit 1020—Commvault v. Rubrik Scheduling Order, Filed Feb. 17, 2021, in 15 pages. |
PTAB-IPR2021-00674—Exhibit 1021—Duncan Affidavit, Dated Mar. 3, 2021, in 16 pages. |
PTAB-IPR2021-00674—Exhibit 1022—Hall-Ellis Declaration, dated Mar. 30, 2021, in 291 pages. |
PTAB-IPR2021-00674—Exhibit 1023—Digital_Data_Integrity_2007_Appendix_A_UMCP, 2007, in 24 pages. |
PTAB-IPR2021-00674—Exhibit 1024—Rob's Guide—Amazon review (Jan 4, 2007), retrieved Jan. 4, 2007, in 5 pages. |
PTAB-IPR2021-00674—Exhibit 2001—esxRanger, 2006, in 102 pages. |
PTAB-IPR2021-00674—Exhibit 2002—Want, 1995, in 31 pages. |
PTAB-IPR2021-00674—Exhibit 2003—Shea, retrieved Jun. 10, 2021, in 5 pages. |
PTAB-IPR2021-00674—Exhibit 2004—Jones Declaration, Dated Jul. 8, 2021, in 36 pages. |
PTAB-IPR2021-00674—Exhibit 3001, dated Aug. 30, 2021, in 2 pages. |
PTAB-IPR2021-00674—Exhibit IPR2021-00674 Joint Request to Seal Settlement Agreement, dated Aug. 31, 2021, in 4 pages. |
PTAB-IPR2021-00674—Joint Motion to Terminate, Filed Aug. 31, 2021, in 7 pages. |
PTAB-IPR2021-00674—Response to Notice Ranking Petitions Final, filed Jul. 8, 2021, in 7 pages. |
PTAB-IPR2021-00674—Termination Order, filed Sep. 1, 2021, in 4 pages. |
Case No. 1:20-cv-00525-MN, Amended Complaint DDE-1-20-cv-00525-15, filed Jul. 27, 2020, in 30 pages. |
Case No. 1:20-cv-00525-MN, Complaint DDE-1-20-cv-00525-1, Apr. 21, 2020, in 28 pages. |
Case No. 1:20-cv-00525-MN, First Amended Answer DDE-1-20-cv-00525-95, filed Jul. 23, 2021, in 38 pages. |
Case No. 1:20-cv-00525-MN, Joint Claim Construction Brief DDE-1-20-cv-00525-107, filed Oct. 1, 2021, in 79 pages. |
Case No. 1:20-cv-00525-MN, Joint Claim Construction Brief Exhibits DDE-1-20-cv-00525-107-1, filed Oct. 1, 2021, in 488 pages (in 7 parts). |
Case No. 1:20-cv-00525-MN, Oral Order DDE-1-20-cv-00524-78_DDE-1-20-cv-00525-77, dated May 24, 2021, in 1 page. |
Case No. 1:20-cv-00525-MN, Oral Order DDE-1-20-cv-00524-86_DDE-1-20-cv-00525-87, dated Jun. 29, 2021, in 1 page. |
Case No. 1:20-cv-00525-MN, Order DDE-1-20-cv-00525-38_DDE-1-20-cv-00524-42, filed Feb. 10, 2021, in 4 pages. |
Case No. 20-525-MN-CJB, Joint Claim Construction Statement DDE-1-20-cv-00525-119, filed Oct. 29, 2021, in 12 pages. |
Case No. 1:20-525-MN-CJB, Farnan Letter DDE-1-20-cv-00525-111, filed Oct. 6, 2021, in 2 pages. |
Case No. 1:20-525-MN-CJB, Farnan Letter Exhibit A DDE-1-20-cv-00525-111-1, filed Oct. 6, 2021, in 7 pages. |
Case No. 1:20-cv-00525-CFC-CJB, Joint Appendix of Exhibits 1-6, filed Jan. 13, 2022, in 2 pages. |
Case No. 1:20-cv-00525-CFC-CJB, Joint Appendix of Exhibits 1-6, filed Jan. 13, 2022, in 224 pages. |
Case No. 1:20-cv-00525-CFC-CJB, Joint Claim Construction Brief On Remaining Disputed Terms, filed Jan. 13, 2022, in 54 pages. |
Case No. 120-cv-00525-MN—Stipulation of Dismissal, filed Jan. 27, 2022, in 2 pages. |
PTAB-IPR2021-00675—00589 590 675 Termination Order, filed Sep. 1, 2021, in 4 pages. |
PTAB-IPR2021-00675—Joint Motion to Terminate, Aug. 31, 2021, in 7 pages. |
PTAB-IPR2021-00675—Joint Request to Seal Settlement Agreement, Aug. 31, 2021, in 4 pages. |
PTAB-IPR2021-00675—Preliminary Sur-Reply Final, filed Aug. 16, 2021, in 6 pages. |
PTAB-IPR2021-00675—Reply to POPR, filed Aug. 9, 2021, in 6 pages. |
PTAB-IPR2021-00675—POPR Final, filed Jul. 9, 2021, in 48 pages. |
PTAB-IPR2021-00675—Mar. 25, 2021 IPR—Petition—Cls 5 and 21—Final, dated Mar. 25, 2021, in 72 pages. |
PTAB-IPR2021-00675—Exhibit 1001—U.S. Pat. No. 10,248,657, issue date Apr. 2, 2019, in 85 pages. |
PTAB-IPR2021-00675—Exhibit 1002—Jagadish_Declaration_Final, filed Mar. 24, 2021, in 175 pages. |
PTAB-IPR2021-00675—Exhibit 1003—WO2008070688A1 (Bunte), dated Jun. 12, 2008, in 71 pages. |
PTAB-IPR2021-00675—Exhibit 1004—US20070156842A1(Vermeulen), Publication Date Jul. 5, 2007, 1 in 69 pages. |
PTAB-IPR2021-00675—Exhibit 1005—US20020059317A1(Black), Publication Date May 16, 2002, in 14 pages. |
PTAB-IPR2021-00675—Exhibit 1006—US_20080133835 (Zhu), Publication Date Jun. 5, 2008, in 14 pages. |
PTAB-IPR2021-00675—Exhibit 1007—Introduction to AWS for Java Developers (Monson-Haefel), Jun. 26, 2007, in 3 pages. |
PTAB-IPR2021-00675—Exhibit 1008—U.S. Pat. No. 8,140,786 (Bunte Patent), Issue Date Mar. 20, 2012, in 37 pages. |
PTAB-IPR2021-00675—Exhibit 1009—US20060218435A1(Ingen), Publication Date Sep. 28, 2006, in 27 pages. |
PTAB-IPR2021-00675—Exhibit 1010—Declaration of Duncan Hall, dated Feb. 18, 2021, in 81 pages. |
PTAB-IPR2021-00675—Exhibit 1011—Controlling the Enterprise Information Life Cycle, Jun. 10, 2005 in 7 pages. |
PTAB-IPR2021-00675—Exhibit 1012—Deduplication_Stop repeating yourself _ Network World, Sep. 25, 2006, in 6 pages. |
PTAB-IPR2021-00675—Exhibit 1013—Lose Unwanted Gigabytes Overnight (McAdams), Feb. 26, 2007 in 5 pages. |
PTAB-IPR2021-00675—Exhibit 1014—Data Domain releases DD120 for backup and deduplication _ Network World, Feb. 26, 2008 in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1015—PTAB-IPR2021-00675—Exhibit 1015—Hall-Ellis Declaration_Part1, dated Feb. 24, 2021, in 299 pages of Part 1 of 5. |
PTAB-IPR2021-00675—Exhibit 1015—Hall-Ellis Declaration_Part2, dated Feb. 24, 2021, in 306 pages of Part 2 of 5. |
PTAB-IPR2021-00675—Exhibit 1015—Hall-Ellis Declaration_Part3, dated Feb. 24, 2021, in 272 pages of Part 3 of 5. |
PTAB-IPR2021-00675—Exhibit 1015—Hall-Ellis Declaration_Part4, dated Feb. 24, 2021, in 364 pages of Part 4 of 5. |
PTAB-IPR2021-00675—Exhibit 1015—Hall-Ellis Declaration_Part5., dated Feb. 24, 2021, in 480 pages of Part 5 of 5. |
PTAB-IPR2021-00675—Exhibit 1016—Amazon.com Unveils Data Storage Service_ Computerworld, Mar. 20, 2006 in 4 pages. |
PTAB-IPR2021-00675—Exhibit 1017—Amazon Simple Storage Service (Amazon S3)b, Mar. 10, 2008, in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1018—How I cut my data center costs by $700,000 _ Computerworld, Mar. 30, 2007 in 4 pages. |
PTAB-IPR2021-00675—Exhibit 1019—Microsoft Office Outlook 2003 (Boyce)_Part1, 2004, in 269 pages, Part 1 of 4. |
PTAB-IPR2021-00675—Exhibit 1019—Microsoft Office Outlook 2003 (Boyce)_Part2, 2004, in 303 pages, Part 2 of 4. |
PTAB-IPR2021-00675—Exhibit 1019—Microsoft Office Outlook 2003 (Boyce)_Part3, 2004, in 274 pages, Part 3 of 4. |
PTAB-IPR2021-00675—Exhibit 1019—Microsoft Office Outlook 2003 (Boyce)_Part4, 2004, in 220 pages, Part 4 of 4. |
PTAB-IPR2021-00675—Exhibit 1020—Finding Similar Files in a Large File System (Udi), Oct. 1993 in 11 pages. |
PTAB-IPR2021-00675—Exhibit 1021—Components of Amazon S3, Mar. 1, 2006, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1022—Apr. 21, 2020 [1] Complaint, filed Apr. 21, 2020, in 300 pages. |
PTAB-IPR2021-00675—Exhibit 1023—Feb. 17, 2021 Scheduling Order Case [dckt 46_0], filed Feb. 17, 2021, in 15 pages. |
PTAB-IPR2021-00675—Exhibit 1024—Corporate IT Warms Up to Online Backup Services _ Computerworld, Feb. 4, 2008, in 6 pages. |
PTAB-IPR2021-00675—Exhibit 1025—Who Are The Biggest Users of Amazon Web Services Its Not Startups TechCrunch 1, Apr. 21, 2008, in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1026—Programming Amazon Web Services (Murty), Mar. 2008, OReilly Media, Inc. in 595 pages. |
PTAB-IPR2021-00675—Exhibit 1027—Announcement Lower Data Transfer Costs, posted on Apr. 22, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1028—A Ruby Library for Amazons Simple Storage Services (S3), May 26, 2008, 8 pages. |
PTAB-IPR2021-00675—Exhibit 1029—Erlaws. Mar. 7, 2008, in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1030—S3Drive—Amazon S3 Filesystem, Jun. 16, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1031—S3Drive—Prerequisites, Jun. 5, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1032—S3Drive Screenshots, Jun. 21, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1033—S3Drive Download, Jun. 5, 2008, 1 page. |
PTAB-IPR2021-00675—Exhibit 1034—ElasticDrive, Jun. 11, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1035—Jungle Disk Overview, Jun. 26, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1036—Jungle Disk How It Works, Jun. 26, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1037—Jungle Disk Why Its Better, Jun. 26, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1038—Jungle Disk FAQs, Jun. 26, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1039—IBiz Amazon Integrator, May 17, 2008, in 3 pages. |
PTAB-IPR2021-00675—Exhibit 1040—JetS3t, May 29, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1041—SpaceBlock, Jun. 4, 2008, in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1042—S3Safe, May 31, 2008, 1 page. |
PTAB-IPR2021-00675—Exhibit 1043—Veritas NetBackup, Jun. 22, 2008, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1044—Amazon S3 Application Programming Interfaces, Mar. 1, 2006, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1045—Bucket Restrictions and Limitations, Mar. 1, 2006, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1046—The Archive in the Sky, posted on Jun. 17, 2008, in 2 pages. |
PTAB-IPR2021-00675—Exhibit 1047—Market-Oriented Cloud Computing (Buyya), Sep. 25-27, 2008, in 9 pages. |
PTAB-IPR2021-00675—Exhibit 1048—Data Domain OpenStorage Software, May 12, 2008, 1 page. |
PTAB-IPR2021-00675—Exhibit 1049—Working With Amazon Buckets, Mar. 1, 2006, in 1 page. |
PTAB-IPR2021-00675—Exhibit 1050—FH 10248657, Issue Date Apr. 2, 2019, in 677 pages. |
PTAB-IPR2021-00675—Exhibit 1051—CommVault v. Cohesity Complaint, dated Apr. 21, 2020, in 28 pages. |
PTAB-IPR2021-00675—Exhibit 1052—US_2008_0052328_A1 (Widhelm), Publication Date Feb. 28, 2008, in 10 pages. |
PTAB-IPR2021-00675—Exhibit 1053—Microsoft Computer Dictionary, 2002, in 12 pages. |
PTAB-IPR2021-00675—Exhibit 2001—US20080229037A1, Publication Date Sep. 18, 2008, in 36 pages. |
PTAB-IPR2021-00675—Exhibit 2002—Comparison of Exs. 1008, 1003, in 75 pages. |
PTAB-IPR2021-00675—Exhibit 3001—Re_ IPR2021-00535, 2021-00589, 2021-00590, 2021-00609, 2021-00673, 2021-00674, 2021-00675, Aug. 30, 2021, in 2 pages. |
PTAB-IPR2021-00590—('533) POPR Final, filed Jun. 16, 2021, in 59 pages. |
PTAB-IPR2021-00590—Feb. 26 2021 Patent IPR, filed Feb. 26, 2021, in 89 pages. |
PTAB-IPR2021-00590—Exhibit 1001—U.S. Pat. No. 7,840,533, Issue Date Nov. 23, 2010, in 16 pages. |
PTAB-IPR2021-00590—Exhibit 1002—Chase Declaration (533 Patent IPR), dated Feb. 26, 2021, in 183 pages. |
PTAB-IPR2021-00590—Exhibit 1003—OSDI '94 paper, Nov. 1994, 13 pages. |
PTAB-IPR2021-00590—Exhibit 1004—U.S. Pat. No. 6,981,114 (Wu), Issue Date Dec. 27, 2005, in 16 pages. |
PTAB-IPR2021-00590—Exhibit 1005—US20030061456A1 (Ofek), Publication Date Mar. 27, 2003, in 60 pages. |
PTAB-IPR2021-00590—Exhibit 1006—U.S. Pat. No. 5,835,953 (Ohran), Issue Date Nov. 10, 1998, in 33 pages. |
PTAB-IPR2021-00590—Exhibit 1007—US20030046270A1 (Leung), Publication Date Mar. 6, 2003, in 27 pages. |
PTAB-IPR2021-00590—Exhibit 1008—U.S. Pat. No. 6,473,775 (Kusters), Oct. 29, 2002, in 15 pages. |
PTAB-IPR2021-00590—Exhibit 1009—VVM for Windows, Aug. 2002, in 498 pages. |
PTAB-IPR2021-00590—Exhibit 1010—U.S. Pat. No. 7,284,104B1, Issue Date Oct. 16, 2007, in 15 pages. |
PTAB-IPR2021-00590—Exhibit 1011—MS Computer Dictionary (5th ed) excerpts, 2002, in 5 pages. |
PTAB-IPR2021-00590—Exhibit 1012—NetBackup Sys Admin Guide, May 1999, 580 pages. |
PTAB-IPR2021-00590—Exhibit 1013—U.S. Pat. No. 7,873,806, Issue Date Jan. 18, 2011, in 27 pages. |
PTAB-IPR2021-00590—Exhibit 1014—U.S. Pat. No. 6,920,537B2, Issue Date Jul. 19, 2005, in 57 pages. |
PTAB-IPR2021-00590—Exhibit 1015—Assignment to EMC—Ohran, Date Recorded Nov. 24, 2003, in 6 pages. |
PTAB-IPR2021-00590—Exhibit 1016—Assignment Docket—Ohran, Earliest Recordation Date Nov. 8, 1996, in 2 pages. |
PTAB-IPR2021-00590—Exhibit 1017—CommVault v. Rubrik Complaint, filed on Apr. 21, 2020, in 29 pages. |
PTAB-IPR2021-00590—Exhibit 1018—Feb. 17, 2021 (0046) Scheduling Order, filed Feb. 17, 2021, in 15 pages. |
PTAB-IPR2021-00590—Exhibit 2001—Jones Declaration, dated Jun. 16, 2021, in 53 pages. |
PTAB-IPR2021-00590—Exhibit 2009—590 Declaration, dated Jul. 7, 2021, in 8 pages. |
PTAB-IPR2021-00590—Exhibit 3001, dated Aug. 30, 2021, in 2 pages. |
PTAB-IPR2021-00590—Joint Motion to Terminate, filed Aug. 31, 2021, in 7 pages. |
PTAB-IPR2021-00590—Joint Request to Seal Settlement Agreement, filed Aug. 31, 2021, in 4 pages. |
PTAB-IPR2021-00590—Termination Order, filed Sep. 1, 2021, in 4 pages. |
Case No. 1-20-cv-00524-MN, Amended_Complaint_DDE-1-20-cv-00524-13, filed Jul. 27, 2020, in 30 pages. |
Case No. 1-20-cv-00524-45-MN, Answer to the Amended Complaint, filed Feb. 16, 2021, in 25 pages. |
Case No. 1-20-cv-00524-45-MN, Complaint_DDE-1-20-cv-00524-1, filed on Apr. 21, 2020, in 29 pages. |
Case No. 1-20-cv-00524-96-MN-CJB, First Amended Answer DDE-1-20-cv-00524-96, filed Jul. 23, 2021, in 41 pages. |
Case No. 1-20-cv-00524-96-MN-CJB, Oral Order DDE-1-20-cv-00524-86_DDE-1-20-cv-00525-87, filed Jun. 29, 2021, in 1 page. |
Case No. 1-20-cv-00524-96-MN-CJB, Order Dismissing with Prejudice DDE-1-20-cv-00524-101, filed Aug. 31, 2021, in 1 page. |
Case No. 1-20-cv-00524-MN, Order_DDE-1-20-cv-00525-38_DDE-1-20-cv-00524-42, filed Feb. 10, 2021, in 4 pages. |
Case No. 1:20-cv-00524-MN, Stipulation DDE-1-20-cv-00524-93, filed Jul. 14, 2021, in 3 pages. |
Case No. 6:21-cv-00634-ADA, Answer WDTX-6-21-cv-00634-19, filed Aug. 27, 2021, in 23 pages. |
Case No. 1:21-cv-00537, Complaint WDTX-1-21-cv-00537-1_WDTX-6-21-cv-00634-1, filed Jun. 18, 2021, in 44 pages. |
Case No. 6:21-cv-00634-ADA, Order Dismissing with Prejudice WDTX-6-21-cv-00634-22, filed Sep. 1, 2021, in 1 page. |
PTAB-IPR2021-00673—('723) POPR Final, filed Jun. 30, 2021, in 70 pages. |
PTAB-IPR2021-00673—('723) Sur-Reply Final, filed Aug. 16, 2021, in 7 pages. |
PTAB-IPR2021-00673—723 patent IPR—Reply to POPR, filed Aug. 9, 2021, in 6 pages. |
PTAB-IPR2021-00673—Mar. 17, 2021_Petition_723, filed Mar. 17, 2021, in 98 pages. |
PTAB-IPR2021-00673—Exhibit 1001—U.S. Pat. No. 9,740,723, Issue Date Aug. 22, 2017, in 51 pages. |
PTAB-IPR2021-00673—Exhibit 1002—Declaration_Jagadish_EXSRanger, filed Mar. 16, 2021, in 191 pages. |
PTAB-IPR2021-00673—Exhibit 1003—FH 9740723, Issue Date Aug. 22, 2017, in 594 pages. |
PTAB-IPR2021-00673—Exhibit 1004—esxRangerProfessionalUserManual v.3.1, 2006 in 102 pages. |
PTAB-IPR2021-00673—Exhibit 1005—VC_Users_Manual_11_NoRestriction, Copyright date 1998-2004, in 466 pages. |
PTAB-IPR2021-00673—Exhibit 1006—U.S. Pat. No. 8,635,429—Naftel, Issue Date Jan. 21, 2014, in 12 pages. |
PTAB-IPR2021-00673—Exhibit 1007—US20070288536A1—Sen, Publication Date Dec. 13, 2007, in 12 pages. |
PTAB-IPR2021-00673—Exhibit 1008—US20060224846A1—Amarendran, Oct. 5, 2006, in 15 pages. |
PTAB-IPR2021-00673—Exhibit 1009—U.S. Pat. No. 8,209,680—Le, Issue Date Jun. 26, 2012, in 55 pages. |
PTAB-IPR2021-00673—Exhibit 1010—Virtual Machine Monitors Current Technology and Future Trends, May 2005 in 9 pages. |
PTAB-IPR2021-00673—Exhibit 1011—Virtualization Overview, Copyright 2005, VMware, Inc., 11 pages. |
PTAB-IPR2021-00673—Exhibit 1012—Let's Get Virtual A Look at Today's Virtual Server, May 14, 2007 in 42 pages. |
PTAB-IPR2021-00673—Exhibit 1013—U.S. Pat. No. 8,135,930—Mattox, Issue Date Mar. 13, 2012, in 19 pages. |
PTAB-IPR2021-00673—Exhibit 1014—U.S. Pat. No. 8,060,476—Afonso, Issue Date Nov. 15, 2011, in 46 pages. |
PTAB-IPR2021-00673—Exhibit 1015—U.S. Pat. No. 7,823,145—Le 145, Issue Date Oct. 26, 2010, in 24 pages. |
PTAB-IPR2021-00673—Exhibit 1016—US20080091655A1—Gokhale, Publication Date Apr. 17, 2008, in 14 pages. |
PTAB-IPR2021-00673—Exhibit 1017—US20060259908A1—Bayer, Publication Date Nov. 16, 2006, in 8 pages. |
PTAB-IPR2021-00673—Exhibit 1018—U.S. Pat. No. 8,037,016—Odulinski, Issue Date Oct. 11, 2011, in 20 pages. |
PTAB-IPR2021-00673—Exhibit 1019—U.S. Pat. No. 7,925,850—Waldspurger, Issue Date Apr. 12, 2011, in 23 pages. |
PTAB-IPR2021-00673—Exhibit 1020—U.S. Pat. No. 8,191,063—Shingai, May 29, 2012, in 18 pages. |
PTAB-IPR2021-00673—Exhibit 1021—U.S. Pat. No. 8,959,509B1—Sobel, Issue Date Feb. 17, 2015, in 9 pages. |
PTAB-IPR2021-00673—Exhibit 1022—U.S. Pat. No. 8,458,419—Basler, Issue Date Jun. 4, 2013, in 14 pages. |
PTAB-IPR2021-00673—Exhibit 1023—D. Hall_Internet Archive Affidavit & Ex. A, dated Jan. 20, 2021, in 106 pages. |
PTAB-IPR2021-00673—Exhibit 1024—esxRangerProfessionalUserManual, 2006, in 103 pages. |
PTAB-IPR2021-00673—Exhibit 1025—D.Hall_Internet Archive Affidavit & Ex. A (source html view), dated Jan. 27, 2021, in 94 pages. |
PTAB-IPR2021-00673—Exhibit 1026—Scripting VMware (excerpted) (GMU), 2006, in 19 pages. |
PTAB-IPR2021-00673—Exhibit 1027—How to cheat at configuring VMware ESX server (excerpted), 2007, in 16 pages. |
PTAB-IPR2021-00673—Exhibit 1028—Robs Guide to Using VMware (excerpted), Sep. 2005 in 28 pages. |
PTAB-IPR2021-00673—Exhibit 1029—Hall-Ellis Declaration, dated Feb. 15, 2021, in 55 pages. |
PTAB-IPR2021-00673—Exhibit 1030—B. Dowell declaration, dated Oct. 15, 2020, in 3 pages. |
PTAB-IPR2021-00673—Exhibit 1031—Vizioncore esxEssentials Review ZDNet, Aug. 21, 2007, in 12 pages. |
PTAB-IPR2021-00673—Exhibit 1032—ZDNet Search on_ howorth—p. 6 _, printed on Jan. 15, 2021, in 3 pages. |
PTAB-IPR2021-00673—Exhibit 1033—ZDNet _ Reviews _ ZDNet, printed on Jan. 15, 2021, in 33 pages. |
PTAB-IPR2021-00673—Exhibit 1034—Understanding VMware Consolidated Backup, 2007, 11 pages. |
PTAB-IPR2021-00673—Exhibit 1035—techtarget.com news links—May 2007, May 20, 2007, in 39 pages. |
PTAB-IPR2021-00673—Exhibit 1036—ITPro 2007 Issue 5 (excerpted), Sep.-Oct. 2007 in 11 pages. |
PTAB-IPR2021-00673—Exhibit 1037—InfoWorld—Feb. 13, 2006, Feb. 13, 2006, in 17 pages. |
PTAB-IPR2021-00673—Exhibit 1038—InfoWorld—Mar. 6, 2006, Mar. 6, 2006, in 18 pages. |
PTAB-IPR2021-00673—Exhibit 1039—InfoWorld—Apr. 10, 2006, Apr. 10, 2006, in 18 pages. |
PTAB-IPR2021-00673—Exhibit 1040—InfoWorld—Apr. 17, 2006, Apr. 17, 2006, in 4 pages. |
PTAB-IPR2021-00673—Exhibit 1041—InfoWorld—May 1, 2006, May 1, 2006, in 15 pages. |
PTAB-IPR2021-00673—Exhibit 1042—InfoWorld—Sep. 25, 2006, Sep. 25, 2006, in 19 pages. |
PTAB-IPR2021-00673—Exhibit 1043—InfoWorld—Feb. 5, 2007, Feb. 5, 2007, in 22 pages. |
PTAB-IPR2021-00673—Exhibit 1044—InfoWorld—Feb. 12, 2007, Feb. 12, 2007, in 20 pages. |
PTAB-IPR2021-00673—Exhibit 1045—InformationWeek—Aug. 14, 2006, Aug. 14, 2006, in 17 pages. |
PTAB-IPR2021-00673—Exhibit 1046—esxRanger Ably Backs Up VMs, May 2, 2007 in 6 pages. |
PTAB-IPR2021-00673—Exhibit 1047—Businesswire—Vizioncore Inc. Releases First Enterprise-Class Hot Backup and Recovery Solution for VMware Infrastructure, Aug. 31, 2006 in 2 pages. |
PTAB-IPR2021-00673—Exhibit 1048—Vizioncore Offers Advice to Help Users Understand VCB for VMware, Jan. 23, 2007, in 3 pages. |
PTAB-IPR2021-00673—Exhibit 1049—Dell Power Solutions—Aug. 2007 (excerpted), Aug. 2007 in 21 pages. |
PTAB-IPR2021-00673—Exhibit 1050—communities-vmware-t5-VI-VMware-ESX-3-5-Discussions, Jun. 28, 2007, in 2 pages. |
PTAB-IPR2021-00673—Exhibit 1051—Distributed_File_System_Virtualization, Jan. 2006, pp. 45-56, in 12 pages. |
PTAB-IPR2021-00673—Exhibit 1052—Distributed File System Virtualization article abstract, 2006, in 12 pages. |
PTAB-IPR2021-00673—Exhibit 1053—Cluster Computing _ vol. 9, issue 1, Jan. 2006 in 5 pages. |
PTAB-IPR2021-00673—Exhibit 1054—redp3939—Server Consolidation with VMware ESX Server, Jan. 12, 2005 in 159 pages. |
PTAB-IPR2021-00673—Exhibit 1055—Server Consolidation with VMware ESX Server _ Index Page, Jan. 12, 2005 in 2 pages. |
PTAB-IPR2021-00673—Exhibit 1056—Apr. 21, 2020 [1] Complaint, filed Apr. 21, 2020, in 300 pages. |
PTAB-IPR2021-00673—Exhibit 1057—Feb. 17, 2021 (0046) Scheduling Order, filed Feb. 17, 2021, in 15 pages. |
PTAB-IPR2021-00673—Exhibit 1058—Novell Netware 5.0-5.1 Network Administration (Doering), Copyright 2001 in 40 pages. |
PTAB-IPR2021-00673—Exhibit 1059—US20060064555A1 (Prahlad 555), Publication Date Mar. 23, 2006, in 33 pages. |
PTAB-IPR2021-00673—Exhibit 1060—Carrier Book, 2005 in 94 pages. |
PTAB-IPR2021-00673—Exhibit 2001 Jones Declaration, filed Jun. 30, 2021, in 35 pages. |
PTAB-IPR2021-00673—Exhibit 2002 VM Backup Guide 3.0.1, updated Nov. 21, 2007, 74 pages. |
PTAB-IPR2021-00673—Exhibit 2003 VM Backup Guide 3.5, updated Feb. 21, 2008, 78 pages. |
PTAB-IPR2021-00673—Exhibit 3001 Re_ IPR2021-00535, 2021-00589, 2021-00590, 2021-00609, 2021-00673, 2021-00674, 2021-00675, Aug. 30, 2021, in 2 pages. |
PTAB-IPR2021-00673—Joint Motion to Terminate, filed Aug. 31, 2021, in 7 pages. |
PTAB-IPR2021-00673—Joint Request to Seal Settlement Agreement, filed Aug. 31, 2021, in 4 pages. |
PTAB-IPR2021-00673—673 674 Termination Order, Sep. 1, 2021, in 4 pages. |
PTAB-IPR2021-00673—Patent Owner Mandatory Notices, filed Apr. 7, 2021, 6 pages. |
U.S. Appl. No. 14/723,380, filed May 27, 2015, Lakshman. |
U.S. Appl. No. 15/939,186, filed Mar. 28, 2018, Nara et al. |
U.S. Appl. No. 16/848,799, filed Apr. 14, 2020, Lakshman et al. |
U.S. Appl. No. 16/875,854, filed May 15, 2020, Degaonkar et al. |
U.S. Appl. No. 16/907,023, filed Jun. 19, 2020, Owen et al. |
U.S. Appl. No. 16/911,291, filed Jun. 24, 2020, Bhagi et al. |
U.S. Appl. No. 17/079,023, filed Oct. 23, 2020, Agrawal et al. |
U.S. Appl. No. 17/114,296, filed Dec. 7, 2020, Ashraf et al. |
U.S. Appl. No. 17/120,555, filed Dec. 14, 2020, Chatterjee et al. |
U.S. Appl. No. 17/129,581, filed Dec. 21, 2020, Parvathamvenkatas et al. |
U.S. Appl. No. 17/153,667, filed Jan. 20, 2021, Naik et al. |
U.S. Appl. No. 17/153,674, filed Jan. 20, 2021, Naik et al. |
U.S. Appl. No. 17/165,266, filed Feb. 2, 2021, Nara et al. |
U.S. Appl. No. 17/179,160, filed Feb. 18, 2021, Anantharamakrishnan et al. |
U.S. Appl. No. 17/336,103, filed Jun. 1, 2021, Vastrad et al. |
U.S. Appl. No. 17/354,905, filed Jun. 22, 2021, Rana et al. |
U.S. Appl. No. 17/389,204, filed Jul. 29, 2021, Mahajan. |
U.S. Appl. No. 17/465,683, filed Sep. 2, 2021, Camargos et al. |
U.S. Appl. No. 17/465,722, filed Sep. 2, 2021, Jain et al. |
U.S. Appl. No. 17/494,702, filed Oct. 5, 2021, Polimera. |
U.S. Appl. No. 17/501,881, filed Oct. 14, 2021, Dornemann. |
U.S. Appl. No. 17/508,822, filed Oct. 22, 2021, Mutha. |
U.S. Appl. No. 17/526,927, filed Nov. 15, 2021, Kapadia. |
U.S. Appl. No. 63/053,414, filed Jul. 17, 2020, Lakshman et al. |
U.S. Appl. No. 63/065,722, filed Aug. 14, 2020, Lakshman et al. |
U.S. Appl. No. 63/070,162, filed Aug. 25, 2020, Naik et al. |
U.S. Appl. No. 63/081,503, filed Sep. 22, 2020, Lakshman et al. |
U.S. Appl. No. 63/082,624, filed Sep. 24, 2020, Lakshman et al. |
U.S. Appl. No. 63/082,631, filed Sep. 24, 2020, Lakshman et al. |
Armstead et al., “Implementation of a Campus-wide Distributed Mass Storage Service: The Dream vs. Reality,” IEEE, Sep. 11-14, 1995, pp. 190-199. |
Arneson, “Mass Storage Archiving in Network Environments,” Digest of Papers, Ninth IEEE Symposium on Mass Storage Systems, Oct. 31, 1988-Nov. 3, 1988, pp. 45-50, Monterey, CA. |
Bates, S et al., “Sharepoint 2007 User's Guide,” pp. 1-88, 2007, Springer-Verlag New York, Inc., 104 pages. |
Brandon, J., “Virtualization Shakes Up Backup Strategy,” <http://www.computerworld.com>, internet accessed on Mar. 6, 2008, 3 pages. |
Cabrera et al., “ADSM: A Multi-Platform, Scalable, Backup and Archive Mass Storage System,” Digest of Papers, Compcon '95, Proceedings of the 40th IEEE Computer Society International Conference, Mar. 5, 1995-Mar. 9, 1995, pp. 420-427, San Francisco, CA. |
Chiappetta, Marco, “ESA Enthusiast System Architecture,” <http://hothardware.com/Articles/NVIDIA_ESA_Enthusiast_System_Architecture/>, Nov. 5, 2007, 2 pages. |
CommVault Systems, Inc., “A CommVault White Paper: VMware Consolidated Backup (VCB) Certification Information Kit,” 2007, 23 pages. |
CommVault Systems, Inc., “CommVault Solutions—VMware,” <http://www.commvault.com/solutions/vmware/>, internet accessed Mar. 24, 2008, 2 pages. |
CommVault Systems, Inc., “Enhanced Protection and Manageability of Virtual Servers,” Partner Solution Brief, 2008, 6 pages. |
Davis, D., “3 VMware Consolidated Backup (VCB) Utilities You Should Know,” Petri IT Knowledgebase, <http://www.petri.co.il/vmware-consolidated-backup-utilities.htm>, Jan. 7, 2008, internet accessed on Jul. 14, 2008. |
Davis, D., “Understanding VMware VMX Configuration Files,” Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_vmx_configuration_files.htm>, internet accessed on Jun. 19, 2008, 6 pages. |
Davis, D., “VMware Server & Workstation Disk Files Explained,” Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_files_explained.htm>, internet accessed on Jun. 19, 2008, 5 pages. |
Davis, D., “VMware Versions Compared,” Petri IT Knowledgebase, <http://www.petri.co.il/virtual_vmware_versions_compared.htm>, internet accessed on Apr. 28, 2008, 6 pages. |
Eitel, “Backup and Storage Management in Distributed Heterogeneous Environments,” IEEE, Jun. 12-16, 1994, pp. 124-126. |
Gait, J., “The Optical File Cabinet: A Random-Access File System For Write-Once Optical Disks,” IEEE Computer, vol. 21, No. 6, pp. 11-22 (Jun. 1988). |
Jander, M., “Launching Storage-Area Net,” Data Communications, US, McGraw Hill, NY, vol. 27, No. 4 (Mar. 21, 1998), pp. 64-72. |
Lakshman et al., “Cassandra—A Decentralized Structured Storage System”, https://doi.org/10.1145/1773912.1773922, ACM SIGOPS Operating Systems Review, vol. 44, Issue 2, Apr. 2010, pp. 35-40. |
Microsoft Corporation, “How NTFS Works,” Windows Server TechCenter, updated Mar. 28, 2003, internet accessed Mar. 26, 2008, 26 pages. |
Rosenblum et al., “The Design and Implementation of a Log-Structured File System,” Operating Systems Review SIGOPS, vol. 25, No. 5, New York, US, pp. 1-15 (May 1991). |
Sanbarrow.com, “Disktype-table,” <http://sanbarrow.com/vmdk/disktypes.html>, internet accessed on Jul. 22, 2008, 4 pages. |
Sanbarrow.com, “Files Used by a VM,” <http://sanbarrow.com/vmx/vmx-files-used-by-a-vm.html>, internet accessed on Jul. 22, 2008, 2 pages. |
Sanbarrow.com, “Monolithic Versus Split Disks,” <http://sanbarrow.com/vmdk/monolithicversusspllit.html>, internet accessed on Jul. 14, 2008, 2 pages. |
Seto, Chris, “Why Deploying on Kubernetes is Like Flying With an Alligator,” Cockroach Labs (https://www.cockroachlabs.com/blog/kubernetes-scheduler/), dated Feb. 9, 2021, retrieved on Mar. 8, 2021, in 8 pages. |
SwiftStack, Inc., The OpenStack Object Storage System, Feb. 2012, pp. 1-29. |
VMware, Inc., “Open Virtual Machine Format,” <http://www.vmware.com/appliances/learn/ovf.html>, internet accessed on May 6, 2008, 2 pages. |
VMware, Inc., “OVF, Open Virtual Machine Format Specification, version 0.9,” White Paper, <http://www.vmware.com>, 2007, 50 pages. |
VMware, Inc., “The Open Virtual Machine Format Whitepaper for OVF Specification, version 0.9,” White Paper, <http://www.vmware.com>, 2007, 16 pages. |
VMware, Inc., “Understanding VMware Consolidated Backup,” White Paper, <http://www.vmware.com>, 2007, 11 pages. |
VMware, Inc., “Using VMware Infrastructure for Backup and Restore,” Best Practices, <http://www.vmware.com>, 2006, 20 pages. |
VMware, Inc., “Virtual Disk API Programming Guide,” <http://www.vmware.com>, Revision 20080411, 2008, 44 pages. |
VMware, Inc., “Virtual Disk Format 1.1,” VMware Technical Note, <http://www.vmware.com>, Revision 20071113, Version 1.1, 2007, 18 pages. |
VMware, Inc., “Virtual Machine Backup Guide, ESX Server 3.0.1 and VirtualCenter 2.0.1,” <http://www.vmware.com>, updated Nov. 21, 2007, 74 pages. |
VMware, Inc., “Virtual Machine Backup Guide, ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5,” <http://www.vmware.com>, updated Feb. 21, 2008, 78 pages. |
VMware, Inc., “Virtualized iSCSI SANS: Flexible, Scalable Enterprise Storage for Virtual Infrastructures,” White Paper, <http://www.vmware.com>, Mar. 2008, 13 pages. |
VMware, Inc., “VMware Consolidated Backup, Improvements in Version 3.5,” Information Guide, <http://www.vmware.com>, 2007, 11 pages. |
VMware, Inc., “VMware Consolidated Backup,” Product Datasheet, <http://www.vmware.com>, 2007, 2 pages. |
VMware, Inc., “VMware ESX 3.5,” Product Datasheet, <http://www.vmware.com>, 2008, 4 pages. |
VMware, Inc., “VMware GSX Server 3.2, Disk Types: Virtual and Physical,” <http://www.vmware.com/support/gsx3/doc/disks_types_gsx.html>, internet accessed on Mar. 25, 2008, 2 pages. |
VMware, Inc., “VMware OVF Tool,” Technical Note, <http://www.vmware.com>, 2007, 4 pages. |
VMware, Inc., “VMware Workstation 5.0, Snapshots in a Linear Process,” <http://www.vmware.com/support/ws5/doc/ws_preserve_sshot_linear.html>, internet accessed on Mar. 25, 2008, 1 page. |
VMware, Inc., “VMware Workstation 5.0, Snapshots in a Process Tree,” <http://www.vmware.com/support/ws5/doc/ws_preserve_sshot_tree.html>, internet accessed on Mar. 25, 2008, 1 page. |
VMware, Inc., “VMware Workstation 5.5, What Files Make Up a Virtual Machine?” <http://www.vmware.com/support/ws55/doc/ws_learning_files_in_a_vm.html>, internet accessed on Mar. 25, 2008, 2 pages. |
Wikipedia, “Cloud computing,” <http://en.wikipedia.org/wiki/Cloud_computing>, internet accessed Jul. 8, 2009, 13 pages. |
Wikipedia, “Cluster (file system),” <http://en.wikipedia.org/wiki/Cluster_%28file_system%29>, internet accessed Jul. 25, 2008, 1 page. |
Wikipedia, “Cylinder-head-sector,” <http://en.wikipedia.org/wiki/Cylinder-head-sector>, internet accessed Jul. 22, 2008, 6 pages. |
Wikipedia, “File Allocation Table,” <http://en.wikipedia.org/wiki/File_Allocation_Table>, internet accessed on Jul. 25, 2008, 19 pages. |
Wikipedia, “Logical Disk Manager,” <http://en.wikipedia.org/wiki/Logical_Disk_Manager>, internet accessed Mar. 26, 2008, 3 pages. |
Wikipedia, “Logical Volume Management,” <http://en.wikipedia.org/wiki/Logical_volume_management>, internet accessed on Mar. 26, 2008, 5 pages. |
Wikipedia, “Storage Area Network,” <http://en.wikipedia.org/wiki/Storage_area_network>, internet accessed on Oct. 24, 2008, 5 pages. |
Wikipedia, “Virtualization,” <http://en.wikipedia.org/wiki/Virtualization>, internet accessed Mar. 18, 2008, 7 pages. |
International Search Report and Written Opinion for PCT/US2011/054374, dated May 2, 2012, 9 pages. |
PTAB-IPR2021-00609—('048) POPR Final, filed Jun. 16, 2021, in 28 pages. |
PTAB-IPR2021-00609—Mar. 10, 2021 IPR Petition—pty, Mar. 10, 2021, in 89 pages. |
PTAB-IPR2021-00609—Exhibit 1001—U.S. Pat. No. 10,210,048, Issue Date Feb. 19, 2019, in 49 pages. |
PTAB-IPR2021-00609—Exhibit 1002—Sandeep Expert Declaration, dated Mar. 10, 2021, in 176 pages. |
PTAB-IPR2021-00609—Exhibit 1003—U.S. Pat. No. 9,354,927 (Hiltgen), Issue Date May 31, 2016, in 35 pages. |
PTAB-IPR2021-00609—Exhibit 1004—U.S. Pat. No. 8,677,085 (Vaghani), Issue Date Mar. 18, 2014, in 44 pages. |
PTAB-IPR2021-00609—Exhibit 1005—U.S. Pat. No. 9,639,428 (Boda), Issue Date May 2, 2017, in 12 pages. |
PTAB-IPR2021-00609—Exhibit 1006—US20150212895A1 (Pawar), Publication Date Jul. 30, 2015, in 60 pages. |
PTAB-IPR2021-00609—Exhibit 1007—U.S. Pat. No. 9,665,386 (Bayapuneni), Issue Date May 30, 2017, in 18 pages. |
PTAB-IPR2021-00609—Exhibit 1008—Popek and Goldberg, Jul. 1974, in 10 pages. |
PTAB-IPR2021-00609—Exhibit 1009—Virtualization Essentials—First Edition (2012)—Excerpted, 2012, in 106 pages. |
PTAB-IPR2021-00609—Exhibit 1010—Virtual Machine Monitors Current Technology and Future Trends, May 2005, in 9 pages. |
PTAB-IPR2021-00609—Exhibit 1011—Virtualization Overview, 2005, in 11 pages. |
PTAB-IPR2021-00609—Exhibit 1012—Let's Get Virtual A Look at Today's Virtual Server, May 14, 2007, in 42 pages. |
PTAB-IPR2021-00609—Exhibit 1013—Virtual Volumes, Jul. 22, 2016, in 2 pages. |
PTAB-IPR2021-00609—Exhibit 1014—Virtual Volumes and the SDDC—Virtual Blocks, Internet Archives on Sep. 29, 2015, in 4 pages. |
PTAB-IPR2021-00609—Exhibit 1015—NEC White Paper—VMWare vSphere Virtual Volumes (2015), Internet Archives Dec. 4, 2015 in 13 pages. |
PTAB-IPR2021-00609—Exhibit 1016—EMC Storage and Virtual Volumes, Sep. 16, 2015 in 5 pages. |
PTAB-IPR2021-00609—Exhibit 1017—U.S. Pat. No. 8,621,460 (Evans), Issue Date Dec. 31, 2013, in 39 pages. |
PTAB-IPR2021-00609—Exhibit 1018—U.S. Pat. No. 7,725,671 (Prahlad), Issue Date May 25, 2010, in 48 pages. |
PTAB-IPR2021-00609—Exhibit 1019—Assignment—Vaghani to VMWare, Feb. 8, 2012, in 8 pages. |
PTAB-IPR2021-00609—Exhibit 1020—Assignment Docket—Vaghani, Nov. 11, 2011, in 1 page. |
PTAB-IPR2021-00609—Exhibit 1021—Dive into the VMware ESX Server hypervisor—IBM Developer, Sep. 23, 2011, in 8 pages. |
PTAB-IPR2021-00609—Exhibit 1022—MS Computer Dictionary Backup labeled, 2002 in 3 pages. |
PTAB-IPR2021-00609—Exhibit 1023—Jul. 7, 2014_VMware vSphere Blog, Jun. 30, 2014, 4 pages. |
PTAB-IPR2021-00609—Exhibit 1024—CommVault v. Rubrik Complaint, filed on Apr. 21, 2020, in 29 pages. |
PTAB-IPR2021-00609—Exhibit 1025—CommVault v. Cohesity Complaint, filed on Apr. 21, 2020, in 28 pages. |
PTAB-IPR2021-00609—Exhibit 1026—Feb. 17, 2021 (0046) Scheduling Order, filed on Feb. 17, 2021, in 15 pages. |
PTAB-IPR2021-00609—Exhibit 2001—Prosecution History_Part1, Issue Date Feb. 19, 2019, in 300 pages, Part 1 of 2. |
PTAB-IPR2021-00609—Exhibit 2001—Prosecution History_Part2, Issue Date Feb. 19, 2019, in 265 pages, Part 2 of 2. |
PTAB-IPR2021-00609—Exhibit 2002—Jones Declaration, dated Jun. 16, 2021, in 38 pages. |
PTAB-IPR2021-00609—Exhibit 3001—Re_ IPR2021-00535, 2021-00589, 2021-00590, 2021-00609, 2021-00673, 2021-00674, 2021-00675, dated Aug. 30, 2021, in 2 pages. |
PTAB-IPR2021-00609—Joint Motion to Terminate, filed Aug. 31, 2021, in 7 pages. |
PTAB-IPR2021-00609—Joint Request to Seal Settlement Agreement, filed Aug. 31, 2021, in 4 pages. |
PTAB-IPR2021-00609—Termination Order, Sep. 1, 2021, in 4 pages. |
Arneson, David A., “Development of Omniserver,” Control Data Corporation, Tenth IEEE Symposium on Mass Storage Systems, May 1990, ‘Crisis in Mass Storage’ Digest of Papers, pp. 88-93, Monterey, CA. |
Huff, KL, “Data Set Usage Sequence Number,” IBM Technical Disclosure Bulletin, vol. 24, No. 5, Oct. 1981 New York, US, pp. 2404-2406. |
NetApp, StorageGRID 11.3 documentation, “Recovery and Maintenance Guide”, published Jan. 2020. Accessed Sep. 2022. |
Zhou et al., “A Highly Reliable Metadata Service for Large-Scale Distributed File Systems,” IEEE. |
Number | Date | Country
---|---|---
20220214997 A1 | Jul 2022 | US

Number | Date | Country
---|---|---
63082631 | Sep 2020 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17179160 | Feb 2021 | US
Child | 17702644 | | US