Snapshot technology is commonly used to preserve point-in-time (PIT) state and data of a virtual computing instance (VCI), such as a virtual machine. Snapshots of VCIs are used for various applications, such as VCI replication, VCI rollback and data protection for backup and recovery.
Current snapshot technology can be classified into two types of snapshot techniques. The first type of snapshot techniques includes redo-log based snapshot techniques, which involve maintaining changes for each snapshot in separate redo logs. A concern with this approach is that the snapshot technique cannot be scaled to manage a large number of snapshots, for example, hundreds of snapshots. In addition, this approach requires intensive computations to consolidate across different snapshots.
The second type of snapshot techniques includes tree-based snapshot techniques, which involve creating a chain or series of snapshots to maintain changes to the underlying data using a B tree structure, such as a B+ tree structure, where each snapshot has its own logical map in the B tree structure that manages the mapping of logical block addresses to physical block addresses. A significant advantage of the tree-based snapshot techniques over the redo-log based snapshot techniques is their scalability. However, the snapshot B tree structures of the tree-based snapshot techniques may include many nodes that are shared by multiple snapshots. When a snapshot is requested to be deleted, the logical map of the snapshot needs to be deleted. The B tree nodes that are exclusively owned by the snapshot being deleted can be removed. However, the B tree nodes shared by multiple snapshots cannot be deleted. Consequently, the nodes of the snapshot B tree structures need to be efficiently managed, especially when the snapshots are being deleted.
Throughout the description, similar reference numbers may be used to identify similar elements.
As used herein, the term “virtual computing instance” refers to any software processing entity that can run on a computer system, such as a software application, a software process, a virtual machine or a virtual container. A virtual machine is an emulation of a physical computer system in the form of a software computer that, like a physical computer, can run an operating system and applications. A virtual machine may comprise a set of specification and configuration files and is backed by the physical resources of the physical host computer. A virtual machine may have virtual devices that provide the same functionality as physical hardware and have additional benefits in terms of portability, manageability, and security. An example of a virtual machine is the virtual machine created using the VMware vSphere® solution made commercially available from VMware, Inc. of Palo Alto, Calif. A virtual container is a package that relies on virtual isolation to deploy and run applications that access a shared operating system (OS) kernel. An example of a virtual container is the virtual container created using a Docker engine made available by Docker, Inc. In this disclosure, the virtual computing instances will be described as being virtual machines, although embodiments of the invention described herein are not limited to virtual machines (VMs).
The cluster management server 108 of the distributed storage system 100 operates to manage and monitor the cluster 106 of host computers 104. The cluster management server 108 may be configured to allow an administrator to create the cluster 106, add host computers to the cluster and delete host computers from the cluster. The cluster management server 108 may also be configured to allow an administrator to change settings or parameters of the host computers in the cluster regarding the VSAN 102, which is formed using the local storage resources of the host computers in the cluster. The cluster management server 108 may further be configured to monitor the current configurations of the host computers and any VCIs running on the host computers, for example, VMs. The monitored configurations may include hardware and/or software configurations of each of the host computers. The monitored configurations may also include VCI hosting information, i.e., which VCIs (e.g., VMs) are hosted or running on which host computers. The monitored configurations may also include information regarding the VCIs running on the different host computers in the cluster.
The cluster management server 108 may also perform operations to manage the VCIs and the host computers 104 in the cluster 106. As an example, the cluster management server 108 may be configured to perform various resource management operations for the cluster, including VCI placement operations for either initial placement of VCIs and/or load balancing. The process for initial placement of VCIs, such as VMs, may involve selecting suitable host computers for placement of the virtual instances based on, for example, memory and central processing unit (CPU) requirements of the VCIs, the current memory and CPU loads on all the host computers in the cluster, and the memory and CPU capacity of all the host computers in the cluster.
In some embodiments, the cluster management server 108 may be a physical computer. In other embodiments, the cluster management server may be implemented as one or more software programs running on one or more physical computers, such as the host computers 104 in the cluster 106, or running on one or more VCIs, which may be hosted on any host computers. In an implementation, the cluster management server is a VMware vCenter™ server with at least some of the features available for such a server.
As illustrated in
The hypervisor 112 of each host computer 104, which is a software interface layer, enables sharing of the hardware resources of the host computer by VMs 124 running on the host computer using virtualization technology. With the support of the hypervisor 112, the VMs provide isolated execution spaces for guest software. In other embodiments, the hypervisor may be replaced with appropriate virtualization software to support a different type of VCIs.
The VSAN module 114 of each host computer 104 provides access to the local storage resources of that host computer (e.g., handling storage input/output (I/O) operations to data objects stored in the local storage resources as part of the VSAN 102) by other host computers 104 in the cluster 106 or any software entities, such as VMs 124, running on the host computers in the cluster. As an example, the VSAN module of each host computer allows any VM running on any of the host computers in the cluster to access data stored in the local storage resources of that host computer, which may include virtual disks (or portions thereof) of VMs running on any of the host computers and other related files of those VMs. In addition, the VSAN module generates and manages snapshots of storage objects, such as virtual disk files of the VMs, in an efficient manner, where each snapshot has its own logical map that manages the mapping of logical block addresses to physical block addresses for the data of the snapshot.
In an embodiment, the VSAN module 114 leverages B tree structures, such as copy-on-write (COW) B+ tree structures, to organize storage objects and their snapshots taken at different times. In this embodiment, a single COW B+ tree structure can be used to build up the logical maps for all the snapshots of a storage object, which saves the space overhead of B+ tree nodes with shared mapping entries, as compared to a standard B+ tree structure per snapshot logical map approach. An example of a COW B+ tree structure for one storage object managed by the VSAN module 114 in accordance with an embodiment of the invention is illustrated in
When a modification of the storage object is made, after the first snapshot SS1 is created, a new root node and one or more index and leaf nodes are created. In
In
In
In this manner, multiple snapshots of a storage object can be created at different times. These multiple snapshots create a hierarchy of snapshots.
As more COW B+ tree snapshots are created for a storage object, e.g., a virtual disk of a virtual machine, more nodes are shared by the various snapshots. When a snapshot is requested to be deleted, the logical map of that snapshot needs to be deleted. However, not all COW B+ tree nodes for a snapshot can be deleted when that snapshot is being deleted. There are two categories or types of COW B+ tree nodes accessible to a snapshot logical map: (1) exclusively owned nodes and (2) shared nodes. Exclusively owned nodes are nodes that are exclusively owned by a snapshot, which can be deleted when the snapshot is deleted. Shared nodes are nodes that are shared by multiple snapshots, which cannot be deleted when one of the snapshots is being deleted since the nodes are needed by at least one other snapshot. When a snapshot is being deleted, shared nodes of the snapshot are unlinked from the logical map subtree of the COW B+ tree for the snapshot, but remain linked to the other snapshot(s).
In some embodiments, a performance-efficient method is used to manage the shared status of a logical map COW B+ tree node. In these embodiments, each node is stamped, when the node is created, with a monotonically increasing sequence value (SV), which can be used as a node ownership value, as explained below. These monotonically increasing SVs may be numbers, alphanumeric characters or other symbols/characters with increasing values. Each snapshot is also assigned the current SV when the snapshot is created. This SV assigned to the snapshot is the minimum SV of all nodes owned by the snapshot. Thus, the SV assigned to each snapshot is referred to herein as the minimum SV or minSV, which can be used as a minimum node ownership value. A node is shared between a snapshot and its parent snapshot if the SV of the node is smaller than the minSV of the snapshot since the node was generated before the snapshot was created. A node is exclusively owned by a snapshot if the SV of the node is equal to or larger than the minSV of the snapshot. Thus, the system can quickly determine the shared status of nodes for write requests at the running point (i.e., the current state of a storage object). Unshared nodes are reused for new writes. However, shared nodes are copied out first as new nodes, which are then used for new writes. This approach is more performance efficient than some state-of-the-art methods, such as shared bits, to manage the shared status of logical map COW B+ tree nodes since no input/output (IO) is required to update the shared status changes for individual nodes.
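For illustration only, the following minimal sketch expresses this ownership test in Python-style pseudo code; the Node and Snapshot structures are simplifying assumptions rather than the actual on-disk format.

from dataclasses import dataclass

@dataclass
class Node:
    sv: int          # sequence value stamped when the node was created

@dataclass
class Snapshot:
    min_sv: int      # minimum node ownership value assigned at snapshot creation

def is_shared_with_parent(node: Node, snapshot: Snapshot) -> bool:
    # A node generated before the snapshot was created (SV < minSV) is shared
    # with the parent snapshot; a node with SV >= minSV is exclusively owned.
    return node.sv < snapshot.min_sv

# Example: a running point created with minSV = 5.
running_point = Snapshot(min_sv=5)
print(is_shared_with_parent(Node(sv=2), running_point))   # True  -> copy out before writing
print(is_shared_with_parent(Node(sv=7), running_point))   # False -> write in place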
However, there is one challenging problem when the parent snapshot of the running point is being deleted under the performance-efficient approach. During deletion of the parent snapshot, the shared nodes are simply unlinked from the COW B+ tree subtree of the logical map for the parent snapshot. When a shared node is involved in a write at the running point, the system cannot distinguish whether the shared node has already been unlinked from the logical map subtree of the parent snapshot or not. Entirely different actions need to be taken based on the sharing status of the node. For a shared node still accessible to the parent snapshot that is involved in a write, a new node needs to be copied out from the shared node. For a node unlinked from the parent snapshot that is involved in a write, the system needs to update the node in place. Misjudging the sharing status of the node will result in orphan nodes or data loss.
In an embodiment, as illustrated in
When deleting an ordinary snapshot, i.e., a snapshot other than the parent snapshot of a running point of a storage object, the nodes exclusively owned by that snapshot are deleted. However, the shared nodes that are accessible by the snapshot being deleted cannot be removed (i.e., deleted). Thus, these shared nodes are unlinked from the logical map subtree of the snapshot, but not deleted so that the shared nodes are accessible to other snapshot(s). However, as noted above, during deletion of the parent snapshot of a running point, the snapshot manager cannot distinguish whether nodes that are shared by the parent snapshot and the running point have been unlinked from the logical map subtree of the parent snapshot or not. Thus, when deleting the parent snapshot of a running point, the nodes of the parent snapshot are handled differently by the snapshot manager to ensure that new write requests at the running point that involve shared nodes (i.e., nodes that are shared by the parent snapshot and the running point) are properly processed. As used herein, a node is involved in a write request if the node needs to be updated to fulfill the write request.
In the distributed storage system 100, the snapshot manager 400 of each VSAN module 114 in the respective host computer 104 is able to properly delete nodes that are shared by the running point and its parent snapshot when the parent snapshot is being deleted. As described in more detail below, the snapshot manager uses an exclusive node list that will contain nodes that are exclusively owned by the parent snapshot of the running point, which can be deleted at an appropriate time. All non-shared nodes accessible to the parent snapshot are added to the exclusive node list. The minimum node ownership value (e.g., minSV) of the running point is then updated to the minimum node ownership value of the parent snapshot in order to transfer the ownership of all remaining nodes shared between the parent snapshot and the running point. However, if there are any writes at the running point that involve the shared nodes before the ownership transfer, these shared nodes are first copied out to produce new nodes that are then used for the writes. The new nodes are exclusively owned by the running point. However, the original shared nodes are now exclusively owned by the parent snapshot. Thus, these original shared nodes that have been copied out are also added to the exclusive node list. After the ownership transfer, the nodes in the exclusive node list are deleted from the B+ tree subtree corresponding to the logical map of the parent snapshot.
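The following condensed sketch summarizes this flow under simplifying assumptions: the Node and Snapshot structures, the reachable() check (a full traversal standing in for the more efficient minimum-key lookup described below) and free_node() are illustrative only and do not represent the actual VSAN types.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    sv: int
    children: List["Node"] = field(default_factory=list)

@dataclass
class Snapshot:
    min_sv: int
    root: Node

def nodes_of(root: Node):
    yield root
    for child in root.children:
        yield from nodes_of(child)

def reachable(root: Node, target: Node) -> bool:
    return any(n is target for n in nodes_of(root))

def delete_parent_snapshot(parent: Snapshot, running_point: Snapshot,
                           free_node: Callable[[Node], None]) -> None:
    exclusive_node_list: List[Node] = []

    # Stage 1: a node of the parent's subtree that is neither shared with the
    # grandparent snapshot (SV < parent minSV) nor reachable from the running
    # point is exclusively owned by the parent.
    for node in nodes_of(parent.root):
        if node.sv >= parent.min_sv and not reachable(running_point.root, node):
            exclusive_node_list.append(node)

    # Stage 2 runs on the write path: a shared node touched by a write at the
    # running point is copied out and its source is appended to the list.

    # Stage 3: transfer ownership of the remaining shared nodes.
    running_point.min_sv = parent.min_sv

    # Stage 4: delete the nodes collected in the exclusive node list.
    for node in exclusive_node_list:
        free_node(node)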
An operation executed by a particular snapshot manager 400 in the distributed storage system 100 to delete the parent snapshot of the running point of a storage object in accordance with an embodiment is described with reference to a process flow diagram of
At block 502, the first stage of the parent snapshot delete operation is executed by the snapshot manager 400. During this stage, the COW B+ subtree corresponding to the logical map of the parent snapshot of the running point of the storage object is traversed to determine all the nodes of the COW B+ subtree of the parent snapshot logical map that are exclusively owned by the parent snapshot. A node of the COW B+ subtree of the parent snapshot logical map that is not accessible to the running point and also not accessible to the grandparent snapshot of the running point is exclusively owned by the parent snapshot. A node of the COW B+ subtree of the parent snapshot logical map that is accessible to the running point and/or the grandparent snapshot of the running point is a shared node. All the nodes that are exclusively owned by the parent snapshot are added to the exclusive node list.
Turning now to
If the node is determined to be exclusively owned by the parent snapshot, then the process proceeds to step 606, where the node is added to the exclusive node list by the snapshot manager 400. The process then proceeds to step 608. However, if the node is determined to be not exclusively owned by the parent snapshot, then the process proceeds directly to step 608.
At step 608, a determination is made by the snapshot manager 400 whether the current node is the last node of the COW B+ subtree corresponding to the logical map of the parent snapshot to be processed. If the current node is the last node to be processed, then the process is completed. However, if the current node is not the last node to be processed, the process proceeds back to step 602, where the next node of the COW B+ subtree corresponding to the logical map of the parent snapshot is selected to be processed.
In an embodiment, if the current node has one or more child nodes, then one of those child nodes may be selected to be processed next. If the current node does not have any child nodes, then a sibling node of the current node may be selected to be processed next. If the current node does not have sibling nodes, then a sibling node of a processed node closest to the current node may be selected to be processed next. This process of selecting the next node to be processed is repeated until all the nodes of the COW B+ subtree corresponding to the logical map of the parent snapshot have been processed. In other embodiments, any selection process may be used to select the next node to be processed, such as a random selection process or a selection process based on the SVs or other values assigned to the nodes.
A pseudo code that may be used for the first stage of the parent snapshot delete operation in accordance with an embodiment of the invention is as follows:
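The original pseudo code is not reproduced here; the following Python-style sketch reconstructs it from the description below. The traverse(), lookup() and add() helpers and the node fields are assumed, simplified stand-ins for the actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    sv: int                           # sequence value stamped at creation
    minKey: int                       # minimum key stored in the node's page
    children: List["Node"] = field(default_factory=list)

def traverse(root: Node):
    # Depth-first walk of the logical map subtree of the parent snapshot.
    yield root
    for child in root.children:
        yield from traverse(child)

def lookup(root: Node, key: int, target: Node) -> bool:
    # Descend from the child snapshot's (running point's) root along 'key' and
    # report whether the descent passes through the same page as 'target'.
    node = root
    while True:
        if node is target:
            return True
        candidates = [c for c in node.children if c.minKey <= key]
        if not candidates:
            return False
        node = max(candidates, key=lambda c: c.minKey)

def add(node: Node, exclusiveNodeList: List[Node]) -> None:
    exclusiveNodeList.append(node)

def first_stage(parentRoot: Node, parentMinSV: int, childRoot: Node) -> List[Node]:
    exclusiveNodeList: List[Node] = []
    for node in traverse(parentRoot):
        if node.sv < parentMinSV:
            continue                  # shared with the grandparent snapshot
        if lookup(childRoot, node.minKey, node):
            continue                  # still accessible to the running point
        add(node, exclusiveNodeList)
    return exclusiveNodeList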
In the above pseudo code, the minimum key of the child node is used to determine whether the page of the node in an extent of the storage is accessible by the child snapshot, i.e., the running point, where the extent is one or more contiguous blocks of a physical storage and the page is the data of the node stored in the extent. For a leaf node of a COW B+ tree, each extent has a unique key (i.e., a minimum key), which can be used to locate the extent if it is also accessible by the logical map of the child snapshot (e.g., the running point). For an index node, the extent consists of a pair of data: a pivot key and a pointer to a child node. The keys of extents under the child node are equal to or larger than the pivot key. Thus, the look-up process for an extent with a key equal to the value of a pivot key will traverse the index node if the index node is accessible by the child snapshot as well. Although the minimum key is used in this embodiment, another key in the page of a child node can be used.
For the above pseudo code, a node with an SV less than the minSV of the parent snapshot of the running point, i.e., shared with the grandparent snapshot of the running point, is filtered out before the step of adding the node to the exclusive node list of the parent snapshot, i.e., the line—add(node, exclusiveNodeList).
Turning back to
Turning now to
If the node is not shared with the parent snapshot, then the process proceeds to block 706, where the node is modified in place to execute the write request by the VSAN module 114. The process then comes to an end. However, if the node is shared with the parent snapshot, then the process proceeds to block 708, where the shared node is copied out by the VSAN module to create a new node, which is a copy of the shared node. Thus, the shared node is the source node of the new node. This new node is then modified to fulfill the write request. Next, at block 710, the source node of the new node, i.e., the shared node that was copied out, is added to the exclusive node list of the parent snapshot by the snapshot manager 400. The process is now completed. This process is repeated for every write request that involves one or more nodes that have been processed by the execution of the first stage of the parent snapshot delete operation, until the third stage is executed.
A pseudo code that may be used for the second stage of the parent snapshot delete operation in accordance with an embodiment of the invention is as follows:
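The original pseudo code is likewise not reproduced; the sketch below reconstructs the second-stage write path from the description above, with nextSV(), the entries map and the exclusiveNodeList parameter as illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    sv: int
    entries: Dict[int, int] = field(default_factory=dict)   # logical -> physical block

def write_at_running_point(node: Node, lba: int, pba: int, runningPointMinSV: int,
                           nextSV: Callable[[], int],
                           exclusiveNodeList: List[Node]) -> Node:
    if node.sv >= runningPointMinSV:
        # Exclusively owned by the running point: update the node in place.
        node.entries[lba] = pba
        return node
    # Shared with the parent snapshot: copy the node out as a new node and
    # apply the write to the copy.
    newNode = Node(sv=nextSV(), entries=dict(node.entries))
    newNode.entries[lba] = pba
    # The source node is now exclusively owned by the parent snapshot.
    exclusiveNodeList.append(node)
    return newNode

# Example: a node with SV 2 is shared when the running point's minSV is 5, so a
# write copies it out as a new node stamped with the next SV (here, 7).
svCounter = iter(range(7, 100))
exclusiveNodeList: List[Node] = []
sharedNode = Node(sv=2, entries={10: 100})
newNode = write_at_running_point(sharedNode, 10, 200, 5, lambda: next(svCounter),
                                 exclusiveNodeList)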
Turning back to
Turning now to
Next, at step 804, the minSV of the running point is updated to the value of the minSV of the parent snapshot by the snapshot manager 400. As a result, the ownership of all remaining nodes shared between the parent snapshot and the running point is transferred to the running point. Thus, any new writes that involve these remaining shared nodes will not require copies of the remaining shared nodes. Instead, the new writes can be executed using the original remaining shared nodes.
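A minimal sketch of this ownership transfer, using illustrative SV values only:

runningPointMinSV = 5              # minSV assigned when the running point was created
parentMinSV = 1                    # minSV of the parent snapshot being deleted

runningPointMinSV = parentMinSV    # transfer ownership of the remaining shared nodes

nodeSV = 3                         # a node formerly shared with the parent snapshot
print(nodeSV >= runningPointMinSV) # True -> new writes update this node in place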
Turning back to
Turning now to
A pseudo code that may be used for the fourth stage of the parent snapshot delete operation in accordance with an embodiment of the invention is as follows:
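The original pseudo code is not reproduced; a minimal sketch consistent with the description above, with freeNode() standing in for the actual block deallocation, is:

def fourth_stage(exclusiveNodeList, freeNode):
    # Delete every node collected in the exclusive node list of the parent
    # snapshot and then empty the list.
    for node in exclusiveNodeList:
        freeNode(node)
    exclusiveNodeList.clear()

# Example usage with a simple free function that records the freed nodes.
freed = []
fourth_stage(["A", "C", "E"], freed.append)
print(freed)    # ['A', 'C', 'E']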
In an alternative embodiment, the logical tree of the parent snapshot may be traversed to find nodes that are in the exclusive node list of the parent snapshot. If a node in the logical tree of the parent snapshot is found in the exclusive node list of the parent snapshot, then that node is deleted. This process is continued until all the nodes of the logical tree of the parent snapshot have been processed.
A pseudo code that may be used for the fourth stage of the parent snapshot delete operation in accordance with the alternative embodiment of the invention is as follows:
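A sketch of this alternative, again with simplified structures and a stand-in freeNode(), might look like:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)

def traverse(root: Node):
    yield root
    for child in root.children:
        yield from traverse(child)

def fourth_stage_alternative(parentRoot: Node, exclusiveNodeList: List[Node],
                             freeNode) -> None:
    exclusive = {id(n) for n in exclusiveNodeList}
    # Walk the logical tree of the parent snapshot and delete only the nodes
    # that appear in the exclusive node list.
    for node in traverse(parentRoot):
        if id(node) in exclusive:
            freeNode(node)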
Turning back to
In an embodiment, metadata of snapshots for the storage object is maintained by the snapshot manager 400 in persistent storage. The metadata of snapshots for the storage object may be stored in a B+ tree (“snapTree”) to keep the records of all active snapshots, which include the running point. The snapshot metadata may include at least an identifier and the logical map root node for each snapshot. When the parent snapshot is being deleted, the snapshot metadata may be updated to remove the parent snapshot metadata information from the snapshot metadata.
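As a small illustration of such metadata, kept here as a plain dictionary keyed by snapshot identifier rather than an actual B+ tree, with the field names as assumptions:

# snapTree-style records for the active snapshots of one storage object.
snapshotMetadata = {
    "snapshot-1":    {"logicalMapRootNode": "root-1"},   # parent snapshot
    "running-point": {"logicalMapRootNode": "root-2"},
}

# When the parent snapshot is deleted, its record is removed from the metadata.
del snapshotMetadata["snapshot-1"]
print(snapshotMetadata)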
The parent snapshot delete operation is further described using an example of a COW B+ tree structure shown in
Initially, the exclusive node list of the parent snapshot is empty, i.e., exclusiveNodeList=[ ]. During the first stage of the parent snapshot delete operation, the nodes A and E will be put into the exclusive node list of the parent snapshot, since these nodes are not shared with the running point, i.e., exclusiveNodeList=[A, E]. At the second stage of the parent snapshot delete operation, before the first stage is finished, the node C is copied out as new node H for a new write IO at the running point that involves the node C because the SV of the node C is SV2 and SV2<SV5, and thus, the SV of the node C is less than the minSV (SV5) of the running point. That is, the node C is shared between the parent snapshot and the running point. The SV of the new node H is SV7. After the node C is copied out, the node C is put into the exclusive node list, i.e., exclusiveNodeList=[A, C, E], because the node C is now exclusively owned by the parent snapshot. Thus, the node layout of the parent snapshot can now be expressed as: [A=SV1, C=SV2, D=SV3, E=SV4] and the node layout of the running point can be expressed as: [F=SV5, H=SV7, D=SV3, G=SV6], which is illustrated in
At the third stage of the parent snapshot delete operation, the minSV of the running point is changed to SV1, which is the minSV of the parent snapshot. After the minSV of the running point has been changed to SV1, any new write IOs that involve any of the nodes having an SV equal to or greater than the new minSV of the running point (i.e., SV1) will be in-place operated at those nodes. For example, if the node D is involved in a new write IO at the running point, then the update for the new write IO will be in-place updated at the node D.
At the fourth stage of the parent snapshot delete operation, the nodes in the exclusive node list, i.e., the nodes A, C and E, will be deleted. After the fourth stage is completed, the node layout of the running point can be expressed as: [F=SV5, H=SV7, D=SV3, G=SV6], which is illustrated in
In an embodiment, the process of deleting a node of the logical map COW B+ subtree of the parent snapshot found in the exclusive node list of the parent snapshot involves updating a block allocation bitmap of the nodes of the COW B+ tree that includes the parent snapshot being deleted. In this embodiment, when a node of a COW B+ tree is allocated to a block, i.e., the node is to be stored in the block, a corresponding bit in the block allocation bitmap is marked as used. When a node of the COW B+ tree is being deallocated or deleted, a corresponding bit in the block allocation bitmap is marked as free. Thus, the nodes of the logical map COW B+ subtree of the parent snapshot found in the exclusive node list of the parent snapshot can be deleted by updating the bits in the block allocation bitmap corresponding to the blocks used for the nodes being deleted.
The process of updating a block allocation bitmap of nodes of a COW B+ tree in accordance with an embodiment is described using a simple example. In this example, a disk of 48 KB (kilobytes) with 4 KB blocks is used. Thus, the disk has 12 blocks.
Initially, all the blocks are free as indicated below.
After allocation of nodes A (B0), C (B1), D (B2) and E (B3), the block allocation bitmap is updated as follows:
After allocation of nodes F (B4) and G (B5), the block allocation bitmap is further updated as follows:
After allocation of node H (B6), the block allocation bitmap is further updated as follows:
Thus, when nodes are allocated at certain blocks, the bits of the block allocation bitmap corresponding to those blocks are updated to indicate that those blocks are used.
After deallocation or deletion of node C (B1), the block allocation bitmap is updated as follows:
After deallocation or deletion of E (B3) and A (B0), the block allocation bitmap is further updated as follows:
Thus, when nodes are deleted or deallocated at certain blocks, the bits of the block allocation bitmap corresponding to those blocks are updated to indicate that those blocks are free.
The block allocation bitmap may be stored in one or more of the blocks of the disk along with the nodes. Alternatively, the block allocation bitmap may be stored elsewhere in any physical storage.
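The walkthrough above can be expressed as the following sketch, assuming a 48 KB disk with twelve 4 KB blocks; the bitmap class and its methods are illustrative only.

class BlockAllocationBitmap:
    def __init__(self, numBlocks: int):
        self.bits = [0] * numBlocks       # 0 = free, 1 = used

    def allocate(self, block: int) -> None:
        self.bits[block] = 1              # a node is stored in this block

    def free(self, block: int) -> None:
        self.bits[block] = 0              # the node in this block is deleted

bitmap = BlockAllocationBitmap(12)        # 48 KB disk / 4 KB blocks = 12 blocks
for b in (0, 1, 2, 3):                    # allocate nodes A (B0), C (B1), D (B2), E (B3)
    bitmap.allocate(b)
for b in (4, 5):                          # allocate nodes F (B4), G (B5)
    bitmap.allocate(b)
bitmap.allocate(6)                        # allocate node H (B6)
bitmap.free(1)                            # delete node C (B1)
bitmap.free(3)                            # delete node E (B3)
bitmap.free(0)                            # delete node A (B0)
print(bitmap.bits)                        # [0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0]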
In the embodiment where the metadata of snapshots for the storage object is maintained by the snapshot manager 400, the snapshot metadata may be updated to remove the parent snapshot metadata information from the snapshot metadata after all the nodes in the exclusive node list of the parent snapshot have been deleted. In the above example, before deleting the parent snapshot, the snapshot metadata maintained in the snapTree is as follows:
Turning now to
The CLOM 1102 operates to validate storage resource availability, and the DOM 1104 operates to create components and apply configuration locally through the LSOM 1106. The DOM 1104 also operates to coordinate with counterparts for component creation on other host computers 104 in the cluster 106. All subsequent reads and writes to storage objects funnel through the DOM 1104, which will take them to the appropriate components. The LSOM 1106 operates to monitor the flow of storage I/O operations to the local storage 122, for example, to report whether a storage resource is congested. The CMMDS 1108 is responsible for monitoring the VSAN cluster's membership, checking heartbeats between the host computers in the cluster, and publishing updates to the cluster directory. Other software components use the cluster directory to learn of changes in cluster topology and object configuration. For example, the DOM uses the contents of the cluster directory to determine the host computers in the cluster storing the components of a storage object and the paths by which those host computers are reachable.
The RDT manager 1110 is the communication mechanism for storage-related data or messages in a VSAN network, and thus, can communicate with the VSAN modules 114 in other host computers 104 in the cluster 106. As used herein, storage-related data or messages (simply referred to herein as “messages”) may be any pieces of information, which may be in the form of data streams, that are transmitted between the host computers 104 in the cluster 106 to support the operation of the VSAN 102. Thus, storage-related messages may include data being written into the VSAN 102 or data being read from the VSAN 102. In an embodiment, the RDT manager uses the Transmission Control Protocol (TCP) at the transport layer and is responsible for creating and destroying, on demand, TCP connections (sockets) to the RDT managers of the VSAN modules in other host computers in the cluster. In other embodiments, the RDT manager may use remote direct memory access (RDMA) connections to communicate with the other RDT managers.
As illustrated in
A computer-implemented method for deleting parent snapshots of running points of storage objects stored in a storage system in accordance with an embodiment of the invention is described with reference to a flow diagram of
The components of the embodiments as generally described in this document and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
It should also be noted that at least some of the operations for the methods may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer useable storage medium to store a computer readable program that, when executed on a computer, causes the computer to perform operations, as described herein.
Furthermore, embodiments of at least portions of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-useable or computer-readable medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disc. Current examples of optical discs include a compact disc with read only memory (CD-ROM), a compact disc with read/write (CD-R/W), a digital video disc (DVD), and a Blu-ray disc.
In the above description, specific details of various embodiments are provided. However, some embodiments may be practiced with less than all of these specific details. In other instances, certain methods, procedures, components, structures, and/or functions are described in no more detail than necessary to enable the various embodiments of the invention, for the sake of brevity and clarity.
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.