As computer memory storage and data bandwidth increase, so do the amount and complexity of data that businesses manage daily. Large-scale distributed storage systems, such as data centers, typically support many business operations. A data center, which may also be referred to as a server room, is a centralized repository, either physical or virtual, for the storage, management, and dissemination of data pertaining to one or more businesses. A distributed storage system may be coupled to client computers interconnected by one or more networks. If any portion of the distributed storage system performs poorly, company operations may be impaired. A distributed storage system therefore must maintain high standards for data availability and high-performance functionality.
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
Embodiments are described for migrating data between volumes using a virtual copy operation. In one embodiment, a user may wish to migrate data from an existing block-based storage system, such as a storage area network (SAN) volume, to a file-based storage system, such as one managed by a network attached storage (NAS) server. Conventional systems performing such a migration generally have a client copy data from the SAN volume, which may be fronted by a file system, to a NAS volume. In some cases, though, the SAN volume and the destination NAS volume may be located on storage devices in the same storage array. When the migration is performed within the same storage array, it may not be necessary to actually copy any data, an operation that consumes processing resources and bandwidth. Instead, in one embodiment, the storage system can use the virtual copy techniques described herein.
In one embodiment, virtual copy logic having knowledge of the file structure on the SAN volume scans the volume, associating particular blocks with the destination NAS volume. For example, this approach might identify block numbers 5, 100, 72, and 90 from the SAN volume as blocks to migrate and include in the NAS volume. The virtual copy logic may instruct the destination NAS volume to make this association by adding an indication of the identified blocks (i.e., virtual block numbers) to volume metadata corresponding to the NAS volume. In addition, certain blocks included in the SAN volume may include file metadata specifying the structure and contents of a particular file to be included on the NAS volume. The virtual copy logic can add the file metadata to the filesystem of the NAS volume to create the file out of the underlying data blocks. If the logical volumes are on the same storage system, no actual movement of the underlying data is involved. Instead, the virtual copy operation establishes a relationship between the data blocks (or other data elements) from a first logical volume and the file name from the second logical volume. This approach is much faster than a client reading the data from the SAN volume and subsequently writing it to the NAS volume. The virtual copy operation requires no user data transfer and minimal bandwidth to implement. As a result, the virtual copy operation is faster and more efficient than copying the data directly. In addition, the virtual copy operation prevents the unnecessary creation of duplicate copies of the underlying data, which saves valuable space on the physical storage medium.
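To make the association step concrete, the following is a minimal Python sketch of the two metadata writes described above. The structures and names here (virtual_copy, nas_volume_meta, nas_fs, and the example file name) are hypothetical illustrations, not part of the disclosed system; the point is that only metadata changes, while the blocks stay in place.

```python
# Minimal sketch of the virtual copy association described above.
# All structures and names are hypothetical illustrations, not an
# actual storage-array API.

def virtual_copy(san_blocks, nas_volume_meta, nas_filesystem, file_name):
    """Associate SAN data blocks with a NAS volume without moving data."""
    # Record the identified block numbers (e.g., 5, 100, 72, and 90) in
    # the destination volume's metadata; the blocks themselves stay put.
    nas_volume_meta["virtual_blocks"].extend(san_blocks)

    # Create the file out of the underlying data blocks by adding file
    # metadata to the NAS filesystem; again, only metadata is written.
    nas_filesystem[file_name] = {"blocks": list(san_blocks)}

# Example: migrate four non-sequential SAN blocks into a NAS file.
nas_meta = {"virtual_blocks": []}
nas_fs = {}
virtual_copy([5, 100, 72, 90], nas_meta, nas_fs, "archive.dat")
assert nas_fs["archive.dat"]["blocks"] == [5, 100, 72, 90]
```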
In one embodiment, the destination NAS volume may optionally renumber the data blocks in the file into a more efficient arrangement. In this embodiment, the migration client would still provide the block numbers and the destination file name, but the NAS volume may establish new links with new block numbers to the existing data. For example, using the previous copy example, the NAS server might renumber the data blocks as 8, 9, 10, and 11 on the NAS volume, while preserving their physical location on the underlying flash storage devices. In this case, the NAS server can then use more efficient metadata layouts because the block locations have been “normalized.” For example, the blocks migrated to the file could be represented by the single extent 8-11. The result is a virtual defragmentation performed by identifying blocks that go together and renumbering them so that they can be described more efficiently. Furthermore, this approach requires very little extra storage space since both the old and new volumes use the same underlying storage devices. The only additional space utilized is for metadata storage in the new volume. Since metadata typically represents less than 1% of the total storage space required, this is a highly efficient way to migrate data from block-based storage systems to file or object storage systems.
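The optional renumbering can be sketched in the same illustrative style; renumber_blocks and the link-table layout below are assumptions, not the disclosed implementation:

```python
# Sketch of the optional renumbering ("virtual defragmentation").
# Physical locations are untouched; only the numbering changes.

def renumber_blocks(old_blocks, first_new_block):
    """Assign new consecutive block numbers to existing data.

    Returns a link table mapping each new block number to the old one,
    plus the single inclusive extent that now describes the whole run.
    """
    links = {first_new_block + i: old for i, old in enumerate(old_blocks)}
    extent = (first_new_block, first_new_block + len(old_blocks) - 1)
    return links, extent

# Blocks 5, 100, 72, and 90 become 8, 9, 10, and 11.
links, extent = renumber_blocks([5, 100, 72, 90], first_new_block=8)
print(links)   # {8: 5, 9: 100, 10: 72, 11: 90}
print(extent)  # (8, 11) -- the file is now representable as extent 8-11
```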
Whether or not the blocks are renumbered upon migration, there is no requirement that the NAS server use the same file system type as that in use on the SAN volume, assuming that the NAS server supports the naming conventions and other user-visible features of the file system. In this embodiment, the NAS volume could simply allocate slightly more space than the SAN volume utilizes and layer its metadata structures above the SAN volume. When the blocks are renumbered, the destination volume can be the same size as the source volume, or alternate volume layouts become possible. This approach can be very useful in migrating SAN volumes under regular file system management to NAS file systems.
In another embodiment, the virtual copy techniques described herein can also be used to migrate data between two file-based storage systems. As with migrations from block-based storage systems to file-based storage systems, this type of migration need not include physical copying if the two file system volumes are on the same server. Instead, the destination volume can simply accept the block lists for each file and create a new entry in the new file system that corresponds to the existing entry in the original file system. As before, the only information that needs to be transferred is the file metadata and block list. This technique allows very fast migration from one file system format to another with minimal added storage overhead, so that even large file system format changes can be implemented on a nearly full storage array.
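As an illustration, a file-system-to-file-system virtual migration might look like the following sketch, assuming hypothetical dictionary-based file systems in which each entry carries its metadata and block list:

```python
# Sketch of migrating between two file-based systems on the same server:
# the destination accepts each file's metadata and block list and creates
# a matching entry; no user data is copied.

def migrate_filesystem(source_fs, dest_fs):
    """Create destination entries that reuse the source files' blocks."""
    for name, entry in source_fs.items():
        # Only file metadata and the block list cross to the new format;
        # the referenced blocks remain where they are on the array.
        dest_fs[name] = {
            "blocks": list(entry["blocks"]),
            "owner": entry.get("owner"),
            "mtime": entry.get("mtime"),
        }

old_fs = {"a.txt": {"blocks": [3, 4, 9], "owner": "alice", "mtime": 0}}
new_fs = {}
migrate_filesystem(old_fs, new_fs)
assert new_fs["a.txt"]["blocks"] == [3, 4, 9]
```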
The virtual copy techniques described herein can work even if some user data is actually copied from one location to another; the physical copying of a portion of the user data does not remove the advantages of this approach. These techniques, however, do enable the storage system to eliminate most, if not all, of the user data movement in migrating from one data management format to another. In one embodiment, the server may mark files as transferred in the old file system, allowing automatic forwarding of requests to the new file system. This would allow the migration to be done with essentially zero downtime for users of the system. An individual file may have a short period during which it is locked as it is migrated from the old volume to the new volume, but this downtime would be on the order of milliseconds, at most, even for large files, because only the file metadata is being recreated and the underlying data need not actually be moved. Additional details of these virtual copy operations are provided below.
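The mark-as-transferred forwarding behavior can be sketched as follows; the MigratingFileSystem class and its locking discipline are hypothetical illustrations of the idea, not the disclosed design:

```python
# Sketch of per-file migration with request forwarding: migrated files are
# marked as transferred in the old file system, and reads for them are
# automatically forwarded to the new file system.

class MigratingFileSystem:
    def __init__(self, old_fs, new_fs):
        self.old_fs, self.new_fs = old_fs, new_fs
        self.transferred = set()  # files already recreated on the new volume
        self.locked = set()       # files briefly locked mid-migration

    def read(self, name):
        if name in self.locked:
            raise BlockingIOError(f"{name} is being migrated; retry shortly")
        # Requests for migrated files are forwarded to the new file system.
        fs = self.new_fs if name in self.transferred else self.old_fs
        return fs[name]

    def migrate_file(self, name):
        self.locked.add(name)      # lock window lasts only as long as the
        try:                       # metadata recreation -- milliseconds
            self.new_fs[name] = self.old_fs[name]
            self.transferred.add(name)
        finally:
            self.locked.discard(name)

old = {"a.txt": {"blocks": [3, 4, 9]}}
migrating = MigratingFileSystem(old, {})
migrating.migrate_file("a.txt")
print(migrating.read("a.txt"))  # now served from the new file system
```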
Storage controller 110 may include software and/or hardware configured to provide access to storage devices 135A-n. Although storage controller 110 is shown as being separate from storage array 130, in some embodiments, storage controller 110 may be located within storage array 130. Storage controller 110 may include or be coupled to a base operating system (OS), a volume manager, and additional control logic, such as virtual copy logic 140, for implementing the various techniques disclosed herein.
Storage controller 110 may include and/or execute on any number of processing devices and may include and/or execute on a single host computing device or be spread across multiple host computing devices, depending on the embodiment. In some embodiments, storage controller 110 may generally include or execute on one or more file servers and/or block servers. Storage controller 110 may use any of various techniques for replicating data across devices 135A-n to prevent loss of data due to the failure of a device or the failure of storage locations within a device. Storage controller 110 may also utilize any of various deduplication techniques for reducing the amount of data stored in devices 135A-n by deduplicating common data.
In one embodiment, storage controller 110 may utilize logical volumes and mediums to track client data that is stored in storage array 130. A medium is defined as a logical grouping of data, and each medium has an identifier with which to identify the logical grouping of data. A volume is a single accessible storage area with a single file system, typically, though not necessarily, resident on a single partition of a storage device. In one embodiment, storage controller 110 includes storage volumes 142 and 146. In other embodiments, storage controller 110 may include any number of additional or different storage volumes. In one embodiment, storage volume 142 may be a SAN volume providing block-based storage. The SAN volume 142 may include block data 144 controlled by a server-based operating system, where each block can be controlled as an individual hard drive. Each block in block data 144 can be identified by a corresponding block number and can be individually formatted. In one embodiment, storage volume 146 may be a NAS volume providing file-based storage. The NAS volume 146 may include file data 148 organized according to an installed file system. The files in file data 148 can be identified by file names and can include multiple underlying blocks of data which are not individually accessible by the file system.
In one embodiment, storage volumes 142 and 146 may be logical organizations of data physically located on one or more of storage device 135A-n in storage array 130. Storage controller 110 may maintain a volume to medium mapping table to map each volume to a single medium, and this medium is referred to as the volume's anchor medium. A given request received by storage controller 110 may indicate at least a volume and block address or file name, and storage controller 110 may determine an anchor medium targeted by the given request from the volume to medium mapping table.
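For illustration, the volume-to-medium lookup might be sketched as follows; the table contents and request shape are assumptions:

```python
# Sketch of resolving a request's anchor medium from the volume to medium
# mapping table maintained by storage controller 110. Values are invented.

volume_to_medium = {
    "SAN-142": "medium-17",  # SAN volume 142's anchor medium
    "NAS-146": "medium-23",  # NAS volume 146's anchor medium
}

def anchor_medium_for(request):
    """Determine the anchor medium targeted by an incoming request."""
    return volume_to_medium[request["volume"]]

request = {"volume": "SAN-142", "block": 72}
print(anchor_medium_for(request))  # medium-17
```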
In one embodiment, storage controller 110 includes virtual copy logic 140. Virtual copy logic 140 may receive a request and subsequently initiate the migration of data elements from one logical storage volume to another using a virtual copy. In response to receiving the request, virtual copy logic 140 may identify a number of data blocks from SAN volume 142 that are to be migrated. In one embodiment, at least some of the identified data blocks may have non-sequential block numbers in block data 144. Virtual copy logic 140 may further identify certain characteristics of the data blocks including, for example, a size of the data blocks, an owner of the data blocks, a creation time of the data blocks, and a last modification time of the data blocks. If not already in existence, virtual copy logic 140 may generate a destination file as part of file data 148 in NAS volume 146. In one embodiment, to associate the data blocks from SAN volume 142 with the NAS volume 146, virtual copy logic 140 may generate volume metadata for the NAS volume 146 including the block numbers of the data blocks to be included in the volume and the identified characteristics of the data blocks. In addition, virtual copy logic 140 may identify any file metadata present in the data blocks and create one or more files in file data 148 according to the filesystem used on NAS volume 146 using the file metadata. The file metadata may be stored as part of file data 148 or may be stored in some other designated location. As a result, the data blocks from SAN volume 142 are associated with the file in NAS volume 146 without having to copy or relocate any of the underlying data from storage devices 135A-n in storage array 130.
In various embodiments, multiple mapping tables may be maintained by storage controller 110. These mapping tables may include a medium mapping table and a volume to medium mapping table. These tables may be utilized to record and maintain the mappings between mediums and underlying mediums and the mappings between volumes and mediums. Storage controller 110 may also include an address translation table with a plurality of entries, wherein each entry holds a virtual-to-physical mapping for a corresponding data component. This mapping table may be used to map logical read/write requests from each of the initiator devices 115 and 125 to physical locations in storage devices 135A-n. A “physical” pointer value may be read from the mappings associated with a given medium during a lookup operation corresponding to a received read/write request. The term “mappings” is defined as the one or more entries of the address translation mapping table which convert a given medium ID and block number into a physical pointer value. This physical pointer value may then be used to locate a physical location within the storage devices 135A-n. The physical pointer value may be used to access another mapping table within a given storage device of the storage devices 135A-n. Consequently, one or more levels of indirection may exist between the physical pointer value and a target storage location.
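The two-level translation can be sketched as follows; the table shapes and values are illustrative assumptions, not the actual on-array formats:

```python
# Sketch of address translation: a (medium ID, block number) key yields a
# "physical" pointer, which may pass through a further mapping inside the
# storage device before reaching the target location.

address_translation = {
    ("medium-17", 72): ("device-135A", 0x9F00),  # virtual -> physical pointer
}

device_tables = {
    # One more level of indirection within the storage device itself.
    "device-135A": {0x9F00: "flash die 3, page 1288"},
}

def resolve(medium_id, block_number):
    device, pointer = address_translation[(medium_id, block_number)]
    return device_tables[device][pointer]

print(resolve("medium-17", 72))  # flash die 3, page 1288
```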
In alternative embodiments, the number and type of client computers, initiator devices, storage controllers, networks, storage arrays, and data storage devices are not limited to those shown in
Network 120 may utilize a variety of techniques including wireless connections, direct local area network (LAN) connections, wide area network (WAN) connections such as the Internet, routers, storage area networks, Ethernet, and others. Network 120 may comprise one or more LANs that may also be wireless. Network 120 may further include remote direct memory access (RDMA) hardware and/or software, transmission control protocol/internet protocol (TCP/IP) hardware and/or software, routers, repeaters, switches, grids, and/or others. Protocols such as Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and so forth may be used in network 120. The network 120 may interface with a set of communications protocols used for the Internet, such as the Transmission Control Protocol (TCP) and the Internet Protocol (IP), or TCP/IP. In one embodiment, network 120 represents a storage area network (SAN) which provides access to consolidated, block-level data storage. The SAN may be used to extend the storage accessible to initiator devices so that storage devices 135A-n appear to initiator devices 115 and 125 as locally attached storage.
Initiator devices 115 and 125 are representative of any number of stationary or mobile computers such as desktop personal computers (PCs), servers, server farms, workstations, laptops, handheld computers, personal digital assistants (PDAs), smart phones, and so forth. Generally speaking, initiator devices 115 and 125 include one or more processing devices, each comprising one or more processor cores. Each processor core includes circuitry for executing instructions according to a predefined general-purpose instruction set. For example, the x86 instruction set architecture may be selected. Alternatively, the ARM®, Alpha®, PowerPC®, SPARC®, or any other general-purpose instruction set architecture may be selected. The processor cores may access cache memory subsystems for data and computer program instructions. The cache subsystems may be coupled to a memory hierarchy comprising random access memory (RAM) and a storage device.
In one embodiment, initiator device 115 includes initiator application 112 and initiator device 125 includes initiator application 122. Initiator applications 112 and 122 may be any computer application programs designed to utilize the data from block data 144 or file data 148 in storage volumes 142 and 146 to implement or provide various functionalities. Initiator applications 112 and 122 may issue requests to migrate data within storage system 100. For example, the request may be to migrate all or a portion of block data 144 from SAN volume 142 to NAS volume 146. In response to the request, virtual copy logic 140 may use the virtual copy techniques described herein to generate the corresponding file metadata to indicate which blocks from block data 144 are to be associated with a file in file data 148. Thus, the migration can be performed without physically copying any of the underlying data or moving the data from one storage device to another.
In one embodiment, storage system 100 further includes host device 160. In certain embodiments, the file system used in connection with one or both of the storage volumes implemented on storage array 130 may run on host device 160, rather than storage controller 110. In this embodiment, virtual copy logic 140 can communicate with host device 160, over network 120, to obtain file system data and volume to medium mapping data to perform the data migration using virtual copy.
In one embodiment, initiator interface 242 manages communication with initiator devices in storage system 100, such as initiator devices 115 or 125. Initiator interface 242 can receive I/O requests to access data storage volumes 142 and 146 from an initiator application 112 or 122 over network 120. In one embodiment, the I/O request includes a request to migrate at least a portion of block data 144 from SAN volume 142 to a file in file data 148 of NAS volume 146. The request may be received as part of the installation of a new storage system, the addition of a new storage device to storage array 130, the upgrade of an existing storage volume, etc. After the migration is performed, using a virtual copy by other components of virtual copy logic 140, initiator interface 242 may provide a notification to initiator device 115 or 125 over network 120 indicating that the migration was successfully performed.
In one embodiment, data block interface 244 interacts with SAN volume 142 as part of the virtual copy operation. For example, data block interface 244 may identify the blocks in block data 144 that were specified in the request received by initiator interface 242. In one embodiment, the request specifies a series of block numbers (e.g., 5, 100, 72, and 90) to identify those blocks that are to be migrated from SAN volume 142 to NAS volume 146. In one embodiment, at least some of the identified data blocks may have non-sequential block numbers in block data 144. In one embodiment, data block interface 244 may further identify certain characteristics of the data blocks including, for example, a size of the data blocks, an owner of the data blocks, a creation time of the data blocks, a last modification time of the data blocks, or other characteristics. Data block interface 244 may obtain these characteristics from SAN volume metadata 252 stored in data store 250 and associated with the blocks in block data 144.
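As a small sketch, the characteristics lookup might resemble the following, with assumed metadata field names:

```python
# Sketch of data block interface 244 gathering per-block characteristics
# from SAN volume metadata. Field names and values are invented.

san_volume_metadata = {
    5:   {"size": 4096, "owner": "alice", "ctime": 1000, "mtime": 1200},
    100: {"size": 4096, "owner": "alice", "ctime": 1000, "mtime": 1300},
    72:  {"size": 4096, "owner": "alice", "ctime": 1000, "mtime": 1250},
    90:  {"size": 4096, "owner": "alice", "ctime": 1000, "mtime": 1100},
}

def block_characteristics(block_numbers):
    """Return size, owner, and timestamps for each requested block."""
    return {n: san_volume_metadata[n] for n in block_numbers}

print(block_characteristics([5, 100, 72, 90]))
```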
In one embodiment, file system interface 246 interacts with NAS volume 146 as part of the virtual copy operation. File system interface 246 may determine if the target file has already been created by scanning the file names present in file data 148. In one embodiment, the name of the target file may be specified in the request received by initiator interface 242. If the target file does not exist, file system interface 246 may generate the file as part of file data 148 in NAS volume 146. In one embodiment, to associate the data blocks identified by data block interface 244 from SAN volume 142 with NAS volume 146, file system interface 246 may generate or annotate metadata associated with the volume. For example, file system interface 246 may write an indication of the block numbers to identify the data blocks to be migrated to NAS volume metadata 254 stored in data store 250. Virtual addressing allows the blocks associated with NAS volume 146 to point to the same underlying data blocks addressed by SAN volume 142. In addition, file system interface 246 may write the identified characteristics of the data blocks and any other information that can be used to populate the filesystem of NAS volume 146 to NAS file metadata 256. NAS file metadata 256 may be part of file data 148, for example, and may be a copy of file metadata present in one of the blocks virtually copied from SAN volume 142 to NAS volume 146. The file metadata may define which particular blocks in NAS volume 146 are part of a given file, identified by a unique file name, as well as the identified characteristics of the underlying data blocks.
In one embodiment, external host interface 248 interacts with external host device 160 (if present in storage system 100) as part of the virtual copy operation. In one embodiment, external host device 160 may have a file system used with one or more of the logical volumes maintained across storage array 130. When this logical volume is being used as the source volume, for example, in a virtual copy operation, storage controller 110 may not know how to read the logical volume. Thus, external host interface 248 can send a request to host device 160 for file and block mapping data associated with the logical volume. External host interface 248 can receive the requested information from host device 160, so that virtual copy logic 140 can determine which data blocks correspond to a particular file.
Referring to
At block 320, method 300 identifies a plurality of data blocks to be migrated from the block-based storage system. In one embodiment, the request received by initiator interface 242 includes identifiers, such as block numbers, of certain data blocks to be added to the file-based storage system. Data block interface 244 may identify those designated blocks using the block numbers included in the request. In one embodiment, data block interface 244 additionally identifies certain characteristics of those data blocks from SAN volume metadata 252.
At block 330, method 300 generates volume metadata for the file-based storage volume, the metadata to associate the plurality of data blocks with the volume. In addition, file system interface 246 may generate file metadata to create a file on the file-based storage volume and associate at least some of the plurality of data blocks with the file. A name of the target file may be specified in the request received by initiator interface 242. In one embodiment, file system interface 246 may determine if the target file has already been created by scanning the file names present in file data 148. If the target file does not exist, file system interface 246 may generate the file as part of file data 148 in NAS volume 146. In one embodiment, file system interface 246 may write an indication of the block numbers to NAS file metadata 256 in order to associate the data blocks identified by data block interface 244 from SAN volume 142 with the file in NAS volume 146. In addition, file system interface 246 may write the identified characteristics of the data blocks and any other information that can be used to populate the filesystem of NAS volume 146 to NAS file metadata 256. In one embodiment, the data blocks associated with the file may maintain their original block numbers. For example, file system interface 246 may establish new links to the existing data where the new block numbers are the same as the original block numbers. In another embodiment, the data blocks may be renumbered to have sequential or consecutive block numbers. In either case, as a result, the data blocks from SAN volume 142 are associated with the file in NAS volume 146 without having to copy or relocate any of the underlying data from storage devices 135A-n in storage array 130.
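The keep-or-renumber choice at block 330 can be sketched as follows; associate_blocks, its renumber flag, and the link-table layout are hypothetical:

```python
# Sketch of block 330: create the target file if absent, then associate the
# blocks with it, either keeping the original block numbers or establishing
# new links with consecutive numbers.

def associate_blocks(file_data, file_name, blocks, renumber=False, base=0):
    """Associate data blocks with a file in the NAS volume's file data."""
    entry = file_data.setdefault(file_name, {"blocks": [], "links": {}})
    if renumber:
        # New links whose block numbers are sequential starting at 'base'.
        for i, old in enumerate(blocks):
            entry["links"][base + i] = old
        entry["blocks"].extend(range(base, base + len(blocks)))
    else:
        # New links whose block numbers equal the original block numbers.
        for old in blocks:
            entry["links"][old] = old
        entry["blocks"].extend(blocks)
    return entry

nas_file_data = {}
associate_blocks(nas_file_data, "archive.dat", [5, 100, 72, 90],
                 renumber=True, base=8)
print(nas_file_data["archive.dat"]["blocks"])  # [8, 9, 10, 11]
```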
Referring to
At block 420, method 400 identifies a plurality of data blocks from the SAN volume 142, the plurality of data blocks having non-sequential block numbers. As described above with respect to block 320, data block interface 244 may identify the blocks using block numbers included in the request received at block 410. In one embodiment, at least two of the plurality of data blocks to be migrated may have non-sequential block numbers. As such, these at least two data blocks may not reside physically adjacent to each other on the underlying storage devices 135A-n of storage array 130, and may not be identifiable using a single extent.
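The extent test implied here can be sketched with a hypothetical helper: only consecutive block numbers can be named by a single extent, so non-sequential blocks require an explicit block list.

```python
# Sketch: a run of blocks is describable by one extent only if its numbers
# are consecutive once sorted.

def describable_by_one_extent(block_numbers):
    ordered = sorted(block_numbers)
    return ordered == list(range(ordered[0], ordered[0] + len(ordered)))

print(describable_by_one_extent([5, 100, 72, 90]))  # False: block list needed
print(describable_by_one_extent([8, 9, 10, 11]))    # True: extent 8-11
```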
At block 430, method 400 identifies characteristics of the plurality of data blocks from the SAN volume 142. In one embodiment, data block interface 244 may obtain these characteristics from SAN volume metadata 252 stored in data store 250 and associated with the blocks in block data 144. The characteristics may include, for example, a size of the data blocks, an owner of the data blocks, a creation time of the data blocks, a last modification time of the data blocks, or other characteristics or information that can be used to populate the filesystem of NAS volume 146.
At block 440, method 400 optionally moves data underlying at least one of the plurality of data blocks associated with the file from a first storage device 135A in the storage array 130 to a second storage device 135B in the storage array. Although not required as part of the request to migrate the data, storage controller 110 may execute other operations involving the copying or relocating of data (e.g., a defragmentation operation). As a result, certain data may be moved between storage devices in order to optimize or improve the efficiency of future data access operations.
At block 450, method 400 performs a virtual defragmentation operation on the plurality of data blocks associated with the file in the NAS volume. In one embodiment, file system interface 246 may assign new sequential block numbers to the data blocks associated with the file. For example, file system interface 246 may establish new links with new block numbers to the existing data, while preserving their physical location on the underlying storage devices. The result is a virtual defragmentation performed by identifying blocks that go together and renumbering them so that they can be described more efficiently (e.g., by a single extent).
At block 460, method 400 generates metadata for a file in the NAS volume, the metadata to associate the plurality of data blocks with the file. In one embodiment, file system interface 246 may write an indication of the identified block numbers to NAS file metadata 256 in order to associate the data blocks identified by data block interface 244 from SAN volume 142 with the file in NAS volume 146. In addition, file system interface 246 may write the identified characteristics of the data blocks and any other information that can be used to populate the filesystem of NAS volume 146 to NAS file metadata 256. As a result, the data blocks from SAN volume 142 are associated with the file in NAS volume 146.
Referring to
At block 520, method 500 generates the file in the second logical volume. In one embodiment, file system interface 246 may determine if the target file has already been created by scanning the file names present in file data 148. The name of the target file may be specified in the request received by initiator interface 242 at block 510. If the target file does not exist, file system interface 246 may generate the file as part of file data 148 in NAS volume 146.
At block 530, method 500 adds an indication of the first data element to volume metadata corresponding to the second logical volume, the indication in metadata to associate the first data element with the volume. In addition, file system interface 246 may copy file metadata from at least one of the data blocks to file metadata on the second logical volume, the file metadata to associate the first data element with the file. In one embodiment, to associate the data element with the file in NAS volume 146, file system interface 246 may generate or annotate file metadata associated with the file. For example, file system interface 246 may write an identifier of the data element (e.g., a block number or file name) to NAS file metadata 256 stored in data store 250 to associate the data element with the file.
Referring to
At block 620, method 600 sends a request for characteristics of the first and second data elements to an external host device 160 that manages the block-based storage system. In one embodiment, external host device 160 may have a file system used with one or more of the logical volumes maintained across storage array 130. When this logical volume is being used as the source volume, for example, in a virtual copy operation, storage controller 110 may not know how to read the logical volume. Thus, external host interface 248 can send a request to host device 160 for file and block mapping data associated with the logical volume.
At block 630, method 600 receives, from the external host device 160, the characteristics of the data elements. In one embodiment, external host interface 248 can receive the requested information from host device 160, so that virtual copy logic 140 can determine which data blocks correspond to a particular file.
At block 640, method 600 generates the file in the second logical volume. As described above with respect to block 520, in one embodiment, if the target file does not already exist, file system interface 246 may generate the file as part of file data 148 in NAS volume 146. In one embodiment, file system interface 246 may assign new sequential block numbers to the data elements associated with the file. For example, file system interface 246 may establish new links with new block numbers to the existing data, while preserving their physical locations on the underlying storage devices. The renumbered data blocks can thus be described by a single extent, making servicing future data access requests more efficient.
At block 650, method 600 adds an indication of the first and second data elements and the characteristics of the data elements to file metadata corresponding to the file in the second logical volume, the indication in metadata to associate the first and second data elements with the file. As described above with respect to block 530, in one embodiment, to associate the data element with the file in NAS volume 146, file system interface 246 may write an identifier of the data element (e.g., a block number or file name) to NAS file metadata 256 stored in data store 250 to associate the data element with the file.
The exemplary computer system 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM)), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 718, which communicate with each other via a bus 730. Data storage device 718 may be one example of any of the storage devices 135A-n in
Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 702 is configured to execute processing logic 726, which may be one example of virtual copy logic 140 shown in
The data storage device 718 may include a machine-readable storage medium 728, on which is stored one or more sets of instructions 722 (e.g., software) embodying any one or more of the methodologies of functions described herein, including instructions to cause the processing device 702 to execute virtual copy logic 140 or initiator application 112 or 122. The instructions 722 may also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700, the main memory 704 and the processing device 702 also constituting machine-readable storage media. The instructions 722 may further be transmitted or received over a network 720 via the network interface device 708.
The machine-readable storage medium 728 may also be used to store instructions to perform a method for migrating data between volumes using a virtual copy operation, as described herein. While the machine-readable storage medium 728 is shown in an exemplary embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a good understanding of several embodiments of the present disclosure. It will be apparent to one skilled in the art, however, that at least some embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram format in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth are merely exemplary. Particular embodiments may vary from these exemplary details and still be contemplated to be within the scope of the present disclosure.
In situations in which the systems discussed herein collect personal information about users, or may make use of personal information, the users may be provided with an opportunity to control whether programs or features collect user information (e.g., information about a user's social network, social actions or activities, profession, a user's preferences, or a user's current location), or to control whether and/or how to receive content from the media server that may be more relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined for the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the web server or media server.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.”
Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner.