Embodiments of the application relate generally to software, data storage, and virtualized computing and processing resources. More specifically, systems, methods, and apparatuses are described for aggregating nodes to a virtual aggregate for a virtualized desktop environment.
Conventional solutions for virtualization technology provide numerous capabilities to efficiently deliver applications and desktops by packaging them as virtual machines. Virtualization is a technology that provides a software-based abstraction of a physical, hardware-based computer. The abstraction layer decouples the physical hardware components (CPU, memory, and disk) from the operating system (OS) and thus allows many instances of an OS to run side-by-side as virtual machines (VMs) in complete isolation from one another. The OS within each virtual machine sees a complete, consistent, and normalized set of hardware regardless of the actual physical hardware underneath the software-based abstraction. Virtual machines are encapsulated as files (also referred to as images), making it possible to save, replay, edit, copy, cut, and paste a virtual machine like any other file on a file system. This ability is fundamental to enabling better manageability and more flexible and rapid administration compared to physical machines.
Those benefits notwithstanding, conventional VMs suffer from several performance-related weaknesses that arise out of the way a VM interfaces with the storage sub-system(s) that store the VM images or files. The storage sub-system(s) can include one or more server racks, with each rack including networking gear (e.g., routers, switches, etc.), server computers, and locally attached storage (e.g., a hard disk drive (HDD)) for each server. Furthermore, the storage sub-system(s) can be in communication with a storage network such as a Storage Area Network (SAN) or Network Attached Storage (NAS). Servicing I/O from the VMs through those storage sub-system(s) introduces latencies (e.g., due to write operations) and redundancies that can create I/O bottlenecks and can reduce system performance. The aforementioned performance weaknesses include, but are not limited to, the following examples.
First, every read operation or write operation performed by every single VM (and there can be hundreds if not thousands of VMs performing such operations concurrently) is serviced in a queue by the storage system. This creates a single point of contention that results in below-par performance.
Second, numerous latencies develop as input/output (IO) is queued at various points in the IO stack from a VM hypervisor to the storage system. Examples of latencies include, but are not limited to: (a) when an application residing inside a Guest OS issues an IO, that IO gets queued to the Guest OS's Virtual Adapter driver; (b) the Virtual Adapter driver then passes the IO to an LSI Logic/BusLogic emulator; (c) the LSI Logic/BusLogic emulator queues the IO to a VMkernel's Virtual SCSI layer, and depending on the configuration, IOs are passed directly to the SCSI layer or are passed through a Virtual Machine File System (VMFS) layer before reaching the SCSI layer; (d) regardless of the path followed in (c) above, ultimately all IOs end up at the SCSI layer; and (e) IOs are then sent to a Host Bus Adapter driver queue. From there, IOs hit a disk array write cache and finally a back-end disk. Each of (a)-(e) above introduces varying degrees of latency.
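For illustration only, the following sketch totals hypothetical queuing delays for the layers (a)-(e) above; the per-layer figures are assumed values, not measurements of any particular hypervisor or storage array.

```python
# Illustrative sketch only: the per-layer latencies below are assumed values.

# Stages (a)-(e) of the guest-to-disk IO path described above, with
# hypothetical queuing delays in microseconds for a single IO.
IO_PATH_US = [
    ("guest virtual adapter driver queue", 50),
    ("LSI Logic/BusLogic emulator", 30),
    ("VMkernel virtual SCSI / VMFS layer", 80),
    ("host bus adapter driver queue", 40),
    ("disk array write cache + back-end disk", 500),
]

def end_to_end_latency_us(path=IO_PATH_US):
    """Sum the delay contributed by each layer of the IO stack."""
    return sum(delay for _, delay in path)

if __name__ == "__main__":
    for stage, delay in IO_PATH_US:
        print(f"{stage:45s} {delay:5d} us")
    print(f"{'total per IO':45s} {end_to_end_latency_us():5d} us")
```

Even with generous per-layer assumptions, the end-to-end figure is dominated by the sum of many small queues rather than by any single stage.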
Third, Least Recently Used (LRU), Least Frequently Used (LFU), and Adaptive Replacement Cache (ARC) replacement techniques all ultimately rely on building a frequency histogram of block storage accesses to determine whether to keep or replace a block in cache memory. Storage systems that rely on these cache management techniques are therefore not effective when servicing virtualization workloads, especially Desktop VMs, because the working set is too diverse for these techniques to consolidate the cache without causing cache fragmentation.
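As a rough illustration of why recency- and frequency-based replacement breaks down under a large, diverse working set, the following sketch drives a minimal LRU cache with weakly repeating block accesses; the cache size and block address range are assumed for illustration, and the hit rate collapses to roughly the ratio of cache size to working-set size.

```python
from collections import OrderedDict
import random

class LRUCache:
    """Minimal least-recently-used block cache (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)       # refresh recency
            self.hits += 1
        else:
            self.misses += 1
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[block] = True

# A cache sized for one VM's working set, hit by many VMs whose combined
# working set is far larger than the cache (sizes assumed for illustration).
cache = LRUCache(capacity=1_000)
random.seed(0)
for _ in range(100_000):
    cache.access(random.randrange(50_000))       # diverse, weakly repeating blocks

print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.1%}")  # roughly 2%, i.e., thrashing
```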
Fourth, in a virtualization environment there typically exist multiple hierarchical caches in different subsystems, i.e., the Guest OS, the VM Hypervisor, and a Storage Area Network (SAN)/Network Attached Storage (NAS) storage layer. Because the caches are independent of and unaware of each other, each cache implements the same cache replacement policies (e.g., algorithms), and all of the caches end up caching the same data. This results in inefficient use of cache capacity, as capacity is lost to storing the same block multiple times. This is referred to as the cache inclusiveness problem and cannot be overcome without external mechanisms to coordinate the contents of the multiple hierarchical caches in the different subsystems.
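The inclusiveness problem can be illustrated with a small simulation in which three independent LRU-style caches, standing in for the Guest OS, hypervisor, and array tiers, see the same block stream; every cached block ends up held three times. The tier names and capacities are assumptions for illustration only.

```python
from collections import OrderedDict

def lru_access(cache, block, capacity):
    """Track a block in an OrderedDict used as a tiny LRU cache."""
    if block in cache:
        cache.move_to_end(block)
    else:
        if len(cache) >= capacity:
            cache.popitem(last=False)
        cache[block] = True

# Three independent caches in the IO path (Guest OS page cache, hypervisor
# cache, SAN/NAS array cache). Capacities are assumed for illustration.
tiers = {"guest": OrderedDict(), "hypervisor": OrderedDict(), "array": OrderedDict()}
CAPACITY = 100

# Every block read by the VM passes through, and is cached by, every tier.
for block in range(150):
    for cache in tiers.values():
        lru_access(cache, block, CAPACITY)

common = set.intersection(*(set(c) for c in tiers.values()))
print(f"blocks duplicated in all three tiers: {len(common)} of {CAPACITY} per tier")
# Every cached block appears three times, so 300 cache slots hold only 100
# distinct blocks -- the cache inclusiveness problem described above.
```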
Finally, SAN/NAS-based storage systems under load will always be at a disadvantage when servicing virtualization workloads: the cache is overwhelmed and fragments in the face of a large and diverse working set, cache capacity is diminished by the aforementioned cache inclusiveness problem, and consequently nearly every IO operation must be serviced from disk.
Reference is now made to
Each storage device 125 can include data 127a comprised of an OS Image, OS Runtime, Application Image, and Application Runtime, each of which is associated with the various VMs 135. Data 127a may be duplicated in one or more of the storage devices 107 in server rack 175, as denoted by duplicate data 127b. As described above with regard to caches, it is undesirable to duplicate data, particularly if there is no advantage to storing the same data more than once. Storage system 120 is particularly well suited to read-intensive operations such as web page browsing, to storage of files associated with programs such as MS Office (e.g., MS Word, Excel, and PowerPoint files), and to read-intensive database applications, for example. However, programs or activity by users 140 or others that result in intensive write operations can create I/O latencies among components of the server rack 175, storage system 120, and storage network 110. I/O latencies can be caused by sequentially bound I/O operations to/from various storage elements in the rack 175 and/or storage system 120. For example, for write-intensive operations to those storage elements, the write operations can be sequentially bound, regardless of whether the write data is the same or different, such that N write operations require N sequentially executed write operations (e.g., one after another). The above performance weakness examples are a non-exhaustive list, and there are other performance weaknesses in conventional virtualization technology.
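As a simple illustration of sequentially bound writes, the sketch below services N writes one after another with an assumed fixed per-write service time, so total latency grows linearly with N even when every write carries identical data.

```python
import time

# Assumed per-write service time for the locally attached storage element;
# purely illustrative, not a measured figure.
WRITE_SERVICE_TIME_S = 0.001

def sequential_writes(blocks):
    """Service each write in order, as a sequentially bound device must."""
    start = time.monotonic()
    for _ in blocks:
        time.sleep(WRITE_SERVICE_TIME_S)   # stand-in for one physical write
    return time.monotonic() - start

# N writes of the *same* 4 KB block still cost N sequential operations,
# because the conventional path neither coalesces nor deduplicates them.
same_block = [b"\x00" * 4096] * 200
print(f"200 identical writes took ~{sequential_writes(same_block):.2f} s")
```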
There are continuing efforts to reduce data I/O latencies, data redundancy, and to improve processes, cache techniques, software, data structures, hardware, and systems for VM technology.
Various embodiments are disclosed in the following detailed description and the accompanying drawings:
Although the above-described drawings depict various examples of the present application, the application is not limited by the depicted examples. It is to be understood that, in the drawings, like reference numerals designate like structural elements. Also, it is understood that the drawings are not necessarily to scale.
Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, a user interface, a GUI, or a series of program instructions on a non-transitory computer readable medium such as a computer readable storage medium or a computer network where the program instructions are sent over optical, electronic, or wireless communication links. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise provided in the claims.
A detailed description of one or more examples is provided below along with accompanying figures. The detailed description is provided in connection with such examples, but is not limited to any particular example. The scope is limited only by the claims and numerous alternatives, modifications, and equivalents are encompassed. Numerous specific details are set forth in the following description in order to provide a thorough understanding. These details are provided for the purpose of example and the described techniques may be practiced according to the claims without some or all of these specific details. For clarity, technical material that is known in the technical fields related to the examples has not been described in detail to avoid unnecessarily obscuring the description.
In some examples, the described techniques may be implemented as a computer program or application (“application”) or as a plug-in, module, or sub-component of another application. The described techniques may be implemented as software, hardware, firmware, circuitry, or a combination thereof. If implemented as software, then the described techniques may be implemented using various types of programming, development, scripting, or formatting languages, frameworks, syntax, applications, protocols, objects, or techniques, including ASP, ASP.net, .Net framework, Ruby, Ruby on Rails, C, Objective C, C++, C#, Adobe® Integrated Runtime™ (Adobe® AIR™), ActionScript™, Flex™, Lingo™, Java™, Javascript™, Ajax, Perl, COBOL, Fortran, ADA, XML, MXML, HTML, DHTML, XHTML, HTTP, XMPP, PHP, and others. Software and/or firmware implementations may be embodied in a non-transitory computer readable medium configured for execution by a general purpose computing system or the like. The described techniques may be varied and are not limited to the examples or descriptions provided.
The present application overcomes all of the limitations of the aforementioned conventional solutions for servicing I/Os generated by VM users and virtualization technology by providing a virtual aggregate that transparently replaces locally attached storage resources and/or storage systems such as SAN or NAS, for example.
The following description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the application. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the application.
Turning now to
Each server 202 can be a blade server or an X86-based server, for example. Furthermore, each server 202 in rack 275 can include a CPU 203 (e.g., an Intel X86 or AMD processor), memory 205 (e.g., DRAM or the like), and virtual storage devices 207. In some examples, one or more of the servers 202 can include conventional storage devices (not shown) (e.g., locally attached storage such as an HDD, an SSD, or equivalent devices). However, in configuration 200, virtual storage devices 207 are implemented as a virtual aggregate 250 that is an application running under VM Hypervisor 230. For example, the virtual aggregate 250 can be a subroutine or algorithm that is part of the computer program for VM Hypervisor 230. Therefore, unlike conventional configurations where the locally attached storage comprises physical storage devices (e.g., 107 of
Virtual aggregate 250 can be part of a program for the VM Hypervisor 230 or can be a separate program or algorithm. VM Hypervisor 230 and virtual aggregate 250 can run on the same hardware or different hardware (e.g., a computer, computer system, server, PC, or other compute engine).
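For illustration, the sketch below models this interposition: the hypervisor routes each VM's block I/O to a backend object, and a virtual aggregate backend is attached in place of a physical device without the VM being able to tell the difference. The interface and class names are assumptions for this sketch, not the application's or any hypervisor's actual API.

```python
from typing import Dict, Protocol


class BlockBackend(Protocol):
    """Anything the hypervisor can route a VM's block I/O to."""
    def write_block(self, lba: int, data: bytes) -> None: ...
    def read_block(self, lba: int) -> bytes: ...


class PhysicalDisk:
    """Stand-in for a locally attached physical device (e.g., an HDD)."""
    def __init__(self):
        self._blocks: Dict[int, bytes] = {}
    def write_block(self, lba: int, data: bytes) -> None:
        self._blocks[lba] = data
    def read_block(self, lba: int) -> bytes:
        return self._blocks[lba]


class VirtualAggregateBackend:
    """Stand-in for the virtual aggregate application; it exposes the same
    interface, so the VM cannot tell it apart from a physical device."""
    def __init__(self):
        self._blocks: Dict[int, bytes] = {}
    def write_block(self, lba: int, data: bytes) -> None:
        # A real aggregate would mirror/stripe across nodes here.
        self._blocks[lba] = data
    def read_block(self, lba: int) -> bytes:
        return self._blocks[lba]


class Hypervisor:
    """Routes each VM's virtual disk to whichever backend it was given."""
    def __init__(self):
        self._routes: Dict[str, BlockBackend] = {}
    def attach(self, vm_name: str, backend: BlockBackend) -> None:
        self._routes[vm_name] = backend
    def vm_write(self, vm_name: str, lba: int, data: bytes) -> None:
        self._routes[vm_name].write_block(lba, data)


hv = Hypervisor()
hv.attach("vm-legacy", PhysicalDisk())          # conventional configuration
hv.attach("vm-235", VirtualAggregateBackend())  # configuration-200 style
hv.vm_write("vm-235", lba=0, data=b"4 KB block")
```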
Moving on to configuration 280 of
Reference is now made to
Attention is now directed to
Implementing storage of data for VMs, such as the data 260 of
Referring again to virtual aggregate 250, metadata copies 403 require a minimum 402 of three (3) nodes 401 to provide redundant mirroring and striping of data such as "Able"; "Marv"; "Gift"; and "Trust", as well as of the metadata 403 itself. Therefore, at least three copies of the data and metadata 403, across all three of the nodes 401, are required for a fully coherent system. Adding additional nodes 401 increases the number of locations within virtual aggregate 250 at which duplicate data and metadata 403 can be placed. The minimum of three nodes 402 need not be contiguous nodes (e.g., N0, N1, N2) as depicted in
As one example, as depicted in
As another example, user 245 saves a MS Word document via the Windows OS to a directory "c:\documents", and that Word file contains the four words "Able"; "Marv"; "Gift"; and "Trust". Assume, for purposes of explanation, that each of those four words is 1 KB in size, such that the save operation would save 4 KB to system storage via VM Hypervisor 230. To the VM 235 that is servicing the user's 245 Windows OS save operation, the resulting write operation to the virtual aggregate 250 appears to be a write operation to a physical storage device (e.g., a HDD). However, virtual aggregate 250 takes the document and writes it into five nodes 401, for example, of the data structure in 4 KB blocks, performing a mirroring operation on three nodes within the data structure to make full copies of the mirrored data 405 and metadata 403. Which nodes are selected for the mirroring operation can be determined by factors including, but not limited to, capacity, access speed, and data type, to find optimum nodes to which to assign the mirrored copies. Similarly, full copies of the data are striped to the striped data field 407 of each of the three nodes, and the nodes used for the striping operation can be determined by factors including, but not limited to, capacity, access speed, and data type, to find optimum nodes to which to assign the striped copies. Metadata copies 403 of all the nodes in virtual aggregate 250 are updated to be identical to one another, such that content and location data are identical in each node's metadata field. In
In
In
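The write operation described in the example above can be sketched as follows, with simplified per-node fields (a metadata field, a mirrored-data field, and a striped-data field), a placeholder node-selection policy, and an assumed file name; it is an illustrative model, not the application's implementation.

```python
import random

class Node:
    """One node 401 of the virtual aggregate: a metadata field plus data fields."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.metadata = {}   # content/location map, kept identical on every node
        self.mirrored = {}   # full copies of written data (mirrored data 405)
        self.striped = {}    # striped copies of written data (striped data 407)

class VirtualAggregate:
    """Illustrative aggregate requiring at least three nodes for coherency."""
    MIN_NODES = 3

    def __init__(self, node_count):
        if node_count < self.MIN_NODES:
            raise ValueError("a fully coherent aggregate needs at least three nodes")
        self.nodes = [Node(i) for i in range(node_count)]

    def write(self, name, blocks):
        # Select three nodes for mirroring; a real selection would weigh
        # capacity, access speed, data type, etc. (random here, for brevity).
        chosen = random.sample(self.nodes, self.MIN_NODES)
        for node in chosen:
            node.mirrored[name] = list(blocks)   # full mirrored copy
            node.striped[name] = list(blocks)    # striped copy (simplified)
        # Update every node's metadata identically with content and location.
        location = sorted(n.node_id for n in chosen)
        for node in self.nodes:
            node.metadata[name] = {"blocks": len(blocks), "nodes": location}

# The save operation from the example: four 1 KB words written as 4 KB.
aggregate = VirtualAggregate(node_count=5)
aggregate.write(r"c:\documents\word_file.docx", ["Able", "Marv", "Gift", "Trust"])
print(aggregate.nodes[0].metadata)   # identical on every node in the aggregate
```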
Referring now to
According to some examples, computer system 700 performs specific operations by processor 704 executing one or more sequences of one or more instructions stored in system memory 706. Such instructions may be read into system memory 706 from another computer readable medium, such as static storage device 708 or disk drive 710. In some examples, disk drive 710 can be implemented using a HDD, SSD, or some combination thereof. In some examples, hard-wired circuitry may be used in place of or in combination with software instructions for implementation.
The term “computer readable medium” refers to any non-transitory tangible medium that participates in providing instructions to processor 704 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media include, for example, Flash memory and optical or magnetic disks, such as disk drive 710. Volatile media include dynamic memory, such as system memory 706. Common forms of computer readable media include, for example, a floppy disk, a flexible disk, a hard disk, an optical disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. Instructions may further be transmitted or received using a transmission medium. The term “transmission medium” may include any tangible or intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media that facilitate communication of such instructions. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus (e.g., bus 702) for transmitting a computer data signal.
In some examples, execution of the sequences of instructions may be performed by a single computer system 700. According to some examples, two or more computer systems 700 coupled by communication link 720 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions in coordination with one another. Computer system 700 may transmit and receive messages, data, and instructions, including program code (i.e., application code), through communication link 720 and communication interface 712. Received program code may be executed by processor 704 as it is received, and/or stored in disk drive 710 or other storage for later execution. A single computer system 700 may be replicated, duplicated, or otherwise modified to service the needs of a virtualized desktop environment, VM Hypervisor 230, and virtual aggregate 250 as described herein. In some examples, there may be multiple processors 704.
The foregoing description, for purposes of explanation, uses specific nomenclature to provide a thorough understanding of the application. However, it will be apparent to one skilled in the art that specific details are not required in order to practice the application. In fact, this description should not be read to limit any feature or aspect of the present application to any embodiment; rather features and aspects of one embodiment can readily be interchanged with other embodiments. Notably, not every benefit described herein need be realized by each embodiment of the present application; rather any specific embodiment can provide one or more of the advantages discussed above. In the claims, elements and/or operations do not imply any particular order of operation, unless explicitly stated in the claims. It is intended that the following claims and their equivalents define the scope of the application. Although the foregoing examples have been described in some detail for purposes of clarity of understanding, the above-described techniques are not limited to the details provided. There are many alternative ways of implementing the above-described techniques. The disclosed examples are illustrative and not restrictive.