The subject matter described herein relates to testing devices. More specifically, the subject matter relates to methods, systems, and computer readable media for emulating virtualization resources.
A data center is a facility used to house computer systems and associated components (e.g., storage systems). Data centers typically provide high reliability and security and typically include resources shared by multiple clients of the data center operator. Large data centers are industrial scale operations using as much electricity as a small town. Various data centers may utilize virtualization. For example, a data center may implement multiple virtual machines (VMs), e.g., virtual servers, using a physical server or node in the data center. In this example, each VM may execute an operating system and other software, where each VM may appear as a physical server to end users.
Generally, when one or more VMs are implemented on a physical server, a hypervisor is used to manage and facilitate the VMs. For example, a hypervisor can emulate various hardware features available to the VMs. In this example, software (e.g., an operating system) executing on the VM may have access to hardware, such as video, keyboard, storage, and/or network interfaces, emulated by the hypervisor. The hypervisor may also segregate the VMs from each other such that an operation within one VM is kept within that VM and is not visible to or modifiable from another VM.
When testing data center equipment, it is important to make sure that testing mimics real world scenarios and conditions. For example, when testing a data center server, it may be necessary to mimic or emulate resources in the data center. However, conventional testing of such equipment requires manufacturers to use the same scale of equipment that is found in a data center, which can require substantial resources, e.g., tens of millions of dollars or more and a significant amount of time to configure such equipment.
Accordingly, in light of these difficulties, a need exists for methods, systems, and computer readable media for emulating virtualization resources.
Methods, systems, and computer readable media for emulating virtualization resources are disclosed. According to one method, the method occurs at a computing platform. The method includes receiving a message associated with a device under test (DUT) and in response to receiving the message, performing an action associated with at least one of an emulated hypervisor and an emulated virtual machine (VM).
According to one system, the system includes a computing platform. The computing platform includes at least one processor and memory. The computing platform is configured to receive a message associated with a DUT and to perform, in response to receiving the message, an action associated with at least one of an emulated hypervisor and an emulated VM.
The subject matter described herein may be implemented in software in combination with hardware and/or firmware. For example, the subject matter described herein may be implemented in software executed by a processor. In one exemplary implementation, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory devices, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
As used herein, the term “node” refers to a physical computing platform including one or more processors and memory.
As used herein, the terms “function” or “module” refer to hardware, firmware, or software in combination with hardware and/or firmware for implementing features described herein.
As used herein, the terms “device under test”, “device(s) under test”, or “DUT” refer to one or more devices, systems, communications networks, and/or computing platforms for communicating or interacting with testing equipment or related software.
The subject matter described herein will now be explained with reference to the accompanying drawings of which:
The subject matter described herein includes methods, systems, and computer readable media for emulating virtualization resources. When testing one or more network resources, it may be desirable to test the resources under non-trivial load conditions that mirror or closely approximate real world scenarios.
In accordance with some aspects of the subject matter described herein, a computing platform (e.g., a testing platform, device, or a node executing a single operating system (OS) and utilizing no hypervisor or virtualization software) may be configured to emulate virtualization resources, such as hypervisors and/or virtual machines (VMs), virtual networking resources, and/or data center related resources. For example, a computing platform in accordance with the present disclosure may be configured to maintain state information associated with one or more hypervisors and/or one or more VMs. In this example, a hypervisor controller or other control entity may send a request message for creating a new VM to the computing platform. Instead of creating a real VM, the computing platform may update a table (e.g., a VM state table) containing entries representing emulated VMs in an emulated hypervisor and related state information. By maintaining state information using a table or another data structure, the computing platform may provide response messages or perform other actions indicating that a VM has been created (even though no actual VM has been created). Hence, a computing platform in accordance with the present disclosure can test hypervisor controllers and/or other entities by responding and/or acting like real hypervisors and/or VMs. Further, such a computing platform may be able to simulate or emulate traffic of an entire data center rack using significantly fewer resources than conventional testing systems, e.g., a 40-fold reduction in size, power, and operating cost relative to testing with a non-emulated data center rack.
In accordance with some aspects of the subject matter described herein, a computing platform may be configured to test virtualization resources (e.g., data center related resources) and virtualization related configurations (e.g., data center related configurations). For example, an exemplary computing platform described herein may utilize a system architecture capable of emulating both hypervisor (e.g., VM management) functionality and associated VM functionality.
In accordance with some aspects of the subject matter described herein, a computing platform may be configured to efficiently test virtualization related configurations by monitoring and/or analyzing various device(s) under test (DUT) associated with one or more emulated resources. For example, an exemplary computing platform described herein may be configured to emulate a data center or resources therein. The emulated data center may include multiple pods, and each pod may include multiple servers. Each server may be configured with multiple VMs, where the VMs on each server are managed by a hypervisor. Each VM may use a different “guest” OS. Each VM may host and/or execute multiple software applications. In this example, the exemplary computing platform may generate realistic workloads (e.g., packet generation and/or packet processing) involving one or more emulated resources and may perform meaningful and comprehensive testing of one or more DUT, e.g., a network router, a network switch, a hypervisor controller, a network controller, and/or associated data centers or related systems.
Reference will now be made in detail to exemplary embodiments of the subject matter described herein, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
In some embodiments, computing platform 100 may be configured to perform one or more aspects associated with testing one or more DUT 106. In some embodiments, computing platform 100 may be a stand-alone tool, a testing device, or software executing on a processor or across multiple processors. In some embodiments, computing platform 100 may be a single node or may be distributed across multiple computing platforms or nodes.
Computing platform 100 may include a virtualization resources emulation module (VREM) 102. VREM 102 may be any suitable entity (e.g., software executing on a processor or across multiple processors) for performing one or more aspects associated with emulating virtualization related resources, such as hypervisors, virtual machines, and/or other virtualization hardware or software, including protocols, services, and/or applications associated with the emulated resources.
In some embodiments, VREM 102 may be configured to emulate a data center and/or resources therein. For example, VREM 102 may be configured as a cloud simulator and/or emulator module and may use a single server with a single OS and no hypervisor to emulate a plurality of hypervisors, where each emulated hypervisor appears to be executing one or more VMs.
In some embodiments, VREM 102 may be implemented using a non-emulated or actual VM or may be configured to use a non-emulated or actual VM. For example, VREM 102 may include software executing “on” a single non-emulated VM associated with computing platform 100. In this example, VREM 102 may be configured to emulate hundreds or thousands of hypervisors and millions of VMs.
In some embodiments, VREM 102 may include functionality for receiving and responding to network requests from a hypervisor controller (e.g., DUT 106) to perform various actions. Exemplary actions may include creating a VM, powering up a VM, and/or powering down a VM. Instead of creating real VMs, VREM 102 may simply update a state information data structure (e.g., a VM state table) containing entries for the VMs in each emulated hypervisor and their related state information. VREM 102 may then provide responses indicating that a VM has been created (even though no actual VM has been created) and/or may perform other appropriate actions, thereby appearing like actual virtualization resources (e.g., a “real” hypervisor).
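By way of a non-limiting illustration, the following Python sketch shows one possible form of such a state information data structure and request handler; all names (e.g., vm_state_table, handle_controller_request) are assumptions for illustration only and are not mandated by the subject matter described herein.

```python
vm_state_table = {}  # VM ID -> per-VM state entry for an emulated hypervisor

def handle_controller_request(request):
    """Update emulated state instead of creating or controlling a real VM."""
    vm_id, action = request["vm_id"], request["action"]
    if action == "create_vm":
        vm_state_table[vm_id] = {"hypervisor": request["hypervisor_id"],
                                 "state": "created"}
    elif action == "power_up_vm" and vm_id in vm_state_table:
        vm_state_table[vm_id]["state"] = "running"
    elif action == "power_down_vm" and vm_id in vm_state_table:
        vm_state_table[vm_id]["state"] = "stopped"
    else:
        return {"vm_id": vm_id, "action": action, "status": "error"}
    # Respond as a real hypervisor would, even though no actual VM exists.
    return {"vm_id": vm_id, "action": action, "status": "success"}
```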
In some embodiments, VREM 102 may include functionality for emulating network interfaces, network traffic, protocols, and/or applications of VMs associated with a hypervisor. VREM 102 may perform such emulation by using maintained state information associated with each VM. For example, if a state information data structure (e.g., a VM state table) indicates that an emulated VM is in a powered-on and running state, VREM 102 or a related entity may respond to a network request, such as an Internet control message protocol (ICMP) request (e.g., a PING request) message, with a relevant response message, such as an ICMP response message. In this example, if the VM is not in a running state (e.g., as indicated by the VM state table), no response message may be provided, e.g., no ICMP response may be sent. In a similar manner, VREM 102 may emulate other applications, routing protocols, and/or devices using maintained state information associated with one or more emulated resources.
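Continuing the sketch above, state-dependent protocol emulation might be expressed as follows (again with illustrative names): an ICMP echo reply is produced only when the emulated VM is in a running state.

```python
def handle_icmp_echo(vm_id, echo_request):
    """Reply to an ICMP echo request only if the emulated VM is running.

    A stopped or nonexistent emulated VM stays silent, mimicking the
    behavior of a real powered-down machine. Names are illustrative.
    """
    entry = vm_state_table.get(vm_id)
    if entry is None or entry["state"] != "running":
        return None  # no ICMP response is sent
    return {"type": "echo-reply",
            "id": echo_request["id"],
            "seq": echo_request["seq"],
            "payload": echo_request["payload"]}
```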
In some embodiments, VREM 102 may include functionality for migrating, also referred to herein as teleporting or moving, a VM from one resource to another resource (e.g., between hypervisors). For example, VREM 102 may be capable of performing VM migrations or movements regardless of the operational status of a VM. In this example, VREM 102 may migrate or move a “live” or “online” VM between hypervisors and/or may migrate or move a “stopped” or “offline” VM between hypervisors.
In some embodiments, VREM 102 may respond to hypervisor controller commands requesting teleportation and may update state information in a VM state table. VREM 102 may send synthetic network traffic from a first interface associated with a first emulated hypervisor containing the teleportation source VM to a second interface associated with a second emulated hypervisor containing the teleportation target VM. The synthetic network traffic may be representative of traffic that occurs during an actual or non-emulated VM teleportation. The VM state table entries may be updated to indicate that the VM is on the new hypervisor. In this example, VREM 102 may facilitate stopping and/or discarding any traffic coming to and/or from the VM via the first interface, and VREM 102 may facilitate starting or allowing traffic coming to and/or from the VM via the second interface.
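A minimal sketch of such a teleportation sequence follows, assuming the vm_state_table from the sketches above and a caller-supplied send_packet function; the chunk count and all names are illustrative assumptions.

```python
def teleport_vm(vm_id, dst_hypervisor, src_interface, dst_interface,
                send_packet):
    """Emulate teleporting an E-VM between two emulated hypervisors."""
    entry = vm_state_table[vm_id]
    entry["state"] = "migrating"
    # Synthetic traffic representative of a non-emulated migration: state
    # transfer flows from the source interface to the target interface.
    for chunk_no in range(16):  # arbitrary number of transfer chunks
        send_packet(src_interface, dst_interface,
                    payload=("migration-chunk-%d" % chunk_no).encode())
    # Re-home the emulated VM; traffic for the VM is now dropped on
    # src_interface and allowed on dst_interface.
    entry["hypervisor"] = dst_hypervisor
    entry["interface"] = dst_interface
    entry["state"] = "running"
```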
In some embodiments, VREM 102 may include functionality for emulating “overlay tunnels” between DUT 106 and one or more emulated resources and/or between emulated resources. Exemplary overlay tunnels may use encapsulation, security, encryption, and/or tunneling for providing or facilitating communications among emulated resources (e.g., VMs and hypervisors) and/or DUT 106. For example, VREM 102 may implement or emulate an overlay tunnel connecting an emulated VM and DUT 106 (e.g., a network switch and/or a hypervisor controller) by adding, removing, and/or modifying header information in one or more packets associated with the emulated VM. In another example, VREM 102 may implement or emulate an overlay tunnel connecting two emulated VMs and/or other emulated resources.
In some embodiments, VREM 102 may include one or more communications interfaces and related functionality for interacting with users and/or nodes. For example, VREM 102 may provide a communications interface for communicating with VREM user 104. In some embodiments, VREM user 104 may be an automated system or may be controlled or controllable by a human user.
In some embodiments, VREM user 104 may select and/or configure various aspects associated with emulating virtualization and/or testing of DUT 106. For example, various user interfaces (e.g., an application programming interface (API) and a graphical user interface (GUI)) may be provided for generating workloads and/or configuring one or more actions or scenarios to be tested, monitored, or performed. Exemplary user interfaces for testing DUT 106 and/or performing emulation may support automation (e.g., via one or more scripting languages), a representational state transfer (REST) API, a command line, and/or a web based GUI.
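As a non-limiting illustration of REST-based automation, a test script might configure emulated resources as follows; the host name, endpoint path, and JSON fields are assumptions for illustration and are not an API defined by this disclosure.

```python
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical request asking the testing platform to emulate 50
# hypervisors with 20 E-VMs each; all values are illustrative.
response = requests.post(
    "http://testing-platform.example/api/emulation/hypervisors",
    json={"count": 50, "vms_per_hypervisor": 20, "guest_os": "linux"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g., identifiers of the configured emulated resources
```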
In some embodiments, VREM 102 may include one or more communications interfaces and related functionality for interacting with DUT 106. DUT 106 may be any suitable entity or entities (e.g., devices, systems, or platforms) for communicating with, accessing, or otherwise using emulated virtualization resources. For example, DUT 106 may include a router, a network switch, a hypervisor controller, a data center manager, or a network controller. In another example, DUT 106 may include one or more systems and/or computing platforms, e.g., a data center or a group of servers and/or routers. In yet another example, DUT 106 may include one or more networks or related components, e.g., an access network, a core network, or the Internet.
In some embodiments, DUT 106 may communicate with one or more emulated virtualization resources, such as an emulated hypervisor, using one or more protocols. For example, VREM 102 may be configured to receive, process, and/or send messages in various protocols associated with an emulated hypervisor. In this example, VREM 102 may be configured to send such messages via an overlay or encapsulation (e.g., using Internet protocol security (IPsec)) tunnel via network 110 to DUT 106.
VREM 102 may include or access VREM storage 108. VREM storage 108 may be any suitable entity or entities for maintaining or storing information related to emulated virtualization resources. For example, VREM 102 may access VREM storage 108 containing state information about a hypervisor, a virtual machine, a virtual switch, a virtual router, and/or services or applications associated with virtualization resources. VREM storage 108 may include non-transitory computer readable media, such as flash memory, random access memory, or other storage devices. In some embodiments, VREM storage 108 may be external to and/or integrated with computing platform 100 and/or VREM 102. In some embodiments, VREM storage 108 may be located at a single node or distributed across multiple platforms or devices.
Network 110 may be any suitable entity or entities for facilitating communications between various nodes and/or modules, e.g., in a test environment. For example, network 110 may be an access network, a mobile network, the Internet, or another network for communicating with one or more data centers, DUT 106, VREM user 104, VREM 102, and/or computing platform 100. In some embodiments, DUT 106, VREM user 104, VREM 102, and/or computing platform 100 may be located in network 110 or another location.
It will be appreciated that FIG. 1 is for illustrative purposes and that various nodes, modules, locations, and/or functionality described above in relation to FIG. 1 may be changed, altered, added, or removed.
Referring to FIG. 2, emulated data center 200 may include various emulated resources, e.g., emulated hypervisors 202-204, emulated VMs (E-VMs) 206-212, an emulated virtual switch (E-vSwitch) 214, emulated servers (e.g., emulated server 218), and/or one or more racks or pods (e.g., rack/pod 222).
In some embodiments, emulated data center 200 may be configured or adapted to generate test packet traffic for testing DUT 106. For example, DUT 106 may be one or more routers and/or switches located in a live deployment or test environment (e.g., a federated data center or server farm) and emulated data center 200 may generate traffic that traverses DUT 106 thereby testing performance of DUT 106 as the DUT 106 routes or switches the generated packets and other related traffic (e.g., packets from other entities). In another example, test traffic may be used to test non-emulated (e.g., real) resources, e.g., physical servers within a local data center or a remote data center.
In some embodiments, emulated data center 200 may be configured or adapted to emulate and/or simulate migrating or moving one or more E-VMs between emulated servers and/or emulated hypervisors, and to monitor and/or analyze DUT performance in response to such migrations. For example, a request message for moving E-VM 212 from emulated hypervisor 204 to emulated hypervisor 202 may be sent from a hypervisor controller to VREM 102 and, in response to receiving the request message, VREM 102 may emulate the movement of E-VM 212 by updating maintained state information associated with E-VM 212 to indicate the migration and by sending appropriate response messages indicating that E-VM 212 has been moved. In this example, VREM 102 may be configured to generate appropriate response messages that include header information and other information such that packets sent from E-VM 212 appear to originate from emulated hypervisor 202 instead of emulated hypervisor 204.
In some embodiments, emulated data center 200 may be configured or adapted to support one or more protocols for receiving or sending traffic associated with various emulated resources. Exemplary protocols may include a virtual extensible LAN (VXLAN) protocol, an OpenFlow protocol, a virtual network tag (VN-Tag) protocol, an open vSwitch database management (OVSDB) protocol, a border gateway protocol (BGP), a BGP link-state (BGP-LS) protocol, a network configuration (NETCONF) protocol interface, and/or a simple network management protocol (SNMP) interface. For example, VREM 102 may generate and/or process traffic associated with E-VMs 206-212. In this example, exemplary VM related protocols or interfaces may be associated with one or more applications or services “executing” on E-VMs 206-212, such as Internet control message protocol (ICMP) for sending or receiving a ping command or an L23 interface for receiving an L23 data stream.
In some embodiments, emulated data center 200 and/or related emulated resources and/or applications may be configured or adapted to generate stateful packet traffic, stateless packet traffic, and/or a combination of stateful and stateless packet traffic and may generate packet traffic associated with a control plane and/or a user plane.
In some embodiments, emulated data center 200 may be configured or adapted to support one or more tunneling protocols and may receive and/or send tunneled traffic. For example, VREM 102 may implement or emulate an overlay tunnel connecting E-VM 208 and DUT 106 via network 110 by adding, removing, and/or modifying header information in one or more packets associated with E-VM 208, e.g., packets transmitted from or to computing platform 100 and/or VREM 102. In another example, VREM 102 may implement or emulate an overlay tunnel connecting E-VM 208 and E-VM 212 and/or other emulated resources. In this example, the overlay tunnel may be implemented using encapsulation techniques, security protocols, and/or other tunneling technologies. Exemplary tunneling technologies may include an NVGRE protocol, a VXLAN protocol, an IPsec protocol, and/or another protocol.
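As one non-limiting illustration of such encapsulation, the following sketch frames packets with the 8-byte VXLAN header of RFC 7348; the function names are assumptions, and the outer UDP/IP encapsulation (destination port 4789) is omitted for brevity.

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    The flags byte 0x08 marks the 24-bit VXLAN network identifier (VNI)
    as valid; the VNI identifies the emulated overlay segment.
    """
    header = struct.pack("!II",
                         0x08 << 24,             # flags byte, 24 reserved bits
                         (vni & 0xFFFFFF) << 8)  # 24-bit VNI, 8 reserved bits
    return header + inner_frame

def vxlan_decapsulate(packet: bytes):
    """Strip the VXLAN header and return (vni, inner_frame)."""
    _, vni_field = struct.unpack("!II", packet[:8])
    return vni_field >> 8, packet[8:]
```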
In some embodiments, emulated data center 200 may be configured or adapted to emulate and/or simulate various network (e.g., wide area network (WAN) or local area network (LAN)) environments. For example, VREM 102 may generate packet traffic that appears to be affected by one or more network impairments and/or may include or exhibit certain characteristics. Exemplary characteristics or impairments may be associated with latency, jitter, packet loss, packet misdirection, congestion, and/or other factors.
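A minimal sketch of such impairment emulation follows; the parameter values and the caller-supplied forward function are illustrative assumptions, and a real testing platform would apply impairments without blocking.

```python
import random
import time

LATENCY_MS = 40.0   # illustrative base one-way delay
JITTER_MS = 10.0    # uniform jitter added per packet
LOSS_RATE = 0.01    # probability a packet is silently dropped

def impair_and_forward(packet: bytes, forward) -> None:
    """Apply emulated WAN impairments (loss, latency, jitter) to a packet."""
    if random.random() < LOSS_RATE:
        return  # emulate packet loss
    delay_s = (LATENCY_MS + random.uniform(0.0, JITTER_MS)) / 1000.0
    time.sleep(delay_s)  # emulate latency and jitter (blocking sketch)
    forward(packet)
```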
In some embodiments, emulated data center 200 may be configured or adapted to emulate various OSs, protocols, and/or applications for different emulated resources. For example, emulated hypervisors 202-204 may be associated with various brands of VM managers, such as VMware, KVM, Xen, and/or Hyper-V. In another example, E-VMs 206-212 may be associated with one or more “guest” OSs, such as Windows 7, Windows 8, Linux or a variant, UNIX or a variant, and/or Mac OS.
In some embodiments, emulated data center 200 may be configured or adapted to perform various actions associated with one or more emulated resources. For example, packets sent to or from emulated data center 200 may be delayed and/or modified such that the packets appear to interact with E-vSwitch 214 associated with E-VM 208. In this example, such packets may include header data (e.g., in a VN-Tag header) that indicates a particular virtual port or interface associated with E-VM 208.
In some embodiments, emulated data center 200 may be configured to perform actions based on scripts (e.g., user-configured actions or preconfigured workloads), historical data (e.g., previously executed tests), and/or dynamic or static (e.g., preconfigured) environment conditions. For example, at an initial testing period, emulated data center 200 may be configured to provide emulation for up to 60 E-VMs and up to 50 emulated hypervisors. During testing of DUT 106, emulated data center 200 may indicate failures of one or more resources, e.g., rack/pod 222 or emulated server 218. DUT 106 may be monitored (e.g., by VREM 102 or computing platform 100) to determine when and/or whether failures are detected and, if any mitigation actions are taken, when the mitigation actions (e.g., migrating VMs to other non-affected resources) are initiated by DUT 106. In another example, where DUT 106 includes a network switch or router, emulated data center 200 may be configured to execute a script of scheduled VM events. The VM events may include performing migrations of one or more VMs to different resources, and DUT 106 may be monitored to determine whether packets sent by DUT 106 (e.g., subsequent to a migration operation) are addressed correctly and/or whether the packets reach the new location (e.g., the interface or port associated with the migrated-to resource).
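A non-limiting sketch of such a script of scheduled VM events follows; the event names, resource identifiers, and caller-supplied execute callback are illustrative assumptions.

```python
import time

# Hypothetical scheduled-event script for testing DUT 106. Each entry is
# (seconds from test start, action, arguments); all values are illustrative.
SCHEDULED_EVENTS = [
    (0.0,  "create_vm",     {"vm_id": "vm-1", "hypervisor_id": "hv-202"}),
    (5.0,  "power_up_vm",   {"vm_id": "vm-1"}),
    (30.0, "teleport_vm",   {"vm_id": "vm-1", "dst_hypervisor": "hv-204"}),
    (60.0, "fail_resource", {"resource_id": "rack-222"}),
]

def run_schedule(events, execute):
    """Dispatch scheduled VM events at their offsets from the test start."""
    start = time.monotonic()
    for offset, action, args in sorted(events):
        time.sleep(max(0.0, offset - (time.monotonic() - start)))
        execute(action, args)  # DUT behavior can then be monitored
```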
It will be appreciated that emulated data center 200 in FIG. 2 is for illustrative purposes and that different and/or additional emulated resources and/or related functionality may be used when emulating a data center.
Referring to FIG. 3, state information 300 may be maintained in one or more data structures (e.g., a VM state table) stored in VREM storage 108.
In some embodiments, state information 300 may include one or more identifiers and/or related information. Exemplary VM identifiers may include a VM ID, a VM name, one or more Internet protocol (IP) addresses, and/or one or more media access control (MAC) addresses. For example, a VM ID, a VM name, an IP address, and/or a MAC address may be usable to uniquely identify an E-VM. In another example, an IP address and/or a MAC address associated with an E-VM may be usable as a VM ID and/or a VM name.
In some embodiments, state information 300 may include information associated with one or more E-VMs 206-212. For example, state information 300 associated with one or more E-VMs 206-212 may include information about VM identifiers, related hypervisors, related virtual networking resources (e.g., a virtual switch or a virtual router), a memory size or other configurations associated with the E-VM, a VM state or operational status, a VM OS (e.g., Linux, Windows 7, Windows 8, Mac OS X, etc.), information about applications, protocols, interfaces, ports, and/or services associated with the E-VM, and/or other information.
In some embodiments, state information 300 may include information about resources (e.g., emulated and non-emulated resources) other than an E-VM, e.g., hypervisors, virtual networking resources, racks, pods, and/or related physical resources (e.g., a processor performing emulation and/or a physical port or physical interface associated with a virtual resource). For example, state information 300 may be maintained for an emulated hypervisor. In this example, the emulated hypervisor may include information about a hypervisor ID, a hypervisor OS (e.g., VMware, KVM, Xen, etc.), a hypervisor state or operational status, information about VMs, applications, protocols, interfaces, ports, and/or services associated with the emulated hypervisor, and/or other information.
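By way of a non-limiting illustration, state information 300 might be represented with structures like the following; the field names and default values are assumptions based on the kinds of information listed above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmulatedVmState:
    """Illustrative per-E-VM entry for state information 300; field names
    are assumptions, since the disclosure lists kinds of information only."""
    vm_id: str
    vm_name: str
    ip_address: str
    mac_address: str
    hypervisor_id: str                         # current emulated hypervisor
    previous_hypervisor_id: Optional[str] = None
    memory_mb: int = 1024                      # example memory configuration
    status: str = "stopped"                    # e.g., "running", "migrating"
    guest_os: str = "Linux"                    # e.g., Windows 7, Mac OS X
    applications: List[str] = field(default_factory=list)

@dataclass
class EmulatedHypervisorState:
    """Illustrative per-hypervisor entry (e.g., VMware, KVM, or Xen)."""
    hypervisor_id: str
    hypervisor_os: str
    status: str = "running"
    vm_ids: List[str] = field(default_factory=list)
```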
It will be appreciated that state information 300 in FIG. 3 is for illustrative purposes and that different and/or additional information may be maintained for one or more emulated resources.
At step 402, a message may be received that is associated with DUT 106. For example, a request message may be sent from a hypervisor controller for creating a VM, powering up a VM, powering down a VM, or moving a VM (e.g., from one hypervisor or physical server to another hypervisor or physical server).
In some embodiments, a message may be received for moving an E-VM from a first emulated hypervisor to a second emulated hypervisor. For example, a request message may be for moving E-VM 212 from emulated hypervisor 204 to emulated hypervisor 202.
In some embodiments, DUT 106 may include a router, a network switch, a hypervisor controller, a data center manager, or a network controller.
At step 404, in response to receiving the message, an action associated with an emulated hypervisor or an emulated virtual machine (VM) may be performed. For example, in response to receiving, from DUT 106, a request message for stopping E-VM 206, VREM 102 may be configured to update an entry of state information 300 (e.g., stored in VREM storage 108) to indicate that E-VM 206 is stopped or powered down. In this example, VREM 102 may send a response message to DUT 106, where the response message indicates that E-VM 206 has powered down or stopped.
In some embodiments, performing an action associated with an emulated hypervisor or an E-VM may include sending a response message, monitoring performance of DUT 106, creating the E-VM, powering up the E-VM, powering down the E-VM, modifying state information associated with the emulated hypervisor or the E-VM, deleting state information associated with the emulated hypervisor or the E-VM, adding state information associated with the emulated hypervisor or the E-VM, emulating a communications protocol associated with the DUT, emulating traffic associated with the emulated hypervisor or the E-VM, emulating a virtual networking component, and/or instantiating a virtual networking component.
In some embodiments, moving or teleporting an E-VM from a first emulated hypervisor to a second emulated hypervisor may include updating state information associated with the E-VM (e.g., in a data structure maintained by computing platform 100 or VREM 102) to indicate that the E-VM is moving to the second emulated hypervisor, sending at least one message for moving the E-VM from a first interface associated with the first emulated hypervisor to a second interface associated with the second emulated hypervisor, updating the state information associated with the E-VM to indicate that the E-VM is associated with the second emulated hypervisor, and stopping traffic associated with the E-VM sent via the first interface.
In some embodiments, computing platform 100 or a related module (e.g., VREM 102) may be configured to emulate network interfaces, network traffic, protocols, applications executing on the E-VM, and/or applications executing on the emulated hypervisor.
In some embodiments, computing platform 100 or a related module (e.g., VREM 102) may be configured to emulate or implement an overlay tunnel associated with an emulated hypervisor and/or other emulated resource. For example, VREM 102 may implement or emulate an overlay tunnel connecting E-VM 208 and DUT 106 by adding, removing, and/or modifying header information in one or more packets transmitted from or to computing platform 100. In another example, VREM 102 may implement or emulate an overlay tunnel connecting E-VM 208 and E-VM 206.
In some embodiments, computing platform 100 or a related module (e.g., VREM 102) may be configured to maintain state information associated with an E-VM or an emulated hypervisor in a data structure such that the computing platform is capable of responding to messages or requests with appropriate state information.
In some embodiments, state information associated with an E-VM or an emulated hypervisor may include information about the E-VM, information about a current emulated hypervisor associated with the E-VM, information about a previous emulated hypervisor associated with the E-VM, information about an emulated application executing on the E-VM, information about an emulated protocol associated with the E-VM, information about an emulated OS associated with the E-VM, information about an operating status associated with the E-VM, information about the emulated hypervisor, information about a plurality of E-VMs associated with the emulated hypervisor, information about an emulated application executing on the emulated hypervisor, information about an emulated protocol associated with the emulated hypervisor, information about an emulated OS associated with the emulated hypervisor, and/or information about an operating status associated with the emulated hypervisor.
It will be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.
This application claims the benefit of U.S. Provisional Patent Application No. 61/805,915, filed Mar. 27, 2013; the disclosure of which is incorporated herein by reference in its entirety.