Virtualization technologies have become widely used for sharing limited computer hardware resources. In a hardware virtualization environment, sometimes also referred to as a platform virtualization environment, one or more virtual machines can simulate a host computer with its own operating system. In particular, software executed on these virtual machines can be separated from the underlying hardware resources. As a result, virtual machines running on the same host computer can each have their own operating system. For example, a computer running the Microsoft Windows operating system may host a virtual machine that simulates a computer running a Linux operating system, on which Linux-based software can be executed.
Virtualization technologies are often used in the cloud computing context. Cloud computing environments can provide computing infrastructures that are abstracted from the underlying physical hardware. Cloud computing service providers can deliver cloud computing services (e.g., Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and storage services) by providing the ability to create virtual machines on demand. Such virtual machines can have defined attributes such as size, operating system, number of block devices, etc.
In a virtualization environment, a virtual machine can be associated with one or more volumes. A volume is a virtual resource accessible by virtual machines. For example, a volume can be a virtual disk or disks. A virtual disk can have disk partitions and a file system, similar to those of a physical disk. The file system of a virtual disk can also include files and folders. A volume can be a system volume or a bootable volume, which includes disk partitions, system files, a bootable operating system, and/or software applications. A volume can also be a data storage volume, which includes a user disk storing user data such as user files and folders. A volume can be used as a virtual hard disk drive of a virtual machine. For various purposes, volumes can also be transmitted from a client device (e.g., a client device with XenDesktop) to a cloud computing device (e.g., a cloud storage) provided by a cloud computing service, or vice versa.
Currently, for transmitting volumes from client devices to cloud computing devices, cloud computing service providers have different mechanisms to allow volumes to be created on the cloud computing devices. The mechanisms are usually based on templates and some predefined workflows, and can vary widely between different cloud computing service providers. For example, some service providers require push operations to be performed by the client devices to inject the volume into the cloud computing devices, while other service providers require pull operations to be performed by using a Uniform Resource Locator (URL). Moreover, some service providers may not provide any mechanism for transmitting volumes into the cloud computing devices.
Further, in some cloud computing services, virtual machines of a cloud computing device can also restrict volume transmission based on the type of content stored in the volume to be transmitted (e.g., a volume storing a bootable operating system or a volume storing user data). Moreover, in some cloud computing services, if multiple volumes are required to be transmitted via a particular virtual machine and the virtual machine becomes overloaded, it may de-prioritize and delay the requests for transmitting some of the volumes. Sometimes, the delay can be unacceptable (e.g., 24 hours). As a result, a client device requesting the transmission may not be able to transmit the volumes to the cloud computing devices in a timely manner or as desired.
Reference will now be made to the accompanying drawings showing example embodiments of this disclosure. In the drawings:
Reference will now be made in detail to the exemplary embodiments implemented according to the present disclosure, the examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
The embodiments described herein provide techniques for transmitting volumes via a secured connection to a cloud computing device. The techniques described herein can support any type of volume including, for example, system volumes or data storage volumes. The techniques described herein can also transmit the volumes on demand. For example, using controllers for transmitting volumes, client devices can transmit the volumes to a cloud computing device at any desired time based on a worker virtual machine and standard cloud API calls. As a result, the techniques can eliminate or reduce transmission delays caused by overloading or unavailability of a particular virtual machine of the cloud computing devices. The techniques can also provide secured transmission of the volumes to a cloud computing device. Moreover, the techniques can enable parallel transmission of a plurality of volumes using the same worker virtual machine of the cloud computing device.
One or more client devices 102A-F (collectively as 102) are devices that can acquire remote services from data center 120 through various means. Client devices 102A-F can communicate with data center 120 either directly (e.g., client device 102E) or indirectly through a public network 104 (e.g., client devices 102A-D) or a private network 110 (e.g., client device 102F). In some embodiments, a main office 114 and a branch office 116 can also include one or more client devices that are similar to client devices 102A-F. Main office 114 can be located, for example, in a principal place of business of a company. Branch office 116 can be located, for example, remotely from main office 114. In some embodiments, the client devices of main office 114 and branch office 116 can also acquire remote services from data center 120 through, for example, private network 110.
When client device 102 communicates through public network 104 or private network 110, a communication link can be established. For example, a communication link can be established by public network 104, thereby providing a client device (e.g., client devices 102A-D) access to data center 120. A communication link can also be established by private network 110, thereby providing client device 102F, main office 114, and/or branch office 116 access to data center 120. While client devices 102A-D are portrayed as a computer (e.g., client device 102A), a laptop (e.g., client device 102B), a tablet (e.g., client device 102C), and a mobile smart phone (e.g., client device 102D), it is appreciated that client device 102 could be any type of device that communicates packets to and from data center 120.
Public network 104 and private network 110 can be any type of network such as a wide area network (WAN), a local area network (LAN), or a metropolitan area network (MAN). As an example, a WAN can be the Internet or the World Wide Web, and a LAN can be a corporate Intranet. Public network 104 and private network 110 can be a wired network, a wireless network, or a combination of both.
Data center 120 can be a central repository, either physical or virtual, for the storage, management, and dissemination of data and information pertaining to a particular public or private entity. Data center 120 can be used to house computer systems and associated components, such as one or more physical servers, virtual servers, and storage systems. Data center 120 may be owned and/or controlled by one entity (e.g., a cloud computing service provider) and made available to other entities (e.g., client devices), for example, as part of a cloud or remote storage service, an Infrastructure-as-a-Service (IaaS), a Platform-as-a-Service (PaaS), and/or a Software-as-a-Service (SaaS). Data center 120 can include, among other things, one or more servers (e.g., server 122), a desktop delivery controller 124, a virtual desktop 126, applications 128, and a backend system 130.
Server 122 can be an entity represented by an IP address and can exist as a single entity or a member of a server farm. Server 122 can be a physical server or a virtual server. In some embodiments, server 122 can include a hardware layer, an operating system, a communication subsystem, and a hypervisor creating or managing one or more virtual machines. Server 122 can provide one or more services to an endpoint. These services can include providing one or more applications 128 to one or more endpoints (e.g., client devices 102A-F). For example, one or more applications 128 can include Windows™- or SAP™-based applications and computing resources. Server 122 can also enable bootstrapping for establishing secured connections, generating or instantiating virtual machines based on Application Program Interface (API) calls, and/or generating or instantiating volumes based on API calls. The API calls can be standard cloud API calls such as API calls to start a virtual machine or to create a volume. Exemplary API calls can include, for example, an HTTP command to deploy a virtual machine. Via a communication subsystem, server 122 can communicate with other devices (e.g., client devices 102) through various types of networks (e.g., private network 110 and public network 104).
Desktop delivery controller 124 can be a device that enables delivery of services, such as virtual desktops 126 to client devices (e.g., client devices 102A-F). Desktop delivery controller 124 can provide functionality required to manage, maintain, and optimize all virtual desktop communications. In some embodiments, desktop delivery controller 124 can control, manage, maintain, or optimize the provisioning of applications 128.
In some embodiments, one or more virtual desktops 126 can provide one or more applications 128. Virtual desktops 126 can include hosted shared desktops allowing multiple users to access a single shared remote-desktop-services desktop, virtual desktop infrastructure desktops allowing each user to have their own virtual machine, streaming disk images, a local virtual machine, individual applications (e.g., one or more applications 128), or a combination thereof.
Backend system 130 can be a single or multiple instances of computer networking hardware, appliances, or servers in a server farm or a bank of servers. Backend system 130 can interface directly or indirectly with server 122. For example, backend system 130 can include Microsoft™ Active Directory, which can provide a number of network services, including lightweight directory access protocol (LDAP) directory services, Kerberos-based authentication, domain name system (DNS) based naming and other network information, and synchronization of directory updates amongst several servers. Backend system 130 can also include, among other things, an Oracle backend server, a SQL Server backend, and/or a dynamic host configuration protocol (DHCP) server. Backend system 130 can provide data, services, or a combination of both to data center 120, which can then provide that information via varying forms to client devices 102 or branch office 116.
GPUs 225 can be any type of specialized circuitry that can manipulate and alter memory (e.g., graphic memory 226) to provide and/or accelerate the creation of images stored in a frame buffer (e.g., frame buffer 316 shown in
CPUs 221 can connect to system memory 222 and system interface 250. CPUs 221 can execute programming instructions stored in system memory 222, operate on data stored in system memory 222, and communicate with the GPUs 225 through system interface 250, which bridges communication between the CPUs 221 and GPUs 225. In some embodiments, CPUs 221, GPUs 225, system interface 250, or any combination thereof, can be integrated into a single processing unit. GPUs 225 can be capable of executing particular sets of instructions stored in system memory 222 to manipulate graphical data stored in system memory 222 or graphic memory 226. For example, GPUs 225 can receive instructions transmitted by the CPUs 221 and process the instructions in order to render graphics data stored in graphic memory 226. Graphic memory 226 can be any memory space accessible by GPUs 225, including local memory, system memory, on-chip memories, and hard disk. GPUs 225 can enable displaying of graphical data stored in graphic memory 226 on display device 224.
Client device 102 can also include display device 224 and an input/output (I/O) device 230 (e.g., a keyboard, mouse, or pointing device) connected through an I/O controller 223, both of which communicate via system interface 250. It is appreciated that CPUs 221 can also communicate with system memory 222 and other devices in manners other than through system interface 250, such as through serial communication manners or point-to-point communication manners. Similarly, GPUs 225 can also communicate with graphic memory 226 and other devices in manners other than system interface 250. Furthermore, I/O device 230 can also provide storage and/or an installation medium for the client device 102.
As shown in
As shown in
Furthermore, client device 102 can include a network interface 218 to interface to a LAN, WAN, MAN, or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, 56 kb, X.25), broadband connections (e.g., ISDN, Frame Relay, ATM), wireless connections, or some combination of any or all of the above. Network interface 218 can comprise a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing client device 102 to any type of network capable of communication and performing the operations described herein.
Virtualization environment 300 can include a hardware layer 310 that can include one or more physical disks 304, one or more physical devices 306, one or more physical processors 308, a system memory 312, and a graphic memory 314. In some embodiments, frame buffer 316 can be stored within a memory element in graphic memory 314 and can be executed by one or more of physical processors 308.
Physical disk 304 can be either an internal or an external hard disk. Virtualization environment 300, in some embodiments, can communicate with an external hard disk that is included in the hardware layer 310 as a physical disk 304. Physical devices 306, in some embodiments, can be any combination of devices included in virtualization environment 300 and external devices that communicate with virtualization environment 300. Physical device 306 can be any device such as a network interface card, a video card, a keyboard, a mouse, an input device, a monitor, a display device, speakers, an optical drive, a storage device, a universal serial bus connection, any device connected to virtualization environment 300, any device communicating with virtualization environment 300, a printer, a scanner, or any other device that is desired. In some embodiments, physical processors 308 can be any processor and can include, for example, CPUs and GPUs.
System memory 312 can include any type of memory that can store data, programs, firmware, or sets of executable instructions. Programs, firmware, or executable instructions stored in system memory 312 can be executed by one or more physical processors 308 of virtualization environment 300. Graphic memory 314 can be any memory space accessible by the physical processors 308, including local memory, system memory, on-chip memories, and hard disk. Physical processors 308 can display certain graphics corresponding to graphical data stored in graphic memory 314 on a display device of physical devices 306.
Virtualization environment 300 can further include an operating system 318 that can be stored in a memory element in system memory 312 and executed by one or more of physical processors 308. Operating system 318 can also be referred to as a kernel. Moreover, virtualization environment 300 can include a hypervisor 302. Hypervisor 302 can be a program executed by physical processors 308 in virtualization environment 300 to manage any number of virtual machines. Hypervisor 302 can be referred to as a virtual machine monitor, or platform virtualization software. In some embodiments, hypervisor 302 can be any combination of executable instructions and hardware that monitors virtual machines executing on a computing device. Hypervisor 302 can be stored in a memory element in system memory 312.
Hypervisor 302, in some embodiments, can provide virtual resources to one or more virtual machines, e.g., virtual machines 332A-C. A virtual machine can be a fully-virtualized virtual machine. A fully-virtualized virtual machine can have a guest operating system to allow execution of its software. While running on a host computer, a fully-virtualized virtual machine is unaware that it is a virtual machine. A fully-virtualized virtual machine is sometimes also referred to as a Domain U or domU virtual machine (e.g., virtual machines 332B-C). A domU virtual machine can be controlled by a control program of another virtual machine. The control program can also be referred to as a control operating system, a control domain, a Domain 0, or dom0. Thus, the virtual machine that runs the control operating system can be referred to as a dom0 virtual machine (e.g., virtual machine 332A). In some embodiments, a dom0 virtual machine can have direct access to the host computer's hardware resources, and thus the control program can be executed by the host computer's operating system. A dom0 virtual machine can have access to the host computer's hardware resources through a hypervisor that either runs directly on the host computer's hardware (i.e., a bare metal hypervisor) or runs within the host computer's operating system (i.e., a hosted hypervisor). In some embodiments, a virtual machine can also be a service domain virtual machine, also referred to as a Domain S or domS virtual machine (not shown).
Hypervisor 302, in some embodiments, can provide virtual resources to guest operating systems (domU) 330A-B and/or control operating system (dom0) 320 in any manner such that hypervisor 302 simulates any desirable operating systems (e.g., Windows, Linux, Unix) to execute on virtual machines 332A-C. The system resources can include, for example, hardware layer 310 and any other component included in virtualization environment 300. In these embodiments, hypervisor 302 may be used to emulate virtual hardware, partition physical hardware, virtualize physical hardware, or execute virtual machines that provide access to computing environments. In some embodiments, hypervisor 302 can control processor scheduling and memory partitioning for virtual machines 332A-C executing in virtualization environment 300.
In some embodiments, hypervisor 302 can create virtual machines 332A-C, in which control operating system 320 or guest operating systems 330A-B execute, respectively. As an example, hypervisor 302 can load a virtual machine image to create a virtual machine 332. As another example, hypervisor 302 can execute guest operating systems 330A and 330B within virtual machines 332B and 332C, respectively. Guest operating systems 330A-B are further described in detail below.
As shown in
In some embodiments, hypervisor 302 of virtualization environment 300 can be a bare metal hypervisor, or a hypervisor that has direct access to all applications and processes executing in the host computing device (e.g., client device 102), all resources on the host computing device, and all hardware on the host computing device (e.g., the hardware layer shown in
In a virtualization environment that employs a bare metal hypervisor configuration, the host operating system can be executed by one or more virtual machines 332. Thus, a user of the computing device can designate one or more virtual machines 332 as the dom0 virtual machine (e.g. virtual machine 332A). This dom0 virtual machine can imitate the host operating system by allowing a user to interact with the computing device in substantially the same manner that the user would interact with the computing device via host operating system 318.
In some embodiments, virtualization environment 300 can host or execute one or more virtual machines 332A-C. As described above, a virtual machine executing a control operating system can be referred to as a dom0 virtual machine, and a guest virtual machine can be referred to as a domU virtual machine. A virtual machine 332 can be a set of executable instructions that, when executed by physical processors 308, imitate the operation of a physical computing device such that programs and processes can be executed on virtual machine 332 in a manner similar to that on a physical computing device. It is appreciated that virtualization environment 300 can host any number of virtual machines 332. In some embodiments, each virtual machine 332 can be provided, such as by hypervisor 302, with a unique virtual view of the physical hardware, memory, processor, and other system resources available to that virtual machine 332. The unique virtual view can be based on, for example, virtual machine permissions, application of a policy engine to one or more virtual machine identifiers, the user accessing a virtual machine, the applications executing on a virtual machine, networks accessed by a virtual machine, or any other desired criteria. In some embodiments, each virtual machine 332 can be provided with a substantially similar virtual view of the physical hardware, memory, processor, and other system resources available to the virtual machines 332.
As shown in
The file systems of volumes 326A-C can also include files and folders. For example, volume 326A can also include a user disk, which can store user data such as user files and folders. The user data stored on a user disk can also be referred to as persistent user data. In some embodiments, volumes, such as the system disk and/or the user disk, associated with a client device (e.g., client device 102), can be transmitted to a cloud computing device (e.g., server 122) in a data center (e.g., data center 120) and/or retrieved from the cloud computing device. In some embodiments, the transmission and/or retrieval of the volumes can be initiated by the client device via a controller. A controller can be a software application or a hardware device. The controller can reside on the client device or can be a standalone service. The controller will be described in detail below.
As shown in
In some embodiments, virtualization environment 300 can also include virtual apertures (not shown) in a virtual memory space, which can be a virtual view of the virtual memory available to virtual machines 332. The virtual apertures can correspond to, for example, caches, buffers, physical memories such as system memory 312 and graphic memory 314, and internal or external physical disks such as physical disk 304. As an example, under the circumstances that applications running on virtual machine 332A do not require more memory than is available in system memory 312, the virtual apertures of virtual machine 332A can correspond to a portion of system memory 312. As another example, under the circumstances that applications executed by virtual machine 332B require more memory than is available in system memory 312, the virtual apertures of virtual machine 332B can correspond to one or more portions of system memory 312, graphic memory 314, or even one or more portions of physical disks 304. The virtual apertures can be generated, provided, and managed by hypervisor 302.
Virtual processors 328A-C (collectively as 328) can be a virtualized view of one or more physical processors 308 of virtualization environment 300. In some embodiments, the virtualized view of the physical processors 308 can be generated, provided, and managed by hypervisor 302. In some embodiments, virtual processor 328 can have substantially the same characteristics as those of physical processors 308. In some embodiments, virtual processors 328 can provide a modified view of the physical processors 308 such that at least some of the characteristics of virtual processor 328 are different from the characteristics of the corresponding physical processors 308.
In
Moreover, in some embodiments, control operating system 320 can start new virtual machines 332 or terminate execution of virtual machines 332. Control operating system 320 can also directly access hardware and/or resources within the hardware layer 310. In some embodiments, control operating system 320 can interface with programs and applications executing within the context of a virtual machine 332. Control operating system 320 can also interface with programs and applications executing on the computing device in virtualization environment 300 that are outside of the context of a virtual machine 332.
Furthermore, in some embodiments, control operating system 320 can also interact with one or more guest operating systems 330. Control operating system 320 can communicate with guest operating systems 330 through hypervisor 302. As an example, guest operating systems 330 can communicate with control operating system 320 via a communication channel established by the hypervisor 302, such as via a plurality of shared memory pages made available by the hypervisor 302. In some embodiments, control operating system 320 can also include a network back-end driver (not shown) for communicating directly with networking hardware provided by virtualization environment 300. The network back-end driver can process at least one virtual machine request from at least one guest operating system 330. Control operating system 320 can also include a block back-end driver for communicating with a storage element included in virtualization environment 300, such as system memory 312 and graphic memory 314. In some embodiments, the block back-end driver can read and write data from the storage element based upon at least one request received from a guest operating system 330.
Control operating system 320 can also include a tools stack 324, which can provide functionality for interacting with the hypervisor 302. Tools stack 324 can include customized applications for providing improved management functionality to an administrator of a virtual machine farm. In some embodiments, at least one of tools stack 324 and control operating system 320 can include a management application programming interface (API) that provides an interface for remotely configuring and controlling virtual machines 332 running in virtualization environment 300.
As shown in
In some embodiments, controller 420 can provide instructions to cloud computing device 440 to generate a worker virtual machine 442 (worker VM 442). Such instructions can be, for example, an API call 402. API call 402 can be a standard cloud API call, such as a cloud platform API request in the form of an HTTP GET/POST with an associated command and parameters. API call 402 can include a cloud platform API URL, which identifies cloud computing device 440 (or any cloud resources) that controller 420 desires to interact with. The command, for example, can be a command to generate or instantiate a virtual machine on cloud computing device 440. The parameters can provide further information for cloud computing device 440 to execute according to the API call.
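A minimal sketch of how such an API call might be composed follows; the endpoint URL, command name, and parameter names are illustrative placeholders rather than the API of any particular cloud platform:

```python
import urllib.parse

def build_api_call(base_url, command, **params):
    """Compose a cloud platform API request URL: the base URL identifies
    the cloud resource to interact with, while the command and parameters
    tell the cloud computing device what to execute."""
    query = urllib.parse.urlencode({"command": command, **params})
    return f"{base_url}?{query}"

# Hypothetical request asking the cloud to instantiate a worker VM.
url = build_api_call(
    "https://cloud.example.com/api",   # cloud platform API URL (placeholder)
    "deployVirtualMachine",            # command: generate/instantiate a VM
    templateid="worker-template-1",    # parameters for the cloud to act on
    zoneid="zone-1",
)
```

The resulting URL would then be issued as an HTTP GET/POST, with the provider's authentication scheme applied.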
In some embodiments, after worker VM 442 is generated based on API call 402, worker VM 442 can provide an identification, such as an IP address, to controller 420. The IP address of worker VM 442 can be a public IP address or an internal IP address. A public IP address can be a globally routable unicast IP address, which is visible publicly. An internal IP address can be routable or visible within a private network. Cloud computing devices of some cloud computing service providers can provide an internal IP address after establishing a worker VM. Controller 420 can use the identification (e.g., public IP address and/or internal IP address) for subsequent communications with worker VM 442.
After receiving API call 402, cloud computing device 440 can generate or instantiate worker VM 442. Worker VM 442 can be, for example, a dom0 or domU virtual machine (e.g., virtual machine 332A-C). In some embodiments, worker VM 442 can include a bootstrap 450 and a worker service 452. Worker VM 442 can also be associated with any number of volumes (e.g., 10 target volumes). Operation of worker VM 442 is further discussed below.
As shown in
As discussed above, controller 420 can provide instructions to cloud computing device 440 to generate worker VM 442 via, for example, API call 402. Via the same API call 402 or a different API call, controller 420 can also instruct cloud computing device 440 to provide a VM user data 446. VM user data 446 can be a volume or a disk that enables worker VM 442 to receive private data from controller 420. For example, controller 420 can transmit certificates 422A′ and 422B′ to VM user data 446, based on which worker VM 442 can perform authentication.
Based on certificates 422A′ and 422B′ stored in VM user data 446, controller 420 and worker VM 442 can be mutually or unilaterally authenticated. For example, as discussed above, certificate 422A (and 422A′) can be a private certificate for authenticating worker VM 442, and certificate 422B (and 422B′) can be a private certificate for authenticating controller 420. In some embodiments, in a first handshake communication between controller 420 and worker VM 442, bootstrap 450 can send certificate 422A′ to controller 420. After receiving certificate 422A′ provided by bootstrap 450, controller 420 can verify whether certificate 422A′ matches the corresponding certificate 422A stored in controller 420. If the two certificates match, worker VM 442 is authenticated by controller 420. In some embodiments, authentication of controller 420 is also required. Thus, in a second handshake communication, controller 420 can send certificate 422B to bootstrap 450 of worker VM 442. After receiving certificate 422B, bootstrap 450 of worker VM 442 can verify whether certificate 422B matches the corresponding certificate 422B′ stored in VM user data 446. If the two certificates match, controller 420 is authenticated by worker VM 442. As a result, controller 420 and worker VM 442 can be mutually authenticated.
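The two-handshake exchange above can be sketched as follows. This is a deliberately simplified illustration in which a byte-for-byte comparison stands in for full certificate validation (signature and chain checks); the function names are hypothetical:

```python
def authenticate_worker_vm(cert_422A_prime, cert_422A_stored):
    """First handshake: the bootstrap sends certificate 422A' to the
    controller, which checks it against its stored certificate 422A."""
    return cert_422A_prime == cert_422A_stored

def authenticate_controller(cert_422B, cert_422B_prime_stored):
    """Second handshake: the controller sends certificate 422B to the
    bootstrap, which checks it against certificate 422B' in VM user data."""
    return cert_422B == cert_422B_prime_stored

def mutually_authenticated(cert_a, cert_a_stored, cert_b, cert_b_stored):
    """Both handshakes must succeed for mutual authentication."""
    return (authenticate_worker_vm(cert_a, cert_a_stored)
            and authenticate_controller(cert_b, cert_b_stored))
```

In a real deployment each comparison would be replaced by cryptographic verification of the presented certificate.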
It is appreciated that any other certificate-based authenticating mechanisms can be used to authenticate one or both of controller 420 and worker VM 442. For example, in some embodiments, only worker VM 442 is authenticated. In some embodiments, public certificates or public keys can also be used to encrypt the private certificates before the private certificates are transmitted, for example, during handshake communications. Moreover, it is also appreciated that other authenticating mechanisms, such as password or random number based authenticating mechanisms can be used to mutually or unilaterally authenticate controller 420 and/or worker VM 442.
After authentication of one or both of controller 420 and worker VM 442, bootstrap 450 can enable the initiating and performing of a self-sustaining process to establish a secured connection 406. For example, bootstrap 450 can initiate or generate a worker service 452 of worker VM 442. Bootstrap 450 and/or controller 420 can thus establish secured connection 406 between controller 420 and worker service 452. Secured connection 406 can be based on, for example, Secure Sockets Layer (SSL) protocols implemented on top of any transport layer protocol, such as the Transmission Control Protocol (TCP). SSL protocols can also be used together with application-specific protocols, such as HTTP (to form HTTPS), FTP, etc. In some embodiments, after authentication of one or both of controller 420 and worker VM 442, a session key can be exchanged, and any data communicated between controller 420 and worker VM 442 can be encrypted based on the session key. As a result, the communication between controller 420 and worker VM 442 can be secured. As discussed above, controller 420 can provide instructions to generate a plurality of worker VMs. As a result, in some embodiments, a secured connection for each worker VM can be established between the individual worker VM and controller 420.
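As a sketch of how a connection like secured connection 406 might be layered on a TCP transport, the following uses Python's standard ssl module; the host, port, and CA file path are assumptions for illustration only:

```python
import socket
import ssl

def make_ssl_context(ca_file=None):
    """Build an SSL context that verifies the peer's certificate, mirroring
    the certificate-based authentication described above. The CA file path
    is a hypothetical deployment-supplied value."""
    return ssl.create_default_context(cafile=ca_file)

def open_secured_connection(host, port, context):
    """Layer SSL on top of a TCP connection. During the handshake a session
    key is negotiated, after which all data exchanged between controller
    and worker service over this socket is encrypted."""
    raw_socket = socket.create_connection((host, port))
    return context.wrap_socket(raw_socket, server_hostname=host)
```

A controller managing several worker VMs would call `open_secured_connection` once per worker VM, yielding one secured connection per worker.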
As shown in
In some embodiments, the instructions that controller 420 provides to cloud computing device 440 to generate one or more target volumes can include, for example, an API call 408. Similar to API call 402, API call 408 can be a standard cloud API call, such as a cloud platform API request in the form of an HTTP GET/POST with an associated command and parameters. The command, for example, can be a command to generate or instantiate one or more target volumes in cloud computing device 440. The parameters can provide further information for cloud computing device 440 to execute according to API call 408.
Further, as discussed above, controller 420 can obtain an identification, such as an IP address, of worker VM 442. In some embodiments, based on the IP address, controller 420 can provide instructions to associate one or more of the target volumes (e.g., target volume 1 454 through target volume N 456) with worker VM 442. For example, via API call 408, controller 420 can associate target volume 1 454, which is generated for storing volume 426, with worker VM 442.
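API calls such as 402 and 408 are HTTP GET/POST requests carrying a command and parameters. The helper below sketches how such request URLs might be composed; the endpoint, command names (`createVolume`, `attachVolume`), and parameter names are assumptions for illustration, not a documented cloud API:

```python
from urllib.parse import urlencode


def build_api_call(endpoint: str, command: str, **params) -> str:
    # Compose a cloud-platform API request URL (HTTP GET form) from a
    # command and its parameters, in the style of API calls 402 and 408.
    query = urlencode({"command": command, **params})
    return f"{endpoint}?{query}"


# Instruct the cloud to generate a target volume (hypothetical names):
create_url = build_api_call("https://cloud.example/api", "createVolume",
                            sizeGb=100, zone="zone-1")

# Associate the new target volume with the worker VM by its IP address:
attach_url = build_api_call("https://cloud.example/api", "attachVolume",
                            volumeId="vol-454", workerIp="10.0.0.42")
```

A POST variant would send the same command and parameters in the request body rather than the query string.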
After secured connection 406 is established between controller 420 and worker service 452 and one or more of target volume 1 454 through target volume N 456 are associated with worker VM 442, volume 426 can be transmitted to the corresponding one or more target volumes (e.g., target volume 1 454) via secured connection 406. As an example, via secured connection 406, controller 420 can transmit (e.g., push or inject) volume 426 to worker service 452 and worker service 452 can store volume 426 to target volume 1 454. As another example, worker service 452 can also copy volume 426 from controller 420 and store it to target volume 1 454. It is appreciated that volume 426 can be transmitted irrespective of the content type of volume 426 and the size of volume 426 (i.e., volume 426 can be an unlimited volume).
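Pushing a volume over the secured connection reduces to streaming opaque bytes in chunks, which is why neither the content type nor the size of the volume matters. A minimal sketch (the chunk size and function names are assumed for illustration):

```python
import io


def push_volume(src, dst, chunk_size=64 * 1024):
    # Stream a volume from the controller side (src) to the worker
    # service side (dst) chunk by chunk. Bytes are copied opaquely, so
    # the transfer works for any content type and any volume size.
    total = 0
    while True:
        block = src.read(chunk_size)
        if not block:
            break
        dst.write(block)
        total += len(block)
    return total


# Simulate a 200 KB volume with in-memory streams:
src = io.BytesIO(b"\x00" * 200_000)
dst = io.BytesIO()
transferred = push_volume(src, dst)
```

In the pull variant described above, the worker service would drive the same loop, reading from the controller and writing to the attached target volume.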
After volume 426 is transmitted and stored in target volume 1 454, controller 420 can instruct (e.g., via an API call) cloud computing device 440 to disassociate target volume 1 454 from worker VM 442. It is appreciated that controller 420 can transmit any number of volumes of any size to any of the target volumes (e.g., target volume 1 454 through target volume N 456). In some embodiments, if controller 420 determines that the number of volumes being transmitted to a particular worker VM (e.g., worker VM 442) has reached a pre-configured threshold, controller 420 can instruct cloud computing device 440 to generate additional worker VMs and establish additional secured connections. For example, controller 420 may have fifteen volumes to be transmitted and is transmitting ten volumes to the corresponding target volumes via worker VM 442. If the pre-configured threshold of volumes for a single worker VM is ten, controller 420 can determine that no further volume will be transmitted via worker VM 442. Instead, controller 420 can instruct cloud computing device 440 to generate an additional worker VM (not shown) for transmitting the remaining five volumes.
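The threshold logic in the fifteen-volume example can be sketched as a simple partition: volumes are assigned to a worker until the pre-configured threshold is reached, and each further batch implies a request for an additional worker VM (function and variable names are assumptions):

```python
def assign_to_workers(volume_ids, threshold):
    # Split the pending volumes into per-worker batches of at most
    # `threshold` volumes; each batch beyond the first corresponds to a
    # request to the cloud computing device for an additional worker VM.
    return [volume_ids[i:i + threshold]
            for i in range(0, len(volume_ids), threshold)]


# Fifteen volumes with a per-worker threshold of ten yield two workers:
# ten volumes via the first worker VM and five via an additional one.
batches = assign_to_workers(list(range(1, 16)), threshold=10)
```
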
In some embodiments, after volume 426 is transmitted to target volume 1 454, controller 420 can also receive, from cloud computing device 440, an identification (e.g., a volume ID or a disk ID) that identifies target volume 1 454, which stores volume 426. Controller 420 can store the identification of target volume 1 454 or further provide the identification to the client device or a client application that requested controller 420 to transmit volume 426. The identification can enable further operations based on target volume 1 454. Such further operations can be performed by controller 420 and/or by the client device or the client application that requested controller 420 to transmit volume 426. In some embodiments, controller 420 can also receive, from cloud computing device 440, an identification corresponding to each of the target volumes (e.g., target volume 1 454 through target volume N 456). Similarly, these identifications can enable further operations performed by controller 420 and/or client devices or client applications.
In some embodiments, based on the identification of the target volume, controller 420 can retrieve data from the target volume. As described above, after volume 426 is stored in target volume 1 454, controller 420 can receive an identification of target volume 1 454. Based on this identification, controller 420 can later retrieve the volume (e.g., volume 426) stored on target volume 1 454. For example, controller 420 can provide this identification to worker service 452, which identifies target volume 1 454 as the target volume that stores the requested volume (e.g., volume 426). Worker service 452 can thus transmit the requested volume to controller 420.
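The identification-based bookkeeping described above — recording which target volume stores which transmitted volume, then retrieving by that identification — can be sketched as a small controller-side registry (the class and method names are assumptions for illustration):

```python
class TargetVolumeRegistry:
    # Controller-side map from target-volume identifications (e.g., a
    # volume ID or disk ID returned by the cloud computing device) to
    # the volumes stored on them.

    def __init__(self):
        self._stored = {}

    def record(self, target_volume_id: str, volume_name: str) -> None:
        # Called when the cloud computing device returns the ID of the
        # target volume that now holds the transmitted volume.
        self._stored[target_volume_id] = volume_name

    def retrieve(self, target_volume_id: str) -> str:
        # Later, the controller presents this ID (e.g., to the worker
        # service) to identify and fetch the stored volume.
        return self._stored[target_volume_id]
```

The same identification could equally be handed to a client device or client application to enable its own further operations on the target volume.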
In some embodiments, controller 420 can instruct cloud computing device 440 to terminate worker VM 442. Controller 420 can also terminate, or instruct cloud computing device 440 to terminate, secured connection 406. Controller 420 can instruct cloud computing device 440 to terminate worker VM 442 and/or secured connection 406 based on a connection-terminating event, such as a pre-configured idling time period, a pre-configured expiry time of one or more authentication certificates associated with the authentication information, a loss of power, a loss of connectivity, or a request for termination. As an example, after the transmission of volume 426 (or any additional volumes) is complete, controller 420 can determine that no additional volume is required to be transmitted and that identifications of the corresponding target volumes have been received. As a result, controller 420 can instruct cloud computing device 440 to terminate worker VM 442 and/or secured connection 406. As another example, controller 420 can determine that a pre-configured idling time period (e.g., a period of time during which no data communication occurred between controller 420 and worker VM 442) has lapsed, and therefore instruct cloud computing device 440 to terminate worker VM 442 and/or secured connection 406. Controller 420 can also directly terminate secured connection 406. In some embodiments, controller 420 can determine that a pre-configured expiry time of one or more certificates (e.g., certificates 422A′ and 422B′) has lapsed or is about to lapse. As an example, if controller 420 determines that the pre-configured expiry time is about to lapse, controller 420 can terminate, or instruct cloud computing device 440 to terminate, worker VM 442 and/or secured connection 406. Controller 420 can make an API call to terminate worker VM 442.
Further, if controller 420 determines that the pre-configured expiry time has lapsed, which can indicate that the certificates have expired and are no longer valid, controller 420 can prevent a new secured connection from being initiated or established based on the expired or invalid certificates.
In some embodiments, if secured connection 406 is lost undesirably (e.g., due to a loss of power, a loss of network connectivity, or any other connection-terminating event), controller 420 can instruct cloud computing device 440 to generate a new worker VM and establish a new secured connection. The details of generating the new worker VM and establishing the new secured connection are similar to those discussed above.
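The termination conditions above — a lapsed idling period and a lapsed certificate expiry time — can be sketched as simple predicates over timestamps (plain seconds here; the names and the single-predicate form are assumptions for illustration):

```python
def should_terminate(now: float, last_activity: float,
                     idle_limit: float, cert_expiry: float) -> bool:
    # Terminate the worker VM and/or the secured connection when either
    # the pre-configured idling time period has lapsed or the
    # certificates' pre-configured expiry time has passed.
    idled_out = (now - last_activity) >= idle_limit
    certs_expired = now >= cert_expiry
    return idled_out or certs_expired


def may_establish_connection(now: float, cert_expiry: float) -> bool:
    # Expired (invalid) certificates must not be used to initiate or
    # establish a new secured connection.
    return now < cert_expiry
```

A controller loop would evaluate `should_terminate` periodically and, on a lost connection, check `may_establish_connection` before requesting a new worker VM.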
The methods disclosed herein may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
A portion or all of the methods disclosed herein may also be implemented by an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), a printed circuit board (PCB), a digital signal processor (DSP), a combination of programmable logic components and programmable interconnects, a single central processing unit (CPU) chip, a CPU chip combined on a motherboard, or any other combination of devices or modules capable of performing the methods disclosed herein.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the subject matter disclosed herein. It is intended that the specification and examples be considered as exemplary only. It is also intended that the sequence of steps shown in the figures is for illustrative purposes only and is not intended to be limited to any particular sequence of steps. As such, those skilled in the art will appreciate that these steps can be performed in a different order while implementing the same method.
This patent application is a continuation of, and claims priority to and the benefit of, U.S. patent application Ser. No. 14/609,033, titled “METHOD AND SYSTEM FOR SECURELY TRANSMITTING VOLUMES INTO CLOUD,” and filed Jan. 29, 2015, which claims priority to and the benefit of U.S. Provisional Patent Application No. 61/953,599, titled “METHOD AND SYSTEM FOR SECURELY TRANSMITTING VOLUMES INTO CLOUD,” and filed Mar. 14, 2014, the contents of all of which are hereby incorporated herein by reference in their entireties for all purposes.
Number | Date | Country
---|---|---
20180097791 A1 | Apr 2018 | US
Number | Date | Country
---|---|---
61953599 | Mar 2014 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 14609033 | Jan 2015 | US
Child | 15820085 | | US