This disclosure relates generally to computer networking and more particularly to implementation of an on-demand computing network environment.
A computing network typically includes a plurality of computing devices that are connected with one another, either physically or wirelessly, such that those computing devices can communicate with one another. A network is typically constructed by acquiring, either physically or via contractual agreement, the resources necessary to implement a desired framework. Typically, such components are acquired on a component-by-component basis.
Systems and methods are provided for a computer-implemented method of implementing an on-demand computing network environment. A network specification is received from a user. Resources from one or more resource providers are provisioned. The on-demand computing network is configured, where configuring comprises assigning a first provisioned resource as a hub device and assigning one or more second provisioned resources as rim devices, where rim devices are configured to communicate with one another only via the hub device.
As another example, a computer-implemented system for implementing an on-demand computing network environment includes a provisioned resource data store configured to store records associated with resources provisioned from one or more resource providers, where records in the provisioned resource data store include an identification of a particular resource and a particular on-demand computing network to which the particular resource has been assigned. A network implementation engine is configured to receive a network specification from a user, to assign a first provisioned resource as a hub device to the particular on-demand computing network and update the provisioned resource data store, and to assign one or more second provisioned resources as rim devices to the particular on-demand computing network and update the provisioned resource data store, wherein rim devices are configured to communicate with one another only via the hub device.
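The following is a minimal sketch, in Python with hypothetical class and field names, of how a provisioned resource data store and the hub/rim assignments described above might be represented; it is illustrative only and not the system's actual implementation.

```python
# Hypothetical sketch of the provisioned resource data store and role
# assignment described above; names are illustrative, not from the disclosure.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class ProvisionedResourceRecord:
    resource_id: str                  # identification of the particular resource
    network_id: Optional[str] = None  # on-demand network the resource is assigned to
    role: Optional[str] = None        # "hub" or "rim" once assigned


class ProvisionedResourceDataStore:
    def __init__(self) -> None:
        self._records: Dict[str, ProvisionedResourceRecord] = {}

    def add(self, record: ProvisionedResourceRecord) -> None:
        self._records[record.resource_id] = record

    def assign(self, resource_id: str, network_id: str, role: str) -> None:
        # Update the record when the engine assigns the resource to a network.
        record = self._records[resource_id]
        record.network_id = network_id
        record.role = role


# Example: one resource becomes the hub, two become rim devices.
store = ProvisionedResourceDataStore()
for rid in ("res-1", "res-2", "res-3"):
    store.add(ProvisionedResourceRecord(resource_id=rid))
store.assign("res-1", network_id="net-A", role="hub")
store.assign("res-2", network_id="net-A", role="rim")
store.assign("res-3", network_id="net-A", role="rim")
```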
The network implementation engine 106 is configured to examine the network specification 104 and provision resources necessary to implement the user's desired network configuration. The pool of resources 108 may contain a variety of resources of different types, which may also come from different providers. For example, a first resource 112 may be a cloud processing resource (“third party compute service provider processing resource”) acquired from a first provider who provides servers with processing capabilities available for accessing. A second resource 114 may be a mail server or file server resource provided by the same provider or by a different provider. A third resource 116 may be a cellular communication resource from a third provider, where that cellular communication resource enables acquisition of voice or video conference data from a party via a device of that party having data communication capabilities. Other resources can include proxy server resources for forwarding traffic, media servers for providing media (e.g., video, audio, images), as well as others.
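As an illustration of the kind of typed resource pool described above, the sketch below (hypothetical names, not drawn from the disclosure) enumerates resource types and records each pooled resource's provider:

```python
# Illustrative sketch of a pool holding resources of different types acquired
# from different providers; the enum values and field names are assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class ResourceType(Enum):
    COMPUTE = auto()        # third party compute service provider processing resource
    MAIL_SERVER = auto()
    FILE_SERVER = auto()
    CELLULAR = auto()       # cellular communication resource
    PROXY_SERVER = auto()   # forwards traffic
    MEDIA_SERVER = auto()   # serves video, audio, images


@dataclass
class PooledResource:
    resource_id: str
    resource_type: ResourceType
    provider: str


pool = [
    PooledResource("res-112", ResourceType.COMPUTE, "provider-1"),
    PooledResource("res-114", ResourceType.MAIL_SERVER, "provider-2"),
    PooledResource("res-116", ResourceType.CELLULAR, "provider-3"),
]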
The network implementation engine 106 interacts with the pool of acquired resources 108 to provision resources needed to create the desired on-demand computing network 110. The network implementation engine 106 assigns the provisioned resources to the network and configures the network topology. In one implementation, the on-demand network 110 is configured as a wheel network having a hub device 118 (e.g., a server) and one or more rim devices 120, 122, 124, 126 that can take the form of servers of different types or other computing components. The rim devices communicate with one another, in one embodiment, only through the hub device 118, where communications between the hub device 118 and the rim devices can be via secure connections, such as VPN connections. Certain of the rim devices (e.g., rim devices 120, 124, 126) can be configured as exit points that are permitted to communicate with computing resources outside of the on-demand network 110, such as the Internet. These external communications can be via a variety of protocols, some of which may not be secure, such as HTTP, FTP, cellular protocols, or otherwise. Rim device 124 is configured to provide a secure link from the user 102 to the hub device 118 and other resources of the network 110, such as via a VPN connection. Rim devices that are not identified as exit points are, in one embodiment, not permitted to communicate outside the network 110. Such rim devices (e.g., rim device 122) can be assigned other computing duties, such as providing a file server or a mail server.
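A minimal sketch of the wheel topology described above, with hypothetical names: rim-to-rim traffic always transits the hub, and only rim devices designated as exit points may send traffic outside the network.

```python
# Hypothetical sketch of the hub-and-rim ("wheel") routing constraint; the
# class and device names are illustrative, not from the disclosure.
class WheelNetwork:
    def __init__(self, hub: str):
        self.hub = hub
        self.rims: dict[str, bool] = {}   # rim name -> is_exit_point

    def add_rim(self, name: str, exit_point: bool = False) -> None:
        self.rims[name] = exit_point

    def route(self, source: str, destination: str) -> list[str]:
        """Return the hop sequence for traffic originating at a rim device."""
        if destination in self.rims:
            # Rim-to-rim traffic always transits the hub.
            return [source, self.hub, destination]
        # Destination is outside the network: only exit points may send it.
        if not self.rims.get(source, False):
            raise PermissionError(f"{source} is not an exit point")
        return [source, destination]


net = WheelNetwork(hub="hub-118")
net.add_rim("rim-120", exit_point=True)
net.add_rim("rim-122", exit_point=False)   # internal-only, e.g. file/mail server
net.add_rim("rim-124", exit_point=True)    # also the user's secure entry point
print(net.route("rim-120", "rim-122"))     # ['rim-120', 'hub-118', 'rim-122']
print(net.route("rim-120", "example.org")) # ['rim-120', 'example.org']
```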
In addition to direct connections between the hub 118 and rim devices, such connections can be implemented using a plurality of links (“joints”) connected by joint relays. In the example of
To provision a resource to add it to the network 204, the network implementation engine 202 accesses the resource from the pool of acquired resources 206 if the needed resource is available. For example, the pool of acquired resources 206 may include a number of accounts with different third party computing service providers, online e-mail accounts, and file sharing accounts that the network implementation engine 202 can access and assign to the on-demand computing network to generate a desired user network topology. If a desired resource is not in the pool of acquired resources 206, then the network implementation engine 202 can acquire the desired resource or direct another entity to acquire the desired resource, with the resource then being assigned to the on-demand computing network 204. The network implementation engine 202 assigns the hub device 208, the rim devices 210, and communication links among them (e.g., identifying addresses with which the hub device 208 and rim devices 210 are configured to communicate in the network 204) to the network 204.
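The provisioning flow described above might be sketched as follows; the `acquire` callable and the dictionary fields are assumptions standing in for provider-specific acquisition steps.

```python
# Hypothetical sketch: reuse an available resource from the acquired pool when
# possible, otherwise acquire a new one, then assign it to the network.
from typing import Callable, List, Optional


def provision(pool: List[dict], resource_type: str, network_id: str,
              acquire: Callable[[str], dict]) -> dict:
    """Return a resource of resource_type assigned to network_id."""
    candidate: Optional[dict] = next(
        (r for r in pool
         if r["type"] == resource_type and r.get("network") is None),
        None,
    )
    if candidate is None:
        candidate = acquire(resource_type)   # acquire and add to the pool
        pool.append(candidate)
    candidate["network"] = network_id        # assign to the on-demand network
    return candidate


# Usage with a stub acquisition function.
pool = [{"id": "mail-1", "type": "mail", "network": None}]
new_resource = provision(pool, "proxy", "net-204",
                         acquire=lambda t: {"id": f"{t}-new", "type": t,
                                            "network": None})
```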
Following network 204 setup, in one embodiment, the network implementation engine 202 takes a hands-off approach, where the network implementation engine 202 does not monitor or communicate with the network 204 while the network is in operation. In this configuration, the network implementation engine 202 receives no data about operations performed using the network 204 beyond knowledge of resources assigned to the network (e.g., as stored in records of a configuration data store). Upon receipt of a user request to add resources to the network 204 or remove resources therefrom, the network implementation engine 202 again interacts with the network 204 to implement the newly desired topology.
In one embodiment, de-provisioning of resources by the network implementation engine 202, such as at the end of the network's use, is performed without direct communication with the network 204. To de-provision resources, the network implementation engine 202 communicates with providers of the resources, indicating to those providers that the resources are to be de-provisioned. A de-provisioned resource can then be recycled for use with another on-demand computing network, such as a network associated with a different user. In one embodiment, upon receipt of a de-provisioning request for a resource, a provider resets the resource (e.g., deletes stored data, such as e-mails or files) to an initial state so that it is ready for reuse. In this manner, the network implementation engine 202 acquires no further data associated with operation of the network 204.
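A hedged sketch of de-provisioning as described above: the engine communicates only with the resource's provider, which resets the resource for reuse, and never contacts the running network. The provider client API shown is hypothetical.

```python
# Illustrative de-provisioning flow; the provider client and record fields are
# assumptions, not a real provider API.
def deprovision(record: dict, provider_clients: dict) -> None:
    provider = provider_clients[record["provider"]]
    provider.reset(record["resource_id"])   # provider wipes stored data
    record["network"] = None                # resource returns to the pool


class StubProviderClient:
    def reset(self, resource_id: str) -> None:
        print(f"provider resetting {resource_id} to its initial state")


record = {"resource_id": "mail-1", "provider": "provider-2", "network": "net-204"}
deprovision(record, {"provider-2": StubProviderClient()})
```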
In one embodiment, routing of traffic through rim devices 308, 310 designated as exit points is user configurable during network 302 operation. The hub device 304 includes a service broker operating thereon. The service broker is configured to enable configuration changes to be made to resources currently assigned to the network 302. For example, the service broker is tasked with changing, on command, the routing of traffic to and from the network 302 via rim devices 308, 310 designated for communications outside of the network. In one embodiment, the service broker provides a user interface to a user 314 for designation of traffic routing. The user interface includes a listing of one or more types of traffic (e.g., e-mail, HTTP requests) that can be transmitted from the network 302 via one of the exit point rim devices 308, 310. The user interface further includes a listing of available exit point rim devices 308, 310. The user 314 selects a traffic type and an exit point 308, 310 with which to associate that type of traffic. That type of traffic is then directed out of the network 302 through the selected exit point 308, 310. Transitions between exit points for different types of traffic can be performed on user command without requiring the user to reconnect (e.g., via a VPN connection) to the network 302.
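The service broker's user-configurable mapping of traffic types to exit points might look like the following sketch (names hypothetical); reassigning a traffic type takes effect without the user reconnecting to the network.

```python
# Hypothetical sketch of the service broker's traffic-type-to-exit-point map.
class ServiceBroker:
    def __init__(self, exit_points: list[str]):
        self.exit_points = exit_points
        self.routes: dict[str, str] = {}   # traffic type -> exit point rim

    def list_options(self) -> tuple[list[str], list[str]]:
        """What the user interface would present: traffic types and exits."""
        return ["email", "http"], list(self.exit_points)

    def set_route(self, traffic_type: str, exit_point: str) -> None:
        if exit_point not in self.exit_points:
            raise ValueError(f"{exit_point} is not a designated exit point")
        self.routes[traffic_type] = exit_point

    def exit_for(self, traffic_type: str) -> str:
        return self.routes[traffic_type]


broker = ServiceBroker(exit_points=["rim-308-asia", "rim-310-south-america"])
broker.set_route("http", "rim-310-south-america")
broker.set_route("http", "rim-308-asia")   # switched on user command
print(broker.exit_for("http"))             # rim-308-asia
```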
Such operation enables disguising of the source of data to a party receiving traffic from the network. For example, if rim device 308 is positioned in Asia, while rim device 310 is positioned in South America, user selection of rim device 308 for HTTP traffic instead of rim device 310 will change the apparent source of the next HTTP request to be Asia instead of South America. Such operations can prevent certain computing devices external to the network 302 from blocking communications with the network 302, where those external computing devices are configured to restrict communications based on the geographic location of incoming communications.
Upon connection of the parties, a telephone conversation or video conference can occur via the network. For example, the second rim device 712 is configured to communicate data with a cellular user 706 via a cellular network (e.g., via a data link of the cellular network). The second rim device 712 is configured to transmit that data within the on-demand computing network via the hub device 708 and possibly other devices internal to the network (e.g., one or more non-cellular proxy server relays) to the first rim device 710. In the example of
In addition to provisioning resources (e.g., 808, 810, 812) for the on-demand computing network, the network implementation engine also provisions resources for communicating connection information to the cellular user 806. A provisioned anonymous e-mail address is used to communicate a connection address to an e-mail address of the cellular user. A provisioned anonymous Twitter account is used to communicate a first portion of authentication data (e.g., a password) to the cellular user 806. A provisioned anonymous Facebook account is used to communicate a second portion of the authentication data to the cellular user 806. Upon receipt of the three pieces of connection data, the cellular user 806 can successfully establish a connection to the first rim device 810, and communication with the user 802 can begin. The network implementation engine 804 can then de-provision the resources utilized to transmit the connection information to the cellular user 806.
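A sketch of distributing the three pieces of connection information over separately provisioned anonymous channels, as described above; the sender callables are stand-ins, not real service APIs.

```python
# Hypothetical sketch: connection address and split authentication data are
# sent over three separately provisioned anonymous channels.
def distribute_connection_info(address: str, password: str,
                               send_email, send_tweet, send_facebook) -> None:
    half = len(password) // 2
    send_email(address)                # connection address via anonymous e-mail
    send_tweet(password[:half])        # first portion of authentication data
    send_facebook(password[half:])     # second portion of authentication data


# Usage with stand-in senders; the recipient recombines all three pieces
# before connecting to the first rim device.
distribute_connection_info(
    "vpn://rim-810.example", "s3cr3tpass",
    send_email=lambda m: print("email:", m),
    send_tweet=lambda m: print("twitter:", m),
    send_facebook=lambda m: print("facebook:", m),
)
```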
Any desktop servers 1208 being built in this project will be able to use exit points in Brazil 1204 or Europe 1206, and have their logs sent to the Chicago 1206 rim device. Access to the desktops will be provided by an in-project authentication server, and all deployed resources, from the hub to the exit points, will be monitored by the monitoring server deployed in Chicago 1206.
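The project described above could be captured in a network specification along the following lines; the keys and values are illustrative assumptions, not a defined schema.

```python
# Hypothetical network specification for the project described above.
project_spec = {
    "hub": {"region": "us"},
    "rims": [
        {"name": "exit-brazil", "role": "exit_point", "region": "brazil"},
        {"name": "exit-europe", "role": "exit_point", "region": "europe"},
        {"name": "chicago", "role": "logging_and_monitoring", "region": "chicago"},
        {"name": "auth", "role": "authentication_server"},
    ],
    "desktops": {
        "allowed_exit_points": ["exit-brazil", "exit-europe"],
        "log_destination": "chicago",
    },
}
```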
In one example, a service at 1304 communicates with user clients, indicating a set of proxy servers 1302 with which to communicate. The pool of proxy servers 1302 receives client stream requests and passes the requests back to the server 1304. The pool of proxy servers can be cycled aggressively with minimal service disruption. The proxy servers 1302 present the requests to the server 1304, with streamed data being provided to the users through the proxy servers 1302. An ISP is unable to ascertain that the original source of the streaming data is the server 1304 rather than the pool of proxy servers 1302.
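A minimal sketch of this first example, with hypothetical names: clients are directed to a pool of proxies that relay stream requests to the origin server, and individual proxies can be cycled out with minimal disruption.

```python
# Hypothetical sketch of a cycled proxy pool fronting the streaming origin.
import random


class ProxyPool:
    def __init__(self, proxies: list[str], origin: str):
        self.proxies = list(proxies)
        self.origin = origin

    def pick_proxy(self) -> str:
        """Address a client is told to use; never the origin itself."""
        return random.choice(self.proxies)

    def relay(self, proxy: str, request: str) -> str:
        # The proxy passes the request back to the origin, so outside observers
        # see only the proxy address, not the origin.
        return f"{proxy} -> {self.origin}: {request}"

    def cycle(self, old: str, new: str) -> None:
        """Replace a proxy; clients are simply directed to the new address."""
        self.proxies[self.proxies.index(old)] = new


pool = ProxyPool(["proxy-a", "proxy-b"], origin="server-1304")
print(pool.relay(pool.pick_proxy(), "GET /stream"))
pool.cycle("proxy-a", "proxy-c")
```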
In a second example, a network implementation engine provisions dynamic proxy servers in various clouds, connects those servers to fixed brokers, and publishes the list of dynamic proxy servers to the server 1304. The portal server, over SSL, directs clients to retrieve data from the dynamic proxy servers located in the various clouds. The dynamic proxy servers receive client stream requests and pass them to fixed broker servers known only to the service server 1304. The fixed brokers pass traffic requests back to the data cache. The brokers act as a fixed point, minimizing disruption to streaming operations.
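This second example's relay chain (client to dynamic proxy to fixed broker to data cache) might be sketched as follows; all names are hypothetical.

```python
# Hypothetical sketch of the client -> dynamic proxy -> fixed broker -> cache
# relay chain; the broker addresses are known only to the service.
def handle_client_request(request: str,
                          dynamic_proxy: str,
                          fixed_broker: str,
                          data_cache: dict) -> str:
    # Client -> dynamic proxy (published to clients over SSL by the portal).
    hop1 = f"{dynamic_proxy} received {request}"
    # Dynamic proxy -> fixed broker (address unknown to clients).
    hop2 = f"{fixed_broker} forwarding {request}"
    # Fixed broker -> data cache; the response streams back along the same path.
    payload = data_cache.get(request, "<not cached>")
    return f"{hop1}; {hop2}; payload={payload}"


cache = {"GET /stream/42": b"...video bytes..."}
print(handle_client_request("GET /stream/42",
                            dynamic_proxy="proxy-cloud-1",
                            fixed_broker="broker-1",
                            data_cache=cache))
```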
Examples have been used to describe the invention herein, and the scope of the invention may include other examples. For example, an on-demand computing network environment could be implemented without any hubs or joints in a wheel, such as a single rim-to-rim network, a one-rim-to-many-rims network, or a many-rims-to-many-rims network. As a further example, an on-demand computing network environment could include one or more of: a first rim device connected to a second rim device via one or more joints; multiple joints connected without inclusion of a hub device; a one-to-many joint connection; and a many-to-many joint connection.
This application is a continuation application of U.S. patent application Ser. No. 15/902,066, filed Feb. 22, 2018, entitled “Systems and Methods for Implementing an On-Demand Computing Network Environment,” which is a continuation application of U.S. patent application Ser. No. 14/937,978, filed Nov. 11, 2015, entitled “Systems and Methods for Implementing an On-Demand Computing Network Environment,” which claims priority to U.S. Provisional Application No. 62/081,047, filed Nov. 18, 2014, entitled “Systems and Methods for Implementing an On-Demand Computing Network Environment,” the entireties of which are incorporated herein by reference.