Policy-driven management of application traffic for providing services to cloud-based applications

Abstract
Policy-driven management of application traffic is provided for services to cloud-based applications. A steering policy comprising a set of rules is generated for a deployment from a current code environment to one or more replicated code environments differing in some key respect. The steering policy guides steering decisions between the current and updated code environments. A steering server uses the steering policy to decide whether to send service requests to the current code environment or the updated code environment. Feedback concerning actual steering decisions made by the steering server (e.g., performance metrics) is received. The steering policy is automatically adjusted in response to the feedback.
Description
FIELD OF THE INVENTION

The invention relates generally to computer networking, and more specifically, to policy-driven steering and management of network traffic to replicated deployments, which may differ in some specific feature such as software version number, of cloud-based or remotely executed applications, for providing application services such as automated version updates and feature evaluation.


BACKGROUND

Remote applications are accessed by users of an end device through a network. The application can be executed remotely, or be downloaded for local execution (e.g., using Java or Citrix). During upgrades of code in remote applications using a continuous deployment model, it is common to have a production environment with a current version of the application code (e.g., a blue environment), and a separate production environment with an updated version of the application code (e.g., a green environment). The typical process is to fully deploy a well-tested new version of code and, if problems ensue, roll the code back to the previous version. A more prudent approach is to divert a small portion of production traffic, or non-critical production traffic, from the default blue environment to the green environment in order to update and verify the new code. A small percentage of traffic can be sent to the green environment and, based on the results, more or less of the production traffic can be sent in a sequence of phases. Once the code is verified to satisfaction, all application traffic can be steered to the green environment and the blue environment can be retired.


Current approaches for traffic steering or splitting between blue and green environments are performed by manipulating the DNS endpoints of an application. In this approach, application server IP addresses for both the blue and green environments are configured in the DNS entry, and the number of application server entries is then controlled to steer or split the traffic proportionally between the two environments.
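For illustration only, a minimal Python sketch (with hypothetical example addresses) of why this DNS-based approach offers such coarse control: the split ratio can only be approximated by the number of server entries published for each environment.

```python
# Hypothetical DNS answer set for one application name; addresses are examples.
BLUE_SERVERS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]  # current (blue) environment
GREEN_SERVERS = ["203.0.113.20"]                                  # updated (green) environment

def dns_answer_set():
    """All A records published under the application's DNS name."""
    return BLUE_SERVERS + GREEN_SERVERS

# Under round-robin resolution, the green environment receives roughly
# len(GREEN_SERVERS) / len(dns_answer_set()) of connections (about 25% here),
# and the split cannot be conditioned on HTTP headers or device attributes.
if __name__ == "__main__":
    print(f"approximate green share: {len(GREEN_SERVERS) / len(dns_answer_set()):.0%}")
```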


While this approach will split traffic between the environments, it is very difficult to control the percentage of traffic that is split, or to split the traffic based on application-related conditions or on user or user device conditions. Typically, conditions are based on attributes of the application traffic. When the application traffic uses HTTP or HTTPS (SSL) as the transport, the HTTP header values can be used for the conditions. However, today, there is no service or product readily available to conditionally split traffic based on HTTP header values between completely different application environments.


Furthermore, it is difficult to ascertain performance or functional correctness of a split for verification. After tedious configuration changes to split or steer traffic between different application environments, understanding the effect of the application changes by comparing various performance and functional metrics involves manually reviewing multiple metrics dashboards and log files.


What is needed is a robust technique to improve traffic steering to a second environment. Further, improved feedback of performance and functionality at different splits is desired.


SUMMARY

The above-mentioned shortcomings are addressed by methods, computer program products, and systems for policy-driven management of application traffic for providing services to cloud-based applications.


In one embodiment, a steering policy comprising a set of rules is generated for a deployment from a current code environment to an updated code environment. The steering policy can guide steering decisions between the current and updated code environments. Generally, traffic steering or management refers to dropping, mirroring, redirecting, splitting, and rate limiting of traffic between replicated application deployments based on rules. Traffic rules can split traffic based on a logical expression of the domain, path, and headers of an HTTP request, for example. Other embodiments also steer based on smartflows, which further group traffic flows directed to specific compute resources that may require specific services to be applied to them, such as specific policies to be differentially applied to these traffic flows for the purpose of securing them, collecting metrics, and gathering statistics measuring the effectiveness of alternative implementations. Code environments tend to be replicas of each other, differing in some key attribute such as software version or alternative implementation of features, or between a production deployment and a staging or test deployment.
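As a non-limiting illustration, a short Python sketch of one way such a rule could be expressed as a logical condition over the domain, path, and headers of an HTTP request; the field names and rule shape are assumptions made for this example only.

```python
def matches(rule, request):
    """Return True when the request satisfies the rule's logical expression."""
    return (request["domain"] == rule["domain"]
            and request["path"].startswith(rule["path_prefix"])
            and all(request["headers"].get(name) == value
                    for name, value in rule["match_headers"].items()))

# Example rule: steer opted-in API traffic to the updated (green) environment.
rule = {
    "domain": "app.example.com",
    "path_prefix": "/api/",
    "match_headers": {"X-Beta-User": "true"},
    "action": "green",
}

request = {
    "domain": "app.example.com",
    "path": "/api/orders",
    "headers": {"X-Beta-User": "true", "User-Agent": "Mobile"},
}

target = rule["action"] if matches(rule, request) else "blue"
print(target)  # -> green
```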


In an embodiment, the steering policy is sent to a steering server. The steering server uses the steering policy to make decisions about whether to send service requests to the current code environment or the updated code environment. Feedback concerning actual steering decisions made by the steering server is received (e.g., performance metrics). The steering policy is automatically adjusted in response to the feedback.


Advantageously, deployments of new versions of remotely executing software are improved, among other improvements.





BRIEF DESCRIPTION OF THE DRAWINGS

In the following drawings, like reference numbers are used to refer to like elements. Although the following figures depict various examples of the invention, the invention is not limited to the examples depicted in the figures.



FIG. 1 is a high-level block diagram illustrating a system for policy-driven management of application traffic for providing services to cloud-based applications, according to an embodiment.



FIG. 2 is a more detailed block diagram illustrating an analytics server of the system in FIG. 1, according to an embodiment.



FIG. 3 is a more detailed block diagram illustrating a steering server of the system in FIG. 1, according to an embodiment.



FIG. 4 is a more detailed block diagram illustrating an end user device of the system in FIG. 1, according to an embodiment.



FIG. 5 is a sequence diagram illustrating interactions between components of FIG. 1, according to an embodiment.



FIG. 6 is a high-level flow diagram illustrating a method for policy-driven management of application traffic for providing services to cloud-based applications, according to an embodiment.



FIG. 7 is a high-level flow diagram illustrating a method for adjusting policy-driven management of traffic for providing services to cloud-based applications, according to one embodiment.



FIG. 8 is a more detailed flow diagram for a step of forwarding service requests to either a blue or green environment selected according to a steering policy, according to an embodiment.



FIG. 9 is a block diagram illustrating an exemplary computing device, according to an embodiment.





DETAILED DESCRIPTION

In the following disclosure, methods, computer program products, and systems for policy-driven management of application traffic for providing services to cloud-based applications are described. Generally, users are steered to either a current version of applications or an updated version of those applications during a period of deployment.


Systems for Policy-Driven Application Traffic Management (FIGS. 1 to 5)



FIG. 1 is a high-level block diagram illustrating a system 100 for policy-driven management of application traffic for providing services to cloud-based applications, according to an embodiment. The system 100 comprises an analytics server 110 and a steering server 120 connected through a network 199 to end user devices 130A-C, a current code environment (blue) 140A, and an updated code environment (green) 140B. Generally, the analytics server 110 employs the steering server 120 to gradually steer traffic from the current environment 140A to the updated environment 140B, while gathering analytics for automatic adjustments and reporting to an enterprise user device 135 (e.g., a development operator or a network administrator). Policy-driven application traffic management can be provided as a service for clients' green deployments. For example, an Amazon or Google data center hosting client web sites can provide improved transitions from blue to green environments.


The network architecture of the system 100 includes the analytics server 110 coupled to the steering server 120 either directly, as shown, or indirectly through the network 199, as in other embodiments. Similarly, the blue and green environments 140A,B can be located within a LAN along with the steering server 120, as shown, or be reached indirectly through the network 199 in other embodiments. The end user devices 130A-C and the enterprise user device 135 can access components through the network 199, via wired or wireless connections. The enterprise user device 135 can also be used by a network administrator who connects directly to the analytics server 110 or the steering server 120 to make command line adjustments through a wired or wireless connection. Generally, connections can be wired (e.g., Ethernet, serial port, power lines, analog telephone lines), wireless (Wi-Fi, 3G/4G, Bluetooth), or a combination of both. Other network architectures are possible with additional network components such as access points, routers, Wi-Fi or SDN (software-defined networking) controllers, firewalls, gateways, and the like.


The analytics server 110 provides control and feedback to a user of the system 100 during deployment of updated code for cloud-based applications. The analytics server 110 can include a user interface for access through the Internet, a USB connection, a mobile app, or the like. A configuration engine, in some instances, creates a user profile in order to link the analytics server 110 to a particular cloud-based application with log-in credentials, IP addresses, end point or destination LAN information, or the like, along with information for creating a policy for traffic steering between the current and updated environments 140A,B. The policy can describe a profile of desired traffic to be sent to either environment and be based on a variety of parameters, such as end user device type, time of day, wireless or mobile requests, type of application, user history, and application or traffic conditions, just to illustrate a few examples. The analytics server 110 outputs configuration information to the steering server 120 in order to implement policies.
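A hypothetical sketch of the kind of steering policy such a configuration engine could emit; the field names and values below are illustrative assumptions rather than a prescribed format.

```python
# Illustrative steering policy; every field name and value is an example only.
steering_policy = {
    "application": "example-storefront",
    "environments": {"blue": "current code environment 140A",
                     "green": "updated code environment 140B"},
    "default_split": {"blue": 0.90, "green": 0.10},   # send 10% of traffic to green
    "conditions": [
        {"when": {"device_type": "mobile"}, "green_share": 0.05},
        {"when": {"login": "guest"}, "green_share": 0.20},
        {"when": {"time_of_day": ["01:00", "05:00"]}, "green_share": 0.25},
    ],
}

print(steering_policy["default_split"])
```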


The analytics server 110 receives feedback from the steering server 120 and highlights specific metrics based on policies. For example, a performance score and/or a functional score summarizes the success of the updated environment 140B at the current traffic split. In another example, very detailed metrics about different types of user devices 130A-C, application response times, number of errors, processor usage, bandwidth usage, memory usage, and the like are provided in the form of a matrix.
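One assumed way to collapse such a matrix of metrics into a single performance score is a weighted baseline adjustment; the weights and metric names below are illustrative and not taken from the specification.

```python
def performance_score(metrics, weights=None):
    """Summarize detailed metrics into a single score (higher is better)."""
    weights = weights or {"response_time_ms": -0.05, "error_rate_pct": -10.0,
                          "cpu_util_pct": -0.2, "memory_util_pct": -0.2}
    baseline = 100.0
    return baseline + sum(w * metrics.get(name, 0.0) for name, w in weights.items())

green_metrics = {"response_time_ms": 180, "error_rate_pct": 0.4,
                 "cpu_util_pct": 35, "memory_util_pct": 40}
print(round(performance_score(green_metrics), 1))  # -> 72.0
```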


Responsive to real-time performance metrics, in one embodiment, the analytics server 110 can automatically adjust the steering policies. If a current stage of deployment is successful, the analytics server 110 can more fully activate the updated environment 140B. Some embodiments automatically adjust ratios based on preconfigured thresholds of performance.
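A minimal sketch of such threshold-driven adjustment under assumed thresholds: promote the green share when the score clears an upper threshold, roll back when it falls below a lower one, and otherwise hold the current stage.

```python
def adjust_green_share(current_share, score, promote_at=85.0, rollback_at=60.0, step=0.10):
    """Return the next green traffic share for the deployment stage."""
    if score >= promote_at:
        return min(1.0, current_share + step)   # advance the deployment
    if score <= rollback_at:
        return 0.0                               # retreat fully to the blue environment
    return current_share                         # hold at the current stage

print(adjust_green_share(0.10, 92.0))  # -> 0.2
print(adjust_green_share(0.20, 55.0))  # -> 0.0
```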


The analytics server 110 can comprise a server blade, a personal computer (PC), a virtualized cloud-based service device, or any other appropriate processor-based device. In some embodiments, the analytics server 110 is operated by a service provider, such as a data center providing virtual hosting services for the cloud-based applications. In other embodiments, the analytics server 110 is self-implemented by an application developer that purchases and owns a software product.


More detailed examples of the analytics server 110 are described below with respect to FIG. 2.


The steering server 120 implements policies of the analytics server 110 to selectively steer network traffic for a cloud-based application between the current and the updated code environments 140A,B. In more liberal implementations, a mere ratio of traffic splitting is provided by policies, leaving a large amount of selection discretion to the steering server 120. In more granular implementations, a strict demographic of traffic diversity provides more direction to the steering server 120. In one instance, a certain percentage of mobile traffic or guest log-ins is sent to the updated code environment 140B. In other instances, percentages are based on real-time application conditions, performance, or error rates. Even if actual incoming traffic loads deviate from predicted or desired traffic loads, the steering server 120 has the capability of discriminating actual traffic loads to produce desired traffic loads on either environment.
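An illustrative sketch, not the claimed implementation, of how a steering engine can keep the realized split near a desired split even when incoming traffic fluctuates: each request is sent to the green environment only while green's running share is below its target.

```python
class SplitSteerer:
    """Track running counts so the realized split converges on the target."""

    def __init__(self, green_target=0.10):
        self.green_target = green_target
        self.counts = {"blue": 0, "green": 0}

    def choose(self):
        total = sum(self.counts.values()) or 1        # avoid divide-by-zero on first request
        env = "green" if self.counts["green"] / total < self.green_target else "blue"
        self.counts[env] += 1
        return env

steerer = SplitSteerer(green_target=0.10)
decisions = [steerer.choose() for _ in range(100)]
print(decisions.count("green"))  # -> 10
```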


In one embodiment, the steering server 120 automatically implements steering policy adjustments. The policy adjustments can be implemented immediately, or current sessions can be completed and subsequent sessions adjusted. In some situations, existing user devices continue with earlier steering policies while new user devices are treated under updated steering policies.
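A brief sketch of this session-stickiness behavior, under the assumption that sessions are identified by a simple key: existing sessions keep the environment chosen when they began, while new sessions follow whatever policy is active.

```python
session_env = {}  # session_id -> environment chosen when the session began

def route(session_id, choose_under_active_policy):
    """Existing sessions keep their environment; new sessions use the active policy."""
    if session_id not in session_env:
        session_env[session_id] = choose_under_active_policy()
    return session_env[session_id]

print(route("s1", lambda: "green"))  # new session, steered under the current policy
print(route("s1", lambda: "blue"))   # same session stays on green after a policy update
```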


The steering server 120 can comprise any of the devices described in relation to the analytics server 110. In one embodiment, the steering server 120 is physically integrated with the analytics server 110 and operated by a common entity (e.g., commonly operated by a data center, or commonly manufactured by a vendor). In another embodiment, the steering server 120 is manufactured by a first vendor and hosted by a first entity, and the analytics server 110 is manufactured by a second vendor and hosted by a second entity.


Generally, the steering policy can guide steering decisions between the current and updated code environments. Traffic steering or management refers to dropping, mirroring, redirecting, splitting, and rate limiting of traffic between replicated application deployments based on rules. Traffic rules can split traffic based on a logical expression of the domain, path, and headers of an HTTP request, for example. Other embodiments also steer based on smartflows, which further group traffic flows directed to specific compute resources that may require specific services to be applied to them, such as specific policies to be differentially applied to these traffic flows for the purpose of securing them, collecting metrics, and gathering statistics measuring the effectiveness of alternative implementations.


More detailed examples of the steering server 120 are described below with respect to FIG. 3.


The user devices 130A-C and the current and updated environments 140A,B can comprise any of the processor-based devices described herein. The user devices 130A-C can have human or machine users that access cloud-based applications, for example, through a smart phone, laptop, tablet, phablet or personal computer, or a Java or web interface. Execution can occur completely in the cloud, completely on the user devices 130A-C, or in cooperation. In some cases, a user device 130A-C is profiled by the system 100 in order to meet traffic diversity requirements of a policy. The environments 140A,B can comprise, for example, a subset of a data center, an individual server, or a virtualized group of network locations. The current and updated types of environments are mere illustrations, as any type of first and second environments can be implemented for various purposes in continuous deployment (e.g., legacy and testing environments).


More detailed examples of the end user device 130 are described below with respect to FIG. 4.


The current and updated code environments 140A,B can store and execute cloud-based applications. Code environments tend to be replicas of each other, differing in some key attribute such as software version or alternative implementation of features, or between a production and a staging or test deployment. In one case, only one application provided by a single entity resides on an environment. In another case, many different applications provided by many different entities reside on an environment. The current and updated code environments 140A,B can be physically located on different servers or virtual servers, or alternatively, be located on a single device.



FIG. 2 is a more detailed block diagram illustrating the analytics server 110 of the system in FIG. 1, according to an embodiment. The analytics server 110 comprises a user interface 210, an analytics engine 220, and a reports module 230.


The user interface 210 allows the enterprise user device 135 access to the analytics server 110 for configuring deployments and for making deployment adjustments. User accounts are established to secure and customize implementations. Preferences can be entered by checkboxes or responsive to a script of questions presented to an admin, and be converted to specific steering rules.


The analytics engine 220 automatically implements steering policies at the steering server 120. A steering rules database 222 stores steering rules for deployments for download to the steering server 120. The steering performance database 224 downloads performance metrics from the steering server 120 based on actual steering decisions. Further analytics can be performed, for example, by aggregating deployment metrics for several applications or several different clients. A policy adjustment module 226 can implement deployment adjustments responsive to analytics. In some embodiments, adjustments are automatically completed using customer steering rules or general business processes. In other embodiments, adjustments are manually entered via the enterprise user device 135.


The reports module 230 can present various views of analytic data on demand to the enterprise user device 135. Additionally, reports can be periodically generated. Moreover, alarms can be raised based on a current deployment situation, such as a server failure.



FIG. 3 is a more detailed block diagram illustrating the steering server 120 of the system in FIG. 1, according to an embodiment. The steering server 120 includes an API (application programming interface) module 310, a steering engine 320, and a service request queue 330.


The API module 310 provides an I/O interface for the analytics server 110 and the end user device 130. The analytics server 110 sends commands and data to the steering engine 320 to affect steering policy, and the steering server 120 sends data back to the analytics server 110. Separately, service requests are received and stored in the service request queue 330.


In an embodiment, the steering engine 320 makes real-time decisions on whether to redirect requests for service to a blue or a green environment (or other related type of environment). A steering rules database 322 stores rules sent from the steering rules database 222. Metrics associated with environment performance are collected by a steering performance database 324.


The service request queue 330 stores service requests until redirected. There can be one queue or separate queues per client, per application, or per environment, for example.



FIG. 4 is a more detailed block diagram illustrating the end user device 130 (generically representing the end user devices 130A-C) of the system in FIG. 1, according to an embodiment. The end user device 130 comprises a steering daemon 410, a remote app 420 and a Wi-Fi radio 430.


The steering daemon 410 executes locally to communicate data back to the steering engine 320. General environmental characteristics can be sent, such as device type, operating system type and version, and static and dynamic computing resources (e.g., memory and processor usage). Additional characteristics concern execution feedback of the remote app 420 being routed to blue or green environments (e.g., service delays and application performance). The remote app 420 is a locally executed version of the service provided by the blue or green environments. The Wi-Fi radio 430 is just one example of a communication module, which depends on the device and network connection type.
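A hypothetical example of the kind of report the steering daemon 410 might send back to the steering engine 320; the field names and JSON shape are assumptions made for illustration.

```python
import json
import time

def build_report(device_id, environment, service_delay_ms, cpu_pct, mem_pct):
    """Package device and execution feedback for the steering engine."""
    return json.dumps({
        "device_id": device_id,
        "environment": environment,               # blue or green environment that served the app
        "timestamp": int(time.time()),
        "device": {"type": "smartphone", "os": "Android 14"},
        "dynamic_resources": {"cpu_pct": cpu_pct, "memory_pct": mem_pct},
        "app_feedback": {"service_delay_ms": service_delay_ms},
    })

print(build_report("dev-42", "green", service_delay_ms=210, cpu_pct=31, mem_pct=58))
```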



FIG. 5 is a sequence diagram illustrating interactions 500 between components of FIG. 1, according to an embodiment.


Initially, the analytics server 110 sends policy steering rules to the steering server 120 to start a deployment (interaction 501). The end user device 130 sends a service request (interaction 502). The steering server 120 redirects the service request to a code environment 140, either blue or green (interaction 503), and awaits a response (interaction 504) for returning to the end user device 130 (interaction 505). Performance metrics are periodically sent from the steering server 120 back to the analytics server 110 (interaction 506). Based on analytics, updated steering policies are sent (interaction 507).


Many variations are possible. Interactions with the end user device 130 remain the same on the front end, although environment selection on the back end can be handled differently at different times.


Methods for Policy-Driven Application Traffic Management (FIGS. 6 to 8)



FIG. 6 is a high-level flow diagram illustrating a method 600 for policy-driven management of application traffic for providing services to cloud-based applications, according to an embodiment. The method 600 can be performed by an analytics component (e.g., the analytics server 110). Many different embodiments of the following methods are possible, such as more or fewer steps, steps occurring in different orders, and varied grouping of functionalities.


Application configuration and policy data is received from enterprise users (step 610). Policy data is sent to a steering server for traffic steering during code deployment (step 620). Metrics from traffic steering of actual traffic are received from the steering server (step 630). Based on the metrics, steering policies can be automatically updated at the steering server (steps 640, 641). Otherwise, metrics continue to be received even if no steering policies are adjusted (steps 640, 642).
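A condensed, runnable sketch of the loop in method 600; the helpers passed in are placeholders standing in for behavior the flow diagram only names (sending policy, receiving metrics, and deciding whether to update), and the specific score threshold and step size are assumptions.

```python
def analytics_loop(policy, get_metrics, send_policy, rounds=3):
    """Steps 620-642: push policy, collect metrics, and update the policy when warranted."""
    send_policy(policy)                                        # step 620
    for _ in range(rounds):                                    # deployment period
        metrics = get_metrics()                                # step 630
        if metrics["score"] >= policy["promote_at"]:           # step 640
            policy = {**policy,
                      "green_share": round(min(1.0, policy["green_share"] + 0.1), 2)}
            send_policy(policy)                                # step 641
        # otherwise keep receiving metrics under the same policy (step 642)
    return policy

final_policy = analytics_loop(
    {"green_share": 0.1, "promote_at": 85.0},
    get_metrics=lambda: {"score": 90.0},
    send_policy=lambda p: print("policy ->", p),
)
```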



FIG. 7 is a high-level flow diagram illustrating a method 700 for adjusting policy-driven management of traffic for providing services to cloud-based applications, according to one embodiment. The method 700 can be performed by a steering component (e.g., the steering server 120).


Policy data for traffic steering can be received from an analytics server (step 710). When service requests are received from user devices (step 720), they are forwarded to either blue or green environments selected according to policies (step 730), as further described in FIG. 8. Metrics collected from servicing requests are sent to the analytics server (step 740). If, in response, a policy update is received from the analytics server (step 750), the steering policy is updated (step 755) before continuing to receive service requests. Otherwise, service requests continue to be handled under the existing policy.



FIG. 8 is a more detailed flow diagram for the step 730 of forwarding service requests to either a blue or green environment selected according to a steering policy, according to an embodiment.


A profile of a user device is determined by, for example, a daemon executing on the user device and communicating with a steering server (step 810). The profile is applied to a steering policy (step 820). As a result, service requests are forwarded to either blue or green environments according to the policy (step 830). Performance metrics for the service requests are stored (step 840).
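An illustrative sketch of steps 810 through 840 under assumed rule and profile shapes: the device profile is matched against the steering policy, the request is forwarded accordingly, and the outcome is recorded for later reporting.

```python
performance_log = []  # step 840: records used later for performance metrics

def forward(request, profile, policy):
    """Apply the device profile to the steering policy and pick an environment."""
    environment = policy.get("default", "blue")
    for rule in policy["rules"]:                                       # step 820
        if all(profile.get(key) == value for key, value in rule["when"].items()):
            environment = rule["target"]
            break
    performance_log.append({"environment": environment, "path": request["path"]})
    return environment                                                 # step 830

policy = {"default": "blue",
          "rules": [{"when": {"device_type": "mobile"}, "target": "green"}]}
print(forward({"path": "/login"}, {"device_type": "mobile"}, policy))  # -> green
```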


General Computing Devices (FIG. 9)



FIG. 9 is a block diagram illustrating an exemplary computing device 900 for use in the system 100 of FIG. 1, according to one embodiment. The computing device 900 is an exemplary device that is implementable for each of the components of the system 100, including the analytics server 110, the steering server 120, the end user device 130, the enterprise user device 135 or the current or updated code environments 140A,B. The computing device 900 can be a mobile computing device, a laptop device, a smartphone, a tablet device, a phablet device, a video game console, a personal computing device, a stationary computing device, a server blade, an Internet appliance, a virtual computing device, a distributed computing device, a cloud-based computing device, or any appropriate processor-driven device.


The computing device 900, of the present embodiment, includes a memory 910, a processor 920, a storage drive 930, and an I/O port 940. Each of the components is coupled for electronic communication via a bus 999. Communication can be digital and/or analog, and use any suitable protocol.


The memory 910 further comprises network applications 912 and an operating system 914. The network applications 912 can include the modules of the analytics server 110, the steering server 120 and the end user device 130, as illustrated in FIGS. 2-4. Other network applications 912 can include a web browser, a mobile application, an application that uses networking, a remote application executing locally, a network protocol application, a network management application, a network routing application, or the like.


The operating system 914 can be one of the Microsoft Windows® family of operating systems (e.g., Windows 95, 98, Me, Windows NT, Windows 2000, Windows XP, Windows XP x64 Edition, Windows Vista, Windows CE, Windows Mobile, Windows 7 or Windows 8), Linux, HP-UX, UNIX, Sun OS, Solaris, Mac OS X, Alpha OS, AIX, IRIX32, or IRIX64. Other operating systems may be used. Microsoft Windows is a trademark of Microsoft Corporation.


The processor 920 can be a network processor (e.g., optimized for IEEE 802.11), a general purpose processor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a reduced instruction set controller (RISC) processor, an integrated circuit, or the like. Qualcomm Atheros, Broadcom Corporation, and Marvell Semiconductors manufacture processors that are optimized for IEEE 802.11 devices. The processor 920 can be single core, multiple core, or include more than one processing element. The processor 920 can be disposed on silicon or any other suitable material. The processor 920 can receive and execute instructions and data stored in the memory 910 or the storage drive 930.


The storage drive 930 can be any non-volatile type of storage such as a magnetic disc, EEPROM, Flash, or the like. The storage drive 930 stores code and data for applications.


The I/O port 940 further comprises a user interface 942 and a network interface 944. The user interface 942 can output to a display device and receive input from, for example, a keyboard. The network interface 944 (e.g. RF antennae) connects to a medium such as Ethernet or Wi-Fi for data input and output.


Many of the functionalities described herein can be implemented with computer software, computer hardware, or a combination.


Computer software products (e.g., non-transitory computer products storing source code) may be written in any of various suitable programming languages, such as C, C++, C#, Oracle® Java, JavaScript, PHP, Python, Perl, Ruby, AJAX, and Adobe® Flash®. The computer software product may be an independent application with data input and data display modules. Alternatively, the computer software products may be classes that are instantiated as distributed objects. The computer software products may also be component software such as Java Beans (from Sun Microsystems) or Enterprise Java Beans (EJB from Sun Microsystems).


Furthermore, the computer that is running the previously mentioned computer software may be connected to a network and may interface to other computers using this network. The network may be on an intranet or the Internet, among others. The network may be a wired network (e.g., using copper), telephone network, packet network, an optical network (e.g., using optical fiber), or a wireless network, or any combination of these. For example, data and other information may be passed between the computer and components (or steps) of a system of the invention using a wireless network using a protocol such as Wi-Fi (IEEE standards 802.11, 802.11a, 802.11b, 802.11e, 802.11g, 802.11i, 802.11n, and 802.11ac, just to name a few examples). For example, signals from a computer may be transferred, at least in part, wirelessly to components or other computers.


In an embodiment, with a Web browser executing on a computer workstation system, a user accesses a system on the World Wide Web (WWW) through a network such as the Internet. The Web browser is used to download web pages or other content in various formats including HTML, XML, text, PDF, and postscript, and may be used to upload information to other parts of the system. The Web browser may use uniform resource locators (URLs) to identify resources on the Web and hypertext transfer protocol (HTTP) in transferring files on the Web.


This description of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications. This description will enable others skilled in the art to best utilize and practice the invention in various embodiments and with various modifications as are suited to a particular use. The scope of the invention is defined by the following claims.

Claims
  • 1. A computer-implemented method for incremental transition between a current cloud-based code environment and an updated cloud-based code environment comprising: receiving, by a steering server, from an analytics server, a steering policy comprising a first set of rules for steering less than a first percentage of first requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, the less than the first percentage of the first requests to the updated cloud-based code environment using the steering policy, the less than the first percentage of the first requests being associated with first clients;getting, by the steering server, from the updated cloud-based code environment and user devices of the first clients, metrics associated with the less than the first percentage of the first requests, the metrics including metrics associated with an operation of the updated cloud-based code environment and metrics communicated by the user devices of the first clients to the steering server, wherein the metrics associated with the operation of the updated cloud-based code environment include two or more of first updated cloud-based code environment response times, first updated cloud-based code environment errors, first updated cloud-based code environment processor usage, first updated cloud-based code environment memory usage, and first updated cloud-based code environment bandwidth usage, wherein the metrics communicated by the user devices of the first clients include at least dynamic computing resources associated with the user devices of the first clients, the dynamic computing resources associated with the user devices of the first clients including one or more of: a memory usage of the user devices of the first clients when processing sessions associated with the less than the first percentage of the first requests and a processor usage of the user devices of the first clients when processing the sessions associated with the less than the first percentage of the first requests;computing, by the analytics server, a first performance score using the metrics;receiving, by the steering server, from the analytics server, an updated steering policy when the first performance score fulfills a first predetermined criteria, the updated steering policy comprising a second set of rules for steering greater than the first percentage and less than a second percentage of the second service requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, the greater than the first percentage and the less than the second percentage of the second service requests to the updated cloud-based code environment using the updated steering policy, the greater than the first percentage and less than the second percentage of the second service requests being associated with second clients;getting, by the steering server, from the updated cloud-based code environment and user devices of the second clients, further metrics associated with the greater than the first percentage and less than the second percentage of the second service requests, the further metrics including metrics associated with a further operation of the updated cloud-based code environment and metrics communicated by the user devices of the second clients to the steering server, wherein the metrics associated with the further operation of the updated cloud-based code environment include two or more of: second 
updated cloud-based code environment response times, second updated cloud-based code environment errors, second updated cloud-based code environment processor usage, second updated cloud-based code environment memory usage, and second updated cloud-based code environment bandwidth usage, wherein the metrics communicated by the user devices of the second clients to the steering server include at least dynamic computing resources associated with the user devices of the second clients, the dynamic computing resources associated with the user devices of the second clients including one or more of: a memory usage of the user devices of the second clients when processing sessions associated with the greater than the first percentage and less than the second percentage of the second service requests and a processor usage of the user devices of the second clients when processing sessions associated with the greater than the first percentage and less than the second percentage of the second service requests;computing, by the analytics server, a second performance score using the further metrics; andreceiving, by the steering server, a further updated steering policy when the second performance score fulfills a second predetermined criteria, the further updated steering policy comprising a third set of rules for steering subsequent service requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, subsequent service requests to the updated cloud-based code environment using the further updated steering policy.
  • 2. The method of claim 1, wherein at least one of the first and second metrics comprises performance metrics.
  • 3. The method of claim 1, wherein at least one of the first and second metrics comprises the duration of time between service requests and responses to service requests.
  • 4. The method of claim 1, wherein at least one of the first and second metrics comprises application delay time experienced by an end user device, as reported by a daemon executing on the end user device to the steering server.
  • 5. The method of claim 1, further comprising: receiving user input of preferences as a basis for at least one of the first and second set of rules.
  • 6. The method of claim 1, further comprising: automatically adjusting at least one of the first and second steering policies in response to input from a network administrator.
  • 7. The method of claim 1, further comprising: receiving a fourth steering policy when the first performance score does not fulfill the first predetermined criteria, the fourth steering policy comprising a fourth set of rules for steering fourth service requests to the current cloud-based code environment;sending fourth service requests to the current cloud-based code environment using the fourth steering policy.
  • 8. The method of claim 1, further comprising: sending less than the first percentage of fifth service requests to the updated cloud-based code environment using the further updated steering policy.
  • 9. The method of claim 1, further comprising: receiving specifications for a deployment from the current cloud-based code environment to the updated cloud-based code environment.
  • 10. The method of claim 1, further comprising: receiving specifications for a deployment from the current cloud-based code environment to the updated cloud-based code environment; andautomatically generating rules that implement the specifications for the deployment.
  • 11. The method of claim 1, wherein the current cloud-based code environment implements a current version of a remotely hosted application and the updated cloud-based code environment implements an updated version of the remotely hosted application.
  • 12. The method of claim 1, wherein the service requests are from end user devices and pertain to a plurality of different remotely hosted applications including the remotely hosted application.
  • 13. The method of claim 1, wherein the service requests comprise variables submitted to the remotely hosted application, and sending responses to the service requests.
  • 14. The method of claim 1, wherein at least one of the first and second steering policies comprises a type of device to send to the current cloud-based code environment and a type of device to send to the updated cloud-based code environment.
  • 15. The method of claim 1, wherein at least one of the first and second steering policies comprises a characteristic of a user of the end user device to forward a service request to the current cloud-based code environment and a second characteristic of a user of the end user device to forward a second service request to the updated cloud-based code environment.
  • 16. A non-transitory computer-readable storage medium having embodied thereon a program, the program being executable by a processor to perform a method for incremental transition between a current cloud based code environment and an updated cloud-based code environment comprising: receiving, by a steering server, from an analytics server, a first steering policy comprising a first set of rules for steering less than a first percentage of first requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, the less than the first percentage of the first requests to the updated cloud-based code environment using the first steering policy, the less than the first percentage of the first requests being associated with first clients;getting, by the steering server, from the updated cloud-based code environment and user devices of the first clients, metrics associated with the less than the first percentage of the first requests, the metrics including metrics associated with an operation of the updated cloud-based code environment and metrics communicated by the user devices of the first clients to the steering server, wherein the metrics associated with the operation of the updated cloud-based code environment include two or more of first updated cloud-based code environment response times, first updated cloud-based code environment errors, first updated cloud-based code environment processor usage, first updated cloud-based code environment memory usage, and first updated cloud-based code environment bandwidth usage, wherein the metrics communicated by the user devices of the first clients include at least dynamic computing resources associated with the user devices of the first clients, the dynamic computing resources associated with the user devices of the first clients including one or more of: a memory usage of the user devices of the first clients when processing sessions associated with the less than the first percentage of the first requests and a processor usage of the user devices of the first clients when processing the sessions associated with the less than the first percentage of the first requests;computing, by the analytics server, a first performance score using the metrics;receiving, by the steering server, from the analytics server, an updated steering policy when the first performance score fulfills a first predetermined criteria, the updated steering policy comprising a second set of rules for steering greater than the first percentage and less than a second percentage of the second service requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, the greater than the first percentage and the less than the second percentage of the second service requests to the updated cloud-based code environment using the updated steering policy, the greater than the first percentage and less than the second percentage of the second service requests being associated with second clients;getting, by the steering server, from the updated cloud-based code environment and user devices of the second clients, further metrics associated with the greater than the first percentage and less than the second percentage of the second service requests, the further metrics including metrics associated with a further operation of the updated cloud-based code environment and metrics communicated by the user devices of the second clients to the steering server, 
wherein the metrics associated with the further operation of the updated cloud-based code environment include two or more of: second updated cloud-based code environment response times, second updated cloud-based code environment errors, second updated cloud-based code environment processor usage, second updated cloud-based code environment memory usage, and second updated cloud-based code environment bandwidth usage, wherein the metrics communicated by the user devices of the second clients to the steering server include at least dynamic computing resources associated with the user devices of the second clients, the dynamic computing resources associated with the user devices of the second clients including one or more of: a memory usage of the user devices of the second clients when processing sessions associated with the greater than the first percentage and less than the second percentage of the second service requests and a processor usage of the user devices of the second clients when processing sessions associated with the greater than the first percentage and less than the second percentage of the second service requests;computing, by the analytics server, a second performance score using the further metrics; andreceiving, by the steering server, a further updated steering policy when the second performance score fulfills a second predetermined criteria, the further updated steering policy comprising a third set of rules for steering subsequent service requests from the current cloud-based code environment to the updated cloud-based code environment;sending, by the steering server, subsequent service requests to the updated cloud-based code environment using the further updated steering policy.
Related Publications (1)
Number Date Country
20160139910 A1 May 2016 US
Provisional Applications (1)
Number Date Country
62078400 Nov 2014 US