Service providers, such as telecommunications service providers (TSPs), application service providers (ASPs), storage service providers (SSPs), and internet service providers (ISPs), may provide services in the form of application services to clients. One criterion by which a service provider is judged is the speed of delivery of its service. One factor that adversely affects application performance arises when the resources that are allocated to an application service by a service provider become overloaded.
Service providers typically allocate dedicated resources to application services in a manner that ensures that application servicing is not delayed due to resource bottlenecks. However, the demand for application services may vary over time, undergoing periods of high and low resource utilization. In an ideal world a service provider would secure resources sufficient to handle any peak demand, but such a solution is expensive and wasteful.
To overcome these drawbacks, enterprises may scale their resources in accordance with current application service traffic volume, increasing resources as needed and removing resources when peak demand has ebbed. However, typical scaling methods may require a significant amount of time to provision and boot new resources before a new resource can support application traffic.
These delays in bringing resources online may cause errors and further latency in application processes. As a result, service providers may over-allocate resources to an application. While increasing application servicing expense, such over-allocation may still be insufficient to address rogue spikes in application demand.
According to one aspect, a resource management system includes a plurality of resources and an application server to manage requests to an application supported by a service provider. The resource management system includes a pool of servers comprising a virtual server generated by the application server to support requests to access the application. The virtual server may be mapped to a subset of the plurality of resources of the resource management system and booted to an initialized application state. A storage device is included for storing a custom machine image of the initialized application state of the virtual server. The system includes load balancing logic configured to distribute requests for the application to the pool of servers and to monitor metric data of the pool of servers to detect a performance issue. The resource management system includes a serverless instance interface operable, in response to detection of the performance issue, to forward a serverless instance request including the custom machine image of the initialized application state. The serverless instance interface may be configured to receive a serverless instance in response to the serverless instance request for addition to the pool of servers, the serverless instance comprising a copy of the custom machine image.
According to a further aspect, a method for managing application performance includes the steps of launching an application including generating a pool of servers configured to support the application, the pool of servers comprising one or more virtual servers mapped to one or more resources of a service provider. The method may include initializing the one or more virtual servers using application-specific program code to provide a custom machine image corresponding to an initialized state of at least one virtual server configured to support the application and storing the custom machine image. The method includes collecting a performance metric related to an execution of the application using the pool of servers, monitoring the performance metric to detect a performance issue for the pool of servers and, in response to detecting the performance issue, generating a serverless instance using the custom machine image. The method further comprises the steps of updating the pool of servers by adding the serverless instance to the pool of servers and forwarding access requests for the application to the pool of servers to balance a distribution of access requests between the one or more virtual servers and the serverless instance.
According to a further aspect, an application management system of a service provider includes a plurality of resources for supporting an application service of the service provider, including storage resources, processing resources, program code and data. The system may include a scaling controller for generating a pool of servers to support the application service of the service provider, the scaling controller configured to generate a virtual server specific to the application service by provisioning a subset of resources of the plurality of resources to the application service and executing application-specific boot code using the subset of resources to provide the virtual server in an initialized state, the scaling controller being configured to capture the initialized state of the virtual server as a custom machine image. The system includes a memory to store the custom machine image, monitoring logic coupled to the pool of servers to collect performance information related to the application service, and scaling logic, coupled to the monitoring logic and configured to selectively scale a size of at least one pool of virtual servers in response to the performance information to add a serverless instance to the pool of servers, the serverless instance comprising a copy of the custom machine image. The system may include load balancing logic configured to balance application access requests among the virtual server and the serverless instance.
With such an arrangement, application service support may be quickly scaled up or down in accordance with network traffic load, incurring minimal delay during the generation and deployment of the serverless instance. Because the serverless instances use shared resources whose cost is generally determined according to actual use, service providers may more closely tailor the cost of application service support to application load. As a result, the need to purchase and maintain shadow resources to maintain performance during rogue spikes in application requests may be reduced and/or eliminated. The systems and methods disclosed herein may be practically applied by service providers to provide increased application performance at reduced cost.
As used herein, unless specifically indicated otherwise, the word “or” is used in the inclusive sense of “and/or” and not the exclusive sense of “either/or.”
Any issued U.S. patents, allowed applications, published foreign applications, and references that are cited herein are hereby incorporated by reference to the same extent as if each was specifically and individually indicated to be incorporated by reference.
In order for the present invention to be more readily understood, certain terms are first defined below. Additional definitions are set forth throughout the specification.
Application Service means a computer-based service managed by an application hosted by a service provider and accessible to clients and/or customers via the network.
Application Machine Image (AMI) means static program code and data supporting an application, including but not limited to an operating system, application program code, and application configuration and data files (such as application libraries and initialized application parameters). AMIs may be used to launch instances for supporting the respective application(s).
Instance means a software object instantiated using a copy of a machine image associated with an application and configured to perform the functions of the application.
Virtual server/Virtual Machine means a server/machine comprised of a combination of hardware and/or software resources of the service provider that together mimic a dedicated machine/server. For example, a virtual server may include program code, comprise data, and be physically mapped to hardware resources dedicated to support application requests.
System/Component/Unit means a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are described herein. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server/virtual server/serverless instance and the server/virtual server/serverless instance can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Components may be communicatively coupled to each other by various types of communications media to coordinate operations.
A resource management system disclosed herein overcomes performance and cost issues associated with prior art application scaling methods by quickly and dynamically tailoring application resource support to real-time application resource consumption. One practical application of such a resource management system includes management of application resources by a service provider hosting application services. According to one aspect, the resource management system may service application requests using resources selected from a pool of application servers, the pool of application servers including both virtual server resources and serverless instance resources. In one aspect, serverless instance resources comprise software objects programmed using an application-specific machine image representing an initialized state of a virtual application server that has been provisioned and booted using application-specific program code. In some embodiments the serverless instances may be quickly scaled up and down in accordance with application loading to accommodate rogue spikes in network traffic. In some embodiments the serverless instances may be supported in part using third-party resources, such as licensed resources and cloud-based resources, enabling costs to be more closely aligned with actual application resource utilization.
The systems disclosed herein provide both performance and cost benefits over prior art solutions, for example as described below.
Client network 150 is shown to include a plurality of client devices 151-156, where a client device for the purposes of this example is any device capable of requesting access to an application service managed by application server 125. According to one aspect, as the volume of application service requests received from the client network increases, the application server 125 deploys additional agents to service the requests.
One problem with autoscaling virtual servers involves the amount of time used to complete the boot process before the virtual server is available to service application requests; depending upon the application service, it may take over an hour for a fully operable virtual server to be added to the server pool. In situations resulting in a spike in application service request activity over network 160, servicing may be delayed, for example as virtual server 135 is booted and made available.
During the boot process, available resources (memory, disk drives, processors, etc.) are dedicated for use by the virtual server 254. This process may also be referred to as ‘provisioning’. An application machine image 225, for example identifying data structures and other attributes of the application, may be accessed by the auto scaling controller 220. Boot code 230 comprising, for example, operating system and application service initialization program code may be executed by the auto scaling controller 220 to initialize the state of the virtual server 254, for example by downloading software, libraries and data for use in supporting the associated application service. During provisioning and initialization of the virtual server 254, the load balancing unit 210 may monitor the status of the virtual server 254 to identify when the virtual server is available for use. Once provisioning and initialization are complete, the virtual server 254 is available to load balancing unit 210 for servicing of application requests.
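For purposes of illustration, the following is a minimal sketch, using the AWS boto3 SDK, of the kind of provisioning and boot step described above. The image identifier, instance type, and tag values are illustrative assumptions rather than elements of the system described herein.

```python
# Sketch: provision and boot a virtual server for an application using boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_virtual_server(app_ami_id: str, instance_type: str = "t3.medium") -> str:
    """Allocate resources to a virtual server and boot it from the application image."""
    response = ec2.run_instances(
        ImageId=app_ami_id,            # application machine image (OS + app code + config)
        InstanceType=instance_type,    # subset of provider resources mapped to the server
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "app-server-pool"}],  # illustrative tag
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]

    # Block until the boot process completes and the server can accept traffic;
    # this wait is the delay that the serverless path described later avoids.
    ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
    return instance_id
```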
The process of booting a virtual server may be time consuming, in some systems taking more than an hour to complete. Because of the delays associated with booting the virtual server, many service providers may boot and maintain reserve virtual servers that may quickly offload traffic to handle spikes in service requests. This solution undesirably increases the cost of application support and is only as effective as the size of the reserve; tradeoffs are often made between cost and performance in prior art auto-scaling solutions.
The resource management platform disclosed herein provides a low-cost autoscaling solution that is capable of quickly responding to fluctuations in application service traffic, as described below.
Service provider enterprise 320 may, in one embodiment, be associated with a business providing computer-based services to clients over a network 370. Almost all modern service providers use the internet to provide service offerings to potential consumers. The service offerings are generally provided in the form of software applications which operate using dedicated resources of the service provider. The combination of the software and hardware that provides a particular service to a client is referred to herein as a ‘server’. The servers may communicate over public networks 370, or private networks such as enterprise network 360. Private networks of a service provider are often referred to as a corporate or enterprise network.
Service provider enterprise 320 is shown to include an application server 323, an application server 325 and a data store 330, each communicatively coupled to exchange information and data over enterprise network 360. Although each server and/or data store is illustrated as a discrete device, it is appreciated that the servers and data store may be comprised of multiple devices distributed throughout the enterprise or, in the case of distributed resources such as ‘cloud’ resources, throughout the network 370.
The data store 330 may comprise data storage resources that may be used, for example, to store customer accounts, customer data, application data, and other information for use by the application servers 323, 325. The data store 330 may be comprised of coupled data resources comprising any combination of local storage, distributed data center storage or cloud-based storage.
According to one aspect, the service provider enterprise may also include a pool of servers 355. The pool of servers 355 advantageously includes a combination of virtual servers, such as virtual server 361 and virtual server 363, and serverless instances 362, 364 and 365. The pool of servers 355 thus includes a plurality of resources for supporting the application, wherein a portion of the servers in the server pool are mapped to underlying resources of the service provider, and another portion of the servers in the server pool are ‘serverless’ instances; i.e., program code objects that have been configured using a machine image of an initialized, booted, provisioned virtual server. In embodiments, the serverless instances comprise snapshots of machine images of a virtual server taken at specific instants in time, such as following initialization, including all of the program code, libraries, data stores, etc. of a fully initialized virtual server such that the serverless instance object may execute application service requests as though it were the virtual server.
In one aspect, the serverless instance 362 may be generated using an instance generation service that translates the application machine image into an object comprising a sequence of one or more functions that emulate the operation of the initialized application virtual server. The instance generation logic may be a service provided as part of the service provider enterprise or may be a third-party licensed service. An example of a third-party instance generation service is the Amazon Web Services (AWS) Lambda product provided by Amazon® Corporation. The AWS Lambda service may be invoked to run software code without provisioning servers, allowing the service provider enterprise to pay only for those resources actually consumed by the generated agent. Serverless instances may be quickly deployed and/or removed, providing high availability to application services of the service provider without the cost and delays of prior art resource management systems. With such an arrangement, application service support may be quickly scaled up or down in accordance with network traffic load, incurring minimal delay during the generation and deployment of the serverless instance. Because the serverless instances use shared resources whose cost is generally determined according to actual use, service providers may more closely tailor the cost of application service support to application load. As a result, the need to purchase and maintain shadow resources to maintain performance during rogue spikes in application requests may be reduced and/or eliminated.
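For illustration only, the following sketch registers packaged application code with AWS Lambda using the boto3 SDK. The function name, handler, runtime, and execution role are assumptions, and the packaging of the initialized application state into a deployment archive is taken as given.

```python
# Sketch: register packaged, pre-initialized application code as a serverless function.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

def create_serverless_instance(zip_bytes: bytes, role_arn: str) -> str:
    """Register packaged application code as a serverless function and return its ARN."""
    response = lambda_client.create_function(
        FunctionName="app-serverless-instance",   # illustrative name
        Runtime="python3.12",
        Role=role_arn,                             # execution role assumed to exist
        Handler="app.handler",                     # entry point inside the package
        Code={"ZipFile": zip_bytes},               # packaged application code and data
        MemorySize=512,
        Timeout=30,
    )
    return response["FunctionArn"]
```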
The server pool 355 thus includes a collection of resources for handling application service requests received from a plurality of clients 381-387 in client network 380 over network 370.
As referred to herein, a ‘client’ is any device that is configured to access an application service of the service provider enterprise 320. Client devices may include any network-enabled computer including, but not limited to, a mobile device, a phone, a ‘smart’ watch, a handheld PC, a personal digital assistant (PDA), an iPhone, iPod or iPad from Apple® or any other mobile device running Apple's iOS operating system, any device running Microsoft's Windows® Mobile operating system, and/or any other smartphone or like wearable mobile device, such as an Apple® watch or a Garmin® device.
Clients 381-387 may include a plurality of thin client applications specifically adapted for communication with the various applications of the service provider. The thin client applications may be stored in a memory of the client device and be operable, when executed by the client device, to control an interface between the client device and the respective service provider application, permitting a user at the client device to access service provider content and services.
In some examples, network 360 and network 370 may be one or more of a wireless network, a wired network or any combination of wireless network and wired network and may be configured to connect client devices 381-387 to applications of the service provider enterprise 320. As mentioned above, network 360 may comprise an enterprise network; i.e., a network specifically for use in exchanging communications between components of the service provider enterprise 320. Enterprise networks may include additional security to protect enterprise communications, and may include resources specifically dedicated to the enterprise, thereby providing performance advantages to enterprise communications.
Networks 360 and/or 370 may include one or more of a fiber optics network, a passive optical network, a cable network, an Internet network, a satellite network, a wireless local area network (LAN), a Global System for Mobile Communication (“GSM”), a Personal Communication Service (“PCS”), a Personal Area Network (“PAN”), Wireless Application Protocol (WAP), Multimedia Messaging Service (MMS), Enhanced Messaging Service (EMS), Short Message Service (SMS), Time Division Multiplexing (TDM) based systems, Code Division Multiple Access (CDMA) based systems, D-AMPS, Wi-Fi, Fixed Wireless Data, IEEE 802.11b, 802.15.1, 802.11n and 802.11g, Bluetooth, Near Field Communication (NFC), Radio Frequency Identification (RFID), and/or the like.
In addition, networks 360, 370 may include, without limitation, telephone lines, fiber optics, IEEE 802.3 Ethernet, a wide area network (“WAN”), a wireless personal area network (“WPAN”), a local area network (“LAN”), or a global network such as the Internet. In addition, networks 360, 370 may support an Internet network, a wireless communication network, a cellular network, or the like, or any combination thereof. Networks 360, 370 may further include one network, or any number of the exemplary types of networks mentioned above, operating as a stand-alone network or in cooperation with each other. Networks 360, 370 may utilize one or more protocols of one or more network elements to which they are communicatively coupled. Networks 360, 370 may translate to or from other protocols to one or more protocols of network devices.
In one embodiment, the resource management system 400 may include a load balancer 420 coupled to a pool of servers 480, where the pool of servers may include one or more virtual servers, such as virtual server 470, and one or more serverless instances, such as serverless instances 472, 474 and 476. The load balancer 420 is shown to receive application access requests, which are selectively forwarded to servers within the pool of servers 480 based on attributes of the servers, including but not limited to server capacity, server load and server ‘health’, where the health may be measured according to a deviation in performance (e.g., response time) from an expected performance.
In one embodiment, the load balancer may collect performance metrics related to the servers within the pool of servers and forward the performance metrics to the resource manager 430. The performance metrics may include, for example, the delay associated with servicing the application requests, duration of application servicing requests, the delays in accessing storage devices used by the application, etc. The resource manager 430 may monitor the performance metrics and determine when additional servers need to be added to the server pool, forwarding communications to an application support server 450 to instruct the application support server 450 to add or remove supporting servers for an application.
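The scaling decision itself may be expressed compactly. The following is a hypothetical sketch of such logic; the metric names, threshold values, and scaling callbacks are illustrative assumptions and would, in practice, be wired to the load balancer's metric feed and to the application support server 450.

```python
# Sketch of resource-manager decision logic (hypothetical metrics and thresholds).
from dataclasses import dataclass
from typing import Callable

@dataclass
class PoolMetrics:
    avg_response_ms: float      # delay associated with servicing application requests
    avg_storage_wait_ms: float  # delay accessing storage used by the application
    active_servers: int

def evaluate_pool(metrics: PoolMetrics,
                  add_server: Callable[[], None],
                  remove_server: Callable[[], None],
                  response_limit_ms: float = 250.0) -> None:
    """Add a serverless instance when performance degrades; shrink the pool when demand ebbs."""
    if (metrics.avg_response_ms > response_limit_ms
            or metrics.avg_storage_wait_ms > response_limit_ms):
        add_server()       # e.g. deploy a serverless instance from the custom machine image
    elif metrics.avg_response_ms < response_limit_ms / 4 and metrics.active_servers > 1:
        remove_server()    # scale back down when the spike has passed
```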
In some embodiments, the application support server 450 may comprise functionality for generating two types of application support servers, including both booted and provisioned virtual servers 470 and serverless instances 472 and 476. For example, application support server 450 is shown to include a provisioned server builder 456 and instance generation logic 460.
Similar to the prior art system, the provisioned server builder 456 generates virtual servers by provisioning service provider resources to an application and booting the virtual server using boot code 457. In some embodiments, the boot code may include, for example, operating system and initialization program code for download onto those resources. Booting the virtual server initializes the resources to a state where the resources are capable of managing application service requests. However, in contrast to prior art systems, the time-consuming task of building virtual servers to dynamically support application scaling is removed through the generation of serverless instances using custom machine images that represent snapshots of virtual server state at a particular point in time, such as following initialization. With such an arrangement, application support may be rapidly scaled in accordance with application loading.
According to one aspect, once the virtual server resources are provisioned and booted, an application machine image of the virtual server is captured by the application support server and stored for later use.
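For illustration, a snapshot of a booted server may be captured with a boto3 call similar to the following sketch; the naming convention and region are assumptions.

```python
# Sketch: capture the initialized state of a booted virtual server as a custom machine image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def capture_custom_machine_image(instance_id: str, app_name: str) -> str:
    """Snapshot the initialized virtual server and return the resulting image ID."""
    response = ec2.create_image(
        InstanceId=instance_id,
        Name=f"{app_name}-initialized",   # illustrative naming convention
        Description="Snapshot taken after provisioning and boot/initialization",
        NoReboot=True,                    # capture the running, initialized state
    )
    image_id = response["ImageId"]
    # Wait until the image is available before relying on it for scaling.
    ec2.get_waiter("image_available").wait(ImageIds=[image_id])
    return image_id
```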
Referring now to the flow diagram, at step 504 it is determined whether a custom machine image has previously been generated for the application.
If not, then at step 505, service provider resources are allocated to a virtual server for the application, and boot code, including operating system program code, application program code, libraries, data and the like, is executed on the virtual server to prepare it for support of application service requests. Following provisioning and booting, an application machine image (AMI) of the virtual server is stored at step 506, and at step 507 the virtual server is deployed into the server pool for use by an application server to manage requests to the application service.
If, at step 504, it is determined that a machine image has been previously generated for the application, then at step 510 the virtual server AMI is retrieved from memory and used at step 512 to build a serverless instance. As mentioned above, the serverless instance may, for example, be comprised of initialized program code, libraries, data structures, etc., obtained from the machine image of the virtual server.
At step 514, the serverless instance is deployed to the pool of servers and is thus available for servicing application requests. Depending upon the complexity of the application, this approach may make additional application capacity available in a fraction of the time required to provision and boot a new virtual server.
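The branch described in steps 504-514 may be summarized by the following sketch; the helper callables for provisioning, image capture, and serverless instance generation are hypothetical stand-ins for the services described above.

```python
# Sketch of the scaling branch: reuse a stored machine image when one exists, else build one.
from typing import Any, Callable, Dict, List

def scale_application(app_id: str,
                      image_store: Dict[str, str],
                      pool: List[Any],
                      provision_and_boot: Callable[[str], Any],
                      capture_image: Callable[[Any], str],
                      build_serverless_instance: Callable[[str], Any]) -> None:
    """Apply steps 504-514 using hypothetical helper callables."""
    ami = image_store.get(app_id)                    # step 504: image previously generated?
    if ami is None:
        server = provision_and_boot(app_id)          # step 505: allocate resources and boot
        image_store[app_id] = capture_image(server)  # step 506: store the machine image
        pool.append(server)                          # step 507: deploy the virtual server
    else:
        instance = build_serverless_instance(ami)    # steps 510-512: build from the stored AMI
        pool.append(instance)                        # step 514: deploy the serverless instance
```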
In some embodiments, the resource management platform may be implemented using commercially available web-service components, as described below.
In one embodiment, the Virtual Server Builder 680 may be comprised of an Amazon® Elastic Container Service® (ECS). ECS is a managed service for running containers on AWS, designed to make it easy to run applications in the cloud without configuring the environment in which the code runs. The ECS builds Elastic Compute Cloud (EC2) agents. EC2 agents comprise virtual servers, such as virtual server 692, which may comprise an emulation of a computer system. Virtual servers are based on computer architectures and provide the functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination of both.
An administrator 660 at a workstation 665 manages application launch using tools provided by a graphical user interface 670 of the resource management system 600. For example, the administrator 660 may select a machine image 675 that may be used as a starting component for building EC2 virtual machines. In one embodiment, machine image 675 may provide the information to launch an agent, including a root volume, launch permissions, and volume-attachment specifications. The generated virtual server 692 is made available in the server pool 690 for use by load balancing service 610. As mentioned previously, a snapshot of the machine state of the initialized virtual server 692 is also captured as custom application machine image 675 and stored in memory for later use by instance generation service 640.
The load balancing service 610 may automatically distribute traffic across multiple resources within the pool of servers, and may detect unhealthy servers and rebalance loads in response. The load balancing service 610 may be, for example, an application load balancing service such as the Elastic Load Balancing (ELB) service or the Application Load Balancer (ALB) service provided as part of the Amazon Web Services toolkit.
In one embodiment, the load balancing service may communicate with a monitoring service 620, for example the Amazon CloudWatch® service provided as part of the AWS® toolkit. In exemplary embodiments, the monitoring service may deliver a near real-time stream of system events that describe changes in AWS resources. CloudWatch permits the definition of simple rules which may be used to detect and react to operational changes in application support. For example, the CloudWatch service may be configured to monitor performance metrics received from the load balancing service 610 to identify degradations in performance caused by insufficient or incapable resources, indicating that additional scaling of application resources should be performed.
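For illustration, one such rule may be expressed with a boto3 call similar to the following sketch; the alarm name, metric threshold, and load balancer dimension value are illustrative assumptions.

```python
# Sketch: raise a notification when average response time indicates a scaling opportunity.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def create_latency_alarm(sns_topic_arn: str, alb_dimension: str) -> None:
    """Define a simple CloudWatch rule on the load balancer's response-time metric."""
    cloudwatch.put_metric_alarm(
        AlarmName="app-pool-latency-high",          # illustrative rule name
        Namespace="AWS/ApplicationELB",
        MetricName="TargetResponseTime",
        Dimensions=[{"Name": "LoadBalancer", "Value": alb_dimension}],
        Statistic="Average",
        Period=60,                                  # evaluate every minute
        EvaluationPeriods=2,                        # require two consecutive breaches
        Threshold=0.5,                              # seconds; illustrative threshold
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[sns_topic_arn],               # notify the messaging service (SNS)
    )
```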
In one embodiment, upon detection of a scaling opportunity, a messaging service 630 may be used to control an interface with an instance generation service 640. The messaging service in one embodiment may be a high-throughput, push-based, many-to-many messaging system that is configured to interface with the instance generation service 640. For example, in one embodiment the messaging service 630 may comprise an AWS Simple Notification Service® (SNS), and the instance generation service 640 may comprise an AWS Lambda® service.
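A minimal sketch of wiring such a push-based topic to an instance generation function, using boto3, might resemble the following; the topic name and permission statement identifier are assumptions.

```python
# Sketch: connect a scaling-event topic (SNS) to an instance-generation function (Lambda).
import boto3

sns = boto3.client("sns", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

def connect_messaging_to_instance_generation(function_arn: str) -> str:
    """Create a scaling topic and allow it to trigger the instance-generation function."""
    topic_arn = sns.create_topic(Name="app-scaling-events")["TopicArn"]  # illustrative topic
    sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

    # Grant SNS permission to invoke the function (required for the push-based trigger).
    lambda_client.add_permission(
        FunctionName=function_arn,
        StatementId="allow-sns-scaling-topic",
        Action="lambda:InvokeFunction",
        Principal="sns.amazonaws.com",
        SourceArn=topic_arn,
    )
    return topic_arn
```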
AWS Lambda® executes program code as a “Lambda function”. That is, the information from the custom application machine image 675 may be used by the instance generation service 640 to generate program code for a serverless instance object 650. Each function includes initialized application program code state as well as some associated configuration information, including the function name and resource requirements. Lambda functions/objects are “stateless,” with no affinity to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function/object/instance as needed to scale to the rate of incoming events.
Once program code is uploaded to AWS Lambda, Lambda may execute the function and manage the compute resources as needed in order to keep up with incoming requests. In one embodiment, the virtual server 692 may be accessed by reference to associated AWS Lambda functions via the load balancing service 610.
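For illustration, a serverless instance may be exposed through an application load balancer roughly as sketched below using boto3; the target-group name is an assumption, and in practice listener rules would also be configured to route application requests to the target group.

```python
# Sketch: expose a serverless instance (Lambda function) behind an application load balancer.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
lambda_client = boto3.client("lambda", region_name="us-east-1")

def attach_lambda_to_load_balancer(function_arn: str) -> str:
    """Register a serverless instance as a load balancer target and return the target group ARN."""
    target_group_arn = elbv2.create_target_group(
        Name="app-serverless-targets",     # illustrative target-group name
        TargetType="lambda",               # ALB forwards requests directly to the function
    )["TargetGroups"][0]["TargetGroupArn"]

    # The load balancer needs permission to invoke the function.
    lambda_client.add_permission(
        FunctionName=function_arn,
        StatementId="allow-alb-invoke",
        Action="lambda:InvokeFunction",
        Principal="elasticloadbalancing.amazonaws.com",
        SourceArn=target_group_arn,
    )
    elbv2.register_targets(
        TargetGroupArn=target_group_arn,
        Targets=[{"Id": function_arn}],    # for Lambda targets the Id is the function ARN
    )
    return target_group_arn
```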
Accordingly, a resource management system has been described that quickly and dynamically tailors application resource provisioning to real-time application resource consumption using a combination of virtual servers and serverless instances. By using serverless instances in addition to provisioned servers, the resource management system disclosed herein overcomes performance and cost issues associated with prior art resource management methods.
Some embodiments may be described using the expression “one embodiment” or “an embodiment” along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Moreover, unless otherwise noted the features described above are recognized to be usable together in any combination. Thus, any features discussed separately may be employed in combination with each other unless it is noted that the features are incompatible with each other.
With general reference to notations and nomenclature used herein, the detailed descriptions herein may be presented in terms of functional blocks or units that might be implemented as program procedures executed on a computer or network of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art.
A procedure is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. These operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to those quantities.
Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein, which form part of one or more embodiments. Rather, the operations are machine operations. Useful machines for performing operations of various embodiments include general purpose digital computers or similar devices.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but still co-operate or interact with each other.
Various embodiments also relate to apparatus or systems for performing these operations. This apparatus may be specially constructed for the required purpose or it may comprise a general-purpose computer as selectively activated or reconfigured by a computer program stored in the computer. The procedures presented herein are not inherently related to a particular computer or other apparatus. Various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these machines will appear from the description given.
It is emphasized that the Abstract of the Disclosure is provided to allow a reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features are grouped together in a single embodiment to streamline the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” “third,” and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodology, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 16/430,888, entitled “SYSTEM AND METHOD FOR FAST APPLICATION AUTO-SCALING” filed on Jun. 4, 2019. The contents of the aforementioned application are incorporated herein by reference in their entirety.
Publication: US 2022/0124145 A1, published Apr. 2022 (US).
Related U.S. Application Data: continuation of parent application Ser. No. 16/430,888 (US), filed Jun. 2019; child application Ser. No. 17/564,936 (US).