The present disclosure generally relates to networking systems and methods. More particularly, the present disclosure relates to systems and methods for bandwidth management in Software Defined Networking (SDN) controlled multi-layer networks.
Software Defined Networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of higher-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). In addition to SDN, networks are moving towards the use of Virtualized Network Functions (VNFs) and the like. As part of VNFs, cloud-based networking, and the like, software containers and the like are downloaded over networks. A software container includes an entire runtime environment, namely an application plus all its dependencies, libraries and other binaries, and the configuration files needed to run it, bundled into one package. Conventional techniques for distribution of software containers or other images include first downloading an image's layers (in parallel, depending on processor resources); once all layers are downloaded, the layers are extracted (uncompressed); and finally, once all layers are extracted, the image is ready to run. These conventional techniques do not take into account the state of network resources between a container service cloud and image registries. When there is congestion in the network resources, the parallel downloads are slowed down regardless of the amount remaining to download.
In an embodiment, a bandwidth management method performed in a Software Defined Networking (SDN) controlled network includes, responsive to detecting congestion on a network service with identifiable data therein, obtaining policy associated with the congested network service and causing bandwidth on demand in the network to mitigate the congestion when the bandwidth on demand is possible in the network and permitted based on the policy of the congested network service; responsive to the congestion remaining subsequent to the bandwidth on demand or when the bandwidth on demand is not possible or permitted, orchestrating bandwidth for the congested network service based on the associated priority in the policy; and, responsive to the congestion remaining subsequent to the orchestrating bandwidth based on priority, orchestrating bandwidth for the congested network service based on an amount remaining to download for the network service and one or more additional network services. The orchestrating bandwidth for the congested network service based on the associated priority in the policy can include pausing lower priority Service Level Agreement (SLA) network services in favor of higher priority SLA network services. The orchestrating bandwidth for the congested network service based on the amount remaining to download can include pausing congested network services with greater amounts remaining to download in favor of congested network services with lesser amounts remaining to download.
The bandwidth management method can be performed by a bandwidth management system communicatively coupled to the SDN controlled network via an SDN controller. The network service can provide distribution of uniquely identifiable images or software containers between a source and destination location in the network. The network service can provide distribution of uniquely identifiable content between a source and destination location in the network, wherein the uniquely identifiable content is identifiable over the network based on one of a manifest file and a hash signature. The detecting congestion can be responsive to continually monitoring the network and the network service with identifiable data therein.
In an embodiment, a server adapted to perform bandwidth management associated with a Software Defined Networking (SDN) controlled network includes a network interface communicatively coupled to an SDN controller in the network; a processor communicatively coupled to the network interface; and memory storing instructions that, when executed, cause the processor to, responsive to detection of congestion on a network service with identifiable data therein, obtain policy associated with the congested network service and cause bandwidth on demand in the network to mitigate the congestion when the bandwidth on demand is possible in the network and permitted based on the policy of the congested network service, responsive to the congestion remaining subsequent to the bandwidth on demand or when the bandwidth on demand is not possible or permitted, orchestrate bandwidth for the congested network service based on the associated priority in the policy, and, responsive to the congestion remaining subsequent to the orchestrating bandwidth based on priority, orchestrate bandwidth for the congested network service based on an amount remaining to download for the network service and one or more additional network services.
The bandwidth orchestrated for the congested network service based on the associated priority in the policy can include a pause of lower priority Service Level Agreement (SLA) network services in favor of higher priority SLA network services. The bandwidth orchestrated for the congested network service based on the amount remaining to download can include a pause of congested network services with greater amounts remaining to download in favor of congested network services with lesser amounts remaining to download. The server can be part of a bandwidth management system communicatively coupled to the SDN controlled network via an SDN controller. The network service can provide distribution of uniquely identifiable images or software containers between a source and destination location in the network. The network service can provide distribution of uniquely identifiable content between a source and destination location in the network, wherein the uniquely identifiable content is identifiable over the network based on one of a manifest file and a hash signature. The congestion can be detected responsive to continual monitoring of the network and the network service with identifiable data therein.
In a further embodiment, a bandwidth management system communicatively coupled to a Software Defined Networking (SDN) controlled network includes a data collector system adapted to obtain data from the SDN controlled network and one or more data sources; a container runtime analyzer adapted to identify uniquely identifiable content downloaded between two points including source and destination locations in the network and to monitor for congestion associated therewith; and a bandwidth orchestrator adapted to cause orchestration to mitigate congestion marked by the container runtime analyzer, wherein the orchestration includes additional bandwidth including bandwidth on demand, prioritization of network services based on Service Level Agreement (SLA) policy, and prioritization of network services based on remaining amounts to be downloaded.
The prioritization of network services based on the SLA policy can include a pause of lower priority SLA network services in favor of higher priority SLA network services, and the prioritization of network services based on remaining amounts to be downloaded can include a pause of congested network services with greater amounts remaining to download in favor of congested network services with lesser amounts remaining to download. The data collector system can be communicatively coupled to the SDN controlled network via an SDN controller. The network services can provide distribution of uniquely identifiable images or software containers between a source and destination location in the network. The network services can provide distribution of uniquely identifiable content between a source and destination location in the network, wherein the uniquely identifiable content is identifiable over the network based on one of a manifest file and a hash signature. The container runtime analyzer can be adapted to continually monitor the network and the network services with identifiable data therein to detect the congestion.
The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate, and in which:
Again, in various embodiments, the present disclosure relates to systems and methods for bandwidth management in Software Defined Networking (SDN) controlled multi-layer networks. The systems and methods orchestrate dynamic bandwidth management in an SDN-controlled network. Organizations are moving to micro-services and continuous delivery as a way of delivering software to end users. In a micro-services architecture, an application is built using a combination of loosely coupled and service-specific software containers and images. Image management can be viewed as a Content Distribution Network (CDN) with a storage backend. CDNs are moving to a manifest-driven, content-addressable, overlay process of consuming network resources. This approach is particularly important when downloading software images and streaming High Definition (HD) video (e.g., 4K/8K) content. The orchestration systems and methods provide dynamic bandwidth allocation as an actionable response to data analytics on container runtime data and network resources data. The orchestration systems and methods control container runtime daemon processes to achieve the most effective image download and extraction.
SDN Network
Referring to
Again, for illustration purposes, the network 10 includes an OpenFlow-controlled packet switch 70, various packet/optical switches 72, and packet switches 74, with the switches 70, 72 each communicatively coupled to the SDN controller 60 via the OpenFlow interface 62 and the mediation software 64 at any of Layers 0-3 (L0 being DWDM, L1 being OTN, and L2 being Ethernet). The switches 70, 72, 74, again for illustration purposes only, are located at various sites, including an Ethernet Wide Area Network (WAN) 80, a carrier cloud Central Office (CO) and data center 82, an enterprise data center 84, a Reconfigurable Optical Add/Drop Multiplexer (ROADM) ring 86, a switched OTN site 88, another enterprise data center 90, a central office 92, and another carrier cloud Central Office (CO) and data center 94. The network 10 can also include IP routers 96 and a network management system (NMS) 98. Note, there can be more than one NMS 98, e.g., an NMS for each type of equipment, each communicatively coupled to the SDN controller 60. Again, the network 10 is shown just to provide context and typical configurations at Layers 0-3 in an SDN network for illustration purposes. Those of ordinary skill in the art will recognize various other network configurations are possible at Layers 0-3 in the SDN network.
The switches 70, 72, 74 can operate, via SDN, at Layers 0-3. The OpenFlow packet switch 70, for example, can be a large-scale Layer 2 Ethernet switch that operates, via the SDN controller 60, at Layer 2 (L2). The packet/optical switches 72 can operate at any of Layers 0-3 in combination. At Layer 0, the packet/optical switches 72 can provide wavelength connectivity such as via DWDM, ROADMs, etc.; at Layer 1, the packet/optical switches 72 can provide time division multiplexing (TDM) layer connectivity such as via Optical Transport Network (OTN), Synchronous Optical Network (SONET), Synchronous Digital Hierarchy (SDH), etc.; at Layer 2, the packet/optical switches 72 can provide Ethernet or Multi-Protocol Label Switching (MPLS) packet switching; and, at Layer 3, the packet/optical switches 72 can provide IP packet forwarding. The packet switches 74 can be traditional Ethernet switches that are not controlled by the SDN controller 60. The network 10 can include various access technologies 100, such as, without limitation, cable modems, digital subscriber loop (DSL), wireless, fiber-to-the-X (e.g., home, premises, curb, etc.), and the like. In an embodiment, the network 10 is a multi-vendor (i.e., different vendors for the various components) and multi-layer network (i.e., Layers L0-L3).
Referring to
Server
Referring to
The processor 202 is a hardware device for executing software instructions. The processor 202 may be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the server 200, a semiconductor-based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions. When the server 200 is in operation, the processor 202 is configured to execute software stored within the memory 210, to communicate data to and from the memory 210, and to generally control operations of the server 200 pursuant to the software instructions (216). The I/O interfaces 204 may be used to receive user input from and/or for providing system output to one or more devices or components. User input may be provided via, for example, a keyboard, touchpad, and/or a mouse. The system output may be provided via a display device and a printer (not shown). I/O interfaces 204 may include, for example, a serial port, a parallel port, a small computer system interface (SCSI), a serial ATA (SATA), a fibre channel, Infiniband, iSCSI, a PCI Express interface (PCI-x), an infrared (IR) interface, a radio frequency (RF) interface, and/or a universal serial bus (USB) interface.
The network interface 206 may be used to enable the server 200 to communicate over a network, such as the Internet, a wide area network (WAN), a local area network (LAN), and the like. The network interface 206 may include, for example, an Ethernet card or adapter (e.g., 10BaseT, Fast Ethernet, Gigabit Ethernet, 10 GbE) or a wireless local area network (WLAN) card or adapter (e.g., 802.11a/b/g/n/ac). The network interface 206 may include address, control, and/or data connections to enable appropriate communications on the network. A data store 208 may be used to store data. The data store 208 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, and the like)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, and the like), and combinations thereof. Moreover, the data store 208 may incorporate electronic, magnetic, optical, and/or other types of storage media. In one example, the data store 208 may be located internal to the server 200 such as, for example, an internal hard drive connected to the local interface 212 in the server 200. Additionally, in another embodiment, the data store 208 may be located external to the server 200 such as, for example, an external hard drive connected to the I/O interfaces 204 (e.g., SCSI or USB connection). In a further embodiment, the data store 208 may be connected to the server 200 through a network, such as, for example, a network attached file server.
The memory 210 may include any of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)), nonvolatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.), and combinations thereof. Moreover, the memory 210 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 210 may have a distributed architecture, where various components are situated remotely from one another, but can be accessed by the processor 202. The software in memory 210 may include one or more software programs (216), each of which includes an ordered listing of executable instructions for implementing logical functions. The software in the memory 210 includes a suitable operating system (O/S) 214 and one or more programs 216. The operating system 214 essentially controls the execution of other computer programs, such as the one or more programs 216, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. The one or more programs 216 may be configured to implement the various processes, algorithms, methods, techniques, etc. described herein.
Orchestration Platform
Referring to
Dynamic Bandwidth Management Orchestration in SDN System
Referring to
The bandwidth management orchestration system 400 is configured to determine which images, containers, etc. are being distributed over the network 10. This is done through manifest files and associated data, processed by the container runtime analyzer 408 and stored in the scalable storage 412. A manifest file in computing is a file containing metadata for a group of accompanying files that are part of a set or coherent unit. For example, the files of a computer program may have a manifest describing the name, version number, and the constituting files of the program. In an embodiment, manifest files are obtained in a JavaScript Object Notation (JSON) format for processing by the container runtime analyzer 408.
For example, consider an Ubuntu 14.04 image from the public registry (hosted by Docker.io). The content of a software image is described in a manifest. The Ubuntu 14.04 image is composed of four (4) layers. The manifest of the Ubuntu 14.04 image contains information about the download size of each layer as shown in
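By way of illustration only, the sketch below parses such a manifest and reports per-layer download sizes. The JSON structure is modeled loosely on the Docker image manifest schema, and the digests and sizes are hypothetical stand-ins rather than the actual Ubuntu 14.04 values.

```python
import json

# Hypothetical manifest, modeled loosely on the Docker image manifest
# schema; digests and sizes are illustrative stand-ins, not the real
# Ubuntu 14.04 values.
manifest_json = """
{
  "schemaVersion": 2,
  "layers": [
    {"digest": "sha256:aaaa", "size": 65695619},
    {"digest": "sha256:bbbb", "size": 71727},
    {"digest": "sha256:cccc", "size": 682},
    {"digest": "sha256:dddd", "size": 162}
  ]
}
"""

manifest = json.loads(manifest_json)
for layer in manifest["layers"]:
    print(f'{layer["digest"]}: {layer["size"]} bytes')

# The total download size of the image is the sum of its layer sizes.
print("total:", sum(layer["size"] for layer in manifest["layers"]), "bytes")
```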
For example, when an organization or user wants to run this software image (Ubuntu 14.04) in a container hosted by a cloud provider (e.g., Amazon's Elastic Compute Cloud (EC2) container service), the image first has to be downloaded by a container daemon process to the cloud data center. The daemon can perform a parallel download of all four layers. When all the layers are downloaded, they are extracted locally and ready to be run. Note, the daemon has no knowledge of the condition of network resources between the cloud data center and the registry in the network 10.
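A minimal sketch of this conventional download-then-extract flow follows; the download_layer and extract_layers helpers are hypothetical placeholders (a real daemon would stream each layer's blob from the registry and uncompress it locally).

```python
from concurrent.futures import ThreadPoolExecutor

def download_layer(digest):
    # Placeholder: a real daemon would stream this layer's blob from
    # the image registry to local storage.
    ...

def extract_layers(digests):
    # Placeholder: uncompress the downloaded layers in order.
    ...

layer_digests = ["sha256:aaaa", "sha256:bbbb", "sha256:cccc", "sha256:dddd"]

# The daemon downloads all layers in parallel, then extracts them; only
# after extraction completes is the image ready to run.
with ThreadPoolExecutor(max_workers=len(layer_digests)) as pool:
    list(pool.map(download_layer, layer_digests))
extract_layers(layer_digests)
```

Note that nothing in this flow consults the state of the network resources; that gap is what the systems and methods described herein address.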
Consider another organization, within the same cloud data center, deploying a VNF software image. The container runtime daemon performs another set of parallel downloads of the layers. Note, again, the daemon has no knowledge of the condition of network resources between the cloud data center and the registry in the network 10. In a typical container service, like Amazon's EC2 container service, there will be many instances of parallel downloads by the daemons. When there is congestion in the network resources, these parallel downloads are slowed down. A software image can only be run when all the layers are fully downloaded and extracted.
Dynamic Bandwidth Management Orchestration Process
Referring to
The bandwidth management orchestration process 500 is implemented with respect to the distribution of identifiable data over the network when there is network congestion. The bandwidth management orchestration process 500 can be implemented when a network service encounters congestion in the network 10 (step 502). The distribution process is an example network service, i.e., a congested network service detected in step 502. Further, the network service can include transmission of any data (e.g., images, software containers, etc.) that is uniquely tracked when distributed through the network 10. The unique tracking can be through a manifest file, cryptographic hash functions (e.g., the Secure Hash Algorithm (SHA)), and the like. For example, SHA-256 generates an effectively unique, fixed-size 256-bit (32-byte) hash that can uniquely identify data over the network 10. Such unique identifiers can be stored in the container runtime data source 404 and detected over the network 10 by the container runtime analyzer 408, with data provided from the data collector 406.
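For example, a content identifier can be computed with a standard-library hash, as in this short sketch:

```python
import hashlib

def content_id(blob: bytes) -> str:
    # SHA-256 produces a fixed-size 256-bit (32-byte) digest that is
    # effectively unique, so the same content yields the same
    # identifier anywhere in the network.
    return "sha256:" + hashlib.sha256(blob).hexdigest()

print(content_id(b"example layer bytes"))
```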
The process 500 generally includes three steps 504, 506, 508, each of which applies a higher level of orchestration to mitigate the congestion. If a step 504, 506, 508 succeeds, the process 500 ends (returning to step 502); if not, the process 500 proceeds to the next step. After detecting network congestion, the process 500 includes obtaining policy associated with the congested network service and providing bandwidth on demand to mitigate the congestion if possible and permitted based on the policy (step 504). A policy file can be retrieved for the congested network service. The policy file can be located in one of the data sources 402, 404 and includes business and service preferences. The policy for the congested network service dictates whether or not the process 500 can utilize step 504. That is, the policy can include whether or not it is possible to provide bandwidth on demand to address the congested network service. When permitted based on the policy and when there are network resources available for dynamic bandwidth allocation, step 504 can include the bandwidth orchestrator 410 utilizing the SDN controllers 60 to perform bandwidth on demand to mitigate the congestion.
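A minimal sketch of the step 504 decision logic follows; the policy field name and the SDN controller methods (capacity_available, request_bandwidth_on_demand) are hypothetical placeholders for illustration, not an actual controller API.

```python
def mitigate_with_bandwidth_on_demand(service, policy, sdn_controller):
    # Step 504: the policy gates whether bandwidth on demand may be
    # used at all for this congested network service.
    if not policy.get("bandwidth_on_demand_permitted", False):
        return False
    # The network must also have spare resources to allocate
    # (hypothetical controller query).
    if not sdn_controller.capacity_available(service.path):
        return False
    # Hypothetical controller call to add bandwidth along the path.
    sdn_controller.request_bandwidth_on_demand(service.path,
                                               service.required_rate)
    return True
```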
If step 504 mitigates congestion, the process 500 returns to step 502 (step 510); else, the process 500 proceeds to step 506. The process 500 includes, based on policy and associated service level agreements, orchestrating bandwidth for the congested network service along with other network services (step 506). When there are no resources available for dynamic bandwidth allocation or if it is not permitted based on policy, the container runtime analyzer 408 can obtain the policy files associated with the container runtime systems, i.e., the network service. The policy file can include the Service Level Agreement (SLA) for the container runtime system. For example, each network service that is uniquely identifiable can be associated with a container runtime system. The container runtime analyzer 408 can provide the SLA information to the bandwidth orchestrator 410. The bandwidth orchestrator 410 orchestrates daemons associated with the network service and other network services according to the SLAs. The daemons are the computing processes responsible for the network services. For example, the highest SLA daemon can continue with downloads while other, lower SLA daemons are instructed to suspend their downloads. As such, the SLA preferences can be used to mitigate network congestion.
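The step 506 ordering can be sketched as follows, assuming each daemon record exposes a numeric SLA priority and hypothetical pause/resume controls:

```python
def orchestrate_by_sla(daemons):
    # Step 506: rank the congested services' daemons by SLA priority
    # (higher number = higher priority, an assumption of this sketch).
    ranked = sorted(daemons, key=lambda d: d.sla_priority, reverse=True)
    winner, losers = ranked[0], ranked[1:]
    winner.resume_download()
    for daemon in losers:
        # Lower-SLA downloads are suspended in favor of the highest.
        daemon.pause_download()
```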
If step 506 mitigates congestion, the process 500 returns to step 502 (step 512); else, the process 500 proceeds to step 508. The process 500 includes, independent of policy, performing real-time data analytics on a plurality of network services and orchestrating bandwidth for the plurality of network services to mitigate the congestion (step 508). When there are no network resources available for dynamic allocation and no policies are specified for the container runtime systems, the container runtime analyzer 408 performs real-time data analytics on the states of all the images (network services). For example, the container runtime analyzer 408 can order the image manifests according to the amount of data left to be downloaded.
For example, assume the Ubuntu 14.04 image has about 12 MB left to be downloaded by a daemon #1, whereas the VNF software image has more than 168 MB left to be downloaded by a daemon #2. The container runtime analyzer 408 provides the result of the analytics to the bandwidth orchestrator 410. The bandwidth orchestrator 410 orchestrates the daemons to either continue the download of the image layers or to suspend the download. In this example, based on the analysis, the bandwidth orchestrator 410 instructs the daemon #1 to continue the download and instructs the daemon #2 to pause the download. Instead, the bandwidth orchestrator 410 instructs the daemon #2 to start extracting all the layers that are already downloaded. When the daemon #1 completes the image download, the bandwidth orchestrator 410 instructs the daemon #2 to resume the suspended downloads. The outcome of the orchestration through the process 500 is the most effective completion of image download and extraction.
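The step 508 behavior can be sketched as follows, again with hypothetical daemon controls; bytes_remaining() stands in for the analytics over the image manifests.

```python
def orchestrate_by_remaining(daemons):
    # Step 508: order the daemons by the amount left to download,
    # computed from each image manifest (sum of undownloaded layers).
    ranked = sorted(daemons, key=lambda d: d.bytes_remaining())
    leader, others = ranked[0], ranked[1:]
    leader.resume_download()              # e.g., daemon #1, ~12 MB left
    for daemon in others:                 # e.g., daemon #2, ~168 MB left
        daemon.pause_download()
        # Use the pause productively: extract what is already local.
        daemon.extract_downloaded_layers()
```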
Container Runtime Analyzer Process
Referring to
Bandwidth Orchestrator Process
Referring to
In
In
In
Thus, in the bandwidth management orchestration system 400, the data collector 406 is adapted to obtain data from the network 10 through the controller 60 and to manage the data sources 402, 404. The container runtime analyzer 408 is adapted to identify images, software containers, or any other uniquely identifiable content downloaded between two points (source and destination locations) in the network 10 and to monitor for congestion associated therewith. Finally, the bandwidth orchestrator 410 is adapted to cause orchestration to mitigate the congestion marked by the container runtime analyzer 408. The orchestration can include adding bandwidth (bandwidth on demand, dynamic bandwidth allocation, etc.), prioritizing containers based on SLA policy, and/or prioritizing containers based on remaining amounts to be downloaded.
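Putting the pieces together, a schematic control loop is sketched below, reusing the hypothetical helpers from the earlier sketches; the interfaces on the data collector, analyzer, and policy store are assumptions made for illustration only.

```python
def control_loop(data_collector, analyzer, orchestrator, policy_store):
    # The data collector feeds network and container runtime state to
    # the analyzer; the analyzer flags congested, uniquely identified
    # downloads; the orchestrator escalates through steps 504/506/508.
    state = data_collector.poll()
    for service in analyzer.find_congested_services(state):
        policy = policy_store.lookup(service)
        if mitigate_with_bandwidth_on_demand(service, policy,
                                             orchestrator.sdn):
            continue  # bandwidth on demand resolved the congestion
        if policy.get("sla_priority") is not None:
            orchestrate_by_sla(analyzer.daemons_for(service))
        else:
            orchestrate_by_remaining(analyzer.daemons_for(service))
```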
It will be appreciated that some embodiments described herein may include one or more generic or specialized processors (“one or more processors”) such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the aforementioned approaches may be used. For some of the embodiments described herein, a corresponding device such as hardware, software, firmware, and a combination thereof can be referred to as “circuitry configured or adapted to,” “logic configured or adapted to,” etc., to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Moreover, some embodiments may include a non-transitory computer-readable storage medium having computer readable code stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. each of which may include a processor to perform functions as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, and the like. When stored in the non-transitory computer readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.
The present patent/application is a continuation of U.S. patent application Ser. No. 15/052,094, filed on Feb. 24, 2016, issued on Nov. 7, 2017 as U.S. Pat. No. 9,813,299, and entitled “SYSTEMS AND METHODS FOR BANDWIDTH MANAGEMENT IN SOFTWARE DEFINED NETWORKING CONTROLLED MULTI-LAYER NETWORKS,” the contents of which are incorporated by reference.