The embodiments relate to computing systems, network appliances, smart network interface cards (NICs), channel adapters, network interface cards, routers, switches, load balancers, virtual machines, cloud computing, distributed applications, distributed application profiling, upgrading systems and applications, and to scheduling upgrades based on nonconstant metrics.
Upgrading computers and applications that run on those computers is a familiar process. In many computing environments, such as cloud computing environments, applications can be implemented as numerous cooperating processes that run within virtual machines (VMs) running on host computers. By today's standards, a modest cloud computing environment can have thousands of computers running tens of thousands of VMs. The computers themselves can have specialized hardware such as smart network interface cards (NICs). The smart NICs are among the plethora of network appliances handling communications within the data center. The many thousands of computers, NICs, network appliances, VMs, and software applications necessitate a managed approach to upgrades. Performing upgrades often results in taking resources out of service, which reduces the quality of service (QoS) provided by the data center. The data center must therefore attempt to perform upgrades while maintaining QoS levels.
The following presents a summary of one or more aspects of the present disclosure, in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated features of the disclosure and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
One aspect of the subject matter described in this disclosure can be implemented in a method. The method can include storing a critical state policy that uses current values for nonconstant node attributes to determine a criticality state of a node, receiving a directive to perform an upgrade that transitions the node to a new version, determining the criticality state of the node after receiving the directive and before performing the upgrade, and performing the upgrade only when the criticality state of the node indicates that the node is in a noncritical state.
Another aspect of the subject matter described in this disclosure can be implemented by a system. The system can include an upgrade manager configured to communicate over a network with a node, store a critical state policy that uses current values for nonconstant node attributes to determine a criticality state of the node, receive a directive to upgrade the node to a new version, determine the criticality state of the node after receiving the directive and before upgrading the node, and upgrade the node only when the criticality state of the node indicates that the node is in a noncritical state.
Yet another aspect of the subject matter described in this disclosure can be implemented by a system. The system can include a means for receiving a directive for upgrading a node to a new version, a means for determining whether the node is in a noncritical state after the directive is received and before the node is upgraded, and a means for performing the upgrade only when the node is in the noncritical state.
In some implementations of the methods and devices, the critical state policy uses values of constant node attributes to determine the criticality state of the node. In some implementations of the methods and devices, the critical state policy includes a node executable code, and executing the node executable code on the node produces one of the current values. In some implementations of the methods and devices, the method can include maintaining a critical state database that stores a plurality of time series entries that include a timestamp indicating a time, and the criticality state of the node at the time, obtaining a test run result by performing a test run of the critical state policy, and storing the test run result as one of the time series entries. In some implementations of the methods and devices, an upgrade manager is configured to upgrade the node, the upgrade manager communicates over a network with the node, and the upgrade manager determines the criticality state of the node. In some implementations of the methods and devices, the node is configured to periodically provide one of the current values to the upgrade manager.
In some implementations of the methods and devices, the critical state policy includes a node executable code, and one of the current values is produced by executing the node executable code on the node. In some implementations of the methods and devices, a network interface card (NIC) is installed in the node, the NIC is configured to periodically determine one of the current values, and the NIC provides the one of the current values to the upgrade manager. In some implementations of the methods and devices, a network interface card (NIC) is installed in the node, the critical state policy includes a NIC executable code, and one of the current values is produced by executing the NIC executable code on the NIC. In some implementations of the methods and devices, the node is a virtual machine (VM) running on a host computer, a network interface card (NIC) is installed in the host computer, the NIC is configured to periodically determine one of the current values, and the NIC provides the one of the current values to the upgrade manager.
In some implementations of the methods and devices, the node is a virtual machine (VM) running on a host computer, a network interface card (NIC) is installed in the host computer, the critical state policy includes a NIC executable code, and one of the current values is produced by executing the NIC executable code on the NIC. In some implementations of the methods and devices, the current values include a CPU usage statistic, a memory usage statistic, a non-volatile memory input/output statistic, a long-lived network session statistic, a short-lived network session statistic, and a process identifier that identifies a process running on the node. In some implementations of the methods and devices, the upgrade manager is configured to upgrade a plurality of nodes that are identified by a plurality of node identifiers, the upgrade manager stores a plurality of critical state policies that are associated with the nodes via the plurality of node identifiers, and each of the nodes is upgraded only when in the noncritical state according to the critical state policies. In some implementations of the methods and devices, the method includes producing criticality state time series data that indicates the criticality state of each of the nodes as a function of time, determining upgrade windows for the nodes, and scheduling node upgrades based on the upgrade windows.
In some implementations of the methods and devices, the method includes storing time series data that includes a plurality of time series entries that include a timestamp indicating a time, and the criticality state of the node at the time, obtaining test run data produced by a test run of the critical state policy, and storing the test run data as one of the time series entries. In some implementations of the methods and devices, the critical state policy uses values of constant node attributes to determine the criticality state of the node, the critical state policy includes a node executable code, executing the node executable code on the node produces a first current value, the node is configured to periodically provide the first current value to the upgrade manager, a network interface card (NIC) is installed in the node, the NIC is configured to periodically determine a second current value, the NIC provides the second current value to the upgrade manager, the critical state policy includes a NIC executable code, the second current value is produced by executing the NIC executable code on the NIC, a second node is a virtual machine (VM) running on a host computer, a second NIC is installed in the host computer, the second NIC is configured to periodically determine a third current value, the second NIC provides the third current value to the upgrade manager, the third current value is produced by executing the NIC executable code on the second NIC, and the current values include a CPU usage statistic, a memory usage statistic, a non-volatile memory input/output statistic, a long-lived network session statistic, a short-lived network session statistic, and a process identifier that identifies a process running on the node.
In some implementations of the methods and devices, the node is configured to periodically provide one of the current values to the upgrade manager. In some implementations of the methods and devices, the critical state policy includes a node executable code, and one of the current values is produced by executing the node executable code on the node. In some implementations of the methods and devices, the node is a virtual machine (VM) running on a host computer, a network interface card (NIC) is installed in the host computer, the NIC is configured to periodically determine one of the current values, and the NIC provides the one of the current values to the upgrade manager. In some implementations of the methods and devices, the node is a virtual machine (VM) running on a host computer, a network interface card (NIC) is installed in the host computer, the critical state policy includes a NIC executable code, and one of the current values is produced by executing the NIC executable code on the NIC.
These and other aspects will become more fully understood upon a review of the detailed description, which follows. Other aspects, features, and embodiments will become apparent to those of ordinary skill in the art, upon reviewing the following description of specific, exemplary embodiments in conjunction with the accompanying figures. While features may be discussed relative to certain embodiments and figures below, all embodiments can include one or more of the advantageous features discussed herein. In other words, while one or more embodiments may be discussed as having certain advantageous features, one or more of such features may also be used in accordance with the various embodiments discussed herein. In similar fashion, while exemplary embodiments may be discussed below as device, system, or method embodiments, such exemplary embodiments can be implemented in various devices, systems, and methods.
Throughout the description, similar reference numbers may be used to identify similar elements.
It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Cloud computing services operate data centers that are used for providing infrastructure-as-a-service to tenants. A data center can have tens of thousands of host computers communicating via networks implemented with massive amounts of networking equipment. The networking equipment can include routers, switches, and network interface cards (NICs). The NICs can be installed in the host computers and can connect the hosts to the network. Smart NICs are NICs that can perform processing related to network communications that, in the past, was performed by the hosts. Each of the hosts can run numerous virtual machines (VMs). The tenants can use the VMs to serve their customers' needs and their own needs. Managing upgrades in such a large environment is problematic.
The most common upgrade process implemented by data centers and information technology (IT) organizations is to upgrade nodes (e.g., computers, VMs, networking equipment, etc.) on a schedule. For example, scheduled upgrades may be performed outside of business hours, on weekends, etc. Given the scale of computing infrastructure and today's 24-hour-per-day operations schedules, scheduled upgrades can interfere with the tenants' operations and the cloud service provider's operations. In fact, data centers often give uptime guarantees to their tenants in the form of service level agreements (SLAs). Experience has shown that upgrading according to a set schedule often results in taking down a node that is critical to a tenant's current operations. Such scheduled upgrades anger tenants and negatively impact the cloud provider's ability to comply with the SLAs.
One solution to the problems arising from scheduled upgrades is to upgrade equipment only when it is in a noncritical state. Equipment is in a noncritical state when it can be taken offline without causing major disruption to the tenants' operations. Tenants, however, typically do not inform the cloud service provider when nodes are currently critical, such as when certain operations are being performed (e.g., rebuilding database indexes, reconciling transactions, having a holiday sales event, etc.). The cloud service provider may therefore use a data-driven approach to infer when a particular node is in a critical state or in a noncritical state. The inference can be automated based on policies that use data collected from the nodes. For example, if a node's central processing unit (CPU) usage or input/output (I/O) bandwidth exceeds thresholds, then it may be inferred that the node is currently doing something important and is therefore in a critical state. An upgrade manager, which can be implemented by another one of the nodes, can delay upgrading the node until the node is in a noncritical state.
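By way of a nonlimiting illustration, the following Python sketch shows one way such a data-driven inference could be automated. The attribute names and threshold values are assumptions chosen for illustration and are not prescribed by this disclosure.

# Minimal sketch of a data-driven criticality inference, assuming the node's
# nonconstant attributes have already been sampled into a dictionary.
CPU_THRESHOLD_PERCENT = 80.0   # illustrative threshold, not prescribed
IO_THRESHOLD_MBPS = 500.0      # illustrative threshold, not prescribed

def infer_criticality(nonconstant_attributes):
    """Return "Critical" when usage suggests the node is doing important work."""
    cpu = nonconstant_attributes.get("cpu_usage_percent", 0.0)
    io = nonconstant_attributes.get("io_bandwidth_mbps", 0.0)
    if cpu > CPU_THRESHOLD_PERCENT or io > IO_THRESHOLD_MBPS:
        return "Critical"
    return "NonCritical"

# Example: a node at 92% CPU usage is inferred to be in a critical state.
print(infer_criticality({"cpu_usage_percent": 92.0, "io_bandwidth_mbps": 120.0}))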
The advantages of upgrading each node only when it is in a noncritical state include not disrupting critical operations, not angering tenants, providing more reliable services for the tenants' customers, improving uptime, meeting the requirements of the SLAs, being able to offer higher availability via more stringent SLAs, and being able to meet the requirements of those more stringent SLAs.
Cloud applications can be implemented using numerous nodes that may perform different functions (load balancer, web server, database server, etc.) and may duplicate the functions of other nodes. A work node manager 111, such as Kubernetes, can manage the nodes by launching and halting nodes as required by the application. The work node manager may also have attribute data for the nodes such as nonconstant node attributes 112 and constant node attributes 113.
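A nonlimiting sketch of how such attribute data could be organized is given below; the field names are assumptions for illustration only.

# Sketch of node attribute records. Constant attributes change rarely (e.g., at
# provisioning time) while nonconstant attributes are sampled repeatedly.
from dataclasses import dataclass, field

@dataclass
class ConstantNodeAttributes:
    node_id: str
    node_type: str      # e.g., "web-server", "database-server", "load-balancer"
    tenant_id: str

@dataclass
class NonconstantNodeAttributes:
    cpu_usage_percent: float = 0.0
    memory_usage_percent: float = 0.0
    running_processes: list = field(default_factory=list)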
The running process data 206 can include a list of processes and specific process data 207. For example, the list of processes currently being run by a node can include a structured query language (SQL) server, an online transaction processing (OLTP) server, etc. Running a particular process may be an indicator that the node is in a critical state. Alternatively, the specific process data 207 for a process can indicate how active the process is. If the specific process data 207 for a specific process exceeds predefined thresholds, then the node running that process may be in a critical state.
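The following nonlimiting sketch shows one way the running process data could be evaluated; the process names and the threshold are illustrative assumptions.

# Sketch of a process-based criticality check. The process names and threshold
# below are assumptions, not values prescribed by this disclosure.
ALWAYS_CRITICAL_PROCESSES = {"sql_server", "oltp_server"}
TRANSACTIONS_PER_SECOND_THRESHOLD = 1000

def processes_indicate_critical(running_processes, specific_process_data):
    for process_name in running_processes:
        # Running a particular process may itself indicate a critical state.
        if process_name in ALWAYS_CRITICAL_PROCESSES:
            return True
        # Alternatively, the per-process activity data decides criticality.
        activity = specific_process_data.get(process_name, {})
        if activity.get("transactions_per_second", 0) > TRANSACTIONS_PER_SECOND_THRESHOLD:
            return True
    return False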
The enable upgrade method can be a web hook, remote procedure call (RPC), HTTP request, etc. that can be used to enable upgrades at the node. The disable upgrade method can be a web hook, RPC, HTTP request, etc. that can be used to disable upgrades at the node. For example, the enable/disable upgrade methods can cause a nonconstant attribute to indicate that upgrading is currently allowed or not allowed. The nonconstant attribute can be a value stored in a file, a filename of a file in a filesystem, a value stored in a memory, etc. When an upgrade is triggered on the node, the local upgrade mechanism 107 can exit without upgrading the node if upgrades are disabled. When an upgrade is triggered on the node, the local upgrade mechanism 107 can attempt upgrading the node if upgrades are enabled. The upgrade method can be a web hook, RPC, HTTP request, etc. that causes the upgrade mechanism 107 to execute. The upgrade download method can use a file download mechanism implemented by the node to download the upgrade to the node. In some implementations, the upgrade may be stored in a filesystem or other data store that can be accessed by numerous nodes.
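One nonlimiting way to implement the enable/disable gate and the local upgrade mechanism 107 is sketched below; the flag file location and the use of an installer script are assumptions for illustration.

# Sketch of a local upgrade mechanism gated by a flag. Here the nonconstant
# attribute is the presence of a flag file; the path is hypothetical.
import os
import subprocess

UPGRADE_ENABLED_FLAG = "/var/run/upgrades_enabled"

def enable_upgrades():
    open(UPGRADE_ENABLED_FLAG, "w").close()

def disable_upgrades():
    if os.path.exists(UPGRADE_ENABLED_FLAG):
        os.remove(UPGRADE_ENABLED_FLAG)

def local_upgrade_mechanism(upgrade_installer_path):
    # Exit without upgrading when upgrades are disabled at the node.
    if not os.path.exists(UPGRADE_ENABLED_FLAG):
        return "skipped: upgrades disabled"
    # Otherwise attempt the upgrade, e.g., by running the downloaded installer.
    subprocess.run([upgrade_installer_path], check=True)
    return "upgrade attempted"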
The ASIC 401 is a semiconductor chip having many core circuits interconnected by an on-chip communications fabric, sometimes called a network on a chip (NOC) 402. The NOC can be an implementation of a standardized communications fabric such as the widely used advanced extensible interface (AXI) bus. The ASIC's core circuits can include a PCIe interface 427, central processing unit (CPU) cores 403, P4 packet processing pipeline 408 elements, memory interface 415, on-ASIC memory (e.g., SRAM) 416, service processing offloads 417, a packet buffer 422, extended packet processing pipeline 423, and packet ingress/egress circuits 414. A PCIe interface 427 can be used to communicate with a host computer via the PCIe connection 431. The CPU cores 403 can include numerous CPU cores such as CPU 1 405, CPU 2 406, and CPU 3 407. The P4 packet processing pipeline 408 can include a pipeline ingress circuit 413, a parser circuit 412, match-action units 411, a deparser circuit 410, and a pipeline egress circuit 409. The service processing offloads 417 are circuits implementing functions that the ASIC uses so often that its designers have chosen to provide hardware for offloading those functions from the CPUs. The service processing offloads can include a compression circuit 418, decompression circuit 419, a crypto and public key authentication (PKA) circuit 420, and a cyclic redundancy check (CRC) calculation circuit 421. The specific core circuits implemented within the nonlimiting example of ASIC 401 have been selected such that the ASIC implements much, perhaps all, of the functionality of an InfiniBand channel adapter, of an NVMe card, and of a network appliance that processes network traffic flows carried by internet protocol (IP) packets.
The P4 packet processing pipeline 408 is a specialized set of elements for processing network packets such as IP packets, NVMe protocol data units (PDUs), and InfiniBand PDUs. The P4 pipeline can be configured using a domain-specific language. The concept of a domain-specific language for programming protocol-independent packet processors, known simply as “P4,” has developed as a way to provide some flexibility at the data plane of a network appliance. The P4 domain-specific language for programming the data plane of network appliances is defined in the “P4₁₆ Language Specification,” version 1.2.0, as published by the P4 Language Consortium on Oct. 23, 2019. P4 (also referred to herein as the “P4 specification,” the “P4 language,” and the “P4 program”) is designed to be implementable on a large variety of targets including network switches, network routers, programmable NICs, software switches, FPGAs, and ASICs. As described in the P4 specification, the primary abstractions provided by the P4 language relate to header types, parsers, tables, actions, match-action units, control flow, extern objects, user-defined metadata, and intrinsic metadata.
The network appliance 430 can include a memory 432 for running Linux or some other operating system. The memory 432 can also be used to store node executable critical state policy code 440, constant node attributes 441, nonconstant node attributes 442, a NIC upgrade mechanism 443, a NIC upgrade 444, and critical state policy parameters 445. The node executable critical state policy code 440 can be code that the NIC can execute to determine if the NIC is in a critical state or a noncritical state, to determine node attributes, etc. The results of running the node executable critical state policy code 440 can be returned to the upgrade manager. The constant node attributes 441 can include values for constant attributes of the NIC itself.
The CPU cores 403 can be general purpose processor cores, such as reduced instruction set computing (RISC) processor cores, advanced RISC machine (ARM) processor cores, microprocessor without interlocked pipelined stages (MIPS) processor cores, and/or x86 processor cores, as is known in the field. Each CPU core can include a memory interface, an ALU, a register bank, an instruction fetch unit, and an instruction decoder, which are configured to execute instructions independently of the other CPU cores. The CPU cores may be programmable using a general-purpose programming language such as C.
The CPU cores 403 can also include a bus interface, internal memory, and a memory management unit (MMU) and/or memory protection unit. For example, the CPU cores may include internal cache, e.g., L1 cache and/or L2 cache, and/or may have access to nearby L2 and/or L3 cache. Each CPU core may include a core-specific L1 cache, including an instruction cache and a data cache, as well as an L2 cache that is specific to each CPU core or shared amongst a small number of CPU cores. L3 cache may also be available to the CPU cores.
There may be multiple CPU cores 403 available for control plane functions and for implementing aspects of a slow data path that includes software implemented packet processing functions. The CPU cores may be used to implement discrete packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), certain InfiniBand channel adapter functions, flow table insertion or table management events, connection setup/management, multicast group join, deep packet inspection (DPI) (e.g., URL inspection), storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, and decompression, which may not be readily implementable through a domain-specific language such as P4, in a manner that provides fast path performance as is expected of data plane processing.
The packet buffer 422 can act as a central on-chip packet switch that delivers packets from the network interfaces 433 to packet processing elements of the data plane and vice-versa. The packet processing elements can include a slow data path implemented in software and a fast data path implemented by packet processing circuitry 408, 423.
The packet processing circuitry 408, 423 can be a specialized circuit or part of a specialized circuit implementing programmable packet processing pipelines. Some embodiments include a P4 pipeline as a fast data path within the network appliance. The fast data path is called the fast data path because it processes packets faster than a slow data path that can also be implemented within the network appliance. An example of a slow data path is a software implemented data path wherein the CPU cores 403 and memory 432 are configured via software to implement a slow data path.
The ASIC 401 is illustrated with a P4 packet processing pipeline 408 and an extended packet processing pipeline 423. The extended packet processing pipeline is a packet processing pipeline that has a direct memory access (DMA) output stage 424. The extended packet processing pipeline has match-action units 425 that can be arranged as a match-action pipeline. The extended packet processing pipeline has a pipeline input stage 426 that can receive packet header vectors (PHVs) or directives to perform operations. A PHV can contain data parsed from the header and body of a network packet by the parser 412.
All memory transactions in the NIC 430, including host memory transactions, on-board memory transactions, and register reads/writes may be performed via a coherent interconnect 402. In one nonlimiting example, the coherent interconnect can be provided by a network on a chip (NOC) “IP core” (in this one context, “IP” is an acronym for intellectual property). Semiconductor chip designers may license and use prequalified IP cores within their designs. Prequalified IP cores may be available from third parties for inclusion in chips produced using certain semiconductor fabrication processes. A number of vendors provide NOC IP cores. The NOC may provide cache coherent interconnect between the NOC masters, including the packet processing pipeline circuits 408, 423, CPU cores 403, memory interface 415, and PCIe interface 427. The interconnect may distribute memory transactions across a plurality of memory interfaces using a programmable hash algorithm. All traffic targeting the memory may be stored in a NOC cache (e.g., 1 MB cache). The NOC cache may be kept coherent with the CPU core caches.
The critical state policy 520 provides an example of the critical state policy parameters incorporated into executable code. Although the code is pseudocode, one skilled in the art would understand that it shows how the critical state policy parameters 501 could be incorporated into executable code that can be run by a node, an upgrade manager, or any other machine that has access to the node's constant and nonconstant node attributes. The executable code's return value indicates “Critical” or “NonCritical” to thereby indicate the node's criticality state according to this particular policy. One or more policies may be checked for a single node, where a “Critical” result from any one policy indicates that the node is in a critical state.
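The pseudocode of critical state policy 520 is not reproduced here; the following Python sketch is merely one nonlimiting example of how critical state policy parameters could be incorporated into executable code, with the parameter names and checked attributes assumed for illustration.

# Sketch of a parameterized critical state policy and of combining several
# policies, where any "Critical" result marks the node as critical. Parameter
# names and checked attributes are assumptions for illustration.
def evaluate_policy(policy_parameters, constant_attrs, nonconstant_attrs):
    # Constant attributes (e.g., node type) could also be consulted here.
    if nonconstant_attrs.get("cpu_usage_percent", 0.0) > policy_parameters["cpu_threshold"]:
        return "Critical"
    if nonconstant_attrs.get("long_lived_sessions", 0) > policy_parameters["session_threshold"]:
        return "Critical"
    watched = set(policy_parameters.get("critical_processes", []))
    if watched & set(nonconstant_attrs.get("running_processes", [])):
        return "Critical"
    return "NonCritical"

def evaluate_policies(policy_parameter_sets, constant_attrs, nonconstant_attrs):
    for parameters in policy_parameter_sets:
        if evaluate_policy(parameters, constant_attrs, nonconstant_attrs) == "Critical":
            return "Critical"
    return "NonCritical"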
The host computer 601 can have a host computer local criticality state calculator 606 such that the host can infer its own criticality state and can report that inferred state to the upgrade manager. The host computer local criticality state calculator 606 can be executable code downloaded to the host by another node such as the upgrade manager 620. The VM 602 can have a VM local criticality state calculator 603 such that the VM can infer its own criticality state and can report that inferred state to the upgrade manager. The VM local criticality state calculator 603 can be executable code downloaded to the VM by another node such as the upgrade manager 620. The smart NIC 604 can have a smart NIC local criticality state calculator 605 such that the smart NIC can infer its own criticality state and can report that inferred state to the upgrade manager. The smart NIC local criticality state calculator 605 can be executable code downloaded to the smart NIC by another node such as the upgrade manager 620. The criticality state calculator for any particular node can be run on that particular node or on any other node that has access to that particular node's constant and nonconstant attributes. The criticality states can be reported to the upgrade manager and to other nodes that may track, store, or use the criticality state data of the nodes.
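A nonlimiting sketch of such a local calculator reporting to the upgrade manager follows; the manager's HTTP endpoint, report format, and reporting interval are assumptions for illustration.

# Sketch of a local criticality state calculator that periodically reports its
# inferred state to the upgrade manager over HTTP. The endpoint URL, payload
# format, and interval are hypothetical.
import json
import time
import urllib.request

UPGRADE_MANAGER_URL = "http://upgrade-manager.example/criticality"
REPORT_INTERVAL_SECONDS = 60

def report_criticality(node_id, criticality_state):
    payload = json.dumps({"node_id": node_id, "state": criticality_state}).encode()
    request = urllib.request.Request(
        UPGRADE_MANAGER_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(request)

def run_local_calculator(node_id, calculate_state):
    # calculate_state stands in for the downloaded executable policy code.
    while True:
        report_criticality(node_id, calculate_state())
        time.sleep(REPORT_INTERVAL_SECONDS)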
The upgrade manager 702 may receive an upgrade directive 704 from an administrator 703 such as a node administrator or data center administrator. The upgrade directive can provide a list of nodes to upgrade or can otherwise identify the nodes to upgrade. For example, the upgrade directive can specify nodes using parameters such as a tenant identifier, a node name, a node type, a node role, a smart NIC type, etc. For example, all of a tenant's nodes can be specified via the tenant identifier associated with the tenant. Another example is that the tenant's SQL servers can be specified using the tenant identifier and one or more node types. The upgrade manager may select a set of nodes and upgrade the ones in a noncritical state. The upgrade manager may schedule a later attempt to upgrade the remaining nodes, some of which will still not be upgraded because they are again in a critical state. Nodes that are not upgraded after a specific time, elapsed time, or number of upgrade attempts may be reported via a service request 705 to an administrator 703. Service requests may also be sent when another one of the methods, such as the “download upgrade onto node” method, fails. The upgrade manager may automatically send service requests 705 that can be received by administrators 703.
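One nonlimiting way the upgrade manager could process such a directive is sketched below; the retry limit and the helper callables are assumptions for illustration.

# Sketch of directive processing: upgrade noncritical nodes, retry critical
# ones, and raise a service request after a maximum number of attempts. The
# retry limit and helper callables are hypothetical.
MAX_ATTEMPTS = 3

def process_upgrade_directive(node_ids, get_criticality_state, perform_upgrade,
                              send_service_request):
    attempts = {node_id: 0 for node_id in node_ids}
    while attempts:
        for node_id in list(attempts):
            attempts[node_id] += 1
            if get_criticality_state(node_id) == "NonCritical":
                perform_upgrade(node_id)
                del attempts[node_id]
            elif attempts[node_id] >= MAX_ATTEMPTS:
                # The node stayed critical across all attempts; notify an administrator.
                send_service_request(node_id,
                                     "not upgraded after %d attempts" % MAX_ATTEMPTS)
                del attempts[node_id]
        # A production manager would wait for the next scheduled window here.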
The upgrade manager can also store the criticality policies data 1010. Criticality policies data 1010 can include criticality policy 1 data 1011, criticality policy 2 data, criticality policy 3 data, and data for additional criticality policies. Criticality policy data can include a critical state policy 1013 stored in association with a criticality state policy identifier 1012.
To determine a node's criticality state, a criticality state calculator finds the node's data in nodes data 1002. The node's data includes current values for the node's attributes and a criticality state policy identifier. The critical state policy associated with the criticality state policy identifier in the criticality policies data 1010 and the current values for the node's attributes can be used to determine the node's current criticality state. As such, the upgrade manager 1001 can determine the criticality states of a host 1020, a VM 1021, a NIC 1022, and other nodes.
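A nonlimiting sketch of that lookup-and-evaluate flow, with an assumed data layout, follows.

# Sketch of resolving a node's criticality state from the nodes data 1002 and
# the criticality policies data 1010; the dictionary layout is assumed.
def determine_criticality(node_id, nodes_data, criticality_policies):
    node_record = nodes_data[node_id]
    policy = criticality_policies[node_record["criticality_state_policy_id"]]
    # Here 'policy' is a callable produced from the stored critical state policy.
    return policy(node_record["constant_attributes"],
                  node_record["nonconstant_attributes"])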
The critical state database 1110 can include node data such as node 1 data 1111, node 2 data, and node M data. Node data, such as node 1 data 1111, can include a node identifier 1112 and critical state time series data 1113. Critical state time series data 1113 can include time series entries such as time series entry 1 1114, time series entry 2 1118, and time series entry P 1119. Time series entries can include a timestamp 1115 indicating a time, the node's criticality state at that time 1116, and other timestamped data 1117 such as values for attributes.
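A nonlimiting sketch of appending such time series entries is shown below; the entry fields follow the description above, and the in-memory layout is an assumption for illustration.

# Sketch of the critical state database keyed by node identifier; each entry
# carries a timestamp, the criticality state at that time, and other
# timestamped data such as attribute values.
import time

critical_state_database = {}   # node identifier -> list of time series entries

def record_criticality(node_id, criticality_state, other_data=None):
    entry = {
        "timestamp": time.time(),
        "criticality_state": criticality_state,
        "other_data": other_data or {},
    }
    critical_state_database.setdefault(node_id, []).append(entry)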
Aspects described above can be ultimately implemented in a network appliance that includes physical circuits that implement digital data processing, storage, and communications. The network appliance can include processing circuits, ROM, RAM, CAM, and at least one interface (interface(s)). The CPU cores described above are implemented in processing circuits and memory that is integrated into the same integrated circuit (IC) device as ASIC circuits and memory that are used to implement the programmable packet processing pipeline. For example, the CPU cores and ASIC circuits are fabricated on the same semiconductor substrate to form a System-on-Chip (SoC). The network appliance may be embodied as a single IC device (e.g., fabricated on a single substrate) or the network appliance may be embodied as a system that includes multiple IC devices connected by, for example, a printed circuit board (PCB). The interfaces may include network interfaces (e.g., Ethernet interfaces and/or InfiniBand interfaces) and/or PCI Express (PCIe) interfaces. The interfaces may also include other management and control interfaces such as I2C, general purpose IOs, USB, UART, SPI, and eMMC.
Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. Instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
It should also be noted that at least some of the operations for the methods described herein may be implemented using software instructions stored on a computer usable storage medium for execution by a computer. As an example, an embodiment of a computer program product includes a computer usable storage medium to store a computer readable program.
The computer-usable or computer-readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device). Examples of non-transitory computer-usable and computer-readable storage media include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD).
Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.