Apparatus and method for virtualizing a connection to a node in an industrial control and automation system

Information

  • Patent Grant
  • Patent Number
    10,536,526
  • Date Filed
    Thursday, December 18, 2014
  • Date Issued
    Tuesday, January 14, 2020
Abstract
An apparatus includes first hardware configured to communicate, via a first interface, over a supervisory control network with one or more components of an industrial process control and automation system. The apparatus also includes second hardware configured to communicate, via a second interface, with a computing platform that virtualizes at least one other component of the industrial process control and automation system. The apparatus further includes a third interface configured to transport information between the first and second hardware.
Description
TECHNICAL FIELD

This disclosure relates generally to industrial process control and automation systems. More specifically, this disclosure relates to an apparatus and method for virtualizing a connection to a node in an industrial control and automation system.


BACKGROUND

Industrial process control and automation systems are routinely used to automate large and complex industrial processes. Distributed process control and automation systems are routinely arranged in different “levels.” For example, controllers in lower levels are often used to receive measurements from sensors and generate control signals for actuators. Controllers in higher levels are often used to support higher-level functions, such as scheduling, planning, and optimization operations.


In a distributed control system, devices on certain levels are often connected to associated communication networks using special hardware interfaces. Because of such tight couplings with the hardware interfaces, these devices often cannot be virtualized, which may be needed or desired for older legacy devices or other types of devices.


SUMMARY

This disclosure provides an apparatus and method for virtualizing a connection to a node in an industrial control and automation system.


In a first embodiment, an apparatus includes first hardware configured to communicate, via a first interface, over a supervisory control network with one or more components of an industrial process control and automation system. The apparatus also includes second hardware configured to communicate, via a second interface, with a computing platform that virtualizes at least one other component of the industrial process control and automation system. The apparatus further includes a third interface configured to transport information between the first and second hardware.


In a second embodiment, a system includes a network interface device and a computing platform. The network interface device includes first hardware configured to communicate, via a first interface, over a supervisory control network with one or more components of an industrial process control and automation system. The network interface device also includes second hardware and a third interface configured to transport information between the first and second hardware. The computing platform is configured to virtualize at least one other component of the industrial process control and automation system. The second hardware is configured to communicate, via a second interface, with the computing platform.


In a third embodiment, a method includes using first hardware of a network interface device to communicate, via a first interface, over a supervisory control network with one or more components of an industrial process control and automation system. The method also includes using second hardware of the network interface device to communicate, via a second interface, with a computing platform that virtualizes at least one other component of the industrial process control and automation system. The method further includes transporting information between the first and second hardware using a third interface of the network interface device.


Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:



FIGS. 1A and 1B illustrate an example industrial process control and automation system according to this disclosure;



FIG. 2 illustrates an example network interface device according to this disclosure;



FIG. 3 illustrates example software architectures of components in an industrial control and automation system according to this disclosure;



FIGS. 4 through 7 illustrate example systems using the network interface device according to this disclosure; and



FIG. 8 illustrates an example method for virtualization of a connection to a node in an industrial control and automation system according to this disclosure.





DETAILED DESCRIPTION


FIGS. 1A through 8, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the invention may be implemented in any type of suitably arranged device or system.



FIGS. 1A and 1B illustrate an example industrial process control and automation system 100 according to this disclosure. As shown in FIG. 1A, the system 100 includes one or more controllers 102, which are often said to reside within or form a part of a “Level 1” controller network in a control and automation system. Each controller 102 is capable of controlling one or more characteristics in an industrial process system. A process system generally represents any system or portion thereof configured to process one or more products or other materials in some manner. For instance, the controllers 102 could receive measurements from one or more sensors and use the measurements to control one or more actuators.


The controllers 102 communicate via a network 103 with at least one gateway 104. The network 103 represents one or more communication paths that support interactions with the controllers 102 using an industrial communication protocol. The network 103 represents any suitable industrial process control network.


In this example, the controllers 102 communicate with higher-level devices and systems via the gateway(s) 104. Here, each gateway 104 facilitates communication between the network 103 and a supervisory network 106, such as a local control network (LCN). Each gateway 104 includes any suitable structure facilitating communication with one or more devices via a supervisory network. The supervisory network 106 represents a network facilitating communication among higher-level process control and automation devices and systems.


The system 100 could also include one or more advanced controllers 108 that communicate over an advanced control network 110. The advanced controllers 108 represent controllers that are newer, more technologically advanced, or more feature-rich than the controllers 102. Similarly, the control network 110 could represent a newer, more technologically advanced, or more feature-rich network for transporting control information, such as an Internet Protocol (IP)-based network. In particular embodiments, the advanced controllers 108 could represent C300 controllers from HONEYWELL INTERNATIONAL INC., and the control network 110 could represent a FAULT TOLERANT ETHERNET (FTE) or other redundant IP-based network.


Various other components in the system 100 support a wide range of process control and automation-related functions. For example, one or more operator consoles 112 can be used by operators to interact with the system 100. At least one supervisory controller 114 and at least one server 116 provide higher-level control in the system 100. For instance, the supervisory controller 114 and/or server 116 could perform more advanced planning or scheduling operations, execute higher-level control strategies, or perform other functions. At least one application processing platform 118 can be used to automate various procedures in the system 100. At least one historian 120 can be used to collect and store data associated with operation of the system 100 over time. Various ones of these components are often said to reside within or form a part of a “Level 2” supervisory network in a control and automation system.


Each operator console 112 includes any suitable structure for facilitating operator interactions, such as an EXPERION STATION TPS (EST) from HONEYWELL INTERNATIONAL INC. Each controller 114 includes any suitable structure for providing supervisory control, such as an APPLICATION CONTROL ENVIRONMENT-TPS (ACE-T) node from HONEYWELL INTERNATIONAL INC. Each server 116 represents any suitable computing device, such as an EXPERION SERVER TPS from HONEYWELL INTERNATIONAL INC. (or a redundant pair of such servers). Each application processing platform 118 includes any suitable structure for executing automated procedures, such as an APPLICATION MODULE (AM) from HONEYWELL INTERNATIONAL INC. Each historian 120 includes any suitable structure for storing data, such as a HISTORY MODULE (HM) from HONEYWELL INTERNATIONAL INC.


As described above, in a distributed control system, devices on certain levels are often connected to associated communication networks using special hardware interfaces. For example, operator consoles 112 and controllers 114 on “Level 2” of the system 100 are often coupled to an LCN or other supervisory network 106 using special hardware interfaces installed on their computer platforms. Because of this, these devices often cannot be virtualized. An example of this problem occurs with the attempted virtualization of HONEYWELL EXPERION TPS nodes, such as EST, ESVT, ACET, and EAPP nodes. These are WINDOWS-based nodes that are interfaced to an LCN via special PCI hardware interfaces known as LCNP4/LCNP4e interfaces.


This disclosure describes techniques that decouple hardware/platform dependency and enable the virtualization of legacy or other devices. This is accomplished by moving the LCN or other network interface out of a legacy or other device and installing the network interface as an independent “network interface device.” A mechanism is also provided to establish communications between the legacy or other device and the network interface device, such as via an Ethernet network. In this way, communications over an LCN or other supervisory network 106 are supported using the network interface device, but this functionality is decoupled from the legacy or other control device, allowing the legacy or other control device to be virtualized.


An example of this is shown in FIG. 1B, where a computing platform 150 is used in conjunction with a network interface device 152. The computing platform 150 represents any suitable computing device that can be used to virtualize at least one legacy or other device in an industrial process control and automation system. For example, the computing platform 150 could include one or more processing devices 154, one or more memories 156, and one or more network interfaces 158. Each processing device 154 includes any suitable processing or computing device, such as a microprocessor, microcontroller, digital signal processor, field programmable gate array (FPGA), application specific integrated circuit (ASIC), or discrete logic devices. Each memory 156 includes any suitable storage and retrieval device, such as a random access memory (RAM), Flash or other read-only memory (ROM), magnetic storage device, solid-state storage device, optical storage device, or other storage and retrieval device. Each interface 158 includes any suitable structure facilitating communication over a connection or network, such as a wired interface (like an Ethernet interface) or a wireless interface (like a radio frequency transceiver). In particular embodiments, the computing platform 150 could be used to virtualize any of the “Level 2” devices shown in FIG. 1A.


The network interface device 152 facilitates communication by the computing platform 150 over the LCN or other supervisory network 106, but the network interface device 152 communicates with the computing platform 150 over the advanced control network 110 (such as an Ethernet or FTE network). In some embodiments, the network interface device 152 includes or supports hardware 160, interface 162, hardware 164, interface 166, and interface 168. The hardware 160 is provided with the capability to host software performing the functionality of a “Level 2” device or other device. The interface 162 provides an interface for the hardware 160 to connect to and communicate on the LCN or other supervisory network 106. The hardware 164 acts as a bridge interface between the hardware 160 and the computing platform 150. The interface 166 provides an interface for the hardware 164 to connect to and communicate with the computing platform 150, such as via an Ethernet or FTE network. The interface 168 establishes communications between the hardware 160 and the hardware 164. The components 160-168 can be physically separate or operate on the same physical hardware but as distinct logical entities. In this scheme, the computing platform's applications access the data from the hardware 160 via the hardware 164.
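
For illustration, the logical split among the components 160-168 can be sketched in software. The following C fragment is a minimal model only: the type and function names (lcn_hw, bridge_hw, interface_168_transfer) are hypothetical, and the real interfaces 162, 166, and 168 are hardware paths rather than function calls.

    /* Minimal model of the logical split inside the network interface
     * device 152. All names are hypothetical; interfaces 162, 166, and
     * 168 are actually hardware (LCN I/O, FTE I/O, and a backplane bus). */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define BUF_SIZE 256

    /* Hardware 160: hosts the supervisory device personality and owns the
     * LCN-facing interface 162. */
    struct lcn_hw {
        uint8_t lcn_buf[BUF_SIZE]; /* data from the supervisory network 106 */
        size_t  lcn_len;
    };

    /* Hardware 164: bridges hardware 160 to the computing platform 150 and
     * owns the FTE-facing interface 166. */
    struct bridge_hw {
        uint8_t fte_buf[BUF_SIZE]; /* data bound for the computing platform 150 */
        size_t  fte_len;
    };

    /* Interface 168: transports information between the two hardware blocks.
     * Reduced here to a memory copy; in the device it is a module bus. */
    static void interface_168_transfer(struct bridge_hw *dst, const struct lcn_hw *src)
    {
        memcpy(dst->fte_buf, src->lcn_buf, src->lcn_len);
        dst->fte_len = src->lcn_len;
    }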


The hardware 160 includes any suitable structure supporting “Level 2” or other supervisory functionality in an industrial process control and automation system. The hardware 164 includes any suitable structure supporting an interface between hardware providing supervisory functionality and a computing platform emulating a supervisory device. In some embodiments, each of the hardware 160, 164 is implemented using a printed circuit board (PCB). Each interface 162, 166, 168 includes any suitable structure for providing an interface to a network or device. In some embodiments, the interface 162 denotes an LCN interface, the interface 166 denotes an FTE interface, and the interface 168 denotes a communication bus. Additional details regarding the network interface device 152 are provided below.


In conventional systems, the virtualization of hardware can be prevented by the use of custom add-in cards, such as the LCNP4 card that performs the personality function of a TPS node. Moreover, the lifetime of a computing platform's support for internal interfaces (such as PCI bus physical connections) is typically shorter than the intended lifetime of an industrial control or automation device. Using the network interface device 152, which supports communications over a lower-level network, allows separation of the computing platform 150 from the network interface device 152. This approach therefore provides a solution that enables virtualization of one or more nodes by decoupling an operating system from the bus and LCN/supervisory network connections used to communicate with lower-level nodes while reducing obsolescence issues. This is accomplished by providing flexibility in terms of the structure of the computing platform 150 while maintaining the interface to the LCN or other supervisory network 106 via the network interface device 152.


Among other things, the approach described here can support any combination of the following features. First, this approach can increase the lifetime of support for an EXPERION or other software release by avoiding PC hardware obsolescence that requires OS updates. In the past, for example, moving from an obsolete server to a newer server with a newer peripheral bus often required extensive testing to ensure the newer server was compatible with existing control technology. The approach described here allows the computing platform 150 to be replaced without requiring extensive (or any) modifications to the network interface device 152. Second, this approach can increase system reliability by reducing the number of platforms through virtualization. With this approach, any number of higher-level components can be virtualized on one or more computing platforms 150, and the network interface device 152 provides a remote connection external to the computing platform hardware. Third, this approach can decrease support costs for replacing PC hardware when platforms are obsolete and replacements require software upgrades. For instance, major updates of a WINDOWS operating system may no longer require updates to or replacements of various “Level 2” devices since those “Level 2” devices can be virtualized. Fourth, this approach can decrease sensitivity to platform coupling by helping to virtualize the platform, thus making the introduction of newer technology simpler. Finally, this approach can decrease lifetime costs of a control system by extending the operational lifespan of and simplifying support for existing products.


Although FIGS. 1A and 1B illustrate one example of an industrial process control and automation system 100, various changes may be made to FIGS. 1A and 1B. For example, a control system could include any number of each component in any suitable arrangement. Components could be added, omitted, combined, or placed in any other suitable configuration according to particular needs. Also, particular functions have been described as being performed by particular components of the system 100. This is for illustration only. In general, process control and automation systems are highly configurable and can be configured in any suitable manner according to particular needs. In addition, FIGS. 1A and 1B illustrate an example environment in which a network interface device 152 can be used. The network interface device 152 can be used in any other suitable device or system.



FIG. 2 illustrates an example network interface device 152 according to this disclosure. The example implementation of the network interface device 152 shown here could be used to enable the virtualization of HONEYWELL's EST, ESVT, ACET, and EAPP nodes connected to a legacy TDC 3000 LCN. However, the network interface device 152 could be used to support any other suitable functionality.


In this specific example, the hardware 160 is implemented using a K4LCN hardware card from HONEYWELL INTERNATIONAL INC. The K4LCN hardware card can be mounted on a standard LCN chassis and hosts the LCN software personality of an APPLICATION MODULE (AM) or a UNIVERSAL STATION (US) with special load module components as per the network configuration. Also, in this specific example, the interface 162 is implemented using an LCN input/output (I/O) board, which could be a standard interface board through which the K4LCN card communicates over an LCN or other supervisory network 106. Further, in this specific example, the hardware 164 is implemented using a hardware card, which can be referred to as an ENHANCED TPS NODE INTERFACE (ETNI). The ETNI hardware card can be mounted on the standard LCN chassis along with the K4LCN hardware card. Moreover, in this specific example, the interface 166 is implemented using an FTE I/O board, which is an interface board through which the ETNI card communicates with the computing platform 150 over an FTE or other advanced control network 110. In addition, in this specific example, the interface 168 is implemented using a backplane module bus of the LCN chassis and is the interface through which the ETNI card communicates with the K4LCN card.


The overall structure shown in FIG. 2 may be referred to as an ENHANCED TPS NODE (ETN). The computing platform 150 and the hardware 164 can communicate over FTE using any standard or proprietary protocol, such as HONEYWELL's ETN communication protocol.


In some embodiments, the network interface device 152 could support different mechanisms for accessing the computing platform 150. For example, the implementation of the network interface device 152 with a K4LCN hardware card could include support for different hardware slots, such as the following:

    • WSI2: To interface the K4LCN card with WINDOWS X-LAYER communication services running on the computing platform 150;
    • PDG: To interface the K4LCN card with the NATIVE WINDOW DISPLAY applications running on the computing platform 150; or
    • SCSI: To enable the K4LCN card to access EMULATED DISK files residing on the computing platform 150.


      Though the ETNI can represent a single card occupying a single physical slot on an LCN chassis, an FPGA implementation of the ETNI card could act as three devices as per configuration needs on the module bus of the LCN chassis, namely a WSI2 (Work Station Interface), a PDG (Peripheral Display Generator), and a SCSI (Smart Computer Systems Interface). These three different hardware interfaces can be implemented on a single ETNI board. The ETNI card can also be implemented with a mechanism to enable or disable any of these slots, such as automatically during start-up, which makes it a common interface that can be used for EAPP, ESVT, EST, and ACET nodes without any hardware/firmware changes. The slots used to support the ETNI for different types of devices can include the following:
    • EST: PDG, SCSI and WSI2
    • EAPP, ESVT and ACET: Only WSI2


      These slots can be enabled or disabled dynamically based on the configuration from the computing platform 150.
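
A minimal sketch of this start-up behavior appears below, assuming a simple table of node types. The names (etni_slots, configure_slots) are hypothetical and only mirror the slot assignments listed above.

    /* Illustrative sketch only: one possible way the ETNI's three emulated
     * slots could be enabled per node type at start-up. Names are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    enum node_type { NODE_EST, NODE_EAPP, NODE_ESVT, NODE_ACET };

    struct etni_slots {
        bool wsi2; /* Work Station Interface           */
        bool pdg;  /* Peripheral Display Generator     */
        bool scsi; /* Smart Computer Systems Interface */
    };

    /* Enable only the slots a given node type requires: EST uses all three,
     * while EAPP, ESVT, and ACET use only WSI2. */
    static struct etni_slots configure_slots(enum node_type type)
    {
        struct etni_slots s = { .wsi2 = true, .pdg = false, .scsi = false };
        if (type == NODE_EST) {
            s.pdg = true;
            s.scsi = true;
        }
        return s;
    }

    int main(void)
    {
        struct etni_slots s = configure_slots(NODE_EST);
        printf("WSI2=%d PDG=%d SCSI=%d\n", s.wsi2, s.pdg, s.scsi);
        return 0;
    }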


Although FIG. 2 illustrates one example of a network interface device 152, various changes may be made to FIG. 2. For example, the types of networks and circuit cards shown in FIG. 2 are for illustration only. The use of particular protocols and interfaces (such as LCN, K4LCN, ETNI, and FTE interfaces) is for illustration only. Any other or additional interfaces can be used in or with one or more network interface devices 152 without departing from the scope of this disclosure. For instance, the supervisory network 106 could be implemented in the future as an IP/Ethernet-based network (such as an FTE-based LCN), and the network interface device 152 could support a suitable interface and hardware for that type of LCN.



FIG. 3 illustrates example software architectures of components in an industrial control and automation system according to this disclosure. More specifically, FIG. 3 illustrates an example high-level software architecture 300 of a computing platform 150 that virtualizes one or more legacy or other devices in a control and automation system. FIG. 3 also illustrates an example high-level software architecture 350 of a network interface device 152.


As shown in FIG. 3, the software architecture 300 includes a kernel space 302 and a user space 304. The kernel space 302 denotes the logical space within the software architecture 300 where the operating system's kernel executes for the computing platform 150. The user space 304 denotes the logical space within the software architecture 300 where other applications, such as applications 306-308, execute in the computing platform 150. Any suitable applications can be executed within the user space 304, such as a Local Control Network Processor (LCNP) status application or a NATIVE WINDOW DISPLAY application.


An emulator service 310 is also executed within the user space 304. The emulator service 310 operates to emulate the physical interfaces that virtualized devices expect to be present at the computing platform 150 (but that are actually implemented remotely at the network interface device 152). For example, the emulator service 310 could emulate the circuit cards that are expected in different virtualized slots. As a particular example, slot 2 could be a WSI card, slot 3 could be a SCSI card, and slot 4 could be an Enhanced Peripheral Display Generator (EPDG) card.
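
A rough model of this slot mapping is shown below; the enumeration and table names are hypothetical, and the slot assignments simply mirror the example in the preceding paragraph.

    /* Hypothetical table mapping virtualized slot numbers to emulated cards. */
    enum card_type { CARD_NONE, CARD_WSI, CARD_SCSI, CARD_EPDG };

    struct emulated_slot {
        int            slot_number;
        enum card_type card;
    };

    static const struct emulated_slot slot_map[] = {
        { 2, CARD_WSI  },  /* slot 2: WSI card  */
        { 3, CARD_SCSI },  /* slot 3: SCSI card */
        { 4, CARD_EPDG },  /* slot 4: EPDG card */
    };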


The emulator service 310 also supports the use of one or more sockets 312, which are used to communicate via an FTE module 314 in the kernel space 302. The FTE module 314 supports various functions allowing redundant communications over the advanced control network 110, such as heartbeat signaling and processing. Although not shown, the kernel space 302 could support various other features, such as emulated TOTAL DISTRIBUTED CONTROL (TDC) memory, slot registers, and LCNP4e registers.
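
As a loose illustration of socket-based heartbeat signaling of this general kind, the following C fragment sends a single UDP message; the address, port, and payload are hypothetical placeholders and do not represent the actual FTE protocol.

    /* Hypothetical UDP heartbeat of the general kind used for link
     * supervision; this is not the FTE protocol itself. */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int send_heartbeat(const char *peer_ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        const char msg[] = "HEARTBEAT"; /* hypothetical payload */
        ssize_t n = sendto(fd, msg, sizeof(msg), 0,
                           (struct sockaddr *)&peer, sizeof(peer));
        close(fd);
        return (n < 0) ? -1 : 0;
    }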


The software architecture 350 includes a TDC personality 352 in the hardware 160. The TDC personality 352 denotes one or more software routines that provide the personality function of a TPS node. From the perspective of other devices connected to the LCN or other supervisory network 106, the TDC personality 352 causes the network interface device 152 to appear as a standard supervisory device, even though the actual functionality of the supervisory device is virtualized and executed by the computing platform 150. The TDC personality 352 communicates with slot registers 354 in the hardware 164, which could be implemented using an FPGA or other suitable structure. In particular embodiments, the slot registers 354 could include three standard slot register sets that are expected in any standard TDC device.
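
For illustration, the three standard slot register sets could be modeled as follows; the field names and widths are hypothetical, since the actual register layout is hardware-specific.

    /* Hypothetical model of the slot registers 354 as three standard
     * register sets, one per emulated device (WSI2, PDG, and SCSI). */
    #include <stdint.h>

    struct slot_register_set {
        volatile uint16_t control; /* command/handshake word        */
        volatile uint16_t status;  /* personality-visible state     */
        volatile uint32_t mailbox; /* pointer/index into TDC memory */
    };

    struct slot_registers {
        struct slot_register_set set[3];
    };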


Firmware 356 executed by the hardware 164 includes slot emulation tasks 358 and an FTE module 360. Among other things, the slot emulation tasks 358 can emulate the operations needed to interact with the TDC personality 352. For example, the WSI subsystem of a conventional TDC personality 352 sets up WSI slot registers with mailboxes and semaphores and manages a linked list of TDC memory data buffers. The WSI subsystem also watches the mailboxes for data buffers going to the TDC personality 352 and places data buffers coming from the TDC personality 352 into the mailboxes. The slot emulation tasks 358 could emulate these functions to support data transfers to and from the computing platform 150 via the network 110. The FTE module 360 supports various functions allowing redundant communications over the advanced control network 110, such as heartbeat signaling and processing.
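
A simplified model of this mailbox handling is sketched below; the structures (tdc_buffer, mailbox) and functions are hypothetical stand-ins for the hardware-specific WSI slot registers, semaphores, and TDC memory layout.

    /* Simplified, hypothetical model of the mailbox handling performed
     * by the slot emulation tasks 358. */
    #include <stddef.h>

    struct tdc_buffer {
        struct tdc_buffer *next; /* linked list of TDC memory data buffers */
        unsigned char      data[64];
        size_t             len;
    };

    struct mailbox {
        struct tdc_buffer *to_personality;   /* buffers headed to the TDC personality 352  */
        struct tdc_buffer *from_personality; /* buffers produced by the TDC personality 352 */
    };

    /* Poll the outbound mailbox: detach any buffer the personality has
     * posted so it can be forwarded to the computing platform 150 over
     * interface 166. */
    static struct tdc_buffer *poll_from_personality(struct mailbox *mb)
    {
        struct tdc_buffer *buf = mb->from_personality;
        if (buf != NULL)
            mb->from_personality = buf->next;
        return buf;
    }

    /* Post a buffer received from the computing platform 150 into the
     * inbound mailbox for the TDC personality to consume. */
    static void post_to_personality(struct mailbox *mb, struct tdc_buffer *buf)
    {
        buf->next = mb->to_personality;
        mb->to_personality = buf;
    }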


Although FIG. 3 illustrates examples of software architectures 300, 350 of components in an industrial control and automation system, various changes may be made to FIG. 3. For example, the particular mechanisms shown here for supporting interactions between the computing platform 150 and the network interface device 152 are examples only. Also, the types of networks and circuit cards shown in FIG. 3 are for illustration only.



FIGS. 4 through 7 illustrate example systems using network interface devices 152 according to this disclosure. As shown in FIG. 4, the network interface device 152 is coupled to an LCN or other supervisory network 106. The network interface device 152 is also coupled, through a network switch 402, to an FTE or other advanced control network 110. The network switch 402 supports data transport between the network interface device 152 and the computing platform 150 in accordance with an advanced control network protocol.


As shown in FIG. 5, the network interface device 152 is implemented using a chassis 502, which includes various dual node card file assemblies (CFAs) 504. The CFAs 504 can receive various circuit boards, including the hardware cards implementing the hardware 160, the interface 162, the hardware 164, and the interface 166. A backplane of the chassis 502 could be used as the interface 168.


The chassis 502 is coupled to two network switches 506a-506b, which are coupled to the computing platform 150. In this example, the computing platform 150 supports a virtualization environment in which various virtual machines (VMs) 508 can be executed. The virtual machines 508 can virtualize a variety of industrial control and automation applications. In this example, the virtual machines 508 virtualize HONEYWELL EXPERION PKS server, ACE/APP, PROCESS HISTORY DATABASE (PHD), and domain controller applications. A management application 510 supports various management functions for controlling the virtualization environment. In this particular implementation, the virtualization environment is supported using VMWARE ESXI SERVER software, and the management application 510 denotes the VMWARE VSPHERE MANAGEMENT application. Note, however, that any suitable virtualization software and management application could be used. Also note that the computing platform 150 could be implemented using a “bare metal” approach.


As shown in FIG. 6, the network interface device 152 is coupled to a redundant pair of “Level 2” network switches 602a-602b, which support an FTE network as the advanced control network 110. The switches 602a-602b are coupled to “Level 2” devices such as EXPERION TPS nodes 604. The switches 602a-602b are also coupled to a redundant pair of “Level 1” network switches 606a-606b and a redundant pair of firewalls 608a-608b, such as HONEYWELL CONTROL FIREWALL (CF9) devices. Various devices 610 are coupled to the “Level 1” network switches 606a-606b and the firewalls 608a-608b, and devices 610-612 are coupled to the LCN or other supervisory network 106.


As shown in FIG. 7, one or more network interface devices 152 are mounted to an operator console 702 and coupled to a switch 704 over an FTE or other advanced control network 110. The computing platform 150 is implemented here as a computing tower, although any other suitable form could be used. A thin client 706 denotes a platform that provides remote access to the keyboard, mouse, custom keyboard, and monitor mounted in the console furniture of the operator console 702.


Although FIGS. 4 through 7 illustrate examples of systems using network interface devices 152, various changes may be made to FIGS. 4 through 7. For example, these figures are meant to illustrate general examples of the types of ways in which the network interface device 152 could be implemented and used. However, the network interface device 152 could be implemented and used in any other suitable manner.



FIG. 8 illustrates an example method 800 for virtualization of a connection to a node in an industrial control and automation system according to this disclosure. As shown in FIG. 8, a network interface device is coupled to a “Level 2” network and a supervisory network at step 802. This could include, for example, coupling the network interface device 152 to an FTE or other advanced control network 110 and an LCN or other supervisory network 106.


A computing platform is coupled to the “Level 2” network at step 804. This could include, for example, coupling a computing platform 150 to the FTE or other advanced control network 110. One or more virtual machines are executed on the computing platform at step 806. This could include, for example, executing virtual machines that virtualize various industrial process control and automation Level 2 devices on the computing platform 150.


The network interface device bridges the virtualized components executing on the computing platform and lower-level devices accessible via the LCN or other supervisory network. For example, first data is received from one or more virtual machines at the network interface device at step 808 and provided over the supervisory network by the network interface device at step 810. This could include, for example, the network interface device 152 receiving data from the computing platform 150 over the FTE or other advanced control network 110 (via the hardware 164 and interface 166) and providing the first data over the LCN or other supervisory network 106 (via the hardware 160 and interface 162). Similarly, second data is received from the supervisory network at the network interface device at step 812 and provided to one or more virtual machines over the Level 2 network at step 814. This could include, for example, the network interface device 152 receiving data from lower-level devices over the LCN or other supervisory network 106 (via the hardware 160 and interface 162) and providing the second data to the computing platform 150 over the FTE or other advanced control network 110 (via the hardware 164 and interface 166). The interface 168 supports data transport between the hardware 160 and the hardware 164.
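
Steps 808 through 814 can be summarized as a simple bidirectional bridging loop. The sketch below is illustrative only: fte_recv, fte_send, lcn_recv, and lcn_send are hypothetical stubs standing in for the hardware paths through the interfaces 166 and 162.

    #include <stddef.h>

    /* Hypothetical stubs standing in for the hardware paths through the
     * FTE interface 166 and the LCN interface 162; placeholders only. */
    static size_t fte_recv(unsigned char *buf, size_t cap) { (void)buf; (void)cap; return 0; }
    static void   fte_send(const unsigned char *buf, size_t len) { (void)buf; (void)len; }
    static size_t lcn_recv(unsigned char *buf, size_t cap) { (void)buf; (void)cap; return 0; }
    static void   lcn_send(const unsigned char *buf, size_t len) { (void)buf; (void)len; }

    /* One pass of the bridge: steps 808-810 move first data from the
     * computing platform 150 down to the supervisory network 106, and
     * steps 812-814 move second data in the opposite direction. */
    void bridge_once(void)
    {
        unsigned char buf[512];
        size_t n;

        if ((n = fte_recv(buf, sizeof(buf))) > 0) /* steps 808 and 810 */
            lcn_send(buf, n);

        if ((n = lcn_recv(buf, sizeof(buf))) > 0) /* steps 812 and 814 */
            fte_send(buf, n);
    }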


In this way, the actual interface to an LCN or other supervisory network 106 is supported by the network interface device 152, which is separated from the computing platform 150 that executes virtualized control components. The virtualized control components can therefore be added, modified, or removed more easily since the control components are not tied to special hardware interfaces.


Although FIG. 8 illustrates one example of a method 800 for virtualization of a connection to a node in an industrial control and automation system, various changes may be made to FIG. 8. For example, while shown as a series of steps, various steps in FIG. 8 could overlap, occur in parallel, occur in a different order, or occur multiple times.


In some embodiments, various functions described above are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.


It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code). The term “communicate,” as well as derivatives thereof, encompasses both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.


While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.

Claims
  • 1. An apparatus comprising: at least one first hardware card configured to communicate, via a first hardware interface, over a local control network (LCN) with one or more components of an industrial process control and automation system, wherein the at least one first hardware card comprises an emulated personality of a supervisory device that would communicate over the LCN; at least one second hardware card configured to communicate, via a second hardware interface, over a redundant Ethernet network with a computing platform that virtualizes at least one other component of the industrial process control and automation system, wherein the at least one second hardware card comprises slot registers configured to store data being transferred to and from the emulated personality; and a third interface configured to transport information between the at least one first hardware card and the at least one second hardware card.
  • 2. The apparatus of claim 1, wherein the apparatus is configured to provide a remote connection into the LCN for the at least one other component virtualized by the computing platform.
  • 3. The apparatus of claim 1, wherein the at least one second hardware card is configured to perform one or more slot emulation tasks to support data transfer between the slot registers and the computing platform.
  • 4. The apparatus of claim 1, wherein the third interface comprises a backplane of an LCN chassis configured to receive the at least one first hardware card and the at least one second hardware card.
  • 5. The apparatus of claim 1, wherein: the at least one second hardware card is configured to occupy a single slot of a chassis; and the at least one second hardware card is configured to support multiple hardware interfaces and is configured to enable or disable each of the multiple hardware interfaces.
  • 6. A method comprising: using at least one first hardware card of a network interface device to communicate, via a first hardware interface, over a local control network (LCN) with one or more components of an industrial process control and automation system; emulating a personality of a supervisory device that would communicate over the LCN using the at least one first hardware card; using at least one second hardware card of the network interface device to communicate, via a second hardware interface, over a redundant Ethernet network with a computing platform that virtualizes at least one other component of the industrial process control and automation system; transporting information between the at least one first hardware card and the at least one second hardware card using a third interface of the network interface device; and storing data being transferred to and from the emulated personality using slot registers of the at least one second hardware card.
  • 7. The method of claim 6, wherein the network interface device provides a remote connection into the LCN for the at least one other component virtualized by the computing platform.
  • 8. The method of claim 6, further comprising: performing one or more slot emulation tasks to support data transfer between the slot registers and the computing platform using the at least one second hardware card.
  • 9. The method of claim 6, wherein the third interface comprises a backplane of an LCN chassis configured to receive the at least one first hardware card and the at least one second hardware card.
  • 10. The method of claim 6, wherein: the at least one second hardware card occupies a single slot of a chassis; the at least one second hardware card is configured to support multiple hardware interfaces; and the method further comprises enabling or disabling each of the multiple hardware interfaces.
  • 11. The apparatus of claim 1, wherein the first hardware interface comprises an LCN input/output (I/O) board.
CROSS-REFERENCE TO RELATED APPLICATION AND PRIORITY CLAIM

This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/016,938 filed on Jun. 25, 2014. This provisional patent application is hereby incorporated by reference in its entirety.

Related Publications (1)
Number Date Country
20150378328 A1 Dec 2015 US
Provisional Applications (1)
Number Date Country
62016938 Jun 2014 US