Method for communicating information bundled in digital message packets

Information

  • Patent Grant
  • Patent Number
    7,068,612
  • Date Filed
    Tuesday, February 25, 2003
  • Date Issued
    Tuesday, June 27, 2006
Abstract
A method for communicating information bundled in digital message packets via a digital network communication system is provided. The digital network communication system includes a sample source, and each packet includes a header and a communication payload area. The method includes sampling the source at a first sample rate, selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth, determining a packet rate based on a plurality of algorithmic latency requirements, and transmitting the digital message packet containing decimated data on the digital network.
Description
BACKGROUND OF THE INVENTION

This invention relates generally to electrical switchgear and, more particularly, to a method and apparatus for facilitating optimized communications between power distribution system components.


In an industrial power distribution system, power generated by a power generation company may be supplied to an industrial or commercial facility wherein the power may be distributed throughout the industrial or commercial facility to various equipment such as, for example, motors, welding machinery, computers, heaters, lighting, and other electrical equipment. At least some known power distribution systems include switchgear which facilitates dividing the power into branch circuits which supply power to various portions of the industrial facility. Circuit breakers are provided in each branch circuit to facilitate protecting equipment within the branch circuit. Additionally, circuit breakers in each branch circuit can facilitate minimizing equipment failures since specific loads may be energized or de-energized without affecting other loads, thus creating increased efficiencies, and reduced operating and manufacturing costs. Similar switchgear may also be used within an electric utility transmission system and a plurality of distribution substations, although the switching operations used may be more complex.


Switchgear typically include multiple devices, other than the power distribution system components, to facilitate providing protection, monitoring, and control of the power distribution system components. For example, at least some known breakers include a plurality of shunt trip circuits, under-voltage relays, trip units, and a plurality of auxiliary switches that close the breaker in the event of an undesired interruption or fluctuation in the power supplied to the power distribution components. Additionally, at least one known power distribution system also includes a monitor device that monitors a performance of the power distribution system, a control device that controls an operation of the power distribution system, and a protection device that initiates a protective response when the protection device is activated.


In at least some other known power distribution systems, a monitor and control system operates independently of the protective system. For example, a protective device may de-energize a portion of the power distribution system based on its own predetermined operating limits, without the monitoring devices recording the event. The failure of the monitoring system to record the system shutdown may mislead an operator to believe that an over-current condition has not occurred within the power distribution system, and as such, a proper corrective action may not be initiated by the operator. Additionally, a protective device, i.e. a circuit breaker, may open because of an over-current condition in the power distribution system, but the control system may interpret the over-current condition as a loss of power from the power source, rather than a fault condition. As such, the control logic may undesirably attempt to connect the faulted circuit to an alternate source, thereby restoring the over-current condition. In addition to the potential increase in operational defects which may occur using such devices, the use of multiple devices and interconnecting wiring associated with the devices may cause an increase in equipment size, an increase in the complexity of wiring the devices, and/or an increase in a quantity of devices installed.


Central control of power distribution systems may overcome the above-mentioned shortcomings of known power distribution systems. However, central control also presents new problems that may need solutions before it becomes a viable control approach. For example, communications between the central controller and controlled devices may occur over long distances, redundancy requirements may make communications slow due to additional devices communicating in parallel, and separate communication channels may need to be cross-checked for accuracy.


Central control systems may receive electrical inputs from the controlled process through remote input/output (I/O) modules communicating with the central control system over a high-speed communication network. Outputs generated by the industrial controller are likewise transmitted over the network to the I/O circuits to be communicated to the controlled equipment. The network provides a simplified means of communicating signals over an industrial environment without multiple point-to-point wires and the attendant cost of installation.


The central control system may use real-time control to achieve latency goals. Effective real-time control is provided by executing the control program repeatedly in high-speed “scan” cycles. During each scan cycle, each remote node samples inputs at a selectable frequency and output messages are computed. The output messages are transmitted to the central control location, where these data samples are processed to provide centralized control of the system. A relatively large number of samples are taken at each remote node, such as, for example, 128 samples per second, and are packaged to share space in a message. Together with the high-speed communications network, this ensures that the central control system responds rapidly to changes in the inputs and generates outputs quickly. All information is handled centrally by a well-characterized processor and communicated over a high-speed communication network to yield predictable delay times and low latency, which is critical to deterministic control. However, the high data transmission rate and the large number of remote nodes attempting to communicate create network traffic congestion that may adversely affect power distribution system latency and the ability of the power distribution system to operate efficiently.


BRIEF DESCRIPTION OF THE INVENTION

A method for communicating information bundled in digital message packets via a digital network communication system is provided. The digital network communication system includes a sample source, and each packet includes a header and a communication payload area. The method includes sampling the source at a first sample rate, selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth, determining a packet rate based on a plurality of algorithmic latency requirements, and transmitting the digital message packet containing decimated data on the digital network.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an exemplary schematic illustration of a power distribution system;



FIG. 2 is an exemplary schematic illustration of a node power system;



FIG. 3 is an exemplary schematic illustration of a central control processing unit that may be used with the power distribution system shown in FIG. 1;



FIG. 4 is an exemplary schematic illustration of a node electronic unit that may be used with the power distribution system shown in FIG. 1;



FIG. 5 is an exemplary schematic illustration of a circuit breaker that may be used with the power distribution system shown in FIG. 1;



FIG. 6 is a simplified block diagram of an exemplary structure of a digital message packet; and



FIG. 7 is an exemplary method 700 for communicating information bundled in digital message packets via a digital network communication system.





DETAILED DESCRIPTION OF THE INVENTION


FIG. 1 is an exemplary schematic illustration of a power distribution system 10, used by an industrial facility for example. In an exemplary embodiment, system 10 includes at least one main feed system 12, a power distribution bus 14, a plurality of power circuit switches or interrupters, also referred to herein as circuit breakers (CB) 16, and at least one load 18, such as, but not limited to, motors, welding machinery, computers, heaters, lighting, and/or other electrical equipment.


In use, power is supplied to a main feed system 12, i.e. a switchboard for example, from a source (not shown) such as, but not limited to, a steam turbine, powered from, for example, a nuclear reactor or a coal fired boiler, a gas turbine generator, and a diesel generator. Power supplied to main feed system 12 is divided into a plurality of branch circuits using circuit breakers 16 which supply power to various loads 18 in the industrial facility. In addition, circuit breakers 16 are provided in each branch circuit to facilitate protecting equipment, i.e. loads 18, connected within the respective branch circuit. Additionally, circuit breakers 16 facilitate minimizing equipment failures since specific loads 18 may be energized or de-energized without affecting other loads 18, thus creating increased efficiencies, and reduced operating and manufacturing costs.


Power distribution system 10 includes a circuit breaker control protection system 19 that includes a plurality of node electronics units 20 that are each electrically coupled to a digital network 22. Circuit breaker control protection system 19 also includes at least one central control processing unit (CCPU) 24 that is electrically coupled to digital network 22 via a switch 23 such as, but not limited to, an Ethernet switch 23. In use, each respective node electronics unit 20 is electrically coupled to a respective circuit breaker 16, such that CCPU 24 is electrically coupled to each circuit breaker 16 through digital network 22 and through an associated node electronics unit 20.


In the exemplary embodiment, digital network 22 is a Fast Ethernet protocol network. In another embodiment, digital network 22 includes, for example, at least one of a local area network (LAN) or a wide area network (WAN), dial-in connections, cable modems, and special high-speed ISDN lines. Digital network 22 also includes any device capable of interconnecting to the Internet, including a web-based phone, personal digital assistant (PDA), or other web-based connectable equipment. It should be appreciated that digital network 22 is upgradeable based on future revisions to IEEE 802.3(u) and its successors. It should further be appreciated that digital network 22 is configurable, for example, in a star topology.


In one embodiment, CCPU 24 is a computer and includes a device 26, for example, a floppy disk drive or CD-ROM drive, to facilitate reading instructions and/or data from a computer-readable medium 28, such as a floppy disk or CD-ROM. In another embodiment, CCPU 24 executes instructions stored in firmware (not shown). CCPU 24 is programmed to perform functions described herein, but other programmable circuits can likewise be programmed. Accordingly, as used herein, the term computer is not limited to just those integrated circuits referred to in the art as computers, but broadly refers to computers, processors, microcontrollers, microcomputers, programmable logic controllers, application specific integrated circuits, and other programmable circuits. Additionally, although described in a power distribution setting, it is contemplated that the benefits of the invention accrue to all electrical distribution systems including industrial systems such as, for example, but not limited to, an electrical distribution system installed in an office building.



FIG. 2 is an exemplary schematic illustration of a node power distribution system 29 that can be used with power distribution system 10 (shown in FIG. 1) and more specifically, with circuit breaker control protection system 19 (shown in FIG. 1). Node power distribution system 29 includes a power source 30 that is electrically coupled to node electronics units 20 through a node power distribution bus 32. In an exemplary embodiment, power source 30 is an uninterruptible power supply (UPS). In one embodiment, power source 30 receives power from power distribution system 10 and then distributes this power to node electronics units 20 through node power distribution bus 32. In an alternative embodiment, power is not supplied to power source 30, but rather, power source 30 supplies power to node electronics units 20 using an internal power supply, such as, but not limited to, a plurality of batteries (not shown). In another alternate embodiment, node electronic units 20 are powered by secondary current available from current sensor 82 and/or voltage sensor 84. In this embodiment, circuit breaker control protection system 19 would not include node power distribution system 29, power source 30, or node power distribution bus 32.



FIG. 3 is an exemplary schematic illustration of CCPU 24. CCPU 24 includes at least one memory device 40, such as, but not limited to, a read only memory (ROM) 42, a flash memory 44, and/or a random access memory (RAM) 46. CCPU 24 also includes a central processor unit (CPU) 48 that is electrically coupled to at least one memory device 40, as well as an internal bus 50, a communications interface 52, and a communications processor 54. In an exemplary embodiment, CCPU 24 is a printed circuit board and includes a power supply 56 to supply power to a plurality of devices on the printed circuit board.


Additionally, in an exemplary embodiment, internal bus 50 includes an address bus, a data bus, and a control bus. In use, the address bus is configured to enable CPU 48 to address a plurality of internal memory locations or an input/output port, such as, but not limited to communications interface 52 through communications processor 54, and a gateway interface 57, through a gateway processor 58. The data bus is configured to transmit instructions and/or data between CPU 48 and at least one input/output, and the control bus is configured to transmit signals between the plurality of devices to facilitate ensuring that the devices are operating in synchronization. In the exemplary embodiment, internal bus 50 is a bi-directional bus such that signals can be transmitted in either direction on internal bus 50. CCPU 24 also includes at least one storage device 60 configured to store a plurality of information transmitted via internal bus 50.


In use, gateway interface 57 communicates to a remote workstation (not shown) via an Internet link 62 or an Intranet 62. In the exemplary embodiment, the remote workstation is a personal computer including a web browser. Although a single workstation is described, such functions as described herein can be performed at one of many personal computers coupled to gateway interface 57. For example, gateway interface 57 may be communicatively coupled to various individuals, including local operators and third parties, e.g., remote system operators via an ISP Internet connection. The communication in the example embodiment is illustrated as being performed via the Internet; however, any other wide area network (WAN) type communication can be utilized in other embodiments, i.e., the systems and processes are not limited to being practiced via the Internet. In one embodiment, information is received at gateway interface 57 and transmitted to node electronics unit 20 via CCPU 24 and digital network 22. In another embodiment, information sent from node electronics unit 20 is received at communication interface 52 and transmitted to Internet 62 via gateway interface 57.



FIG. 4 is an exemplary schematic illustration of single node electronic unit 20. In the exemplary embodiment, node electronic unit 20 is a unitary device mounted remotely from CCPU 24 and circuit breaker 16. In an exemplary embodiment, node electronic unit 20 is separate from, but proximate to circuit breaker 16. In an exemplary embodiment, node electronic unit 20 is a printed circuit board.


In one embodiment, node electronics unit 20 receives signals input from a plurality of devices, such as, but not limited to, a current sensor 82, a voltage sensor 84, and/or circuit breaker 16. Status signals from circuit breaker 16 can include signals related to one or more conditions of the breaker, such as, but not limited to, an auxiliary switch status, and a spring charge switch status. Additionally, node electronics unit 20 sends signals to at least circuit breaker 16 in order to control one or more states of the breaker.


In use, signals are transmitted to CCPU 24 via node electronics unit 20 and digital network 22. Node electronics unit 20 receives the signals and packages a digital message that includes the signals and additional data relating to the health and status of node electronics unit 20. The health and status data may include information based on problems found by internal diagnostic routines and a status of self-checking routines that run locally in node electronics unit 20. CCPU 24 processes the digital message using one or more protection algorithms, monitoring algorithms, or any combination thereof. In response to the processing of the digital message, CCPU 24 sends a digital message back to node electronics unit 20 via digital network 22. In the exemplary embodiment, node electronics unit 20 actuates circuit breaker 16 via a signal in response to the digital message received from CCPU 24. In one embodiment, circuit breaker 16 is actuated in response to commands sent only by CCPU 24, i.e., circuit breaker 16 is not controlled locally by node electronics unit 20, but rather is operated remotely by CCPU 24 based on the digital message received from node electronics unit 20 over network 22.



FIG. 5 is an exemplary schematic illustration of circuit breaker 16 that is electrically coupled to node electronics unit 20. In the exemplary embodiment, circuit breaker 16 includes a switch assembly that includes movable and/or stationary contacts, an arc suppression means, and a tripping and operating mechanism. Circuit breaker 16 includes only a trip coil 100, a close coil 102, an auxiliary switch 104, a spring charge switch 106, and a motor 108. Circuit breaker 16 does not include a trip unit. The various components of breaker 16 (e.g., trip coil 100, close coil 102, auxiliary switch 104, spring charge switch 106, motor 108) can be powered by node electronics unit 20. Alternately, breaker 16 can be powered by secondary current available from current sensor 82 and/or voltage sensor 84.


Circuit breaker 16 is in electrical communication with node electronics unit 20 through a wiring harness, which may include copper wiring, communications conduits, and any combination thereof. Current sensor 82, and voltage sensor 84 are in electrical communication with node electronics unit 20 through a cable that may include copper wiring, communications conduits, and any combination thereof. In an exemplary embodiment, circuit breaker 16 is a unitary device mounted proximate to node electronics unit 20, current sensor 82, and voltage sensor 84.


In use, actuation signals from node electronics unit 20 are transmitted to circuit breaker 16 to actuate a plurality of functions in circuit breaker 16, such as, but not limited to, operating a trip coil 100, operating a close coil 102, and affecting a circuit breaker lockout feature. An auxiliary switch 104 and operating spring charge switch 106 provide a status indication of circuit breaker parameters to node electronics unit 20. Motor 108 is configured to recharge an operating spring, configured as a close spring (not shown), after circuit breaker 16 closes. It should be appreciated that motor 108 can include, for example, a spring charge switch, a solenoid, or any other electromechanical device capable of recharging a trip spring. To close circuit breaker 16, close coil 102 is energized by a close signal from an actuation power module (not shown). Close coil 102 actuates a closing mechanism (not shown) that couples at least one movable electrical contact (not shown) to a corresponding fixed electrical contact (not shown). The closing mechanism of circuit breaker 16 latches in a closed position such that when close coil 102 is de-energized, circuit breaker 16 remains closed. When breaker 16 closes, an “a” contact of auxiliary switch 104 also closes and a “b” contact of auxiliary switch 104 opens. The position of the “a” and “b” contacts is sensed by node electronics unit 20. To open circuit breaker 16, node electronics unit 20 energizes trip coil (TC) 100. TC 100 acts directly on circuit breaker 16 to release the latching mechanism that holds circuit breaker 16 closed. When the latching mechanism is released, circuit breaker 16 will open, opening the “a” contact and closing the “b” contact of auxiliary switch 104. Trip coil 100 is then de-energized by node electronics unit 20. After breaker 16 opens, with the close spring recharged by motor 108, circuit breaker 16 is prepared for a next operating cycle.
In the exemplary embodiment, each node electronics unit 20 is coupled to circuit breaker 16 in a one-to-one correspondence. For example, each node electronics unit 20 communicates directly with only one circuit breaker 16. In an alternative embodiment, node electronics unit 20 may communicate with a plurality of circuit breakers 16.



FIG. 6 is a simplified block diagram of an exemplary structure of a digital message packet 600. In the exemplary embodiment, digital message packet 600 includes a communication payload area 602 where message data is stored, a message header 604, a message address area 606, and an end of message area 608. Digital message packets 600 are sent at regular intervals from node electronics unit 20 to each CCPU 24 as unicast messages. A unicast message is a message addressed to a particular node, which other nodes will ignore. A broadcast message, in contrast, is sent from a single node to a plurality of nodes. Each of the plurality of nodes receives the broadcast message, opens it, and acts as commanded by the portion of the broadcast message directed to that particular node. Digital message packets 600 are sent at a frequency determined by a resolution requirement of CCPU 24 for the data contained in the message.
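The four-area packet layout of FIG. 6 can be sketched as a minimal data structure. The field widths and wire format below are illustrative assumptions, not taken from the patent; only the four areas themselves (header 604, address area 606, payload area 602, end of message area 608) come from the description above.

```python
import struct
from dataclasses import dataclass


@dataclass
class MessagePacket:
    """Illustrative packet mirroring FIG. 6's four areas."""
    header: int     # message header 604 (assumed 2-byte field)
    address: int    # message address area 606 (assumed 2-byte node id)
    payload: bytes  # communication payload area 602 (sampled data)

    END_MARKER = 0xFFFF  # end of message area 608 (assumed 2-byte sentinel)

    def pack(self) -> bytes:
        # Hypothetical wire format: 2-byte header, 2-byte address,
        # 2-byte payload length, payload bytes, 2-byte end marker.
        return struct.pack(
            f">HHH{len(self.payload)}sH",
            self.header, self.address, len(self.payload),
            self.payload, self.END_MARKER,
        )

    @classmethod
    def unpack(cls, raw: bytes) -> "MessagePacket":
        header, address, length = struct.unpack_from(">HHH", raw)
        return cls(header, address, raw[6:6 + length])
```

A node would `pack()` a packet before transmission and the CCPU would `unpack()` it on receipt; the length prefix lets the payload area carry a variable number of samples.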


Communication payload area 602 includes a plurality of data areas that include algorithmic data, which may be data requested by a sampling node, such as node electronics unit 20. Algorithmic data includes, for example, power distribution system basic protection data, power distribution system metering data, power distribution system waveform capture data, and power distribution system harmonic analysis data. Each data type contained in these data areas may be sampled at a different data acquisition rate. In the exemplary embodiment, a first data area 610 includes basic protection sample data received from current sensor 82, voltage sensor 84, and status input device 86. Basic protection data may be sampled, for example, at a rate of 32 times per cycle, wherein a cycle refers to one period of the power distribution system 10 line frequency. A second data area 612 may include metering sample data, also received from current sensor 82 and voltage sensor 84. Metering sample data may be sampled, for example, at a rate of 64 times per cycle. A third data area 614 may include waveform capture and harmonic analysis data that is sampled at a third rate, for example 128 times per cycle. Communication payload area 602 may include any number of data areas; the three areas illustrated are given by way of example and do not limit the number of data areas that may be included in communication payload area 602. Additionally, the above data sample rates are given by way of example and do not limit the rate at which sampled data may be stored in communication payload area 602.
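As a worked example of the payload sizing just described, the sketch below computes how many samples each data area contributes to one packet. The per-cycle rates (32, 64, 128) come from the exemplary embodiment; the 60 Hz line frequency, the area names, and the helper function are assumptions for illustration only.

```python
LINE_FREQ_HZ = 60  # assumed line frequency; the patent leaves this open

# Samples per line cycle for each data area, per the exemplary embodiment.
AREA_RATES = {
    "basic_protection": 32,    # first data area 610
    "metering": 64,            # second data area 612
    "waveform_harmonics": 128, # third data area 614
}


def samples_per_packet(packet_rate_hz: float) -> dict:
    """Samples each data area accumulates between packets, given how
    often packets are sent to the CCPU."""
    cycles_per_packet = LINE_FREQ_HZ / packet_rate_hz
    return {area: round(rate * cycles_per_packet)
            for area, rate in AREA_RATES.items()}
```

For instance, sending one packet per line cycle (60 packets/s at 60 Hz) puts 32, 64, and 128 samples into areas 610, 612, and 614 respectively; halving the packet rate doubles each count, which is why a single packet 600 may carry a plurality of samples per area.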


Digital message packet 600 is sent at a frequency based on a resolution requirement determined by CCPU 24. A single digital message packet 600 may contain a plurality of samples in each of data areas 610, 612, and 614 depending on the frequency at which digital message packet 600 is sent to CCPU 24. Such a method of sending sample data facilitates effectively using an available bandwidth.



FIG. 7 is an exemplary method 700 for communicating information bundled in digital message packets via a digital network communication system. The system includes a sample source, such as a current sensor 82, a voltage sensor 84, and/or a status input device 86. Status input device 86 in turn receives a plurality of status signals from circuit breaker 16, such as, but not limited to, an auxiliary switch status and a spring charge switch status. Each sample source is sampled by a node electronics unit 20 at a predetermined sample rate based on, for example, communication system data transfer limitations, source transfer limitations, and data message size. Each digital message packet includes a header and a communication payload area. Method 700 includes sampling 702 the source 82, 84, 86 at a first sample rate. The data rate is predetermined based on power distribution system 10 data needs, physical transfer limitations, network traffic limitations, a type of data being transmitted, and a latency requirement of the system. At any given point in time, any of the above-mentioned factors may be changing due to changing conditions within power distribution system 10 and conditions external to power distribution system 10. The system includes a decimation processor that includes a plurality of decimation operations and a local data buffer. Sampled data is decimated to prepare the data for packetizing by selecting 704 at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth. A packet rate is determined 706 based on a plurality of algorithmic latency requirements, and each digital message packet containing decimated data is transmitted 708 on the digital network.
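Steps 702 through 708 of method 700 can be sketched end to end as follows. The function names and the selection heuristics are hypothetical simplifications: the patent specifies only which inputs the decimation factor and packet rate depend on, not how they are computed, and a real implementation would low-pass filter before subsampling.

```python
from typing import Callable, Iterable, List


def decimate(samples: List[float], factor: int) -> List[float]:
    """Step 704: keep every `factor`-th sample (filtering omitted)."""
    return samples[::factor]


def choose_decimation(sample_rate: float, algo_rates: Iterable[float],
                      channel_bw: float) -> int:
    """Pick a factor so the output rate serves the fastest algorithmic
    data rate while staying within the channel bandwidth (assumed rule)."""
    target = min(max(algo_rates), channel_bw)
    return max(1, int(sample_rate // target))


def packet_rate(latency_reqs: Iterable[float]) -> float:
    """Step 706: packets must go out at least as often as the tightest
    algorithmic latency requirement (in seconds) demands."""
    return 1.0 / min(latency_reqs)


def communicate(source: Callable[[], List[float]], sample_rate: float,
                algo_rates: List[float], channel_bw: float,
                latency_reqs: List[float],
                send: Callable[[List[float]], None]) -> float:
    samples = source()                                        # step 702
    factor = choose_decimation(sample_rate, algo_rates, channel_bw)
    decimated = decimate(samples, factor)                     # step 704
    rate = packet_rate(latency_reqs)                          # step 706
    send(decimated)                                           # step 708
    return rate
```

Here `send` stands in for placing the decimated data in a packet 600 and transmitting it on digital network 22 at the returned packet rate.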


The sample rate for sampling the source may be selected to satisfy at least one of a plurality of predetermined algorithmic data rates. For example, the rate may be selected to facilitate achieving a maximum algorithmic data rate and/or to satisfy a predetermined oversampling requirement. Additionally, the sampling rate and/or the oversampling rate, the decimation, the packet rate, and/or a destination address for each message may be modified based on a system resource availability, a network communication noise level, a data signal-to-noise ratio, a change in a number of network nodes, a communication channel utilization, an authorization of a parameters change, and/or a service request. Each of the above considerations may change dynamically during operation of power distribution system 10, such that the sampling rate and/or the oversampling rate, the decimation, the packet rate, and/or a destination address for each message may be determined periodically to satisfy the data transfer needs of power distribution system 10.


Other considerations for determining the sampling rate may include facilitating minimizing a packet transmission overhead, facilitating meeting a predetermined algorithmic latency requirement, facilitating meeting a network data capacity, facilitating minimizing a packet error rate, facilitating minimizing the penalty of a retransmission, facilitating minimizing lost data, facilitating minimizing invalid data, and facilitating maximizing a number of network nodes.


Selection of a decimation of the samples may be based on a functional area for which the sample is being made. For example, a sample rate for power distribution system basic protection data may be less than, or slower than, a sample rate for power distribution system metering data. The same applies to power distribution system waveform capture data, power distribution system digital oscillography data, and power distribution system harmonic analysis data. Each functional area data type may have a changing data rate associated with it, such that a sample rate for power distribution system metering data may at times be less than the sample rate for power distribution system basic protection data.


Selection of a decimation of the samples may also be based on a demand from a remote processor, such as CCPU 24, a remote monitoring and diagnostics service request, a predetermined periodicity, and a communication network loading. A plurality of decimation operations in the decimation processor are selected based on at least one of the algorithmic data rates. Outputs of the decimation operations include base data, incremental data, and a plurality of tags for each of the base data and for each of the incremental data. At least part of the base data and incremental data may be stored in a local data buffer.


The transmitted 708 packets are received at a remote processor such as CCPU 24, each of the incremental data in the packet is prioritized, and, using the incremental data, the received data in the packets is progressively reconstructed with increasing data rate and quantization based on the labeling. Base data is obtained by decimation corresponding to a predetermined slowest algorithmic data rate, wherein each incremental data output is based on an increasing algorithmic data rate and/or a quantization requirement. In the exemplary embodiment, the sample rate is selected such that the base data and the incremental data are sent at the same packet rate, and the sampling rate is equal to twice a predetermined algorithmic data rate. The decimation includes selecting every other sample as the base data and the alternating samples as the incremental data, and the base data and the incremental data are packetized in the same packet. In the exemplary embodiment, the base data and the incremental data are interleaved such that the incremental data of lowest priority are replaced with either base data or incremental data of higher priority from previously packetized data. Transmitted packets may be transmitted redundantly to facilitate reducing a data error rate based on network communications conditions.
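The exemplary 2x-oversampled split into base and incremental data can be illustrated directly: even-index samples form the base stream, odd-index samples form the incremental stream, and both travel in the same packet. This is a minimal sketch of that one case; the priority-based replacement of low-priority incremental data is omitted, and the function names are illustrative.

```python
def split_base_incremental(samples):
    """Every other sample becomes base data (the slow stream); the
    alternating samples become incremental data."""
    return samples[0::2], samples[1::2]


def interleave(base, incremental):
    """Packetize base and incremental data together in one packet."""
    packet = []
    for b, i in zip(base, incremental):
        packet.extend((b, i))
    return packet


def reconstruct(packet):
    """Receiver side: base data alone yields the slowest algorithmic
    rate; adding the incremental data restores the full sample rate."""
    base_only = packet[0::2]
    full_rate = packet  # interleaved order equals the original order here
    return base_only, full_rate
```

An algorithm that only needs the slowest data rate reads `base_only` and ignores the rest, while a faster algorithm progressively reconstructs the full-rate stream from the same packet.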


Service data may be provided to CCPU 24 by replacing at least one of the base data and the incremental data with the data requested, which may include system resources data, communication noise data, signal-to-noise ratio data, data indicating changes in the number of network nodes, communication channel utilization data, authorization of parameters change data, and service requested data. Service requested data includes system status data, local status data, local health data, communication data, signal-to-noise ratio data, event history data, and error history data. Such data is periodically requested by CCPU 24 to update sample rates, packet rates and other system parameters based on current system condition and data transmission needs.


Each packet includes a destination address, and for each destination with active algorithms, packetizing the base data and the incremental data in the same packet includes tagging each of the base data and each of the incremental data. The base data is packaged first, and the incremental data is prioritized based on latency requirements of the active algorithms at the destination. The corresponding incremental data is packaged in decreasing priority order and may be based on available space in the packet and a predetermined maximum packet length. The maximum packet length may be determined based on a packet rate, a network data capacity, and a length that facilitates maximizing the number of nodes in the network communication system. The tags used to tag each of the base data and each of the incremental data may be an implicit tag, a partial tag, a time stamp tag, and/or a counter tag. The destination address in each digital message packet may include a unicast destination address for data specific to each respective destination, a multicast destination address for data common to a set of destinations, and a broadcast destination address for data common to all destinations. Destination addresses may be selected based on facilitating minimizing a packet transmission overhead, facilitating meeting a predetermined algorithmic latency requirement, facilitating meeting a network data capacity, and facilitating maximizing a number of network nodes.
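The priority-ordered packaging described above can be sketched as follows: base data always goes in, then incremental items are appended in decreasing priority until the maximum packet length would be exceeded. The priority representation, unit item sizes, and length limit are illustrative assumptions.

```python
def package(base, incremental_by_priority, max_len):
    """Fill a packet with all base data, then incremental items in
    decreasing priority order, respecting a maximum packet length.

    `incremental_by_priority` is a list of (priority, item) pairs;
    each item is assumed to occupy one slot of the packet.
    """
    packet = list(base)  # base data is packaged unconditionally
    for _priority, item in sorted(incremental_by_priority, reverse=True):
        if len(packet) + 1 > max_len:
            break  # remaining items have lower priority; stop here
        packet.append(item)
    return packet
```

With a length limit of 4, two base items and three incremental items of priorities 3, 2, 1, only the two highest-priority incremental items fit, matching the decreasing-priority rule in the passage above.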


The above-described power distribution system communication system is cost-effective and highly reliable. Each system includes at least one central control processing unit (CCPU) and a plurality of node electronics units communicatively coupled via a high-speed digital network. Additional CCPUs and corresponding network backbones may be coupled into the power distribution system to facilitate meeting a system reliability goal. Each node electronics unit communicates with every CCPU via a digital message packet that facilitates efficient communication while maintaining a system latency requirement. Accordingly, the power distribution system communication system facilitates protection and optimization of power system operation in a cost-effective and reliable manner.


Exemplary embodiments of power distribution system communication system components are described above in detail. The components are not limited to the specific embodiments described herein, but rather, components of each system may be utilized independently and separately from other components described herein. Each power distribution system communication system component can also be used in combination with other power distribution system components.


While the invention has been described in terms of various specific embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the claims.

Claims
  • 1. A method for communicating information bundled in digital message packets via a digital network communication system wherein the system includes a sample source and each packet includes a header and a communication payload area, the method comprising the steps of: sampling the source at a first sample rate; selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth; determining a packet rate based on a plurality of algorithmic latency requirements; and transmitting the digital message packet containing decimated data on the digital network, wherein algorithmic data includes data requested by a sampling node and wherein sampling the source at a first sample rate comprises sampling algorithmic data that includes at least one of power distribution system basic protection data, power distribution system metering data, power distribution system waveform capture data, and power distribution system harmonic analysis data.
  • 2. A method in accordance with claim 1 wherein sampling the source at a first sample rate comprises sampling the source at a sample rate selected to satisfy at least one of a plurality of predetermined algorithmic data rates.
  • 3. A method in accordance with claim 2 wherein sampling the source comprises sampling the source at a rate that facilitates achieving a maximum algorithmic data rate.
  • 4. A method in accordance with claim 2 wherein sampling the source comprises sampling the source at a rate selected to satisfy a predetermined oversampling requirement.
  • 5. A method in accordance with claim 4 further comprising modifying at least one of the sampling rate, an oversampling rate, the at least one decimation, the packet rate, and a destination address based on at least one of a system resource availability, a network communication noise level, a data signal-to-noise ratio, a change in a number of network nodes, a communication channel utilization, an authorization of parameters change, and a service request.
  • 6. A method for communicating information bundled in digital message packets via a digital network communication system wherein the system includes a sample source and each packet includes a header and a communication payload area, the method comprising the steps of: sampling the source at a first sample rate; selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth; determining a packet rate based on a plurality of algorithmic latency requirements; and transmitting the digital message packet containing decimated data on the digital network, wherein selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth comprises selecting at least one decimation of the samples based on at least one of power distribution system basic protection data, power distribution system metering data, power distribution system waveform capture, power distribution system digital oscillography, and power distribution system harmonic data analysis.
  • 7. A method in accordance with claim 6 wherein sampling the source at a first sample rate comprises sampling the source based on at least one of facilitating minimizing a packet transmission overhead, facilitating meeting a predetermined algorithmic latency requirement, facilitating meeting a network data capacity, facilitating minimizing a packet error rate, facilitating minimizing the penalty of a retransmission, facilitating minimizing lost data, facilitating minimizing invalid data, and facilitating maximizing a number of network nodes.
  • 8. A method in accordance with claim 6 wherein selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth comprises selecting at least one decimation of the samples based on at least one of a demand from a remote processor, a remote monitoring and diagnostics service request, a predetermined periodicity, and a communication network loading.
  • 9. A method for communicating information bundled in digital message packets via a digital network communication system wherein the system includes a sample source and each packet includes a header and a communication payload area, the method comprising the steps of: sampling the source at a first sample rate; selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth; determining a packet rate based on a plurality of algorithmic latency requirements; and transmitting the digital message packet containing decimated data on the digital network, wherein the system includes a decimation processor that includes a plurality of decimation operations, a plurality of outputs labeled by the decimation operations and a local data buffer and wherein selecting at least one decimation of the samples based on at least one of a plurality of algorithmic data rates and a channel bandwidth comprises selecting the decimation operations in the decimation processor based on at least one of the algorithmic data rates.
  • 10. A method in accordance with claim 9 wherein the plurality of outputs labeled by the decimation operations include a plurality of base data, a plurality of incremental data, and a plurality of tags for each of the base data and for each of the incremental data and wherein said method further comprises: receiving the transmitted packets through a remote processor; prioritizing each of the plurality of incremental data; and progressively reconstructing the data received in the packets with increasing data rate and quantization based on the labeling using the plurality of incremental data.
  • 11. A method in accordance with claim 10 further comprising obtaining the base data by decimation corresponding to a predetermined slowest algorithmic data rate wherein each incremental data output is based on at least one of each increasing algorithmic data rate, and a quantization requirement.
  • 12. A method in accordance with claim 11 further comprising: selecting a sample rate wherein the base data and the incremental data are sent at the same packet rate and wherein the sampling rate is equal to double a predetermined algorithmic data rate; selecting a decimation wherein the decimation includes selecting every other sample as the base data, and the alternate every other data as the incremental data; and packetizing the base data and the incremental data in the same packet.
  • 13. A method in accordance with claim 12 wherein packetizing the base data and the incremental data in the same packet further comprises interleaving the base data and incremental data such that the incremental data of lowest priority are replaced with at least one of base data, and incremental data of higher priority of previously packetized data.
  • 14. A method in accordance with claim 12 further comprising transmitting redundant data packets that include the packetized data.
  • 15. A method in accordance with claim 12 further comprising providing service data by replacing at least one of the base data and the incremental data with at least one of system resources data, communication noise data, signal-to-noise ratio data, data indicating changes in the number of network nodes, communication channel utilization data, authorization of parameters change data, and service requested data.
  • 16. A method in accordance with claim 15 wherein providing service data comprises replacing at least one of the base data and the incremental data with at least one of system status data, local status data, health data, communication data, signal-to-noise ratio data, event history data, and error history data.
  • 17. A method in accordance with claim 12 wherein each packet includes a destination address, and for each destination with active algorithms, said packetizing the base data and the incremental data in the same packet comprises: tagging each of the base data and each of the incremental data; packaging the base data; and prioritizing the incremental data based on the latency requirements of the active algorithms at the destination.
  • 18. A method in accordance with claim 17 wherein tagging each of the base data and each of the incremental data comprises tagging each of the base data and each of the incremental data using at least one of an implicit tag, a partial tag, a time stamp tag, and a counter tag.
  • 19. A method in accordance with claim 17 wherein the destination address in each digital message packet includes at least one of a unicast destination address for data specific to each respective destination, a multicast destination address for data common to a set of destinations, and a broadcast destination address for data common to all destinations and wherein the method further comprises packaging the corresponding incremental data in decreasing priority order, and based on at least one of available space in the packet and a predetermined maximum packet length.
  • 20. A method in accordance with claim 17 wherein the maximum packet length is determined based on at least one of a packet rate, a network data capacity, a length that facilitates maximizing the number of nodes in the network communication system, and wherein the method further comprises transmitting the digital message packet over the network.
  • 21. A method in accordance with claim 10 further comprising selecting destination addresses based on at least one of facilitating minimizing a packet transmission overhead, facilitating meeting a predetermined algorithmic latency requirement, facilitating meeting a network data capacity, and facilitating maximizing a number of network nodes.
  • 22. A method in accordance with claim 10 further comprising storing at least a portion of the base data and a portion of the incremental data in the local data buffer.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. patent application Ser. No. 60/359,544, filed on Feb. 25, 2002, for “Integrated Protection, Monitoring, and Control,” the content of which is incorporated in its entirety herein by reference. This application is also related to U.S. patent application Ser. No. 60/438,159, filed on Jan. 6, 2003, for “Single Processor Concept for Protection and Control of Circuit Breakers in Low-Voltage Switchgear,” the content of which is incorporated in its entirety herein by reference.

US Referenced Citations (234)
Number Name Date Kind
3772505 Massell Nov 1973 A
3938007 Boniger et al. Feb 1976 A
3956671 Nimmersjo May 1976 A
3963964 Mustaphi Jun 1976 A
4001742 Jencks et al. Jan 1977 A
4245318 Eckart et al. Jan 1981 A
4291299 Hinz et al. Sep 1981 A
4301433 Castonguay et al. Nov 1981 A
4311919 Nail Jan 1982 A
4415968 Maeda et al. Nov 1983 A
4423459 Stich et al. Dec 1983 A
4432031 Premerlani Feb 1984 A
4455612 Girgis et al. Jun 1984 A
4468714 Russell Aug 1984 A
4589074 Thomas et al. May 1986 A
4623949 Salowe et al. Nov 1986 A
4631625 Alexander et al. Dec 1986 A
4642724 Ruta Feb 1987 A
4652966 Farag et al. Mar 1987 A
4672501 Bilac et al. Jun 1987 A
4672555 Hart et al. Jun 1987 A
4674062 Premerlani Jun 1987 A
4689712 Demeyer Aug 1987 A
4709339 Fernandes Nov 1987 A
4751653 Junk et al. Jun 1988 A
4752853 Matsko et al. Jun 1988 A
4754407 Nolan Jun 1988 A
4777607 Maury et al. Oct 1988 A
4783748 Swarztrauber et al. Nov 1988 A
4796027 Smith-Vaniz Jan 1989 A
4833592 Yamanaka May 1989 A
4849848 Ishii Jul 1989 A
4855671 Fernandes Aug 1989 A
4862308 Udren Aug 1989 A
4964058 Brown, Jr. Oct 1990 A
4979122 Davis et al. Dec 1990 A
4983955 Ham, Jr. et al. Jan 1991 A
4996646 Farrington Feb 1991 A
5053735 Ohishi et al. Oct 1991 A
5060166 Engel et al. Oct 1991 A
5101191 MacFadyen et al. Mar 1992 A
5134691 Elms Jul 1992 A
5136458 Durivage, III Aug 1992 A
5162664 Haun et al. Nov 1992 A
5166887 Farrington et al. Nov 1992 A
5170310 Studtmann et al. Dec 1992 A
5170360 Porter et al. Dec 1992 A
5179376 Pomatto Jan 1993 A
5182547 Griffith Jan 1993 A
5185705 Farrington Feb 1993 A
5196831 Bscheider Mar 1993 A
5214560 Jensen May 1993 A
5216621 Dickens Jun 1993 A
5225994 Arinobu et al. Jul 1993 A
5231565 Bilas et al. Jul 1993 A
5237511 Caird et al. Aug 1993 A
5247454 Farrington et al. Sep 1993 A
5253159 Bilas et al. Oct 1993 A
5272438 Stumme Dec 1993 A
5301121 Garverick et al. Apr 1994 A
5305174 Morita et al. Apr 1994 A
5311392 Kinney et al. May 1994 A
5323307 Wolf et al. Jun 1994 A
5353188 Hatakeyama Oct 1994 A
5361184 El-Sharkawi et al. Nov 1994 A
5367427 Matsko et al. Nov 1994 A
5369356 Kinney et al. Nov 1994 A
5381554 Langer et al. Jan 1995 A
5384712 Oravetz et al. Jan 1995 A
5402299 Bellei Mar 1995 A
5406495 Hill Apr 1995 A
5414635 Ohta May 1995 A
5420799 Peterson et al. May 1995 A
5422778 Good et al. Jun 1995 A
5440441 Ahuja Aug 1995 A
5451879 Moore Sep 1995 A
5487016 Elms Jan 1996 A
5490086 Leone et al. Feb 1996 A
5493468 Hunter et al. Feb 1996 A
5530738 McEachern Jun 1996 A
5534782 Nourse Jul 1996 A
5534833 Castonguay et al. Jul 1996 A
5537327 Snow et al. Jul 1996 A
5544065 Engel et al. Aug 1996 A
5559719 Johnson et al. Sep 1996 A
5560022 Dunstan et al. Sep 1996 A
5576625 Sukegawa et al. Nov 1996 A
5581471 McEachern et al. Dec 1996 A
5587917 Elms Dec 1996 A
5596473 Johnson et al. Jan 1997 A
5600527 Engel et al. Feb 1997 A
5608646 Pomatto Mar 1997 A
5613798 Braverman Mar 1997 A
5619392 Bertsch et al. Apr 1997 A
5627716 Lagree et al. May 1997 A
5627717 Pein et al. May 1997 A
5627718 Engel et al. May 1997 A
5629825 Wallis et al. May 1997 A
5631798 Seymour et al. May 1997 A
5638296 Johnson et al. Jun 1997 A
5650936 Loucks et al. Jul 1997 A
5661658 Putt et al. Aug 1997 A
5666256 Zavis et al. Sep 1997 A
5670923 Gonzalez et al. Sep 1997 A
5694329 Pomatto Dec 1997 A
5696695 Ehlers et al. Dec 1997 A
5719738 Singer et al. Feb 1998 A
5734576 Klancher Mar 1998 A
5736847 Van Doorn et al. Apr 1998 A
5737231 Pyle et al. Apr 1998 A
5742513 Bouhenguel et al. Apr 1998 A
5751524 Swindler May 1998 A
5754033 Thomson May 1998 A
5754440 Cox et al. May 1998 A
5768148 Murphy et al. Jun 1998 A
5784237 Velez Jul 1998 A
5784243 Pollman et al. Jul 1998 A
5786699 Sukegawa et al. Jul 1998 A
5812389 Katayama et al. Sep 1998 A
5821704 Carson et al. Oct 1998 A
5825643 Dvorak et al. Oct 1998 A
5828576 Loucks et al. Oct 1998 A
5828983 Lombardi Oct 1998 A
5831428 Pyle et al. Nov 1998 A
5867385 Brown et al. Feb 1999 A
5872722 Oravetz et al. Feb 1999 A
5872785 Kienberger Feb 1999 A
5890097 Cox Mar 1999 A
5892449 Reid et al. Apr 1999 A
5903426 Ehling May 1999 A
5905616 Lyke May 1999 A
5906271 Castonguay et al. May 1999 A
5926089 Sekiguchi et al. Jul 1999 A
5936817 Matsko et al. Aug 1999 A
5946210 Montminy et al. Aug 1999 A
5958060 Premerlani Sep 1999 A
5963457 Kanoi et al. Oct 1999 A
5973481 Thompson et al. Oct 1999 A
5973899 Williams et al. Oct 1999 A
5982595 Pozzuoli Nov 1999 A
5982596 Spencer et al. Nov 1999 A
5995911 Hart Nov 1999 A
6005757 Shvach et al. Dec 1999 A
6005758 Spencer et al. Dec 1999 A
6018451 Lyke et al. Jan 2000 A
6038516 Alexander et al. Mar 2000 A
6047321 Raab et al. Apr 2000 A
6054661 Castonguay et al. Apr 2000 A
6055145 Lagree et al. Apr 2000 A
6061609 Kanoi et al. May 2000 A
6084758 Clarey et al. Jul 2000 A
6138241 Eckel et al. Oct 2000 A
6139327 Callahan et al. Oct 2000 A
6141196 Premerlani et al. Oct 2000 A
6157527 Spencer et al. Dec 2000 A
6167329 Engel et al. Dec 2000 A
6175780 Engel Jan 2001 B1
6185482 Egolf et al. Feb 2001 B1
6185508 Van Doorn et al. Feb 2001 B1
6186842 Hirschbold et al. Feb 2001 B1
6195243 Spencer et al. Feb 2001 B1
6198402 Hasegawa et al. Mar 2001 B1
6212049 Spencer et al. Apr 2001 B1
6233128 Spencer et al. May 2001 B1
6236949 Hart May 2001 B1
6242703 Castonguay et al. Jun 2001 B1
6268991 Criniti et al. Jul 2001 B1
6285917 Sekiguchi et al. Sep 2001 B1
6288882 DiSalvo et al. Sep 2001 B1
6289267 Alexander et al. Sep 2001 B1
6291911 Dunk et al. Sep 2001 B1
6292340 O'Regan et al. Sep 2001 B1
6292717 Alexander et al. Sep 2001 B1
6292901 Lys et al. Sep 2001 B1
6297939 Bilac et al. Oct 2001 B1
6313975 Dunne et al. Nov 2001 B1
6341054 Walder et al. Jan 2002 B1
6347027 Nelson et al. Feb 2002 B1
6351823 Mayer et al. Feb 2002 B1
6356422 Bilac et al. Mar 2002 B1
6356849 Jaffe Mar 2002 B1
6369996 Bo Apr 2002 B1
6373855 Downing et al. Apr 2002 B1
6377051 Tyner et al. Apr 2002 B1
6385022 Kulidjian et al. May 2002 B1
6396279 Gruenert May 2002 B1
6397155 Przydatek et al. May 2002 B1
6405104 Dougherty Jun 2002 B1
6406328 Attarian et al. Jun 2002 B1
6411865 Qin et al. Jun 2002 B1
6441931 Moskovich et al. Aug 2002 B1
6459997 Andersen Oct 2002 B1
6496342 Horvath et al. Dec 2002 B1
6535797 Bowles et al. Mar 2003 B1
6549880 Willoughby et al. Apr 2003 B1
6553418 Collins et al. Apr 2003 B1
6556560 Katseff et al. Apr 2003 B1
20010010032 Ehlers et al. Jul 2001 A1
20010032025 Lenz et al. Oct 2001 A1
20010044588 Mault Nov 2001 A1
20010048354 Douville et al. Dec 2001 A1
20010055276 Rogers et al. Dec 2001 A1
20010055965 Delp et al. Dec 2001 A1
20020010518 Reid et al. Jan 2002 A1
20020032535 Alexander et al. Mar 2002 A1
20020034086 Scoggins et al. Mar 2002 A1
20020045992 Shincovich et al. Apr 2002 A1
20020059401 Austin May 2002 A1
20020063635 Shincovich May 2002 A1
20020064010 Nelson et al. May 2002 A1
20020091949 Ykema Jul 2002 A1
20020094799 Elliott et al. Jul 2002 A1
20020107615 Bjorklund Aug 2002 A1
20020108065 Mares Aug 2002 A1
20020109722 Rogers et al. Aug 2002 A1
20020111980 Miller et al. Aug 2002 A1
20020114285 LeBlanc Aug 2002 A1
20020116092 Hamamatsu et al. Aug 2002 A1
20020124011 Baxter et al. Sep 2002 A1
20020146076 Lee Oct 2002 A1
20020146083 Lee et al. Oct 2002 A1
20020147503 Osburn, III Oct 2002 A1
20020159402 Binder Oct 2002 A1
20020162014 Przydatek et al. Oct 2002 A1
20020163918 Cline Nov 2002 A1
20020165677 Lightbody et al. Nov 2002 A1
20020181174 Bilac et al. Dec 2002 A1
20020193888 Wewalaarachchi et al. Dec 2002 A1
20030043785 Liu et al. Mar 2003 A1
20040071132 Sundqvist et al. Apr 2004 A1
20040090994 Lockridge et al. May 2004 A1
20040213203 Lucioni Oct 2004 A1
20040223510 Tzannes et al. Nov 2004 A1
20050152382 Stirling et al. Jul 2005 A1
Foreign Referenced Citations (3)
Number Date Country
0718948 Jun 1996 EP
0723325 Jul 1996 EP
WO 02052240 Jul 2002 WO
Related Publications (1)
Number Date Country
20030214907 A1 Nov 2003 US
Provisional Applications (2)
Number Date Country
60359544 Feb 2002 US
60438159 Jan 2003 US