METHOD FOR PROCESSING DATA STREAMS INCLUDING TIME-CRITICAL MESSAGES OF A POWER NETWORK

Information

  • Patent Application
  • Publication Number
    20160006780
  • Date Filed
    July 01, 2015
  • Date Published
    January 07, 2016
Abstract
A method is disclosed for processing a data stream within a time constraint by a stream processing network having a plurality of processing elements. The method includes determining a processing unit, by selecting at least one of the plurality of processing elements, for transmitting next data items of the data stream; collecting system information of the processing unit, wherein the system information includes load information of the selected processing element; adapting, based on the system information, a sending rate of the data stream; discarding, by the processing unit, data items of the data stream that would not be processed within the time constraint; and sending, by the processing unit, data items of the data stream that would be processed within the time constraint.
Description
RELATED APPLICATION

This application claims priority under 35 U.S.C. §119 to European Patent Application No. 14175459.8 filed in Europe on Jul. 2, 2014, the entire content of which is hereby incorporated by reference.


FIELD

The present disclosure relates to the field of processing data in a stream processing network, in particular, to a method and a system for processing time-critical messages using a stream processing network.


BACKGROUND INFORMATION

Stream processing is a computer programming paradigm concerned with the processing of data that enters a processing system in the form of data streams of potentially unbounded length. A stream processing system enables a user to perform computations on data that is arriving steadily and to output results continuously or periodically. In order to ensure fault tolerance and scalability, stream processing systems are typically distributed systems, where the individual processing elements (PEs) are scattered over multiple interconnected computers. A PE is a logical entity in the stream processing system that takes some data streams as input, performs specific computations on the data items of these streams, and outputs its results in the form of one or more data streams.
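
By way of illustration only (this sketch is not part of the application text), a PE as characterized above might be modelled roughly as follows; the class name, the queue-based stream wiring, and the threading model are assumptions made for the example:

```python
from queue import Queue
from threading import Thread

class ProcessingElement:
    """Minimal sketch of a PE: consume data items from input streams,
    apply a computation, and emit results on an output stream."""

    def __init__(self, compute, inputs, output):
        self.compute = compute   # per-item computation
        self.inputs = inputs     # input streams, modelled as queues
        self.output = output     # output stream, modelled as a queue

    def start(self):
        for stream in self.inputs:
            Thread(target=self._consume, args=(stream,), daemon=True).start()

    def _consume(self, stream):
        while True:
            item = stream.get()                  # blocks until data arrives
            self.output.put(self.compute(item))  # emit the processed item

# usage: pe = ProcessingElement(lambda x: 2 * x, [Queue()], Queue()); pe.start()
```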


Stream processing engines strive for short processing times even at high data rates. Small end-to-end latencies, from the time data is injected into the stream processing engine until the processed data is output, enable many essential online processing tasks such as live monitoring, anomaly detection, and trend analysis in large-scale systems. It is worth noting that commonly used stream processing engines, such as Storm or S4, both undergoing incubation at the Apache Software Foundation, simply output results as fast as possible, i.e. with “best-effort delivery”.


However, for many applications, in particular in the industrial domain, there are specific constraints on end-to-end latency. An example is the concentration and analysis of a large number of measurements coming from phasor measurement units (PMUs) in power grids. For such time-constrained operations, where results should be output within a certain time interval after the input data was generated, a high system load may incur substantial delays, with the result that many data items in the output may not satisfy the time constraints. Depending on the application, data arriving late may no longer be considered useful at all. This holds true, e.g., for monitoring applications or applications with real-time constraints.


Further known approaches are, for instance: a resilient plan for operator placement under end-to-end time constraints, described in “Fulfilling end-to-end latency constraints in large-scale streaming environments” by S. Rizou et al.; methods to deal with variance in load, described in “Providing Resiliency to Load Variations in Distributed Stream Processing” by Y. Xing et al.; and dynamic latency minimization based on load-correlation criteria, described in “Dynamic load distribution in the Borealis Stream Processor” by Y. Xing et al. However, none of these methods solves the above problems.


SUMMARY

A method is disclosed for processing a data stream within a time constraint by a stream processing network having a plurality of processing elements, wherein the method comprises: determining a processing unit, by selecting at least one of the plurality of processing elements, for transmitting next data items of the data stream; collecting system information of the processing unit, wherein the system information includes load information of the selected processing element; adapting, based on the system information, a sending rate of the data stream; discarding, by the processing unit, data items of the data stream that would not be processed within the time constraint; and sending, by the processing unit, data items of the data stream that would be processed within the time constraint.


A system is also disclosed for processing a data stream within a time constraint by a stream processing network having a plurality of processing elements, wherein the system comprises: a processing unit determined by selecting at least one of a plurality of processing elements, wherein the processing unit is configured to send a next data item of the data stream; and a load management module provided by the selected processing element, wherein the load management module is configured to collect load information of the processing element; wherein the system is configured to adapt a sending rate of the data stream based on the system information, and to discard data items of the data stream that would not be processed within the time constraint.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter disclosed herein will be explained in more detail in the following text with reference to preferred exemplary embodiments that are illustrated in the attached drawings, in which:



FIG. 1 schematically shows a stream processing system that is enriched with load management modules (LMMs) running at each PE and a load control unit for storing the time constraints, according to the present disclosure;



FIG. 2 schematically shows that the LMMs send system information about the state of their PEs or host computers, such as CPU and bandwidth consumption, to the load control unit, and receive the time constraints and information about the global state directly from the load control unit; and



FIG. 3 schematically shows that the LMMs only receive the time constraints from the load control unit, and the system information is exchanged between the LMMs.





The reference symbols used in the drawings, and their primary meanings, are listed in summary form in the list of designations. In principle, identical parts are provided with the same reference symbols in the figures.


DETAILED DESCRIPTION

A method is disclosed to dynamically increase throughput for stream processing applications with time constraints, where the throughput is usually defined as the number of data items that are processed by the stream processing mechanism such that time constraints are met.


The present disclosure provides a method for processing a data stream within a time constraint by a stream processing network comprising a plurality of processing elements, wherein the method comprises the steps of: determining a processing unit, by selecting at least one of the plurality of processing elements, for transmitting next data items of the data stream; collecting system information of the processing unit, wherein the system information includes load information of the selected processing element; adapting, based on the system information, a sending rate of the data stream; discarding, by the processing unit, data items of the data stream that would not be processed within the time constraint; and sending, by the processing unit, data items of the data stream that would be processed within the time constraint.


According to a further aspect, the present disclosure provides a system for processing a data stream within a time constraint by a stream processing network comprising a plurality of processing elements, wherein the system comprises: a processing unit determined by selecting at least one of the plurality of processing elements, wherein the processing unit is configured to send a next data item of the data stream; and a load management module provided by the selected processing element, wherein the load management module is adapted to collect load information of the processing element; wherein the system is configured to adapt a sending rate of the data stream based on the system information, and to discard data items of the data stream that would not be processed within the time constraint.


The system information of the processing unit may include load information such as the system usage of the selected processing elements or of their host computers, as well as the usage of the communication paths of the processing elements, e.g. CPU, RAM, and bandwidth usage.
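
As an illustration of how such load information could be gathered on a host computer, the following sketch samples CPU, RAM, and bandwidth usage using the third-party psutil package; the field names and the one-second sampling window are assumptions of the example:

```python
import time
import psutil  # third-party package exposing host-level metrics

def collect_system_info():
    """Sample CPU, RAM, and network bandwidth usage of the host."""
    net_before = psutil.net_io_counters()
    time.sleep(1.0)  # measure bandwidth over a one-second window
    net_after = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(),
        "ram_percent": psutil.virtual_memory().percent,
        "bytes_sent_per_s": net_after.bytes_sent - net_before.bytes_sent,
        "bytes_recv_per_s": net_after.bytes_recv - net_before.bytes_recv,
        "timestamp": time.time(),
    }
```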


Each processing element can be provided with a load management module for collecting the load information of the processing element.


The method can include the step of maintaining the time constraint by a load control unit, e.g. storing and updating the time constraints in case of changes, where the time constraint may vary over time.


The method can include the step of estimating the time within which the data items would be transmitted, prior to the step of discarding the data items.
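
A minimal sketch of such an estimate-then-discard check; the parameter names are assumptions made for illustration:

```python
import time

def should_discard(created_at, estimated_transit_s, time_constraint_s):
    """Discard a data item if its estimated completion time, measured
    from the moment it was created, would exceed the time constraint."""
    elapsed = time.time() - created_at
    return elapsed + estimated_transit_s > time_constraint_s
```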


The method can include the steps of sending the system information from the load management module of the selected processing element to the load control unit, and receiving the current time constraint required for the stream processing network by the load management module of the selected processing element.


The processing unit can include a number n of selected processing elements. In this case, the method may further comprise the step of receiving, by the load management modules of the selected processing elements from the load control unit, the system information of the processing unit including the load information of the n selected processing elements. Alternatively, the load information of the n selected processing elements may be exchanged directly between the load management modules of the processing elements, without using the information provided by the load control unit.


The method according to the present disclosure can take the specific time constraints, the dynamics of the input data, and changes to the topology into account when determining whether to send data, store it locally or remotely for archiving purposes, or drop it when it can no longer satisfy the time constraints.


According to exemplary embodiments, a mechanism is disclosed to dynamically manage the load on the PEs taking potentially varying time constraints into account. The ability to react dynamically can be advantageous as there may be unpredictable changes to the network, for example due to machine failures or the addition of computational resources. Changes in the processing load might also occur due to variable input data rates or due to the nature of the data items, which may incur significantly varying processing times. Some of the stream processing mechanisms in the state of the art may offer a certain degree of resilience and adaptability to failures and/or changing load. However, they provide either dynamic load balancing based on the current input data and the current configuration of the processing network, or action according to specific time constraints on the processing of the given input data, but not both.


Exemplary embodiments can provide the following exemplary advantages: a) no data is output that does not satisfy the time constraints; b) the stream processing load is reduced automatically by dropping data items early, which allows the stream processing engine to allocate more resources to data items that will meet the time constraints, so that the output rate of data satisfying the time constraints increases; and c) the dynamic and automatic distribution of load reduces bottlenecks and thus also leads to an increase of data throughput. Thus, for any given stream processing system, the present invention adds value by improving the performance of a stream processing task with time constraints. Moreover, improving the output rate for a given stream processing system can minimise the hardware requirements, i.e., hardware costs can be reduced.


Features disclosed herein will now be explained by reference to exemplary embodiments illustrated in the Figures.


According to exemplary embodiments, a load management module (LMM) may run on each processing element (PE) and can offer the following functions (a code sketch of all four follows the list):

    • F1) Select the PE for sending the next data items. The stream processing platform or application logic may restrict the choice to a small number of possible PEs; in the worst case, there is only one possible target PE. The number of PEs to be selected depends on the requirements of the application: it corresponds to the number of recipients for outgoing data items and is usually smaller than the total number of available processing elements in the stream processing system.
    • F2) Send a signal or message which may contain information about the current CPU, RAM, and bandwidth consumption, and other state information.
    • F3) Modify the sending rate if required. The LMM may cache outgoing data. The sending rate is increased by issuing cached data items at a faster rate if possible, decreased by queuing up more data items, or left unchanged if the selected processing elements perform optimally.
    • F4) Filter out, i.e., drop, (queued) data items that would not be processed in time, i.e., that would not arrive at the output of the processing system within the desired time period, which can be defined in the form of a time constraint.
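
A compact sketch of the four functions F1-F4 follows; the rate factors, the target-load threshold, and the shape of the load metric are assumptions made purely for illustration and are not prescribed by the disclosure:

```python
class LoadManagementModule:
    """Sketch of functions F1-F4; thresholds, rate factors, and the
    shape of the load metric are assumptions for illustration."""

    def __init__(self, candidate_pes, time_constraint_s):
        self.candidates = candidate_pes   # PEs the platform allows as targets
        self.time_constraint_s = time_constraint_s
        self.send_queue = []              # cached (created_at, item) pairs

    # F1: select the PE that receives the next data items
    def select_pe(self, load_by_pe):
        return min(self.candidates, key=lambda pe: load_by_pe[pe])

    # F2: package state information for the signalling messages
    def state_message(self, system_info):
        return {"type": "state", "payload": system_info}

    # F3: modify the sending rate according to the observed load
    def adapt_rate(self, rate, observed_load, target_load=0.7):
        if observed_load > target_load:
            return rate * 0.8             # queue up more items
        if observed_load < 0.5 * target_load:
            return rate * 1.2             # issue cached items faster
        return rate                       # leave unchanged

    # F4: drop queued items that can no longer be processed in time
    def filter_queue(self, now, estimated_transit_s):
        self.send_queue = [
            (created, item) for created, item in self.send_queue
            if now - created + estimated_transit_s <= self.time_constraint_s
        ]
```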


Consequently, the LMM actively balances the load by increasing, decreasing, and routing data streams. The LMM operates by intercepting all incoming and outgoing data items, and applying its functions to the data items independently. As the stream processing engine is not aware of the LMM, the stream processing platform does not need to be modified.


In order to get an overview of the system status, an additional component, the load control unit (LCU), is introduced by the present invention in a preferred embodiment. The LCU may be collocated with the centralized controller or process of the stream processing platform, or may run on a different machine, potentially in a process external to the stream processing system. The LCU can be a fault-tolerant process just like the centralized controller. The LCU also serves as a central place to maintain the time constraints for the system, which may change over time. The LCU can store and distribute updates of the time constraints so that the current time constraint is communicated to the PEs.
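
For illustration, an LCU that stores the time constraints and distributes updates might be sketched as follows; the callback-based registration is an assumption of the example, not a prescribed interface:

```python
class LoadControlUnit:
    """Sketch of an LCU that stores the (possibly time-varying)
    time constraints and distributes updates to registered LMMs."""

    def __init__(self, time_constraint_s):
        self.time_constraint_s = time_constraint_s
        self.subscribers = []             # notification callbacks, one per LMM

    def register(self, notify):
        self.subscribers.append(notify)
        notify(self.time_constraint_s)    # communicate the current constraint

    def update_constraint(self, new_constraint_s):
        self.time_constraint_s = new_constraint_s
        for notify in self.subscribers:   # push the update to all PEs
            notify(new_constraint_s)
```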


It is in the following assumed that all up-to-date time constraints are stored on the LCU. An overview of a stream processing system enriched with an LCU and LMMs running at the PEs is shown in FIG. 1.


The LCU may run in two different exemplary modes as illustrated in FIGS. 2 and 3: M1) each LMM sends signals to and receives signals from the LCU; these messages can be exchanged periodically or on demand; and M2) the LCU only sends out updated information about the time constraints, and other signals are exchanged between the LMMs directly.


In both modes, the data stream from the top-most PE is redirected to the PE at the bottom. In mode M1, the LCU would be informed, and the other PEs may learn about this change from the LCU if necessary. In mode M2, the information is exchanged between the involved PEs directly. The arrow with the stroked line illustrates the case in which some data items are no longer sent from the PE at the top to the PE at the right, e.g. due to a high load on the PE at the right or high bandwidth consumption on the communication path between the PE at the top and the PE at the right. Instead, these data items can be sent from the PE at the top to the PE at the bottom.


Note that the modes are explained here as exemplary implementation embodiments. Each of these modes can be implemented alone or in combination with each other.


In the following, the functionality and steps to improve throughput while respecting time constraints are described according to two exemplary scenarios: either the system is not used optimally, in that the load is not distributed evenly, or the system is operating at or over its peak capacity. In the first scenario, the goal is to steer the system to a more efficient configuration, while in the second scenario, the objective is to increase the amount of data that is still processed in time.


When discussing the scenarios, exemplary embodiments distinguish between two different kinds of data transmission mechanisms (a sketch contrasting the two follows the list):

    • (a) Pull-based stream processing system, where the receiving PE requests, i.e. pulls, the next data item or items from the sending PE whenever the receiving PE has sufficient capacity to process them. Pull-based systems have the advantage that the PEs are usually not overloaded, as they only request data when capacity is available.
    • (b) Push-based system, where the sending PE autonomously sends, i.e. pushes, data items to the receiving PE. Push-based systems have the advantage that they do not have the overhead of continuously requesting data items. However, setting the sending rate correctly is more complex.
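
The following sketch contrasts the two transmission mechanisms; the capacity predicate and the rate-limiting scheme are assumptions made for illustration:

```python
from queue import Empty

def pull_next(sender, has_capacity):
    """Pull-based transfer: the receiving PE requests the next item
    only when it has capacity, so it is rarely overloaded."""
    if has_capacity():
        try:
            return sender.get_nowait()
        except Empty:
            return None      # sender currently has nothing to offer
    return None              # receiver is saturated; do not pull

def push_next(receiver, item, min_interval_s, last_sent_at, now):
    """Push-based transfer: the sending PE emits autonomously; picking
    min_interval_s (i.e. the sending rate) correctly is the hard part."""
    if now - last_sent_at >= min_interval_s:
        receiver.put(item)   # send without being asked
        return now           # remember when we last sent
    return last_sent_at      # rate limit reached; keep the item queued
```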


An exemplary method according to the present disclosure offers benefits for both approaches. As a receiving PE in a push-based system cannot directly influence the rate of the sending PE and is thus more prone to being overwhelmed, additional remedies specifically for push-based systems (PUSH) can be provided.


Scenario 1

The data items are routed over sub-optimal paths through the stream processing network, i.e., some machines run at full capacity or are even overwhelmed, thereby resulting in increased latencies and data not being processed in time, while others are underutilized.


Remedy 1.1: The LMMs communicate to obtain information about the used paths and the CPU/bandwidth utilization, as defined in function F2. The data streams, or parts of the data streams (assuming a partitioning of the streams is possible), are rerouted to alleviate the load on the highly used machines; see function F1.


Remedy 1.2 (PUSH): The LMMs communicate to obtain information about the used paths and the CPU/bandwidth utilization, as described in function F2. The LMMs decrease the load on highly utilized machines by reducing the sending rate to them, and increase the load on underutilized machines by raising it. If mode M2 is used, the decision to reduce the sending rate is forwarded recursively to the sending PEs, as described in function F2, so that they can adapt their sending rates accordingly.
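
A sketch of such per-machine rate rebalancing; the utilization thresholds and adjustment factors are assumptions made for illustration:

```python
def rebalance_rates(rates, utilization, high=0.9, low=0.4):
    """Remedy 1.2 sketch: reduce the sending rate toward highly
    utilized machines and raise it toward underutilized ones."""
    adjusted = {}
    for pe, rate in rates.items():
        if utilization[pe] > high:
            adjusted[pe] = rate * 0.7   # relieve the overloaded machine
        elif utilization[pe] < low:
            adjusted[pe] = rate * 1.3   # shift load onto spare capacity
        else:
            adjusted[pe] = rate
    return adjusted
```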


Scenario 2

The load on the system is high, resulting in data items not meeting their processing deadlines, i.e. not being processed within the time constraint.


Remedy 2.1: The LMMs send signals along the data stream paths to measure delays, as described in function F2. These measurements enable the LMMs to estimate the remaining compute time of a data item when it arrives. If the data item is unlikely to be processed in time, it is dropped as described in function F4, thereby freeing capacity in the system for other data streams.
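
A sketch of the delay measurement and the resulting drop decision; the probe callable is an assumed stand-in for the actual signalling between LMMs:

```python
import time

def measure_path_delay(send_probe):
    """Remedy 2.1 sketch: time a signal along a data-stream path;
    send_probe is an assumed blocking round-trip probe."""
    start = time.monotonic()
    send_probe()
    return time.monotonic() - start

def likely_in_time(item_age_s, estimated_remaining_s, time_constraint_s):
    """Estimate on arrival whether an item can still be processed in
    time; if not, it is dropped (function F4) to free capacity."""
    return item_age_s + estimated_remaining_s <= time_constraint_s
```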


Remedy 2.2 (PUSH): The sending rates are throttled to lower the load as described in function F3. If mode M2 is used, the decision to reduce the sending rate is forwarded recursively to the sending PEs as described in function F2 so that they can adapt their sending rates accordingly.
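
A sketch of the recursive forwarding of a rate reduction in mode M2; the topology accessor and rate setter are assumed helpers, and an acyclic processing graph is assumed:

```python
def propagate_throttle(pe, new_rate, upstream_of, set_rate):
    """Remedy 2.2 / mode M2 sketch: apply a rate reduction at a PE
    and forward it recursively to all sending (upstream) PEs so the
    whole path slows down together."""
    set_rate(pe, new_rate)
    for sender in upstream_of(pe):
        propagate_throttle(sender, new_rate, upstream_of, set_rate)
```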


Note that verifying at each stage whether a data item can still meet its deadline is preferred, since this guarantees that all output data satisfies the time constraints.


While embodiments have been described in detail in the drawings and foregoing description, such description is to be considered illustrative or exemplary and not restrictive. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practising the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain elements or steps are recited in distinct claims does not indicate that a combination of these elements or steps cannot be used to advantage; specifically, in addition to the actual claim dependency, any further meaningful claim combination shall be considered disclosed.


Thus, it will be appreciated by those skilled in the art that the present invention can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalency thereof are intended to be embraced therein.

Claims
  • 1. Method for processing a data stream within a time constraint by a stream processing network having a plurality of processing elements, wherein the method comprises: determining a processing unit, by selecting at least one of the plurality of processing elements, for transmitting next data items of the data stream; collecting system information of the processing unit, wherein the system information includes load information of the selected processing element; adapting, based on the system information, a sending rate of the data stream; discarding, by the processing unit, data items of the data stream that would not be processed within the time constraint; and sending, by the processing unit, data items of the data stream that would be processed within the time constraint.
  • 2. Method according to claim 1, wherein each processing element is provided with a load management module for collecting the load information of the processing element.
  • 3. Method according to claim 2, comprising: maintaining the time constraint by a load control unit, wherein the time constraint is variable over time.
  • 4. Method according to claim 3, comprising: prior to the step of discarding the data items, estimating the time in which the data items would be transmitted.
  • 5. Method according to claim 3, comprising: sending the system information from the load management module of the selected processing element to the load control unit; and receiving the current time constraint required for the stream processing network, by the load management module of the selected processing element.
  • 6. Method according to claim 4, wherein the processing unit comprises n selected processing elements, where n>1.
  • 7. Method according to claim 6, comprising: receiving, by the load management modules of the n selected processing elements from the load control unit, the system information of the processing unit including the load information of the n selected processing elements.
  • 8. Method according to claim 6, comprising: exchanging, between the load management modules of the n selected processing elements, the load information of the n selected processing elements.
  • 9. Method according to claim 7, comprising: adapting the number of processing elements n based on the load information of the n selected processing elements.
  • 10. System for processing a data stream within a time constraint by a stream processing network having a plurality of processing elements, wherein the system comprises: a processing unit determined by selecting at least one of a plurality of processing elements, wherein the processing unit is configured to send a next data item of the data stream; and a load management module provided by the selected processing element, wherein the load management module is configured to collect load information of the processing element; wherein the system is configured to adapt a sending rate of the data stream based on the system information, and to discard data items of the data stream that would not be processed within the time constraint.
  • 11. System according to claim 10, comprising: a load control unit configured to maintain the time constraint, wherein the time constraint is variable over time.
  • 12. System according to claim 11, wherein the load management module is configured to send the system information to the load control unit, and wherein the load control unit is configured to distribute the current time constraint required for the stream processing network to the load management module.
  • 13. System according to claim 12, wherein the processing unit comprises: n selected processing elements, where n>1.
  • 14. System according to claim 13, wherein the load management modules of the n selected processing elements are configured to receive, from the load control unit, the system information of the processing unit including the load information of the n selected processing elements.
  • 15. System according to claim 13, wherein the load management modules of the n processing elements are configured to exchange the system information of the processing unit including the load information of the n selected processing elements.
  • 16. Method according to claim 1, comprising: maintaining the time constraint by a load control unit, wherein the time constraint is variable over time.
  • 17. Method according to claim 1, comprising: prior to the step of discarding the data items, estimating the time in which the data items would be transmitted.
  • 18. Method according to claim 8, comprising: adapting the number of processing elements n based on the load information of the n selected processing elements.
Priority Claims (1)
  • Number: 14175459.8
  • Date: Jul. 2, 2014
  • Country: EP
  • Kind: regional