This application relates to a logic device for enforcing financial compliance rules and to a network interface device adapted to support such a logic device.
Recent years have seen a marked increase in automated trading on stock markets, to the extent that today almost all financial trading has been computerised and trade execution automated, with even the most basic trades being algorithmically assisted. Hand in hand with computerisation has been a huge increase in the speeds at which trading algorithms operate and the frequency at which the algorithms trade on the markets. This has led to something of a technological arms race between market participants.
Very high trading speeds are now required in order to respond in a timely manner to market data and execute trades without being outperformed by other dealers who are able to trade at a shorter timescale. Market data can include the baseline data reported by the financial exchange in its data feeds, such as the buy and sell price of securities, as well as the trades and offers of other participants in the market. Responses to market data may also be required in order to provide a number of fundamental market functions, including acting as a market maker and maintaining a position in the exchange's book, and discovering liquidity and prices.
For all of these reasons, trading engines are increasingly being connected directly to exchange market data feeds and the algorithms themselves are largely required to operate autonomously. In order to react, process and trade within very short timescales (often under 10 microseconds) these algorithms simply do not have time to communicate internally with other computer systems, or even with other threads on the same machine. These timescales make it difficult to effectively monitor and control the algorithms and all but rule out human intervention as a useful way of preventing dangerous trading behaviour. At scale, managing these trading engines becomes even more problematic, particularly tracking the position of a particular algorithm, assessing risk, and controlling an algorithm that has breached risk boundaries or is otherwise exhibiting erroneous behaviour. The penalty for failure to manage trading engines can be catastrophic to the trading enterprise and to the market, as demonstrated in August 2012 by the rapid trading disruption caused by Knight Capital's automated trading activities.
There is therefore a need for a mechanism by which risk boundaries can be enforced for high-speed trading engines.
Concerns about the ingress and egress data flows to and from a network entity may extend beyond trading applications. For example a server on a network may become compromised and may send out malicious or erroneous messages to other end-points on a network. In another example, a network entity may receive compromised communications from further network entities or endpoints which may affect the functionality of the receiving network entity.
Therefore the trustworthiness of communications to and from an entity on a network may also be of concern.
According to a first aspect, there is provided a logic device for intercepting a data flow from a network source to a network destination, the logic device comprising: a data store holding a set of compliance rules and corresponding actions; a packet inspector configured to inspect the intercepted data flow and identify from the data store a compliance rule associated with the inspected data flow; and a packet filter configured to, when the data flow is identified as being associated with a compliance rule, carry out an action with respect to the data flow corresponding to the compliance rule.
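By way of illustration only, the following software sketch models the data store, packet inspector and packet filter of this aspect; the rule predicate, action names and flow fields are hypothetical, and an actual logic device would typically realise this functionality in reconfigurable hardware rather than in software.

```python
# Illustrative sketch only: a software model of the first aspect.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ComplianceRule:
    matches: Callable[[dict], bool]   # predicate over a parsed data-flow characteristic
    action: str                       # corresponding action, e.g. "block", "allow", "redirect"

class LogicDevice:
    def __init__(self, rules: list):
        self.rules = rules            # data store holding compliance rules and actions

    def inspect(self, flow: dict) -> Optional[ComplianceRule]:
        # Packet inspector: identify a compliance rule associated with the intercepted flow.
        for rule in self.rules:
            if rule.matches(flow):
                return rule
        return None

    def filter(self, flow: dict) -> str:
        # Packet filter: carry out the action corresponding to the identified rule.
        rule = self.inspect(flow)
        return rule.action if rule else "allow"

# Example: block any data flow directed to a particular network destination.
device = LogicDevice([ComplianceRule(lambda f: f["dst"] == "203.0.113.50", "block")])
print(device.filter({"dst": "203.0.113.50"}))   # -> block
print(device.filter({"dst": "198.51.100.7"}))   # -> allow
```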
The compliance rules may be configured to ensure compliance of at least one of said network source and network destination with allowed behaviour.
The packet inspector may be further configured to identify the compliance rule associated with the data flow by parsing the received data flow to determine a characteristic of that data flow.
A compliance rule may identify a characteristic within the data flow for which there is an associated action. The characteristic may identify a network destination for which there is an associated action. The associated action may be blocking a data flow directed to the network destination. The associated action may be allowing a data flow directed to the network destination to continue to be transmitted to the network.
The characteristic may be an identity of a network source for which there is an associated action. The associated action may be blocking a data flow received from the network source. The associated action may be allowing a data flow received from the network source to be sent to the network destination.
The characteristic may be an amount of data transferred for a data flow and the compliance rule may indicate a limit on the amount of data that may be transferred for a data flow. The characteristic may be an identity of an application level request for which there is an associated action.
The associated action may be one of: allowing a data flow comprising the application level request to be transmitted to the network destination; blocking a data flow comprising the application level request; and rewriting the application level request in a data flow to a second application level request.
The associated action may be to redirect a data flow to a further destination. The associated action may comprise duplicating at least part of the data flow and forwarding said duplicated at least part of the data flow to a further entity. The duplicated at least part of the data flow may form a message digest. The message digest may comprise at least time stamp information.
The data store may further comprise an interface configured to receive updates to the compliance rules and associated actions from a further entity. The logic device may be further configured to authenticate an update from the further entity. The logic device may be further configured to at least one of encrypt and decrypt an update from the further entity. The update may be authenticated using at least one key accessible to the logic device. The update may be digitally signed.
The updates to the compliance rules may be received from a control engine. The control engine may be part of a data processing device comprising the logic device. The logic device may communicate with the control engine across a network. The logic device may form part of a network interface device. The network interface device and control engine may be configured to carry out a challenge response authentication protocol.
A rule update may comprise a rule indicating that the network interface device is not to accept further rule updates. The logic device may refuse further rule updates in response to a rule update indicating that further rule updates are to be rejected.
The logic device may perform protocol processing. The logic device may perform protocol processing on a data flow to determine the characteristic. The logic device may perform protocol processing on an encrypted data flow. The logic device may perform SSL (secure sockets layer) protocol processing. The logic device may perform upper layer network protocol processing. The logic device may perform upper layer network protocol processing to determine a characteristic of the data flow in accordance with the compliance rule.
The data store may be further configured to store state information and the packet inspector may be configured to identify from the data store a compliance rule associated with the inspected data flow based at least partially on the state information. The logic device may be a reconfigurable hardware device. The logic device may be an FPGA.
The network source may be a first virtual operating system and the network destination may be a second virtual operating system, the first and second virtual operating systems being supported by a data processing system. The network source and the network destination may be configured to communicate over a network. The network source and the network destination may be network end-points.
According to a second aspect, there may be provided a network interface device for communicating between a data processing system and a network, comprising: the logic device of the first aspect.
According to a third aspect, there is provided a method comprising: intercepting a data flow from a network source to a network destination; storing, at a data store, a set of compliance rules and corresponding actions; inspecting the intercepted data flow and identifying from the data store a compliance rule associated with the inspected data flow; and, when the data flow is identified as being associated with a compliance rule, carrying out an action with respect to the data flow corresponding to the compliance rule.
The method may further comprise: identifying the compliance rule associated with the data flow by parsing the received data flow to determine a characteristic of that data flow.
The method may further comprise: identifying a characteristic within the data flow for which there is an associated action. The characteristic may identify a network destination for which there is an associated action.
Carrying out an action with respect to the data flow may comprise blocking a data flow directed to the network destination. Carrying out an action with respect to the data flow may comprise allowing a data flow directed to the network destination to continue to be transmitted to the network. The characteristic may be an identity of a network source for which there is an associated action. Carrying out an action with respect to the data flow may comprise blocking a data flow received from the network source. Carrying out an action with respect to the data flow may comprise allowing a data flow received from the network source to be sent to the network destination.
The characteristic may be an amount of data transferred for a data flow and the compliance rule may indicate a limit on the amount of data that may be transferred for a data flow. The characteristic may be an identity of an application level request for which there is an associated action.
Carrying out an action with respect to the data flow may comprise at least one of: allowing a data flow comprising the application level request to be transmitted to the network destination; blocking a data flow comprising the application level request; and rewriting the application level request in a data flow to a second application level request.
Carrying out an action with respect to the data flow may comprise redirecting a data flow to a further destination. Carrying out an action with respect to the data flow may comprise duplicating at least part of the data flow and forwarding said duplicated at least part of the data flow to a further entity.
The method may further comprise: authenticating an update to the compliance rules from the further entity. The method may further comprise at least one of encrypting and decrypting an update to the compliance rules from a further entity. The method may further comprise: authenticating an update using at least one key accessible to the logic device. The method may further comprise digitally signing the update.
The method may further comprise: storing state information associated with the compliance rules; and identifying a compliance rule associated with the inspected data flow based at least partially on the state information.
According to a fourth aspect, there is provided a logic device for enforcing financial compliance rules comprising: a first interface for receiving a flow of trade orders from a financial trading engine and a second interface for transmitting a filtered flow of trade orders onto a network; a data store holding a set of compliance rules and corresponding filter actions, each compliance rule expressing a predetermined trading pattern and each filter action specifying an action that is to be performed in response to the predetermined trading pattern being identified; a packet inspector configured to parse a received trade order and identify in the data store a compliance rule relating to the trade order; a matching engine arranged to match the identified compliance rule to the received trade order; and a packet filter configured to, in response to a match being found between the trade order and the identified compliance rule, perform the filter action corresponding to the identified compliance rule.
It will be appreciated that compliance rules may relate to any allowed network behaviour. It will also be appreciated that the compliance filter may intercept a trade order or other data flow.
The matching engine may be configured to, if the identified compliance rule expresses a trading pattern over a plurality of trade orders, maintain state for the compliance rule in the event that the received trade order does not match the compliance rule, the state being sufficient to allow the matching engine to identify when a subsequently received trade order matches the compliance rule.
The matching engine may be configured to maintain the state for the compliance rule at the data store. The matching engine may be configured to maintain as the state for the compliance rule data derived from predetermined values of the received trade order, the predetermined values being specified in the identified compliance rule.
The packet inspector may be configured to, on parsing the received trade order, form an abstraction of the trade order from one or more predetermined values of the trade order and use the abstraction to identify in the data store the compliance rule relating to the trade order.
The matching engine may be arranged to match the identified compliance rule to the received trade order by matching the identified compliance rule to the abstraction of the received trade order. The matching engine may be arranged to match the identified compliance rule to the received trade order by matching the identified compliance rule to the abstraction of the received trade order and state maintained for that compliance rule.
The abstraction may include one or more of the following values of the trade order: an identifier of the security to which the trade relates; a timestamp; an identifier of the type of trade; a value expressing the number of units involved in the trade; and a price specified in the trade.
Each trade order may be carried within a data packet that includes identifiers of a source network endpoint from which the trade order originates and a destination network endpoint to which the trade order is directed, the abstraction optionally including identifier(s) of one or both of the source and destination network endpoints.
The packet inspector may be configured to parse the received trade order by parsing one or both of a financial message representing the received trade order and headers of a data packet carrying the financial message. The packet inspector may be configured to identify the compliance rule in the data store using identifiers of source and/or destination network endpoints in the headers of the data packet.
The predetermined trading pattern expressed by the compliance rule may be a prohibited trading pattern and the corresponding filter action is to prevent the trade order being transmitted over the second interface. The predetermined trading pattern expressed by the compliance rule may be indicative of trade orders that exceed predefined trading limits and the corresponding filter action is one or more of: preventing the trade order being transmitted over the second interface; duplicating the trade order and causing a copy to be sent to a predefined network endpoint; redirecting the trade order to a predefined network endpoint.
The predefined trading limits may be, optionally with respect to a specified security only, one or more of: a trade order price range or value; a trade order price that differs from a specific reference price by a maximum number of units; a maximum rate of change in price; a maximum rate at which trades are performed; a total value of trades that can be performed within a predetermined length of time.
The predefined trading limits may be a trade order price range or value and the trade order price range or value is dynamically set with respect to a specified market reference price.
The logic device may further comprise a command channel, the logic device being operable to dynamically update the set of compliance rules at the data store in response to commands received over the command channel. The command channel may be supported over the network and the logic device may be directly responsive to network packets carrying commands that are received over the command channel from a control application.
The logic device may be further operable to block all trade orders from being transmitted over the second interface on receipt of a predefined control message over the command channel.
The compliance rule may define a list of approved securities to which trade orders are permitted to relate, and the corresponding filter action is to allow to be transmitted over the second interface only those trade orders that relate to a security on the approved list. The first interface may be adapted to receive trade orders carried as payload of at least partially formed network data packets. The second interface may be a media access control unit configured to complete logical processing of the filtered, at least partially formed network data packets. The first and second interfaces may be one and the same interface.
According to a fifth aspect, there is provided a network interface device comprising: a controller; a physical layer unit for connection to a network; and a logic device as per any of the above aspects.
The logic device may be located on a network data path between the controller and physical layer unit.
The present application will now be described by way of example with reference to the accompanying drawings, in which:
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art.
The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present application. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
A schematic diagram illustrating flows of data within a data processing system configured in accordance with the present application is shown in
In the example shown in
On receiving trade orders 114, a processing module 101 of compliance filter 100 parses the incoming trade orders to identify patterns in the trade orders that match one or more predefined compliance rules stored in table 103. The table further indicates a corresponding action that is to be performed by the processing module on a match being identified to a compliance rule. Such compliance rules can be used to ensure that trade orders transmitted from the data processing system remain within predefined limits.
An action could for example cause the processing module to block the transmission of trade orders that match the stored rules. This could be useful to prevent trading engine 104 from exceeding predetermined trading limits. A control application 110 could be provided and configured to program the rules into table 103, as indicated by data flow 115.
By way of a first example, suppose trading engine 104 is trading on a security XYZ according to an automated trading algorithm and in response to financial data feeds 111 received over network 105 from a financial exchange. Since the trading engine can perform trades without manual intervention and at high speed, control application 110 is used to enter a set of rules into table 103 that express limits on the trades that can be placed by the engine. For instance, the rules might express a maximum number of buy orders that the engine is permitted to make on security XYZ within a predetermined period of time. Trade orders 114 are therefore examined at processing module 101 in accordance with the stored rules to check that the number of buy orders has not been exceeded for the trading period. Any trade orders that exceed the maximum number of buy orders could be discarded by the compliance filter and not transmitted in data flow 112 to the financial exchange accessible over network 105. Alternatively, those prohibited trade orders could be redirected over data flow 113 to a different endpoint (e.g. control application 110) so as to permit monitoring of the trade orders intended by the trading engine and without allowing those trade orders to be transmitted to the financial exchange. This can aid the debugging of trading algorithms whose behaviour triggers the rules of the compliance filter.
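A minimal software sketch of such a rule is given below; the trade-order field names and the limit values are hypothetical, and the check is shown as a sliding-window count of buy orders for a single security.

```python
# Sketch of the first example: cap the number of buy orders for a security
# within a rolling time window. Field names and limits are hypothetical.
from collections import deque

class BuyOrderCap:
    def __init__(self, symbol: str, max_buys: int, window_s: float):
        self.symbol, self.max_buys, self.window_s = symbol, max_buys, window_s
        self.times = deque()          # timestamps of buy orders allowed through

    def action(self, order: dict, now: float) -> str:
        if order["symbol"] != self.symbol or order["side"] != "buy":
            return "allow"
        while self.times and now - self.times[0] > self.window_s:
            self.times.popleft()      # drop buys that have left the window
        if len(self.times) >= self.max_buys:
            return "block"            # or "redirect" to a monitoring endpoint
        self.times.append(now)
        return "allow"

cap = BuyOrderCap("XYZ", max_buys=100, window_s=60.0)
print(cap.action({"symbol": "XYZ", "side": "buy", "qty": 10}, now=0.5))  # -> allow
```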
A compliance filter 200 configured in accordance with the present application is shown in
The compliance filter comprises a packet inspector 201, a matching engine 202 and a packet filter 203. A data store 204 includes rules indicating patterns in the trade orders received from trading engine 206 and corresponding actions that are to be performed by the compliance filter on a pattern being identified.
Packet inspector 201 is arranged to parse incoming trade orders received over interface 207 so as to enable the relevant rules for each trade order to be identified in data store 204. Rules could be selected from data store 204 on the basis of any suitable identifiers of the received trade orders. Such identifiers could include one or more of a representation of a security to which the trade relates (e.g. a security symbol), a timestamp, an identifier of the type of trade (e.g. buy, sell, put, call etc.), a value expressing the number of units involved in the trade (e.g. a number of shares), and a price (e.g. an offer price for each unit or the trade as a whole). Different rules could be defined in data store 204 for different data flows from the trading engine. For example, different rules could be defined for each of the various securities traded by the trading engine, with the packet inspector being configured to identify a trading symbol from each trade order and the symbol being used as the basis of a lookup into the data store. Or, to give a second example, different rules could be applied to flows of trade orders directed to different endpoints (e.g. different financial exchanges, banks or market makers).
It is envisaged that multiple compliance rules could apply to a trade order. For example, a given trade order might relate to a particular security for which rules have been defined so as to catch predetermined trading patterns, as well as there being blanket rules that relate to all trades from a given trading engine.
Preferably the packet inspector is configured to perform a lookup into data store 204 on the basis of a predetermined set of trade order identifiers, which could be formed into an abstraction of each trade order. The abstraction could include any data from a trade order and/or data representing the source of the order (e.g. an identifier of the trading engine) and/or data representing the target for the order (e.g. an identifier of the remote financial exchange to which the order is directed). Data representing the source/target of an order would typically be included in the header of a data packet carrying that trade order.
In its simplest form a trade order abstraction could include part or all of a trade order message. In preferred embodiments, an abstraction could be derived from a set of predetermined values extracted from the respective trade order so as to form a data string representing the trade order in a format optimised for efficient lookup operations in the data store. It can be advantageous if data store 204 includes a lookup table and the abstractions are formatted so as to permit efficient lookup operations in that table. This allows the rules and corresponding actions relevant to a particular data flow (e.g. a stream of trade orders relating to a particular exchange or security) to be identified at low latency. The format of the abstraction generated for trade orders of a given data flow at packet inspector 201 could be specified in data store 204.
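By way of illustration, the sketch below forms such an abstraction as a fixed tuple of predetermined values and uses it as the key for a table lookup; the chosen fields, endpoint identifiers and rule names are hypothetical.

```python
# Sketch of forming a trade-order "abstraction" used as the lookup key into the
# rule table. A fixed tuple of predetermined values permits an O(1) dict lookup,
# standing in for the hardware lookup table of data store 204.
def abstraction(order: dict, dst_endpoint: str) -> tuple:
    return (dst_endpoint, order["symbol"], order["side"])

rule_table = {
    ("exchange-A", "XYZ", "buy"): "apply-XYZ-buy-limits",
    ("exchange-A", "XYZ", "sell"): "apply-XYZ-sell-limits",
}

key = abstraction({"symbol": "XYZ", "side": "buy", "qty": 100, "price": 10.2}, "exchange-A")
print(rule_table.get(key))   # -> apply-XYZ-buy-limits
```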
In the event that the abstractions formed at the packet inspector include all of the information necessary to identify the required pattern(s), the rules identified at data store 204 could be performed by the matching engine on the abstractions formed for the trade orders of a data flow, rather than on the trade orders themselves.
The rules stored in data store 204 that are identified by packet inspector 201 are used by matching engine 202 to identify patterns in the received trade orders that match predefined patterns expressed by those rules. Such pattern matching could be performed on a per-trade order basis or over multiple trade orders. State relating to a particular data flow of trade orders could be stored at data store 204 so as to allow patterns to be identified in a sequence of trade orders received from trading engine 206. On a predefined pattern being identified in a trade order/sequence of trade orders, packet filter 203 performs the action specified in the data store 204 that corresponds to the identified pattern. Matching engine 202 could be configured to perform pattern matching in accordance with any suitable pattern matching algorithms.
Examples of patterns expressed by rules stored at data store 204 that matching engine 202 could be arranged to identify in individual trade orders received from trading engine 206 include:
And examples of patterns expressed by rules stored at data store 204 that matching engine 202 could be arranged to identify over multiple trade orders received from trading engine 206 include:
The above examples are merely illustrations of some of the patterns in trade orders that the matching engine could be configured to identify. More generally, any kind of pattern could be identified in trade orders, including combinations of the patterns set out in the above examples.
In order to perform rules that require patterns to be identified over multiple trade orders, the matching engine is configured to maintain state sufficient to allow the matching engine to perform such rules over a sequence of trade orders. The matching engine could be configured to store the state at data store 204. For example, if a rule causes the matching engine to monitor the total number of units bought of a particular security in received trade orders, the matching engine would maintain state identifying the total number of units bought of that security and update that state on receiving new trade orders for the purchase of further units of that security. This would allow the matching engine to identify when a predetermined cap has been reached (i.e. the pattern expressed by the rule) and to, in response, perform a corresponding action identified in the data store. Thus, for compliance rules that express trading patterns over more than one trade order, the matching engine is configured to match compliance rules using state stored for that compliance rule. Such compliance rules would preferably specify the values of trade orders from which the state is to be derived.
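A minimal sketch of such a stateful rule is shown below, assuming hypothetical field names; the state held for the rule is the running total of units bought, which would in practice be maintained at data store 204.

```python
# Sketch of a stateful compliance rule that tracks the cumulative number of
# units bought of one security and blocks orders once a cap is reached.
class UnitsBoughtCap:
    def __init__(self, symbol: str, cap: int):
        self.symbol, self.cap = symbol, cap
        self.bought = 0               # state maintained for this rule

    def action(self, order: dict) -> str:
        if order["symbol"] != self.symbol or order["side"] != "buy":
            return "allow"
        if self.bought + order["qty"] > self.cap:
            return "block"            # the pattern expressed by the rule has been matched
        self.bought += order["qty"]   # update state from the values specified by the rule
        return "allow"

rule = UnitsBoughtCap("XYZ", cap=1_000)
print(rule.action({"symbol": "XYZ", "side": "buy", "qty": 600}))  # -> allow
print(rule.action({"symbol": "XYZ", "side": "buy", "qty": 600}))  # -> block
```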
The rules defining the trading limits that are to be enforced by the compliance filter are written to data store 204 by a control application, which could be embodied in software or hardware (e.g. control application 110 in
In alternative embodiments, a control application for the compliance filter could be located on a network. This could be network 205, with interfaces 208 and 209 being the same interface and control messages for the compliance filter being received over that common interface from the network. Or interface 209 could be distinct from interface 208 and connected to network 205 or another network. Whether the control channel (i.e. the path over which the compliance filter communicates with the control application) is a distinct or shared interface, it is preferably directly responsive to data packets carrying control data that are received over the network. This could require the compliance filter to perform at least some receive processing of network packets but ensures that control of the compliance filter can be performed at low latency.
The control application would typically be configured to update the rules at data store 204 as and when required in order to enforce the level of control required by the operator of the trading platform (i.e. the computer system supporting trading engine 206 and compliance filter 200). For example, the control application could update the rules at the data store before the relevant markets open each day. The control application could be configured to constantly update any rules that depend on dynamic market parameters, such as trading limits set with respect to data received over financial data feeds. In order to ensure that the parameters are maintained at low latency, such dynamic parameters in data store 204 could be updated by a component of the compliance filter itself—e.g. if the compliance filter is supported at a reconfigurable logic device then a firmware module could be defined to dynamically update rule parameters in response to data feeds received over network 205.
In response to matching engine 202 establishing a match to a pattern defined by the stored rules, packet filter 203 is configured to perform a corresponding action defined in data store 204. Examples of the actions that could be performed by packet filter 203 include:
It can be advantageous if an action is to be performed only on a certain type of trade order. For example, in response to a particular pattern of trades, the packet filter could be configured to discard all buy orders relating to a particular security but allow through any sell orders for that security such that the trading engine unwinds its holding of that security.
There could be multiple actions defined for a given flow of trade orders, the action being performed depending on which corresponding rule has been triggered. For example, a first rule could express a first trading limit which, when met, causes the packet filter to duplicate the data flow to a predefined endpoint at which a monitoring unit is located for monitoring the trades being performed by the trading engine. On a second trading limit expressed by a second rule being met by the trading flow, a corresponding action could be defined to terminate that data flow and discard all trades relating to that data flow received from the trading engine.
It can be useful if compliance filter 200 further provides a mechanism by which all or specified trading data flows originating at the trading engine can be rapidly terminated by a control application, for example in the event that the trading engine becomes unresponsive or exhibits erroneous behaviour. Further reasons why the compliance filter might terminate all trading flows include situations in which a raft of trading boxes are controlled simultaneously, or where trading needs to be halted for reasons that have not been caught by the compliance filter. A dedicated control command could be defined which, when provided to the compliance filter by a control application causes the compliance filter to block all or specified (e.g. in parameters of the control command) data flows from being transmitted onto network 205.
In the event that an action is triggered to cause trade orders to be redirected to a control application, it can in some circumstances (despite the additional latency) be advantageous to configure the control application to review the trade orders generated by the trading engine and determine whether those trade orders can be permitted onto network 205. The control application could perform this determination automatically by means of a set of algorithms, or the control application could permit manual intervention. Algorithms executed by the control application could be significantly more sophisticated than the rules performed by the compliance filter (the control application could be software supported at the host data processing system at which the trading engine and compliance filter are located).
It can be advantageous if one of the actions that the compliance filter is configured to perform in response to a rule being met is to cause a maximum trading frequency limit to be imposed on the trading engine. For example, the compliance filter could cause a message to be transmitted to the trading engine to update a maximum trading frequency parameter held at the trading engine. Such a parameter could be held for each data flow of the system. The compliance filter could alternatively cause a maximum trading frequency limit to be imposed on the trading engine by means of a control application as discussed above. The maximum frequency for a given data flow might be specified by the financial exchange with which the trades are being performed. As a backstop, rules could also be defined at the data store in accordance with the teaching herein so as to cause the compliance filter to identify data flows on which trade orders are being generated at a rate that exceeds a specified maximum trading frequency; in response the compliance filter would perform a corresponding action, such as preventing all trade orders of that data flow from being transmitted onto the network.
Compliance filter 200 is located between trading engine 206 and network 205 and configured to perform filtering of trade orders received from the trading engine in accordance with compliance rules at data store 204. In the example shown in
In preferred embodiments of the present application, the compliance filter is configured to receive at interface 207 data packets comprising trade order messages formed at trading engine 206. The data packets are at least partially formed for transmission over network 205. Thus, packet inspector 201 is configured to parse received data packets in order to identify the payload trade orders carried therein and the rules at data store 204 applicable to each trade order. This could require the compliance filter to perform aspects of network protocol processing. For example, the compliance filter could be configured to ensure that the sequence numbers of data packets remain in order when trade orders are discarded, and the compliance filter could be configured to calculate or cause to be recalculated any checksums over data packets that comprise discarded or amended trade orders. In certain use scenarios, the compliance filter might be required to perform full termination of one or more network protocols such as TCP/IP, in addition to the messaging protocols in accordance with which the trade orders are formed, such as FIX or OUCH.
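By way of illustration of the message-level parsing, the sketch below extracts from a FIX-style tag=value order the fields used for the rule lookup; the tag numbers shown follow common FIX conventions (35 MsgType, 55 Symbol, 54 Side, 38 OrderQty, 44 Price), and the surrounding transport framing, sequence-number handling and checksum recalculation described above are omitted.

```python
# Illustrative parse of a FIX-style tag=value trade order into the fields the
# packet inspector needs for its rule lookup.
SOH = "\x01"   # standard FIX field delimiter

def parse_fix(msg: str) -> dict:
    fields = dict(f.split("=", 1) for f in msg.strip(SOH).split(SOH))
    return {
        "type": fields.get("35"),
        "symbol": fields.get("55"),
        "side": {"1": "buy", "2": "sell"}.get(fields.get("54")),
        "qty": int(fields.get("38", "0")),
        "price": float(fields.get("44", "0")),
    }

order = parse_fix("35=D\x0155=XYZ\x0154=1\x0138=100\x0144=10.25\x01")
print(order)   # {'type': 'D', 'symbol': 'XYZ', 'side': 'buy', 'qty': 100, 'price': 10.25}
```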
Rules at data store 204 might in fact operate over all or part of a data packet carrying a trade order. For example, rules might simply express a predefined rule that is to be applied to all trade orders received from a particular trading engine, e.g. from a particular source address and port in the case of an IP network.
By arranging the compliance filter close to the network such that it operates on at least partially formed network data packets carrying trade orders, the compliance filter can be used to perform in-line filtering of trade orders generated at high speed in hardware. The compliance filter is preferably further configured so as to parallelise its operations through pipelining and is configured to ensure very tight bounds on the jitter imposed by its operations. Interface 208 represents an interface to the network 205, and could include or represent a link to one or more logic and physical layer network protocol processing units required to effect the transmission of completed network data packets onto the fabric of network 205, such as a media access control unit (MAC) and physical layer unit (PHY).
Interface 207 could be any kind of interface suitable to the architecture. For example, in the case that trading engine 206 and compliance filter 200 are firmware modules of a reconfigurable logic device (as shown in
A particularly advantageous embodiment of the present application is shown in
Note that in all of the embodiments described herein the packet inspector 201, matching engine 202 and packet filter 203 schematically represent functionality of the compliance filter. The packet inspector 201, matching engine 202 and packet filter 203 need not be discrete units and could together be formed of one or more functional units. For example, the packet inspector 201 and matching engine 202 could be a single unit arranged to perform both parsing and pattern matching of trade orders. In the case that the compliance filter is a reconfigurable logic device, packet inspector 201, matching engine 202 and packet filter 203 could comprise one or more firmware modules configured to provide the functionality described herein. Alternatively the compliance filter could be implemented as non-deterministic finite state automata which are generated (compiled) from the set of compliance rules.
Various examples will now be described by way of illustration as to combinations of pattern matching and filtering actions that compliance filter 200 can be configured to perform.
In a first example, rules are maintained at data store 204 such as to cause packet inspector 201 to identify trade orders offering to sell a security XYZ at more than 5% below a market reference (e.g. the price offered by a given market maker). The packet inspector identifies this rule when it performs a lookup into the data store using symbol XYZ parsed from a received trade order that relates to the sale of XYZ. The rule is performed by the matching engine which compares the sell price in the trade order to the market reference value maintained at the data store. If a match is found (i.e. the sell price is more than 5% below the reference) the matching engine causes the packet filter to perform the corresponding action in data store 204, which in this case is to prevent that trade order from being transmitted over network 205.
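A software sketch of this first example rule is given below; the field names and the reference price are hypothetical.

```python
# Sketch of the first example rule: block a sell order whose price is more
# than 5% below a market reference price held in the data store.
def check_sell_order(order: dict, reference_price: float, max_below_pct: float = 5.0) -> str:
    if order["side"] != "sell":
        return "allow"
    floor = reference_price * (1.0 - max_below_pct / 100.0)
    return "block" if order["price"] < floor else "allow"

print(check_sell_order({"symbol": "XYZ", "side": "sell", "price": 9.40}, reference_price=10.00))  # -> block
print(check_sell_order({"symbol": "XYZ", "side": "sell", "price": 9.60}, reference_price=10.00))  # -> allow
```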
In a second example, rules are maintained at data store 204 such as to cause packet inspector 201 to identify, in a data flow of trade orders relating to security XYZ, the sell price of security XYZ dropping by more than 5% in an hour. The packet inspector therefore maintains state representing historical information as to the sell price offered for security XYZ. On a trade order being received, the packet inspector parses the trade order to identify the security to which it relates and performs a lookup using the symbol XYZ in the data store to find the corresponding rule. If the received trade order relates to the sale of security XYZ, the packet inspector identifies the above rule and passes it to matching engine 202 which examines the trade order against the historical price information and, when the pattern expressed by the rule is identified, causes the packet filter to perform the corresponding action defined in the data store. The action in this case is to redirect the trade order and all subsequent trade orders relating to that security to a control application running a sandbox exchange at which the trade orders can be safely performed and monitored. This redirection is achieved by modifying the destination address in the header of the data packet comprising the trade order in order to cause the data packet to be delivered to a network endpoint of the control application instead of to the intended financial exchange.
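The sketch below illustrates the stateful check of this second example, maintaining an hour of price history for the security and redirecting the flow once the drop exceeds 5%; in hardware the redirection would be effected by rewriting the destination address in the packet header as described above, and the redirect target named here is hypothetical.

```python
# Sketch of the second example rule: detect a sell price dropping by more than
# 5% within an hour and redirect all subsequent orders for that security.
from collections import deque

class PriceDropRule:
    def __init__(self, max_drop_pct: float = 5.0, window_s: float = 3600.0):
        self.max_drop_pct, self.window_s = max_drop_pct, window_s
        self.history = deque()        # (timestamp, sell price) pairs held as rule state
        self.tripped = False

    def action(self, price: float, now: float) -> str:
        if self.tripped:
            return "redirect:sandbox-exchange"   # all subsequent orders are redirected
        while self.history and now - self.history[0][0] > self.window_s:
            self.history.popleft()
        self.history.append((now, price))
        high = max(p for _, p in self.history)
        if high > 0 and (high - price) / high * 100.0 > self.max_drop_pct:
            self.tripped = True
            return "redirect:sandbox-exchange"
        return "allow"

rule = PriceDropRule()
print(rule.action(10.00, now=0))      # -> allow
print(rule.action(9.40, now=1800))    # -> redirect:sandbox-exchange (6% drop in the hour)
```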
In a third example, rules are maintained at data store 204 such as to cause packet inspector 201 to identify across all data flows received from a trading engine (e.g. from a particular source address and port) when the total value of all shares bought on a trading day exceeds $10 million. This is achieved by arranging that the packet inspector maintain state representing a sum of the total value of all shares bought (the value being zeroed at the start of a trading day). On receiving a trade order the packet inspector parses the data packet carrying the trade order so as to identify the source address and port from which the packet originates. This information is used as a lookup into the data store in order to identify to the matching engine which rule is to be performed on trade orders from that trading engine. In this case the rule only results in the corresponding action being performed by the packet filter when the cap is exceeded (i.e. the rule is satisfied by a trade order from the trading engine), the action being to block all trade orders from that trading engine from being sent out over the network.
In the foregoing, the data processing system is described as providing a compliance filter 100 between a trading engine 104 and a network 105. It will however be appreciated that the disclosure of the present application is not restricted to a trading engine and the checking of compliance of trade orders.
A compliance filter according to further embodiments of the application may be applicable to other applications requiring security. For example, a compliance filter may be provided to monitor the traffic of a network end-point and/or application other than a trading engine. The compliance filter may be used to configure a network interface to accept only data flows to/from nominated end-points, monitor and restrict network traffic and/or provide an audit trail for activity. Such a compliance filter may for example prevent unauthorized traffic from reaching a server. Additionally or alternatively a compromised server may be restricted from outputting damaging network traffic or distributing unauthorized content.
In
At the application layer there is provided an application or network end-point 504 that is configured to send and receive data over the network 105 via the peripheral device 107. The end-point 504 may, for example, be a server in some embodiments; however, it will be appreciated that the end-point may relate to other applications.
The peripheral device 107 may comprise a compliance filter 100 which may monitor the data sent to and received from the end-point 504 for security purposes. The compliance filter 100 may receive, from the end-point 504, data 114 that is to be transmitted over the network, and may transmit to the end-point 504 data 512 received over the network.
The compliance filter may further receive data 511 addressed to the end-point 504 over the network and transmit data 112 from the end-point 504 to the network. The compliance filter 100 may comprise a processing module 101 and a table 103.
In operation, the compliance filter 100 may intercept data destined for the end-point 504 from the network and intercept data destined for the network from the end-point 504. This intercepted data may be checked for compliance with rules that have been set at the compliance filter, and the compliance filter may process the data accordingly.
Similarly to the embodiment of
An action could for example cause the processing module to block the transmission of data 114 that matches the stored rules. This could be useful to prevent a security compromised end-point 504 from sending out compromised or malicious data.
In addition to the data processing system of
On receiving the data 511 from the network, the processing module 101 of compliance filter 100 may parse the incoming data 511 to identify patterns in the data 511 that match one or more predefined incoming compliance rules stored in table 103. The table may further indicate a corresponding action that is to be performed by the processing module on a match being identified to a compliance rule.
An action could for example cause the processing module to block passing the incoming data 511 that matches the stored rules onto the end-point 504. This could be useful to prevent malicious data or messages from reaching the end-point 504.
Additionally, action may be taken both for data incoming to the end-point 504 and for data outgoing to the network 105. For example, certain data for which a match is determined may be forwarded, or copied and forwarded, to a further entity. This is shown for example by the dotted lines 513 and 113 of
In some embodiments, the data flow may be audited at the further entity. In this case, the data flow may continue on to its end destination as well as being provided to the further entity for recording and/or auditing.
It will be appreciated that, while the table 103 has been described as comprising both outgoing and incoming compliance rules, one or more of these rules may be applicable to both incoming and outgoing data. Additionally, as described in relation to the first embodiment, the compliance filter 100 may carry out some protocol processing. In this case, the compliance filter 100 may additionally carry out protocol processing on incoming data.
Where the incoming or outgoing traffic is encrypted, the compliance filter may be required to decrypt the flows in order to apply the rules. With many common protocols, this would require the compliance filter to fully process (terminate) the encrypted flows, using for example protocols such as SSL. In doing so, the filter would be required to participate in key exchange with the remote endpoint in order to establish the flows. Optionally the filter may re-encrypt or compress the data which is to be passed onwards (to the host).
Similarly to the first embodiment, a control application 110 could be provided and configured to program the rules into table 103, as indicated by data flow 115.
Similarly to the filter of
Additionally, the compliance filter 600 comprises an ingress network interface 602 which may receive data from the network 105 addressed to the end-point 504 and an egress host interface 601 which may provide data 512 from the compliance filter 600 to the end-point 504.
The ingress host interface 607 and egress host interface 601 may form the same interface in some embodiments. The ingress network interface 602 and egress network interface 608 may form the same interface in some embodiments. Additionally, the host and network interfaces could be one and the same—for example, the interfaces could be a network interface over which the end-point 504 is accessible, with data being received at and transmitted from the compliance filter over that common interface. Compliance filter 600 is preferably a reconfigurable logic device, such as an FPGA.
The compliance filter comprises a packet inspector 201, a matching engine 202 and a packet filter 203. A data store 204 includes rules indicating patterns in the data received from network 105 and from the end-point 504 and corresponding actions that are to be performed by the compliance filter on a pattern being identified.
It will be appreciated that in this embodiment, packets from the network 105 and packets from the end-point 504 received at the compliance filter 600 may be similarly processed and checked for compliance against the rules stored in the data store 204. However, it will be appreciated that in this embodiment, the rules may not be specific to trading and may encompass further rules.
The operation of the compliance filter 600 of this embodiment may be similar to that of the compliance filter 200 of the first embodiment, with the addition of filtering both incoming and outgoing data. It will be appreciated that the actions taken in response to a match being found in the filter, as well as the compliance rules governing the compliance filter, may be similar.
Examples of the actions that could be performed by packet filter 203 of the second embodiment include one or more of the following:
There could be multiple actions defined for a given data flow, the action being performed depending on which corresponding rule has been triggered. In some embodiments, the type of action may be dependent on a characteristic of the data, for example an end-point to which the data is directed.
It will also be appreciated that the rules applied for filtering data may be dependent on the security characteristics of the end-points and network entities to which the data is destined and/or from which it is received.
Examples of the rules that could be programmed into the data store 204 of the second embodiment include one or more of the following:
In some cases, where the application protocol is carried over a byte stream protocol such as the transmission control protocol (TCP), the compliance filter 100 may account for segmentation and out-of-order arrival of network frames as part of the matching algorithm. In other cases, the rules may be matched against each network frame individually.
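A minimal sketch of such stream reassembly is shown below; real TCP processing (sequence-number wraparound, retransmission and window handling) is omitted, and rules would be matched only against the contiguous, in-order bytes.

```python
# Sketch of reassembling a TCP byte stream before applying message-level rules,
# so that a message split across segments, or segments arriving out of order,
# can still be matched correctly.
class StreamReassembler:
    def __init__(self, initial_seq: int):
        self.next_seq = initial_seq
        self.pending = {}             # seq -> bytes, for out-of-order segments
        self.buffer = b""             # contiguous, in-order application bytes

    def segment(self, seq: int, payload: bytes) -> bytes:
        self.pending[seq] = payload
        while self.next_seq in self.pending:
            data = self.pending.pop(self.next_seq)
            self.buffer += data
            self.next_seq += len(data)
        return self.buffer            # rules are matched against these bytes only

r = StreamReassembler(initial_seq=1000)
r.segment(1005, b"WORLD")             # arrives first, held back
print(r.segment(1000, b"HELLO"))      # -> b"HELLOWORLD" once the gap is filled
```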
The data processing system 500 of
It will be appreciated that in some embodiments the control application may be present on a control device that may be external to the data processing system. In one embodiment the control engine may reside elsewhere on a network, for example at a security monitor and/or auditor. The control engine may run for example on a trusted host forming part of the network and/or at a switch.
In one example the control engine may run on the host of the data processing system. The control engine may be an application running on the host. The application may run in user space in some embodiments and, in other embodiments, may run at the operating system as part of the kernel. Additionally, in some cases the control application may run on a system management processor such as a baseboard management controller (BMC).
In other words, the control engine may form part of the data processing system, or it may be separate from the data processing system and communicated with over the network. It will be appreciated that the control engine itself may receive further commands from elsewhere, for example from a further network entity such as the output of a cluster of computers which is performing network threat analysis.
The rules may be programmed into the table and/or existing rules may be updated or removed. In one example, updates to the rules from the control engine may be made according to external information, for example databases (such as Norse) which are created from analysis of information gathered from the wide area. In another example, updates to the rules from the control engine 110 may be made according to locally available information, for example traffic flows (such as connection rates) observed at the local network interface card housing the compliance filter or at monitoring points located in the local data center.
The security of the process of providing rules to the data store may affect the integrity of the compliance filter. In one embodiment, rules may be written to the data store and the compliance filter may receive a command which instructs the filter to never accept new rule updates. In this case, the compliance filter may be considered very highly secure because the rules may never be tampered with.
In a further embodiment, the rules may be updated and/or written in a secure manner. For example, the compliance filter 100 may not operate in a trusted environment and authentication of updates from the control engine may be implemented. In some embodiments, the communication between the compliance filter 100 and the control engine 110 may be authenticated. In some embodiments, the control engine 110 communicates with the network interface card (NIC) on which the compliance filter is implemented.
In a first example, this authentication of the communication from the control engine 110 may be provided by the NIC containing a key stored in a tamper resistant manner (for example in read only memory (ROM) or non-volatile random access memory (NVRAM) on the silicon) and updates may only be accepted if they are verified by the NIC as being digitally signed by that key.
In a further example, a number of keys may be used, some used only by the manufacturer to revoke or install the new keys that customers use to update the rules. This would allow each customer, or each site, to use their own digital signatures.
In a further embodiment, the NIC hardware may be provided as being tamper resistant. This may be achieved by ensuring that there are no diagnostic interfaces on the NIC which can be used to inject new firmware, that any keys are stored in a manner that cannot be read, and/or that all firmware loads from external flash are also digitally signed.
The NIC firmware may further be able to participate in a digitally signed challenge-response authentication protocol. This would enable the external control engine 110 (or other such device) to prove that the NIC has not been physically replaced with another NIC. Here it is advantageous that the network packets for this protocol are sent over the primary network interface.
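By way of illustration, the sketch below shows a challenge-response exchange in which the control engine verifies the NIC using a shared secret and HMAC-SHA-512; the digitally signed variant described above could equally use an asymmetric key pair, and the key value shown is hypothetical.

```python
# Minimal sketch of a challenge-response exchange by which the control engine
# verifies the identity of the NIC. A shared secret and HMAC-SHA-512 stand in
# for the digitally signed responses described above.
import hmac, hashlib, os

SHARED_KEY = b"provisioned-into-tamper-resistant-storage"   # hypothetical key material

def nic_respond(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha512).digest()

def control_engine_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha512).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)            # a fresh nonce prevents replay of old responses
print(control_engine_verify(challenge, nic_respond(challenge)))   # -> True
```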
In some cases, controlling the authentication of the process of writing the rules to the data store may satisfy the security requirements of the system; however, in some cases it may be desired to keep the rules themselves secret. For example, even though an untrusted entity may be unable to write or modify the rules written to the data store, it may benefit from learning what the rules are in order to circumvent them. In this case, the rules may be encrypted rather than merely signed.
It will be appreciated that any suitable cryptography may be used. For example, keyed hashing with a cryptographic hash function such as SHA-512 may be implemented, in which the NIC (and compliance filter) and control engine share a key. For digital signing, the RSA cryptosystem may be implemented, for example by providing a public-private key pair. In a further or additional example, the Advanced Encryption Standard (AES) may be implemented for the encryption of messages between the control engine and the NIC (compliance filter).
An example application of the present embodiment will now be described. In this example, the present embodiment may be used to insulate the NIC from denial of service attacks. Denial of service attacks are attempts to make a server and/or network unavailable to the users of that server and/or network. This may be achieved by overloading the server and/or network with messages and requests.
In this example, traffic from particular hosts or subnets may be accepted but modified in such a manner as to reduce the effectiveness of the attacker whilst other action is being taken to remove the flows from the NIC ingress. In this case, the rules may be programmed to identify the particular host or subnet and an action may be associated therewith to modify the incoming data from the host or subnet. In another example, the traffic may simply be blocked to prevent it reaching its destination.
The compliance filter may receive data and filter the data to identify a match in the table. This match may correspond to a rule regarding a particular host or subnet, or for example a rule regarding a request rate that may be received from a particular host or subnet. An action associated with the rule and/or the identity of the sender may be carried out.
In one example the action may cause the matched data flows to be redirected to another server (or a sacrificial CPU on this server) which will handle them but very slowly and therefore without impacting traffic on the protected server.
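A sketch of such a rule is given below; the subnet, rate limit and action label are hypothetical, and the addresses use documentation ranges.

```python
# Sketch of a denial-of-service rule: traffic whose source falls within a
# suspect subnet, or that exceeds a request-rate limit, is redirected to a
# sacrificial server rather than the protected end-point.
import ipaddress

SUSPECT = ipaddress.ip_network("203.0.113.0/24")

def dos_action(src_ip: str, requests_last_second: int, rate_limit: int = 1000) -> str:
    if ipaddress.ip_address(src_ip) in SUSPECT or requests_last_second > rate_limit:
        return "redirect:sacrificial-server"
    return "allow"

print(dos_action("203.0.113.7", requests_last_second=5))      # -> redirect:sacrificial-server
print(dos_action("198.51.100.9", requests_last_second=20))    # -> allow
```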
In another example, the flows may be accepted but the transport level protocol manipulated in order to burden the attacker, for example to cause the attacker to constantly retransmit data or continually time out.
It will be appreciated that this may also be carried out by a NIC with respect to an end-point on its own host, if it is determined that a server or an end-point has been compromised. This may help to prevent damage to other end-points communicating with the compromised host end-point and/or server.
In another example, the compliance filter may be implemented in a virtualized environment. Such a virtualized environment may comprise a server executing multiple guest operating systems over a hypervisor. The guest operating systems may communicate with each other via a NIC. The communications between the guests may be checked to ensure that they comply with network security rules. In this case, the compliance filter may be implemented in the NIC with compliance rules governing the communication between the guest operating systems. In other words, the compliance rules will govern the allowed behaviour of the guest operating system end-points.
The traffic between guest operating systems may pass through the NIC (for example, PCI virtual functions of the NIC can be directly mapped into each guest, allowing each guest to send packets to the NIC) and the compliance filter may implement the rule matching process. It will be appreciated that in this case the data flow will not be directed to the network but rather from one guest operating system to another via the NIC. For example, a guest operating system transmits a data flow to the NIC, the data flow is checked by the compliance filter at the NIC, and the NIC transmits the data flow to the further guest operating system. In this manner, the audit trail and all other features could be applied as they were in the communication between a host and a network via a NIC.
In the foregoing, the compliance filter has been described as being implemented as part of the NIC and/or in programmable hardware such as an FPGA. It will further be appreciated that the compliance filter may alternatively be implemented on a CPU with dedicated hardware assistance, for example a hardware parser and a hash lookup engine.
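As a rough software model of that parse-then-look-up arrangement (the parser extracting the header fields and the hash lookup engine resolving them to a rule), the sketch below parses the IPv4/TCP 5-tuple and looks it up in a hash table of rules. The offsets follow the standard IPv4 and TCP header layouts; the table contents and function names are assumptions for the example.

```python
# Illustrative sketch only: parse the IPv4/TCP 5-tuple from a packet and
# resolve it to a compliance rule via a hash-table lookup.
import socket
import struct

RULE_TABLE = {}  # hash lookup: 5-tuple -> rule/action, filled by the control engine


def parse_5tuple(packet: bytes):
    """Extract (src_ip, dst_ip, proto, src_port, dst_port) from an IPv4 packet."""
    ihl = (packet[0] & 0x0F) * 4          # IPv4 header length in bytes
    proto = packet[9]
    src_ip = socket.inet_ntoa(packet[12:16])
    dst_ip = socket.inet_ntoa(packet[16:20])
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return (src_ip, dst_ip, proto, src_port, dst_port)


def lookup(packet: bytes):
    """Return the compliance rule for this flow, or None if unmatched."""
    return RULE_TABLE.get(parse_5tuple(packet))


# Example: a minimal IPv4 header followed by the TCP source/destination ports.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                  socket.inet_aton("203.0.113.9"), socket.inet_aton("198.51.100.7"))
seg = struct.pack("!HH", 40000, 443)
RULE_TABLE[("203.0.113.9", "198.51.100.7", 6, 40000, 443)] = "BLOCK"
print(lookup(hdr + seg))  # BLOCK
```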
There may be some circumstances in which the compliance filter, or some aspects of the filter, could be implemented in software running on the host itself. This would require both a trust model which accepts host-software-based security enforcement and a performance requirement which is within the capability of the available CPUs. One example of such a configuration could be a server whose BIOS environment runs over a trusted hypervisor. In this case the hypervisor might contain the compliance filter (and receive updates via network side-band management to the server's management interfaces). In another case, the compliance filter could be implemented as an operating system component. In such cases, the level of trust of the host or operating system would be considered before choosing such a deployment over a hardware deployment.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present application may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the application.
The present application is a continuation of and claims the benefit under 35 U.S.C. § 120 of U.S. patent application Ser. No. 14/165,371, titled “LOCKED DOWN NETWORK INTERFACE” and filed Jan. 27, 2014, which is herein incorporated by reference in its entirety. U.S. application Ser. No. 14/165,371 claims the benefit under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 61/809,777, titled “KILL SWITCH” and filed Apr. 8, 2013, which is herein incorporated by reference in its entirety.
U.S. Patent Documents
Number | Name | Date | Kind |
---|---|---|---|
5272599 | Koenen | Dec 1993 | A |
5325532 | Crosswy et al. | Jun 1994 | A |
5946189 | Koenen et al. | Aug 1999 | A |
6098112 | Ishijima et al. | Aug 2000 | A |
6160554 | Krause | Dec 2000 | A |
6304945 | Koenen | Oct 2001 | B1 |
6349035 | Koenen | Feb 2002 | B1 |
6438130 | Kagan et al. | Aug 2002 | B1 |
6502203 | Barron et al. | Dec 2002 | B2 |
6530007 | Olarig et al. | Mar 2003 | B2 |
6667918 | Leader et al. | Dec 2003 | B2 |
6718392 | Krause | Apr 2004 | B1 |
6728743 | Shachar | Apr 2004 | B2 |
6735642 | Kagan et al. | May 2004 | B2 |
6768996 | Steffens et al. | Jul 2004 | B1 |
6904534 | Koenen | Jun 2005 | B2 |
6950961 | Krause et al. | Sep 2005 | B2 |
6978331 | Kagan et al. | Dec 2005 | B1 |
7093158 | Barron et al. | Aug 2006 | B2 |
7099275 | Sarkinen et al. | Aug 2006 | B2 |
7103626 | Recio et al. | Sep 2006 | B1 |
7103744 | Garcia et al. | Sep 2006 | B2 |
7136397 | Sharma | Nov 2006 | B2 |
7143412 | Koenen | Nov 2006 | B2 |
7149227 | Stoler et al. | Dec 2006 | B2 |
7151744 | Sarkinen et al. | Dec 2006 | B2 |
7216225 | Haviv et al. | May 2007 | B2 |
7240350 | Eberhard et al. | Jul 2007 | B1 |
7245627 | Goldenberg et al. | Jul 2007 | B2 |
7254237 | Jacobson et al. | Aug 2007 | B1 |
7285996 | Fiedler | Oct 2007 | B2 |
7316017 | Jacobson et al. | Jan 2008 | B1 |
7346702 | Haviv | Mar 2008 | B2 |
7386619 | Jacobson et al. | Jun 2008 | B1 |
7403535 | Modi et al. | Jul 2008 | B2 |
7404190 | Krause et al. | Jul 2008 | B2 |
7502826 | Barron et al. | Mar 2009 | B2 |
7509355 | Hanes et al. | Mar 2009 | B2 |
7518164 | Smelloy et al. | Apr 2009 | B2 |
7519602 | Klemba et al. | Apr 2009 | B2 |
7551614 | Teisberg et al. | Jun 2009 | B2 |
7554993 | Modi et al. | Jun 2009 | B2 |
7573967 | Fiedler | Aug 2009 | B2 |
7580415 | Hudson et al. | Aug 2009 | B2 |
7580495 | Fiedler | Aug 2009 | B2 |
7617376 | Chadalapaka et al. | Nov 2009 | B2 |
7631106 | Goldenberg et al. | Dec 2009 | B2 |
7636703 | Taylor | Dec 2009 | B2 |
7650386 | McMahan et al. | Jan 2010 | B2 |
7653754 | Kagan et al. | Jan 2010 | B2 |
7685436 | Davis et al. | Mar 2010 | B2 |
7688853 | Santiago et al. | Mar 2010 | B2 |
7702629 | Cytron et al. | Apr 2010 | B2 |
7757232 | Hilland et al. | Jul 2010 | B2 |
7801027 | Kagan et al. | Sep 2010 | B2 |
7802071 | Oved | Sep 2010 | B2 |
7813460 | Fiedler | Oct 2010 | B2 |
7827442 | Sharma et al. | Nov 2010 | B2 |
7835375 | Sarkinen et al. | Nov 2010 | B2 |
7848322 | Oved | Dec 2010 | B2 |
7856488 | Cripe et al. | Dec 2010 | B2 |
7864787 | Oved | Jan 2011 | B2 |
7890692 | Pandya | Feb 2011 | B2 |
7904576 | Krause et al. | Mar 2011 | B2 |
7921178 | Haviv | Apr 2011 | B2 |
7929539 | Kagan et al. | Apr 2011 | B2 |
7930437 | Kagan et al. | Apr 2011 | B2 |
7934959 | Rephaeli et al. | May 2011 | B2 |
7945528 | Cytron et al. | May 2011 | B2 |
7954114 | Chamberlain et al. | May 2011 | B2 |
7978606 | Buskirk et al. | Jul 2011 | B2 |
7996348 | Pandya | Aug 2011 | B2 |
8000336 | Harel | Aug 2011 | B2 |
8006280 | Hildebrand et al. | Aug 2011 | B1 |
8156101 | Indeck et al. | Apr 2012 | B2 |
8326816 | Colle et al. | Dec 2012 | B2 |
8374986 | Indeck et al. | Feb 2013 | B2 |
8423789 | Poo et al. | Apr 2013 | B1 |
8473316 | Panzitta | Jun 2013 | B1 |
8789135 | Pani | Jul 2014 | B1 |
8942131 | Falk | Jan 2015 | B2 |
9325559 | Wieland | Apr 2016 | B1 |
9547824 | Indeck et al. | Jan 2017 | B2 |
10158377 | Indeck et al. | Dec 2018 | B2 |
20020059052 | Bloch et al. | May 2002 | A1 |
20020112139 | Krause et al. | Aug 2002 | A1 |
20020129293 | Hutton et al. | Sep 2002 | A1 |
20020140985 | Hudson | Oct 2002 | A1 |
20020156784 | Hanes et al. | Oct 2002 | A1 |
20030007165 | Hudson | Jan 2003 | A1 |
20030033588 | Alexander | Feb 2003 | A1 |
20030058459 | Wu et al. | Mar 2003 | A1 |
20030063299 | Cowan et al. | Apr 2003 | A1 |
20030065856 | Kagan et al. | Apr 2003 | A1 |
20030081060 | Zeng et al. | May 2003 | A1 |
20030086300 | Noyes et al. | May 2003 | A1 |
20030117958 | Nation et al. | Jun 2003 | A1 |
20030172330 | Barron et al. | Sep 2003 | A1 |
20030191786 | Matson et al. | Oct 2003 | A1 |
20030202043 | Zeng et al. | Oct 2003 | A1 |
20030214677 | Bhaskar et al. | Nov 2003 | A1 |
20040010712 | Hui | Jan 2004 | A1 |
20040015502 | Alexander et al. | Jan 2004 | A1 |
20040071250 | Bunton et al. | Apr 2004 | A1 |
20040141642 | Zeng et al. | Jul 2004 | A1 |
20040156346 | O'Neill | Aug 2004 | A1 |
20040190533 | Modi et al. | Sep 2004 | A1 |
20040190538 | Bunton et al. | Sep 2004 | A1 |
20040190557 | Barron | Sep 2004 | A1 |
20040193734 | Barron et al. | Sep 2004 | A1 |
20040193825 | Garcia et al. | Sep 2004 | A1 |
20040210674 | Gbadegesin | Oct 2004 | A1 |
20040210754 | Barron et al. | Oct 2004 | A1 |
20040249998 | Rajagopalan et al. | Dec 2004 | A1 |
20040252685 | Kagan et al. | Dec 2004 | A1 |
20050008223 | Zeng et al. | Jan 2005 | A1 |
20050018221 | Zeng et al. | Jan 2005 | A1 |
20050038918 | Hilland et al. | Feb 2005 | A1 |
20050038941 | Chadalapaka et al. | Feb 2005 | A1 |
20050039171 | Avakian et al. | Feb 2005 | A1 |
20050039172 | Rees et al. | Feb 2005 | A1 |
20050039187 | Avakian et al. | Feb 2005 | A1 |
20050066333 | Krause et al. | Mar 2005 | A1 |
20050172181 | Huliehel | Aug 2005 | A1 |
20050219278 | Hudson | Oct 2005 | A1 |
20050219314 | Donovan et al. | Oct 2005 | A1 |
20050231751 | Wu et al. | Oct 2005 | A1 |
20050240593 | D'Alo et al. | Oct 2005 | A1 |
20050276262 | Schuba | Dec 2005 | A1 |
20060010252 | Miltonberger et al. | Jan 2006 | A1 |
20060026443 | McMahan et al. | Feb 2006 | A1 |
20060026679 | Zakas | Feb 2006 | A1 |
20060045098 | Krause | Mar 2006 | A1 |
20060101273 | Tan et al. | May 2006 | A1 |
20060126619 | Teisberg et al. | Jun 2006 | A1 |
20060136570 | Pandya | Jun 2006 | A1 |
20060165074 | Modi et al. | Jul 2006 | A1 |
20060174242 | Zhu | Aug 2006 | A1 |
20060193318 | Narasimhan et al. | Aug 2006 | A1 |
20060228637 | Jackson et al. | Oct 2006 | A1 |
20060248191 | Hudson et al. | Nov 2006 | A1 |
20070150954 | Shon | Jun 2007 | A1 |
20070188351 | Brown et al. | Aug 2007 | A1 |
20070195814 | Barrie et al. | Aug 2007 | A1 |
20070214504 | Milani Comparetti et al. | Sep 2007 | A1 |
20070220183 | Kagan et al. | Sep 2007 | A1 |
20070237327 | Taylor et al. | Oct 2007 | A1 |
20070260602 | Taylor | Nov 2007 | A1 |
20070266431 | Matsuda | Nov 2007 | A1 |
20070277036 | Chamberlain et al. | Nov 2007 | A1 |
20080021716 | Buss et al. | Jan 2008 | A1 |
20080024586 | Barron | Jan 2008 | A1 |
20080025515 | Coombs | Jan 2008 | A1 |
20080109526 | Subramanian et al. | May 2008 | A1 |
20080115216 | Barron et al. | May 2008 | A1 |
20080115217 | Barron et al. | May 2008 | A1 |
20080126509 | Subramanian et al. | May 2008 | A1 |
20080147828 | Enstone et al. | Jun 2008 | A1 |
20080148400 | Barron et al. | Jun 2008 | A1 |
20080177890 | Krause et al. | Jul 2008 | A1 |
20080244060 | Cripe et al. | Oct 2008 | A1 |
20080301406 | Jacobson et al. | Dec 2008 | A1 |
20080304519 | Koenen et al. | Dec 2008 | A1 |
20080310440 | Chen | Dec 2008 | A1 |
20090060197 | Taylor et al. | Mar 2009 | A1 |
20090106780 | Nord et al. | Apr 2009 | A1 |
20090165003 | Jacobson et al. | Jun 2009 | A1 |
20090182683 | Taylor et al. | Jul 2009 | A1 |
20090201926 | Kagan et al. | Aug 2009 | A1 |
20090204813 | Kwan | Aug 2009 | A1 |
20090213856 | Paatela et al. | Aug 2009 | A1 |
20090268612 | Felderman et al. | Oct 2009 | A1 |
20090287628 | Indeck et al. | Nov 2009 | A1 |
20090302923 | Smeloy et al. | Dec 2009 | A1 |
20100088437 | Zahavi | Apr 2010 | A1 |
20100115621 | Staniford | May 2010 | A1 |
20100125477 | Mousseau et al. | May 2010 | A1 |
20100138840 | Kagan et al. | Jun 2010 | A1 |
20100169880 | Haviv et al. | Jul 2010 | A1 |
20100188140 | Smeloy | Jul 2010 | A1 |
20100189206 | Kagan | Jul 2010 | A1 |
20100198850 | Cytron et al. | Aug 2010 | A1 |
20100265849 | Harel | Oct 2010 | A1 |
20100274876 | Kagan et al. | Oct 2010 | A1 |
20110004457 | Haviv et al. | Jan 2011 | A1 |
20110004771 | Matsushima et al. | Jan 2011 | A1 |
20110010557 | Kagan et al. | Jan 2011 | A1 |
20110022560 | Breiter et al. | Jan 2011 | A1 |
20110029669 | Chuang et al. | Feb 2011 | A1 |
20110029847 | Goldenberg et al. | Feb 2011 | A1 |
20110040701 | Singla et al. | Feb 2011 | A1 |
20110044344 | Hudson et al. | Feb 2011 | A1 |
20110058571 | Bloch et al. | Mar 2011 | A1 |
20110083064 | Kagan et al. | Apr 2011 | A1 |
20110096668 | Bloch et al. | Apr 2011 | A1 |
20110113083 | Shahar | May 2011 | A1 |
20110116512 | Crupnicoff et al. | May 2011 | A1 |
20110119673 | Bloch et al. | May 2011 | A1 |
20110173352 | Sela et al. | Jul 2011 | A1 |
20110178917 | Parsons et al. | Jul 2011 | A1 |
20110178918 | Parsons et al. | Jul 2011 | A1 |
20110178919 | Parsons et al. | Jul 2011 | A1 |
20110178957 | Parsons et al. | Jul 2011 | A1 |
20110184844 | Parsons et al. | Jul 2011 | A1 |
20110196695 | Ye et al. | Aug 2011 | A1 |
20110202983 | Pope et al. | Aug 2011 | A1 |
20120016833 | Janiesch et al. | Jan 2012 | A1 |
20120084806 | Fahrny et al. | Apr 2012 | A1 |
20120089496 | Taylor et al. | Apr 2012 | A1 |
20120089497 | Taylor et al. | Apr 2012 | A1 |
20120095893 | Taylor et al. | Apr 2012 | A1 |
20120131341 | Mane et al. | May 2012 | A1 |
20120201375 | Kisielewicz | Aug 2012 | A1 |
20120210158 | Akiyama et al. | Aug 2012 | A1 |
20120246052 | Taylor et al. | Sep 2012 | A1 |
20120246520 | Matsubara et al. | Sep 2012 | A1 |
20120290702 | Vincent | Nov 2012 | A1 |
20130000700 | Lee | Jan 2013 | A1 |
20130003741 | Singh | Jan 2013 | A1 |
20130042323 | Narayanaswamy et al. | Feb 2013 | A1 |
20130047160 | Conover | Feb 2013 | A1 |
20130054857 | Horman et al. | Feb 2013 | A1 |
20130339947 | Neary et al. | Dec 2013 | A1 |
20140053280 | Durazzo et al. | Feb 2014 | A1 |
20140064293 | Deisinger | Mar 2014 | A1 |
20140115578 | Cooper et al. | Apr 2014 | A1 |
20140304802 | Pope et al. | Oct 2014 | A1 |
20150326542 | Serebrin | Nov 2015 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
0 620 521 | Oct 1994 | EP |
2 843 897 | Mar 2015 | EP |
WO 01048972 | Jul 2001 | WO |
WO 02035838 | May 2002 | WO |
WO 2008127672 | Oct 2008 | WO |
WO 2009134219 | Nov 2009 | WO |
WO 2009136933 | Nov 2009 | WO |
WO 2010020907 | Feb 2010 | WO |
WO 2010087826 | Aug 2010 | WO |
WO 2011043769 | Apr 2011 | WO |
WO 2011053305 | May 2011 | WO |
WO 2011053330 | May 2011 | WO |
Other Publications
Entry |
---|
EP 18214201.8, dated May 17, 2019, Extended European Search Report. |
Extended European Search Report for European Application No. 18214201.8 dated May 17, 2019. |
Jang et al., Implementation of a Hybrid TCP/IP Offload Engine Prototype. Advances in Computer Systems Architecture Lecture Notes in Computer Science. Springer. 2005;464-77. |
EP 14163938.5—Extended European Search Report dated Feb. 3, 2015, 8 pages. |
Extended European Search Report for European Application No. 17187578.4 dated Jan. 5, 2018. |
U.S. Appl. No. 14/248,082—Notice of Allowance dated Apr. 8, 2014, 9 pages (LVL5 2038-2). |
O. Angin, et al.; “Report on the 5th IFIP International Workshop on Quality of Service (IWQOS 97),” ACM Computer Communication Review, vol. 27, No. 3, pp. 100-117, Jul. 1997. |
A. Edwards, et al.; “User-Space Protocols Deliver High Performance to Applications on a Low-Cost Gb/s LAN,” ACM Computer Communication Review, vol. 24, No. 4, pp. 14-23, Oct. 1994. |
A. Edwards, S. Muir; “Experiences Implementing A High-Performance TCP in User-Space,” ACM Computer Communication Review, vol. 25, No. 4, pp. 196-205, Oct. 1995. |
A. Romanow and S. Floyd; “The Dynamics of TCP Traffic over A TM Networks,” ACM Computer Communication Review, vol. 24, No. 4, pp. 79-88, Oct. 1994. |
Adams et al., Internet X.509 Public Key Infrastructure Time Stamp Protocol (TSP). Internet Draft. PKIX Working Group. 2001. 26 pages. |
Andrew D. Birrell, et al.; “Grapevine: An Exercise in Distributed Computing,” Communications of the ACM, vol. 25, Issue 4, pp. 260-274, Apr. 1982. |
Andy Currid; “TCP Offload to the Rescue,” ACM Queue, vol. 2, No. 3, pp. 58-65, May 1, 2004. |
B. Leslie, et al.; “User-level device drivers: Achieved performance,” J. Comput. Sci. & Technol., vol. 20, pp. 1-17, Sep. 2005. |
Babak Falsafi, et al.; “Application-Specific Protocols for User-Level Shared Memory,” Proceedings of the 1994 conference on Supercomputing, pp. 380-389, Washington D.C.; Nov. 14, 1994. |
Bilic Hrvoye, et al.; “Deferred Segmentation for Wire-Speed Transmission of Large TCP Frames over Standard GbE Networks,” Proceedings of the 9th Symposium on High Performance Interconnects, 5 pages, Aug. 22, 2001. |
Bilic Hrvoye, et al.; “Presentation given at HOTI'01,” 9th Symposium on High Performance Interconnects, 9 pages, Aug. 22, 2001. |
Boon S. Ang, et al.; “StarT-Voyager: A Flexible Platform for Exploring Scalable SMP Issues,” Proceedings of the 1998 ACM/IEEE conference on Supercomputing, Orlando, Florida, 19 pages, Nov. 7, 1998. |
Bruce Lowekamp, et al.; “Topology Discovery for Large Ethernet Networks,” ACM Computer Communication Review, vol. 31, No. 4, pp. 237-248, Oct. 2001. |
Bruce S. Davie; “A Host-Network Interface Architecture for ATM,” ACM Computer Communication Review, vol. 21, No. 4, pp. 307-315, Sep. 1991. |
C. A. Thekkath, et al.; “Implementing Network Protocols at User Level,” ACM Computer Communication Review, vol. 23, No. 4, pp. 64-132, Oct. 1993. |
C. Brendan S. Traw, et al.; “A High-Performance Host Interface for ATM Networks,” ACM Computer Communication Review, vol. 21, No. 4, pp. 317-325, Sep. 1991. |
C. Kline; “Supercomputers on the Internet: A Case Study,” ACM Computer Communication Review, vol. 17, No. 5, pp. 27-33, Aug. 1987. |
C. Partridge, J. Hughes, J. Stone; “Performance of Checksums and CRCS over Real Data,” ACM Computer Communication Review, vol. 25, No. 4, pp. 68-76, Oct. 1995. |
C. Traw and J. Smith; “Hardware/Software organization of a high performance ATM host interface,” IEEE Journal on Selected Areas in Communications, pp. 1-22, Feb. 1993. |
Celesti et al., A Remote Attestation Approach for a Secure Virtual Machine Migration in Federated Cloud Environments. Network Cloud Computing and Applications (NCAA). 2011 First International Symposium. Nov. 21, 2011:99-106. |
Charles Kalmanek; “A Retrospective View of ATM,” ACM Computer Communication Review, vol. 32, No. 5, pp. 13-19, Nov. 2002. |
Charles P. Thacker and Lawrence C. Stewart; “Firefly: a Multiprocessor Workstation,” ACM Operating Systems Review, vol. 21, Issue 4, pp. 164 -172, Oct. 1987. |
Cheng Jin, et al.; “FAST TCP: Motivation, Architecture, Algorithms, Performance,” Proceedings of IEEE Infocom 2004, 21 pages, Mar. 7, 2004. |
Chi-Chao Chang, et al.; “Low-Latency Communication on the IBM RISC System/6000 SP,” Proceedings of the 1996 ACM/IEEE conference on Supercomputing, Pittsburgh, pp. 1-17, Nov. 17, 1996. |
Chris Maeda, Brian Bershad; “Protocol Service Decomposition for High-Performance Networking,” ACM Operating Systems Review, vol. 27, Issue 5, 12 pages, Dec. 1993. |
Christopher A. Kent, Jeffrey C. Mogul; “Fragmentation Considered Harmful,” ACM Computer Communication Review, vol. 17, No. 5, pp. 75-87, Oct. 1987. |
Craig Partridge; “How Slow Is One Gigabit Per Second?,” ACM Computer Communication Review, vol. 20, No. 1, pp. 44-53, Jan. 1990. |
D. D. Clark and D. L. Tennenhouse; “Architectural Considerations for a New Generation of Protocols,” ACM Computer Communication Review, vol. 20, No. 4, pp. 200-208, Sep. 1990. |
D. L. Tennenhouse, D. J. Wetherall; “Towards an Active Network Architecture,” ACM Computer Communication Review, vol. 26, No. 2, pp. 5-18, Apr. 1996. |
Danny Cohen, et al.; “Use of message-based multicomputer components to construct gigabit networks,” ACM Computer Communication Review, vol. 23, No. 4, p. 32-44, Jul. 1993. |
Danny Cohen, Gregory Finn, Robert Felderman, Annette DeSchon; “ATOMIC: A Local Communication Network Created Through Repeated Application of Multicomputing Components,” Provided by Authors, pp. 1-21, Jan. 10, 1992. |
Danny Cohen, Gregory Finn, Robert Felderman, Annette DeSchon; “ATOMIC: A High-Speed Local Communication Architecture,” Journal of High Speed Networks; pp. 1-11, Jan. 3, 1994. |
David A. Borman; “Implementing TCP/IP on a Cray computer,” ACM Computer Communication Review, vol. 19, No. 2, pp. 11-15, Apr. 1989. |
David D. Clark, et al.; “An Analysis of TCP Processing Overhead,” IEEE Communications Magazine, vol. 27, No. 6, pp. 23-29, Jun. 1989. |
David D. Clark; “The Design Philosophy of the DARPA Internet Protocols,” ACM Computer Communication Review, vol. 18, No. 4, pp. 102-111, Aug. 1988. |
David R. Boggs, et al.; “Measured Capacity of an Ethernet: Myths and Reality,” ACM Computer Communication Review, vol. 18, No. 4, pp. 222-234, Aug. 1988. |
David R. Cheriton; “Sirpent: A High-Performance Internetworking Approach,” ACM Computer Communication Review, vol. 19, No. 4, pp. 158-169, Sep. 1989. |
David Wetherall; “10 Networking Papers: Readings for Protocol Design,” ACM Computer Communication Review, vol. 36, No. 3, pp. 77-78, Jul. 2006. |
Derek McAuley, Rolf Neugebauer; “A case for Virtual Channel Processors,” Proceedings of the ACM SIGCOMM 2003 Workshops, pp. 237-242, Aug. 2003. |
Derek Robert McAuley; “Protocol Design for High Speed Networks,” PhD Thesis, University of Cambridge, 104 pages, Sep. 1989. |
Dickman, L., “Protocol Offloading vs Onloading in High Performance Networks,” Hot Interconnects Panel, 14th Symposium on High Performance Interconnects, Aug. 23, 2006, 7 pages. |
E. Blanton and M. Allman; “On Making TCP More Robust to Packet Reordering,” ACM Computer Communication Review, vol. 32, No. 1, pp. 20-30, Jan. 2002. |
E. Ruetsche; “The Architecture of Gb/s Multimedia Protocol Adapter,” ACM Computer Communication Review, vol. 23, No. 3, pp. 59-68, Jul. 1993. |
Ed Anderson, et al.; “Performance of the CRAY T3E Multiprocessor,” Proceedings of the 1997 ACM/IEEE conference on Supercomputing, pp. 1 -17, San Jose, California; Nov. 16, 1997. |
Edward D. Lazowska, David A. Patterson; “Computing Research: A Looming Crisis,” ACM Computer Communication Review, vol. 35, No. 2, 2005, pp. 65-68, Jul. 2005. |
Eric C. Cooper, et al.; “Protocol Implementation on the Nectar Communication Processor,” ACM Computer Communication Review, vol. 20, No. 4, 10 pages, Sep. 1990. |
Erich Ruetsche and Matthias Kaiserswerth; “TCP/IP on the Parallel Protocol Engine,” Proceedings of the IFIP TC6/ WG6.4 Fourth International Conference on High Performance Networking IV; pp. 119-134. Dec. 14, 1992. |
F.F. Kuo; “The Aloha System,” ACM Computer Communication Review, vol. 4, No. 1, pp. 5-8, Jan. 1974. |
Gary S. Delp, et al.; “An Analysis of Memnet: An Experiment in High-Speed Shared-Memory Local Networking,” ACM Computer Communication Review, vol. 18, No. 4, p. 165-174, Aug. 1988. |
Gene Tsudik; “Message Authentication with One-Way Hash Functions,” ACM Computer Communication Review, vol. 22, No. 5, pp. 29-38, Oct. 1992. |
Geoffray, P., “Protocol Off-Loading vs On-Loading in High-Performance Networks,” Hot Interconnects 2006, 14th Symposium on High Performance Interconnects, Aug. 23, 2006, 5 pages. |
Gordon E. Moore; “Cramming more components onto integrated circuits,” Electronics, vol. 38, No. 8, 4 pages, Apr. 1, 1965. |
Greg Chesson; “The Evolution of XTP,” Proceedings of the Third International Conference on High Speed Networking, pp. 1-10, Nov. 1991. |
Greg Minshall, et al.; “Flow labelled IP over ATM: design and rationale,” ACM Computer Communication Review, vol. 36, No. 3, pp. 79-92, Jul. 2006. |
Greg Regnier, et al.; “ETA: Experience with an Intel Xeon Processor as a Packet Processing Engine,” IEEE Micro, vol. 24, No. 1, pp. 24-31, Jan. 2004. |
Greg Regnier, et al.; “TCP Onloading for Data Center Servers,” Computer, IEEE Computer Society, vol. 37, No. 11, pp. 48-58, Nov. 2004. |
Gregory G. Finn and Paul Mockapetris; “Netstation Architecture Multi-Gigabit Workstation Network Fabric,” Proceedings of InterOp '94, Las Vegas, Nevada; pp. 1-9, May 1994. |
Gregory G. Finn; “An Integration of Network Communication with Workstation Architecture,” ACM Computer Communication Review, vol. 21, No. 5, 12 pages, Oct. 1991. |
Gregory L. Chesson; “Declaration of Dr Gregory L Chesson in Alacritech v. Microsoft,” United States District Court, Northern District California, San Francisco Division, 289 pages, Feb. 4, 2005. |
H. K. Jerry Chu; “Zero-Copy TCP in Solaris,” Proceedings of the USENIX Annual Technical Conference, 13 pages, Jan. 1996. |
H. Kanakia and D. Cheriton; “The VMP Network Adapter Board (NAB): High-Performance Network Communication for Multiprocessors,” ACM Computer Communication Review, vol. 18, No. 4, pp. 175-187, Aug. 1988. |
Harvey J. Wassermann, et al.; “Performance Evaluation of the SGI Origin2000: A Memory-Centric Characterization of LANL ASCI Applications,” Proceedings of the 1997 ACM/IEEE conference on Supercomputing, pp. 1-11, San Jose, California; Nov. 16, 1997. |
Humaira Kamal, et al.; “SCTP versus TCP for MPI,” Proceedings of the 2005 ACM/IEEE conference on Supercomputing, Seattle, Washington, 14 pages, Nov. 12, 2005. |
Ian Leslie and Derek R. McAuley; “Fairisle: An ATM Network for the Local Area,” ACM Computer Communication Review, vol. 21, No. 4, pp. 327-336, Sep. 1991. |
Ian M. Leslie, et al.; “The Architecture of the Universe Network,” ACM Computer Communication Review, vol. 14, No. 2, pp. 2-9, Jun. 1984. |
Ian Pratt and Keir Fraser; “Arsenic: A User-Accessible Gigabit Ethernet Interface,” Proceedings of IEEE Infocom 2001, pp. 1-11; Apr. 22, 2001. |
J. Carver Hill; “Synchronizing Processors with Memory-Content-Generated Interrupts,” Communications of the ACM, vol. 16, No. 6, p. 350-351, Jun. 1973. |
J. Evans and T. Buller; “The End of History,” IEEE TCGN Gigabit Networking Workshop, 10 pages, Apr. 22, 2001. |
J. Vis; “A Simple LAN Performance Measure,” ACM Computer Communication Review, vol. 24, No. 1, pp. 7-11, Jan. 1994. |
J.C. Mogul; “The Case for Persistent-Connection HTTP,” ACM Computer Communication Review, vol. 25, No. 4, pp. 299-313, Oct. 1995. |
Jack B. Dennis and Earl C. Van Horn; “Programming Semantics for Multiprogrammed Computations,”Communications of the ACM, vol. 9, No. 3, pp. 143-155, Mar. 1966. |
Jeffrey C. Mogul; “TCP offload is a dumb idea whose time has come,” Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems, pp. 26-30, May 18, 2003. |
Jeffrey R. Michel; “The Design and Evaluation of an Off-Host Communications Protocol Architecture,” MSci Thesis, University of Virginia, 144 pages, Aug. 1993. |
Jenwei Hsieh, et al.; “Architectural and Performance Evaluation of GigaNet and Myrinet Interconnects on Clusters of Small-Scale SMP Servers,” Proceedings of the 2000 ACM/IEEE conference on Supercomputing, Dallas, Texas, pp. 1-9, Nov. 4, 2000. |
Jiuxing Liu, et al.; “Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics,” Proceedings of the 2003 ACM/IEEE conference on Supercomputing, Phoenix, Arizona, pp. 1-15, Nov. 15, 2003. |
Joe Touch, et al.; “Experiences with a Production Gigabit LAN,” Gigabit Networking Workshop '97 Meeting, Kobe, Japan, 10 pages, Apr. 1997. |
Joe Touch, et al.; “Host-based Routing Using Peer DMA,” Gigabit Networking Workshop '97 Meeting, Kobe, Japan, 2 pages, Apr. 1997. |
John M. McQuillan, et al.; “An Overview of the New Routing Algorithm for the ARPANET,” Proceedings of the 6th Data Communications Symposium, pp. 54-60, Nov. 1979. |
John Nagle; “Congestion Control in IP/TCP Internetworks,” ACM Computer Communication Review, vol. 14, No. 4, pp. 11-17, Oct. 1984. |
John Salmon, et al.; “Scaling of Beowulf-class Distributed Systems,” Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, Orlando, Florida, pp. 1-18, Nov. 7, 1998. |
Jon Crowcroft, Derek McAuley; “ATM: A Retrospective on Systems Legacy,” ACM Computer Communication Review, vol. 32, No. 5, pp. 11-21, Nov. 2002. |
Jon Crowcroft; “10 Networking Papers: Recommended Reading,” ACM Computer Communication Review, vol. 36, No. 2, pp. 31-32, Apr. 2006. |
Jonathan Kay and Joseph Pasquale; “The Importance of Non-Data Touching Processing Overheads in TCP/IP,” ACM Computer Communication Review, vol. 23, No. 4, 10 pages, Oct. 1993. |
Jonathan M. Smith and C. Brendan S. Traw; “Giving Applications Access to Gb/s Networking,” IEEE Network, vol. 7, Issue 4, 14 pages, Jul. 1993. |
Jonathan Smith; “The Influence of ATM on Operating Systems,” ACM Computer Communication Review, vol. 32, No. 5, pp. 29-37, Nov. 2002. |
Jonathan Stone, Craig Partridge; “When The CRC and TCP Checksum Disagree,” ACM Computer Communication Review, vol. 30, No. 4, 11 pages, Oct. 2000. |
Jose Carlos Sancho, et al.; “Quantifying the Potential Benefit of Overlapping Communication and Computation in Large-Scale Scientific Applications,” Proceedings of the 2006 ACM/IEEE conference on Supercomputing, Tampa, Florida, 40 pages, Nov. 11, 2006. |
Justin Hurwitz, Wu-chun Feng; “Initial End-to-End Performance Evaluation of 10-Gigabit Ethernet,” Proceedings of the 11th Symposium on High Performance Interconnects, 6 pages, Aug. 20, 2003. |
K. Kleinpaste, P. Steenkiste, B. Zill; “Software Support for Outboard Buffering and Checksumming,” ACM Computer Communication Review, vol. 25, No. 4, pp. 87-98, Oct. 1995. |
Ken Calvert; “Reflections on Network Architecture: an Active Networking Perspective,” ACM Computer Communication Review, vol. 36, No. 2, pp. 27-30, Apr. 2006. |
Kieran Mansley, et al.; “Getting 10 Gb/s from Xen,” Euro-Par Conference 2007, Rennes, France, 10 pages, Aug. 28, 2007. |
L. S. Brakmo, et al.; “TCP Vegas: New Techniques for Congestion Detection and Avoidance,” ACM Computer Communication Review, vol. 24, No. 4, pp. 24-35, Oct. 1994. |
M. Allman; “TCP Byte Counting Refinements,” ACM Computer Communication Review, vol. 29, No. 3, pp. 14-22, Jul. 1999. |
M. de Vivo, et al.; “Internet Vulnerabilities Related to TCP/IP and T/TCP,” ACM Computer Communication Review, vol. 29, No. 1, pp. 81-85, Jan. 1999. |
M. Kaiserswerth; “The Parallel Protocol Engine,” IEEE/ACM Transactions in Networking vol. 1, Issue 6, pp. 650-663, Dec. 1993. |
M.V. Wilkes and R.M. Needham; “The Cambridge Model Distributed System,” ACM SIGOPS Operating Systems Review, vol. 14, Issue 1, pp. 21-29, Jan. 1980. |
Margaret L. Simmons and Harvey J. Wasserman; “Performance Comparison of the Cray-2 and Cray X-MP/416 Supercomputers,” Proceedings of the 1988 ACM/IEEE conference on Supercomputing, pp. 288-295, Orlando, Florida; Nov. 12, 1988. |
Mark David Hayter; “A Workstation Architecture to Support Multimedia,” PhD Thesis, University of Cambridge, 111 pages, Sep. 1993. |
Mark Hayter, Derek McAuley; “The Desk Area Network,” ACM Operating Systems Review, vol. 25, Issue 4, pp. 1-11, Oct. 1991. |
Marvin Zelkowitz; “Interrupt Driven Programming,” Communications of the ACM, vol. 14, No. 6, pp. 417-418, Jun. 1971. |
Matthias Kaiserswerth; “The Parallel Protocol Engine,” IEEE/ACM Transactions in Networking vol. 1, Issue 6, pp. 650-663, Dec. 1993. |
Mengjou Lin, et al.; “Performance of High-Speed Network I/O Subsystems: Case Study of A Fibre Channel Network,” Proceedings of the 1994 conference on Supercomputing, Washington D.C.; pp. 174-183, Nov. 14, 1994. |
Michael J. Dixon; “System support for multi-service traffic,” University of Cambridge Computer Laboratory Technical Report, No. 245, pp. 1-108, Jan. 1992. |
Michael S. Warren, et al.; “Avalon: An Alpha/Linux Cluster Achieves 10 Gflops for $150k,” Proceedings of the 1998 ACM/IEEE conference on Supercomputing, Orlando, Florida, pp. 1-10, Nov. 7, 1998. |
Montry, G., “Open Fabrics Alliance,” www.openfabrics.org, Hot Interconnects 2006, 14th Symposium on High Performance Interconnects, Aug. 23, 2006, 8 pages. |
Murali Rangarajan, et al.; “TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance,” Technical Report DCR-TR-481, Computer Science Department, Rutgers University, 14 pages, Mar. 2002. |
Nanette J. Boden, et al.; “Myrinet: A Gigabit-per-Second Local-Area Network,” Draft of paper published in IEEE Micro, vol. 15, No. 1, pp. 1-15, Nov. 16, 1994. |
NR Adiga, et al.; “An Overview of the BlueGene/L Supercomputer,” Proceedings of the 2002 ACM/IEEE conference on Supercomputing, pp. 1-22, Baltimore; Nov. 16, 2002. |
P. Balaji, et al.; “Head-to-Toe Evaluation of High-Performance Sockets Over Protocol Offload Engines,” Proceedings of the IEEE International Conference on Cluster Computing, 2005, pp. 1-10, Sep. 2005. |
P. Druschel, et al.; “Experiences with a High-Speed Network Adaptor: A Software Perspective,” ACM Computer Communication Review, vol. 24, No. 4, pp. 2-13, Oct. 1994. |
P. Kermani and L. Kleinrock; “Virtual cut-through: A new computer communication switching technique,” Computer Networks, vol. 3, No. 4, pp. 267-286, Sep. 1979. |
Parry Husbands and James C. Hoe; “MPI-StarT: Delivering Network Performance to Numerical Applications,” Proceedings of the 1998 ACM/IEEE conference on Supercomputing, Orlando, Florida, 15 pages, Nov. 7, 1998. |
Pasi Sarolahti, et al.; “F-RTO: An Enhanced Recovery Algorithm for TCP Retransmission Timeouts,” ACM Computer Communication Review, vol. 33, No. 2, pp. 51-63, Apr. 2003. |
Patrick Crowley, et al.; “Characterizing Processor Architectures for Programmable Network Interfaces,” Proceedings of the 14th international conference on Supercomputing, Santa Fe, New Mexico, 12 pages, May 8, 2000. |
Patrick Geoffray; “A Critique of RDMA,” HPCWire article: http://www.hpcwire.com/features/17886984.html, 7 pages, Aug. 18, 2006. |
Paul E. McKenney and Ken F. Dove; “Efficient Demultiplexing of Incoming TCP Packets,” ACM Computer Communication Review, vol. 22, No. 4, pp. 269-279, Oct. 1992. |
Paul Ronald Barham; “Devices in a Multi-Service Operating System,” PhD Thesis, University of Cambridge, 142 pages, Jul. 1996. |
Paul V. Mockapetris, Kevin J. Dunlap; “Development of the Domain Name System,” ACM Computer Communication Review, vol. 18, No. 4, pp. 112-122, Aug. 1988. |
Peter Druschel and Larry L. Peterson; “Fbufs: A High-Bandwidth Cross-Domain Transfer Facility,” ACM Operating Systems Review, vol. 27, Issue 5, pp. 189-202, Dec. 1993. |
Peter Steenkiste; “Analyzing Communication Latency using the Nectar Communication Processor,” ACM Computer Communication Review, vol. 22, No. 4, pp. 199-209, Oct. 1992. |
Petrini, F., “Protocol Off-Loading vs On-Loading in High-Performance Networks,” Hot Interconnects 2006, 14th Symposium on High Performance Interconnects, Aug. 23, 2006, 4 pages. |
Philip Buonadonna, et al.; “An Implementation and Analysis of the Virtual Interface Architecture,” Proceedings of the 1998 ACM/IEEE conference on Supercomputing, Orlando, Florida, 20 pages, Nov. 7, 1998. |
Piyush Shivam, et al.; “EMP: Zero-copy OS-bypass NIC-driven Gigabit Ethernet Message Passing,” Proceedings of the 2001 ACM/IEEE conference on Supercomputing, Denver, CO, pp. 1-8, Nov. 10, 2001. |
R. Braden, et al.; “Computing the Internet Checksum,” ACM Computer Communication Review, vol. 19, No. 2, pp. 86-94, Apr. 1989. |
R. Bush, D. Meyer; “Some Internet Architectural Guidelines and Philosophy,” IETF Network Working Group, Request for Comments: 3439, pp. 1-25, Dec. 2002. |
R. J. Black, I. Leslie, and D. McAuley; “Experiences of Building an ATM Switch for the Local Area,” ACM Computer Communication Review, vol. 24, No. 4, pp. 158-167, Oct. 1994. |
Raj K. Singh, et al.; “A Programmable HIPPI Interface for a Graphics Supercomputer,” Proceedings of the 1993 ACM/IEEE conference on Supercomputing, pp. 124-132, Portland, Oregon; Nov. 15, 1993. |
Raj K. Singh, et al.; “A Programmable Network Interface for a Message-Based Multicomputer,” ACM Computer Communication Review, vol. 24, No. 3, pp. 8-17, Jul. 1994. |
Reginer, G., “Protocol Onload vs. Offload,” Hot Interconnects 2006, 14th Symposium on High Performance Interconnects, Aug. 23, 2006, 1 page. |
Robert M. Brandriff, et al.; “Development of a TCP/IP for the IBM/370,” ACM Computer Communication Review, vol. 15, No. 4, pp. 2-8, Sep. 1985. |
Robert Ross, et al.; “A Case Study in Application I/O on Linux Clusters,” Proceedings of the 2001 ACM/IEEE conference on Supercomputing, Denver, CO, 17 pages, Nov. 10, 2001. |
Ruttgen, nCipher Corporation Ltd. TM Eastern Europe. EEPE, Miedzyzdroje. Jun. 7-9, 2006. 28 pages. |
S. L. Pope, et al.; “Enhancing Distributed Systems with Low-Latency Networking,” Parallel and Distributed Computing and Networks, Brisbane, Australia, pp. 1-12, Dec. 1998. |
Sally Floyd; “TCP and Explicit Congestion Notification,” ACM Computer Communication Review, vol. 24, No. 5, p. 8-23, Oct. 1994. |
Sayantan Sur, et al.; “High-Performance and Scalable MPI over InfiniBand with Reduced Memory Usage: An In-Depth Performance Analysis,” Proceedings of the 2006 ACM/IEEE conference on Supercomputing, Tampa, Florida, 13 pages, Nov. 11, 2006. |
Srihari Makineni and Ravi Iyer; “Architectural Characterization of TCP/IP Packet Processing on the Pentium M Processor,” Proceedings of the 10th International Symposium on High Performance Computer Architecture, 11 pages, Feb. 14, 2004. |
Steve Muir and Jonathan Smith; “Piglet: A Low-Intrusion Vertical Operating System,” Technical Report MS-CIS-00-04, University of Pennsylvania, 2000, pp. 1-15, Jan. 2000. |
Steven J. Sistare, Christopher J. Jackson; “Ultra-High Performance Communication with MPI and the Sun Fire Link Interconnect,” Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p. 1-15, Baltimore; Nov. 16, 2002. |
Steven Pope, David Riddoch; “10Gb/s Ethernet Performance and Retrospective,” ACM Computer Communication Review, vol. 37, No. 2, pp. 89-92, Mar. 19, 2007. |
Stuart Wray, et al.; “The Medusa Applications Environment,” Proceedings of the International Conference on Multimedia Computing and Systems, Boston, MA, 9 pages, May 1994. |
Sumitha Bhandarkar, et al.; “LTCP: Improving the Performance of TCP in Highspeed Networks,” ACM Computer Communication Review, vol. 36, No. 1, pp. 41-50, Jan. 2006. |
Thomas Sterling, et al.; “Beowulf: A Parallel Workstation for Scientific Computation,” Proceedings of the 24th International Conference on Parallel Processing, pp. 1-4, Aug. 1995. |
Thorsten von Eicken, et al.; “U-Net: A User-Level Network Interface for Parallel and Distributed Computing,” ACM Operating Systems Review, vol. 29, Issue 5, pp. 40-53, Dec. 1995. |
Tom Kelly; “Scalable TCP: Improving Performance in Highspeed Wide Area Networks,” ACM Computer Communication Review, vol. 33, No. 2, pp. 83-91, Apr. 2003. |
V. Cerf, et al.; “Proposal for an International End-to-End Protocol,” ACM Computer Communication Review, vol. 6, pp. 63-89, Jan. 1976. |
Robert M. Metcalfe and David R. Boggs; “Ethernet: distributed packet switching for local computer networks,” Communications of the ACM, vol. 19, Issue 7, pp. 395-404, Jul. 1, 1976. |
V. Jacobson; “Congestion Avoidance and Control,” ACM Computer Communication Review, vol. 18, No. 4, pp. 157-173, Aug. 1988. |
Various forum members; “MPI: A Message-Passing Interface Standard,” Message-Passing Interface Forum, University of Tennessee, Knoxville, 236 pages, May 5, 1994. |
Vinay Aggarwal, et al.; “Workshop on network-I/O convergence: experience, lessons, implications (NICELI),” ACM Computer Communication Review, vol. 33, No. 5, pp. 75-80, Oct. 2003. |
Vinton Cerf, Robert Kahn; “A Protocol for Packet Network Intercommunication,” IEEE Transactions on Communications, vol. COM-22, No. 5, 13 pages, May 1974. |
W. E. Leland, et al.; “On the Self-Similar Nature of Ethernet Traffic,” ACM Computer Communication Review, vol. 23, No. 4, pp. 183-193, Oct. 1993. |
W. Feng and P. Tinnakornsrisuphap; “The Failure of TCP in High-Performance Computational Grids,” Proceedings of the 2000 ACM/IEEE conference on Supercomputing, Dallas, Texas, 11 pages, Nov. 4, 2000. |
W. Feng, et al.; “Performance Characterization of a 10-Gigabit Ethernet TOE,” Proceedings of the 13th Symposium on High Performance Interconnects, pp. 1-6, Aug. 17, 2005. |
Wu-chun Feng, et al.; “Optimizing 10-Gigabit Ethernet for Networks of Workstations, Clusters, and Grids: A Case Study,” Proceedings of the 2003 ACM/IEEE conference on Supercomputing, Phoenix, Arizona, 13 pages, Nov. 15, 2003. |
European Communication for European Application No. 16160897.1 dated Aug. 10, 2017. |
U.S. Appl. No. 14/165,371, filed Jan. 27, 2014, Pope et al. |
EP 14163938.5, Feb. 3, 2015, Extended European Search Report. |
EP 17187578.4, Jan. 5, 2018, Extended European Search Report. |
EP 16160897.1, Aug. 10, 2017, European Communication. |
Prior Publication Data
Number | Date | Country |
---|---|---|
20180375829 A1 | Dec 2018 | US |

Provisional Application
Number | Date | Country |
---|---|---|
61809777 | Apr 2013 | US |

Related Parent and Child Application Data
Relation | Number | Date | Country |
---|---|---|---|
Parent | 14165371 | Jan 2014 | US |
Child | 16121366 | | US |