Method and apparatus for managing orders in financial markets

Information

  • Patent Grant
  • Patent Number
    11,397,985
  • Date Filed
    Wednesday, July 25, 2018
  • Date Issued
    Tuesday, July 26, 2022
Abstract
An integrated order management engine is disclosed that reduces the latency associated with managing multiple orders to buy or sell a plurality of financial instruments. Also disclosed is an integrated trading platform that provides low latency communications between various platform components. Such an integrated trading platform may include a trading strategy offload engine.
Description
INTRODUCTION


FIG. 1 provides a block diagram of an exemplary trading platform. A general role of financial exchanges, crossing networks and electronic communications networks is to accept orders to buy/sell financial instruments, maintain sorted listings of buy/sell orders for each financial instrument, and match buyers/sellers at the same price (transact trades). Financial exchanges, crossing networks and electronic communications networks report all of this activity on various types of financial market data feeds as described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675. As used herein, a “financial instrument” refers to a contract representing an equity ownership, debt, or credit, typically in relation to a corporate or governmental entity, wherein the contract is saleable. Examples of financial instruments include stocks, bonds, options, commodities, currency traded on currency markets, etc. but would not include cash or checks in the sense of how those items are used outside the financial trading markets (i.e., the purchase of groceries at a grocery store using cash or check would not be covered by the term “financial instrument” as used herein; similarly, the withdrawal of $100 in cash from an Automatic Teller Machine using a debit card would not be covered by the term “financial instrument” as used herein). Furthermore, the term “financial market data” as used herein refers to data contained in or derived from a series of messages that individually represent a new offer to buy or sell a financial instrument, an indication of a completed sale of a financial instrument, notifications of corrections to previously-reported sales of a financial instrument, administrative messages related to such transactions, and the like. Feeds of messages which contain financial market data are available from a number of sources and exist in a variety of feed types—for example, Level 1 feeds and Level 2 feeds as discussed herein.


Dark Pools serve a similar function of matching buyers and sellers, but do not provide full visibility into the available liquidity and pricing information. Dark Pools may be operated by financial exchanges, investment banks, or other financial institutions. Dark Pools are rapidly becoming a key market center for electronic trading activity, with a substantial proportion of transactions occurring in dark pools relative to public markets.


In order to facilitate the development of trading applications that leverage real-time data from multiple market centers (and their concomitant feeds), trading platforms typically normalize data and perform common data processing/enrichment functions in ticker plants, as described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829.


Trading strategies consume normalized market data, make decisions to place buy/sell orders, and pass those orders on to an order management system. Note that those orders may provide guidance to the order management system on where to route the order (e.g. whether or not it should be routed to a dark pool), how long the order should be exposed in the market before canceling it (if it is not executed), and other conditions governing the management of the order in the marketplace.


An Order Management System (OMS) (which can also be referred to as an Execution Management System (EMS)) is responsible for managing orders from one or more trading applications. Note that the OMS/EMS may be responsible for managing orders from multiple trading entities. These entities may be competing trading groups within the same investment bank. These entities may also be independent financial institutions that are accessing the market through a common prime services broker or trading infrastructure provider.


The function of the OMS/EMS is to enter orders into a market. Prior to entering an order into a market, the OMS may first perform a series of checks in order to deem the order “valid” for placement. These checks can include:

    • Individual account and risk profile
      • Order quantity, instant and cumulative
      • Quantity-price product, instant and cumulative
      • Cumulative net value on position
      • Percent away from last tick and/or open
      • Position limits, margins
      • Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
    • Corporate account and risk profile
      • Order quantity, instant and cumulative
      • Quantity-price product, instant and cumulative
      • Cumulative net value on position
      • Percent away from last tick and/or open
      • Position limits, margins
      • Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
      • Corporate “restricted list” of symbols
    • Regulatory
      • Short sale restrictions
      • Halted instruments
      • Tick rules
      • Trade through


        It can be noted that these checks are driven by account, risk, and regulatory data accessible by the OMS, as well as a view of the current state of the markets provided via normalized market data from a ticker plant.


It can also be noted that the OMS/EMS typically is used to manage order placement into multiple markets, including dark pools. Once an order is declared to be appropriate (i.e., “valid”), one of the primary functions of the OMS/EMS is to select the destination for each incoming order. Note that the OMS/EMS may also choose to sub-divide the order into smaller orders that may be routed to the same or different markets. The OMS/EMS makes routing decisions based on the current state of the markets provided via normalized market data from a ticker plant, as well as routing parameters input to the OMS/EMS. Routing parameters may be scoped on a per-account or corporate basis. These parameters may include:

    • Per-market fee and rebate structure
    • Account fee and rebate structure
    • Per-market outstanding limit
    • Market access latency (continuously updated estimate of intra-exchange latency)
    • Routing strategy
      • Best net execution price (including transaction fees, maker/taker models, etc.)
      • Lowest fee
      • Inter-market Sweep Order (ISO) to all markets
      • Market preference on order
    • Order split rules
      • Range of markets
      • Max size per market
      • Price delta limit from current price of each market


Once the OMS has decided where and how to route an order, it may then attempt to optimize the sequence in which orders are transmitted and the communication channel over which they are sent to a given market (order entry optimization). For example, orders with a higher probability of getting filled (matched) may be placed prior to orders with a lower probability of getting filled, or orders meeting certain criteria, such as order types or specific financial instruments, may have a higher probability of being filled by utilizing one communication channel rather than another. The order entry optimization may also incorporate the current view of the market (from the normalized market data) as well as the current estimate of intra-market latency for the given market.



FIG. 2 presents a diagram of a conventional OMS/EMS implementation known in the art. Typically, a plurality of servers 202 and network infrastructure (switches, routers, etc.) are employed to host one or more instances of OMS/EMS functions that are interconnected via one or more messaging buses 204, 206, and 208. The OMS/EMS functions are typically implemented in software components that execute on general-purpose processors (GPPs) present in the plurality of servers 202. As shown in FIG. 2, normalized market data from a ticker plant is distributed to OMS/EMS software components via a market data messaging bus 204. Similarly, an order entry messaging bus 206 carries incoming orders from trading strategies, order-related messages between OMS/EMS software components, outgoing orders to markets, and order responses from markets. A database access messaging bus 208 provides OMS/EMS software components with access to databases of entitlements, regulatory parameters, risk profiles, accounts, order routing parameters, and position blotters.


One or more order validation software components are deployed on one or more servers 202. Each order validation software component requires a market data interface to the market data messaging bus. The interface allows the validation software component to request the necessary market data to perform validation on incoming orders. Similarly, the order validation software components listen for new incoming orders from trading strategies on the order entry bus. Note that the latency of market data delivery and the bandwidth available on the market data bus affect the quality and quantity, respectively, of data used by the order validation software component. Furthermore, the distribution of order validation software components across multiple servers 202 segments validation decisions. As a result, the previously described validation decisions are performed on a limited view of data, which introduces risk, or validation decisions are delayed until data from disparate components can be compiled in order to build a comprehensive view of risk. Such delays may reduce or eliminate market opportunities that depend on a fast response to trading opportunities.


Orders that pass the validation checks are forwarded to one or more routing strategy software components that perform order placement into multiple markets, as previously described. Like the order validation software components, each routing strategy software component requires a market data interface to the market data messaging bus through which it receives current pricing information. The order routing software components typically require a price-aggregated view of the book for the instruments for which they are routing new orders. These book views may be cached locally in the routing strategy software components or requested via the market data interface. The latency associated with these book views directly affects the quality of the data used by the routing strategy software components to make order routing decisions. Delayed data may cause a routing strategy software component to make a decision that results in a missed trading opportunity or a trading loss. Once a routing strategy software component makes a routing decision, the order along with its handling instructions and destination market is forwarded on to the order entry bus.


Typically, output orders from the routing strategy software components are directly passed to one or more FIX engine software components that implement the order-entry interface to one or more markets. The FIX engine software components pass outgoing orders to the markets and pass incoming order responses from the markets to the order entry bus. The latency induced by another transition over a messaging bus and the FIX engine processing represents an additive contribution to the total latency of the OMS/EMS.


Optionally, an OMS/EMS may include one or more order entry optimization software components. As previously described, these software components impose a priority ordering on the orders passed on to the markets. When included in the OMS/EMS, the software components receive orders from the routing strategy software components via the order entry bus, perform their priority queuing operation, and pass orders destined for the market to the appropriate FIX engine software components via the order entry messaging bus. As with the FIX engine software components, the latency induced by another transition over a messaging bus and the order entry optimization processing represents an additive contribution to the total latency of the OMS/EMS.


Thus, distributing OMS/EMS components across multiple systems results in added complexity and latency, which introduces regulatory risk and limits the opportunity to capitalize on latency-sensitive trading opportunities. Furthermore, the overhead of inter-component communication may limit the quantity of data available to components to perform their tasks. This may introduce additional regulatory risk and may further limit trading opportunities.


As a solution to these technical problems of complexity and latency, the inventors disclose a variety of embodiments whereby tight integration is provided between system components to thereby dramatically improve latency and reduce communication complexity.


For example, the inventors disclose an apparatus comprising a processor configured as an order management engine, the order management engine configured to (1) process a plurality of orders relating to a plurality of financial instruments based on a plurality of inputs, and (2) integrate at least two members of the group consisting of an order validation operation, a routing strategy operation, a position blotter operation, and an order entry optimization operation to thereby process the orders.


As another example, the inventors disclose a method comprising processing, by a processor configured as an order management engine, a plurality of orders relating to a plurality of financial instruments based on a plurality of inputs, wherein the processing comprises performing at least two members of the group consisting of an order validation operation, a routing strategy operation, a position blotter operation, and an order entry optimization operation via integrated components of the order management engine.


As still another example, the inventors disclose an apparatus comprising a trading platform, the trading platform configured to receive and process streaming financial market data, the trading platform comprising at least two members of the group consisting of (1) a ticker plant engine, (2) a trading strategy engine, and (3) an order management engine, each integrated within the trading platform.


As another example, the inventors disclose a method comprising receiving and processing, by a trading platform, streaming financial market data, the trading platform comprising at least two members of the group consisting of (1) a ticker plant engine, (2) a trading strategy engine, and (3) an order management engine, each integrated within the trading platform.


The inventors also disclose an apparatus comprising a trading platform, the trading platform configured to receive and process streaming financial market data, the trading platform comprising a host system, and a trading strategy engine, wherein the trading strategy engine is configured to offload from the host system at least a portion of a trading strategy with respect to one or more financial instruments and one or more financial markets.


Further still, the inventors disclose a method comprising (1) receiving and processing, by a trading platform, streaming financial market data, the trading platform comprising a host system and a trading strategy engine, and (2) the trading strategy engine offloading from the host system at least a portion of a trading strategy with respect to one or more financial instruments and one or more financial markets.


These and other features and advantages of the present invention will be understood by those having ordinary skill in the art upon review of the description and figures hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts an exemplary trading platform;



FIG. 2 depicts a conventional OMS/EMS;



FIG. 3 depicts an exemplary embodiment of an integrated order management engine (OME);



FIG. 4 depicts an exemplary view of financial market data that can be provided by a market view component of an OME;



FIG. 5 depicts exemplary rules engines that can be employed in an order validation component of the OME;



FIG. 6 depicts an exemplary order entry optimization component of the OME;



FIG. 7 depicts an exemplary integrated trading platform.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Order Management Engine



FIG. 3 provides a block diagram of an exemplary order management engine (OME) 300 that integrates various functional components of an OMS/EMS. The integrated engine described herein provides significant advantages over the state of the art by reducing latency and complexity while expanding the breadth and increasing the quality of data that may be shared among the components. For example, in an embodiment where engine components are deployed on a reconfigurable logic device, on-chip interconnects in the reconfigurable logic device have the potential to provide orders of magnitude more communication bandwidth between components hosted on the same device, as compared to components hosted on disparate servers interconnected via commodity network links. These advantages provide the OME disclosed herein with an opportunity to reduce risk and to more effectively capitalize on latency-sensitive trading opportunities.


As shown in FIG. 3, the OME comprises a set of parallel components, each of which performs a subset of the OMS/EMS functionality. The primary datapath of the OME is organized as a feed-forward pipeline: orders flow from the mapping component 302 to the order validation component 304 to the routing strategy component 306 to the order entry optimization component 308. This eliminates the latency and complexity overhead of general-purpose messaging buses interconnecting disparate components. Additionally, this architecture maps well to parallel processing devices such as reconfigurable logic devices (e.g., Field Programmable Gate Arrays (FPGAs)), graphics processing units (GPUs), and chip-multi-processors (CMPs). Feedback from the markets (e.g., order accept/reject, order fills, latency measurements via order responses 334) is also propagated to the appropriate component via dedicated interconnects, which are only practical in an integrated design. Note that each of the components may exploit parallelism internally in order to maximize throughput and minimize processing latency. Subsequently, we provide examples of parallel implementations of OME components.


The OME can ingest a stream of orders 324 originating from one or more trading strategies of one or more trading entities. Preferably, those trading strategies are accelerated and hosted on the integrated trading platform as described herein in connection with FIG. 7, although this need not be the case. Incoming orders 324 preferably contain the following fields: instrument key, individual account number, corporate account number, order type, order price, order size, order handling conditions. The instrument key uniquely identifies the financial instrument associated with the order. This key may be in one of various forms, including a string of alphanumeric characters assigned by the financial exchange, an index number assigned by the financial exchange, or an index number assigned by the ticker plant.


The mapping component 302 resolves a unique identifier for the financial instrument used by the OME to track per-instrument state. Preferably, this key is an index number that allows instrument state to be directly indexed using the number. The mapping component also resolves the unique instrument identifier required for order entry into the markets. Preferably, the mapping component also resolves the instrument identifier required to retrieve the current pricing information from the market view component. As described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675, the mapping is preferably accomplished by using a hash table implementation to minimize the number of memory accesses to perform the mapping. Similarly, the mapping component resolves a unique identifier for the individual and corporate risk profile records.
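To make the mapping step concrete, the following sketch shows a software analogue of the symbol-to-index resolution. It is a minimal illustration in C++; the class name, field names, and use of std::unordered_map are assumptions chosen for exposition and are not the hash table design of the incorporated U.S. Pat. App. Pub. 2008/0243675.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical per-instrument state, directly indexed by the mapped id.
struct InstrumentState {
    std::string exchange_symbol;   // symbol used for order entry into the market
    uint32_t    market_view_id;    // key into the market view component (assumed)
};

class InstrumentMapper {
public:
    // Resolve a symbol to an internal index, assigning a new index on first use.
    uint32_t map(const std::string& symbol) {
        auto it = index_.find(symbol);
        if (it != index_.end()) return it->second;
        uint32_t id = static_cast<uint32_t>(state_.size());
        state_.push_back({symbol, id});
        index_.emplace(symbol, id);
        return id;
    }

    // Per-instrument state is retrieved with a direct array index, no hashing.
    const InstrumentState& state(uint32_t id) const { return state_[id]; }

private:
    std::unordered_map<std::string, uint32_t> index_;  // hash-table lookup
    std::vector<InstrumentState> state_;               // direct-indexed state
};
```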


In order to seed the order validation checks, the mapping component also initiates the retrieval of relevant validation information associated with the order from one or more of the following sources:

    • Individual account and risk profile record cache 316
    • Corporate account and risk profile record cache 318
    • Regulatory record cache 320

      Preferably, each of the caches is stored in high-speed memory directly attached to the device hosting the mapping component. Such local memory may be initialized from a centralized database during maintenance windows when trading is not occurring, via the operational parameters 322 interface shown in FIG. 3. The individual account and risk profile is retrieved by using the unique identifier mapped from the individual account number from the incoming order. The corporate account and risk profile is retrieved by using the unique identifier mapped from the corporate account number from the incoming order. The regulatory record is retrieved using the unique instrument identifier mapped from the instrument key as previously described. While the mapping component initiates the retrievals, the read results from the caches are passed to downstream components: order validation, routing strategy, and order entry optimization. In doing so, the mapping component pre-fetches the necessary records for downstream computations, thus masking the latency of the record retrieval from the caches.
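One way to picture this pre-fetch is as an order context assembled by the mapping component and handed down the pipeline, with every cache read initiated immediately so downstream components never stall on a lookup. The sketch below is a software analogue only; the record layouts, the OrderContext name, and the use of std::async are assumptions standing in for dedicated interconnects and attached memories.

```cpp
#include <cstdint>
#include <future>

// Hypothetical cached record contents; real record layouts are not specified here.
struct IndividualRiskRecord { double max_notional; };
struct CorporateRiskRecord  { double max_notional; };
struct RegulatoryRecord     { bool short_sale_restricted; bool halted; };

struct Order {
    uint32_t instrument_id;
    uint32_t individual_account_id;
    uint32_t corporate_account_id;
    double   price;
    uint32_t size;
};

// Context handed from the mapping component to validation/routing/order entry.
struct OrderContext {
    Order order;
    std::future<IndividualRiskRecord> individual;  // reads in flight while the order advances
    std::future<CorporateRiskRecord>  corporate;
    std::future<RegulatoryRecord>     regulatory;
};

// Caches is any type providing individual()/corporate()/regulatory() lookups
// that return the record types above (an assumption for this sketch).
template <typename Caches>
OrderContext prefetch(const Order& o, Caches& caches) {
    // Initiate all reads immediately; downstream components consume the futures
    // only when they need the data, masking the retrieval latency.
    return OrderContext{
        o,
        std::async(std::launch::async,
                   [&caches, id = o.individual_account_id] { return caches.individual(id); }),
        std::async(std::launch::async,
                   [&caches, id = o.corporate_account_id]  { return caches.corporate(id); }),
        std::async(std::launch::async,
                   [&caches, id = o.instrument_id]         { return caches.regulatory(id); }),
    };
}
```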


Similarly, the mapping component initiates the retrieval of current pricing information for the financial instrument by passing the mapped instrument identifier to the market view component 310.


The market view component can ingest normalized market data 326 from a logically upstream ticker plant. Examples of ticker plants that can be employed for this purpose are the ticker plant engines described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829. The market view component provides a current view of the markets to other components within the OME. Typically, the view of the market is provided as regional and composite price-aggregated book views for each financial instrument such as those described in the above-referenced and incorporated WO Pub. WO 2010/077829. In the preferred embodiment, the market view component provides a current pricing record to downstream OME components that includes a snapshot of current liquidity in the form of a limited-depth price-aggregated composite book, liquidity statistics, and trade statistics, as shown in FIG. 4. The depth of the composite book view may be set as a configuration parameter, or may be dynamically determined by the size of the incoming order that triggered the record retrieval. In the latter case, the depth would be chosen to provide visibility into enough liquidity to fill the order on one or more venues. The liquidity statistics provide downstream components with information about the historical share of the best bid and best offer price (i.e., the percentage of the time that the best bid price has been available on BATS). The trade statistics present downstream components with a pan-market summary of execution activity for the financial instrument, such as the percentage of the current daily volume that has been executed on a particular market.
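An illustrative layout of such a pricing record is sketched below. The field and venue names are assumptions; the patent only characterizes FIG. 4 as containing a limited-depth price-aggregated composite book together with liquidity and trade statistics, and the depth-sizing helper is likewise an assumption rather than the disclosed algorithm.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

struct PriceLevel {
    double   price;
    uint64_t aggregate_size;   // size summed across all contributing venues at this price
};

// Illustrative pricing record delivered by the market view component.
struct PricingRecord {
    static constexpr std::size_t kMaxDepth = 10;
    std::array<PriceLevel, kMaxDepth> bids;    // best bid first
    std::array<PriceLevel, kMaxDepth> offers;  // best offer first
    std::size_t depth_used;                    // levels actually populated for this order

    // Liquidity statistics, e.g. share of the day a venue has set the best bid/offer.
    double best_bid_share_bats;
    double best_offer_share_bats;

    // Trade statistics, e.g. fraction of today's volume executed on a particular market.
    double   volume_share_nasdaq;
    uint64_t daily_volume;
};

// Depth sized to the incoming order: enough levels to cover its size on one or
// more venues (a hypothetical helper, capped at the record's maximum depth).
inline std::size_t depth_for(uint64_t order_size, uint64_t typical_level_size,
                             std::size_t max_depth = PricingRecord::kMaxDepth) {
    std::size_t d = static_cast<std::size_t>(
        (order_size + typical_level_size - 1) / typical_level_size);
    if (d < 1) d = 1;
    return d > max_depth ? max_depth : d;
}
```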


In addition to ingesting normalized market data, the market view component has the ability to update those regional and composite book views based on order entry confirmation and order fill reports received from the markets. This information from the order entry interfaces of financial markets is processed by the position blotter component. The position blotter updates the view of current outstanding positions in the market and makes this view available to the market view component, as well as other OME components. Updates to the view of outstanding positions may allow the current view of the market to be updated prior to the concomitant updates being received via the upstream ticker plant that consumes the exchanges' market data feeds. In order to prevent redundant updates to the books, the market view component can maintain a cache 328 of updates triggered by the order entry responses. When a concomitant market data update is received, it must be omitted or adjusted by the amount of liquidity added/removed by the order entry response event.
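A rough software analogue of the update cache 328 is sketched below; the structure, field names, and exact matching logic are assumptions. The idea it illustrates is that book changes driven by order entry responses are remembered per instrument, so that the later, concomitant market data update can be omitted or adjusted rather than applied twice.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Liquidity added (+) or removed (-) at a price level by one of our own orders.
struct AppliedUpdate {
    double  price;
    int64_t size_delta;
};

class BookUpdateDeduper {
public:
    // Called when an order entry response causes the market view to be updated early.
    void record_order_response(uint64_t instrument, AppliedUpdate u) {
        pending_[instrument].push_back(u);
    }

    // Called when the concomitant market data update arrives; returns the size
    // adjustment to apply (zero if no matching order-driven update is pending).
    // Exact price equality is assumed here purely for simplicity of the sketch.
    int64_t adjust_market_update(uint64_t instrument, double price) {
        auto it = pending_.find(instrument);
        if (it == pending_.end()) return 0;
        auto& v = it->second;
        for (auto u = v.begin(); u != v.end(); ++u) {
            if (u->price == price) {
                int64_t delta = u->size_delta;
                v.erase(u);          // this pending update has now been accounted for
                return delta;
            }
        }
        return 0;
    }

private:
    std::unordered_map<uint64_t, std::vector<AppliedUpdate>> pending_;
};
```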


Similar to the retrieval of necessary regulatory and account records, the retrieval of the financial instrument record from the market view component masks the latency of record retrieval for downstream components.


It should also be noted that optionally, the market view component 310 can itself be a ticker plant engine that ingests financial market data to produce normalized financial market data for consumption by the order validation component.


The order validation component 304 maintains independent input buffers for incoming orders, the regulatory and account records, and the market data records. The buffers provide a synchronization mechanism whereby the order validation component initiates its computations for a new order when all necessary record information is available. The order validation component contains a plurality of rule engines that perform a set of checks as described in the Introduction. Thus, the rule engines can instantiate various rules and validate orders (or groups of orders) against those rules. Such rules may be derived from any or all of the following validation rules discussed above (although it should be understood that other validation rules may be desired by a practitioner):

    • Individual account and risk profile
      • Order quantity, instant and cumulative
      • Quantity-price product, instant and cumulative
      • Cumulative net value on position
      • Percent away from last tick and/or open
      • Position limits, margins
      • Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
    • Corporate account and risk profile
      • Order quantity, instant and cumulative
      • Quantity-price product, instant and cumulative
      • Cumulative net value on position
      • Percent away from last tick and/or open
      • Position limits, margins
      • Entitlements (market access, short-sales, options, odd lots, ISO, etc.)
      • Corporate “restricted list” of symbols
    • Regulatory
      • Short sale restrictions
      • Halted instruments
      • Tick rules
      • Trade through


An example of a rules engine that can be employed toward this end is disclosed in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0287628. Note that the set of rule engines may leverage data parallelism (multiple copies of identical rule engines) and functional parallelism (pipeline of function-specific rule engines) to achieve the desired throughput and latency for the order validation component.


The specific set of checks is dictated by the validation information associated with the order (that was retrieved during the order mapping step). If all checks pass, the order is declared as valid and passed on to the routing strategy component. Note that the order validation component may update validation records and write them back to the appropriate record cache; e.g., the current and cumulative statistics on positions for a given account may be updated. As shown in FIG. 5, rule engines within the order validation component may be organized to perform checks in parallel. The output of those parallel checks can be combined in one or more rule engines that ultimately produce a decision to accept, reject, or modify the order. Examples of checks include:

    • Regulatory: IF the instrument is currently under a short-sale restriction AND the order is an offer to sell that represents a short sale, THEN reject the order.
    • Regulatory: IF the instrument is currently under a volatility trading pause on the NASDAQ market, THEN modify the order to restrict routing to the NASDAQ market.
    • Regulatory: IF the instrument is on the restricted list of stocks in the corporate account record (because the bank is involved in a merger deal with the company), THEN reject the order.
    • Individual: IF the notional value of the order to buy a derivatives contract is greater than the credit line available to the individual trading account, THEN reject the order.
    • Corporate: IF the aggregate notional value of all outstanding orders for the bank exceeds the defined threshold in the corporate record, THEN reject the order.


The combinatorial rules are typically more straightforward, as a reject result from any of the individual rule checks results in a reject decision for the order. The number of independent rule engines provisioned in the order validation component can be determined by the throughput requirement for the component and an analysis of the complexity of rule checks that must be performed.
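The following sketch illustrates this organization in software terms. The rule functions, field names, and the sequential evaluation are stand-ins for parallel rule engines, and the checks are drawn from the examples above; it is an illustration under those assumptions, not an exhaustive or definitive rule set.

```cpp
#include <algorithm>
#include <vector>

enum class Verdict { Accept, Modify, Reject };

// A simplified, assumed view of an order plus the records pre-fetched for it.
struct OrderView {
    bool   is_short_sale;
    bool   instrument_short_sale_restricted;
    bool   venue_volatility_pause;            // e.g. a NASDAQ volatility trading pause
    bool   restricted_symbol;                 // on the corporate "restricted list"
    double notional;
    double available_credit;
    double corporate_outstanding_notional;
    double corporate_limit;
};

// Each rule is an independent check; in hardware these would run as parallel rule engines.
Verdict short_sale_rule(const OrderView& o) {
    return (o.instrument_short_sale_restricted && o.is_short_sale) ? Verdict::Reject
                                                                   : Verdict::Accept;
}
Verdict volatility_pause_rule(const OrderView& o) {
    // Modify: restrict routing away from the paused venue rather than rejecting outright.
    return o.venue_volatility_pause ? Verdict::Modify : Verdict::Accept;
}
Verdict restricted_list_rule(const OrderView& o) {
    return o.restricted_symbol ? Verdict::Reject : Verdict::Accept;
}
Verdict individual_credit_rule(const OrderView& o) {
    return (o.notional > o.available_credit) ? Verdict::Reject : Verdict::Accept;
}
Verdict corporate_limit_rule(const OrderView& o) {
    return (o.corporate_outstanding_notional + o.notional > o.corporate_limit)
               ? Verdict::Reject : Verdict::Accept;
}

// Combinatorial stage: any reject rejects the order; otherwise the most
// restrictive remaining verdict (modify before accept) wins.
Verdict validate(const OrderView& o) {
    std::vector<Verdict> v = {
        short_sale_rule(o),       volatility_pause_rule(o), restricted_list_rule(o),
        individual_credit_rule(o), corporate_limit_rule(o)
    };
    if (std::any_of(v.begin(), v.end(), [](Verdict x) { return x == Verdict::Reject; }))
        return Verdict::Reject;
    if (std::any_of(v.begin(), v.end(), [](Verdict x) { return x == Verdict::Modify; }))
        return Verdict::Modify;
    return Verdict::Accept;
}
```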


Modified and accepted orders are forwarded to the routing strategy component 306, along with their concomitant records, via a dedicated interconnect. This allows the routing strategy component to immediately begin processing the order. The routing strategy component determines if a valid order is to be partitioned and where the order (or each order partition) is to be routed. Similar to the order validation component, the routing strategy component utilizes a plurality of rules engines such as those described in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0287628 to make these decisions (which may also employ a parallelization strategy). The decisions are driven by routing parameters contained in the individual account, corporate account, and regulatory records, as well as data from the market view component and the position blotter component. The rules implement the types of routing strategies outlined in the Introduction. Once a routing decision is completed by the rules engines, the order (or order partitions) is passed on to the order entry optimization component 308 with directives on where and how to enter the order (or order partitions) into the market. Note that an order may be entered into a market with a wide variety of parameters that direct the exchange (or dark pool) on how the order may be matched. The routing strategy component also updates the position blotter component to reflect a new position in the market.
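As one illustration of a partitioning decision, the sketch below splits a validated order across venues in price order while honoring a per-market maximum size. The venue fields and the greedy walk are assumptions for exposition consistent with the order split rules listed in the Introduction; they are not the rules-engine implementation referenced above.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct VenueQuote {
    std::string venue;
    double      best_price;
    uint64_t    displayed_size;        // visible liquidity at the best price
    uint64_t    max_size_per_market;   // from the order split rules
};

struct ChildOrder {
    std::string venue;
    uint64_t    size;
    double      limit_price;
};

// Walk venues already sorted by price, capping each child order by the visible
// size and the per-market maximum, until the parent order is fully allocated.
std::vector<ChildOrder> split_order(uint64_t total_size, double limit_price,
                                    const std::vector<VenueQuote>& venues_by_price) {
    std::vector<ChildOrder> children;
    uint64_t remaining = total_size;
    for (const auto& v : venues_by_price) {
        if (remaining == 0) break;
        uint64_t take = std::min({remaining, v.displayed_size, v.max_size_per_market});
        if (take == 0) continue;
        children.push_back({v.venue, take, limit_price});
        remaining -= take;
    }
    return children;   // any unallocated remainder would be handled per the routing rules
}
```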


The latency monitor component 312 utilizes data from outgoing order events 332 and incoming order response events 334 to maintain a set of statistics for each channel to each market. The latency statistics may include estimates of intra-exchange latency based on measurements of the round-trip time (RTT) from transmitting a new order on a channel to receiving a response event (either an order accept, reject, or fill notification). The statistics may include the last measurement as well as the average, minimum, and maximum for a defined time window (e.g., a moving average). The latency statistics may also be further refined to include statistics on a per-instrument/per-order-type basis for each channel. Such measurements can be performed by recording a timestamp for the transmission of an order entry event, timestamping each order entry response event, identifying the order entry event that corresponds to the response event, and then computing the difference in timestamps.
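A minimal software sketch of this measurement is shown below, assuming that outgoing orders and their responses can be matched by an order identifier; the exponential moving average stands in for whatever windowed average a practitioner prefers.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// Timestamp each outgoing order, match the response by order id, and fold the
// round-trip time into last/avg/min/max statistics for the channel.
class LatencyMonitor {
public:
    void on_order_sent(uint64_t order_id) { sent_[order_id] = Clock::now(); }

    void on_response(uint64_t order_id) {
        auto it = sent_.find(order_id);
        if (it == sent_.end()) return;   // unmatched response; ignore in this sketch
        double rtt_us = std::chrono::duration<double, std::micro>(
                            Clock::now() - it->second).count();
        sent_.erase(it);
        last_ = rtt_us;
        min_  = count_ ? std::min(min_, rtt_us) : rtt_us;
        max_  = count_ ? std::max(max_, rtt_us) : rtt_us;
        avg_  = count_ ? 0.9 * avg_ + 0.1 * rtt_us : rtt_us;  // EMA as a windowed-average proxy
        ++count_;
    }

    double last() const { return last_; }
    double avg()  const { return avg_; }
    double min()  const { return min_; }
    double max()  const { return max_; }

private:
    std::unordered_map<uint64_t, Clock::time_point> sent_;
    double   last_ = 0, avg_ = 0, min_ = 0, max_ = 0;
    uint64_t count_ = 0;
};
```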


The order entry optimization component 308 optimizes the sequence in which orders are transmitted to a given market. Furthermore, the component may select the appropriate communication channel to the market if multiple channels are available. The order entry optimization component utilizes the directives from the routing strategy component, as well as current estimates of intra-exchange latency computed for each independent channel to that market. The latency estimates for each instrument and order type combination may also be incorporated. As shown in FIG. 6, the order entry optimization component 308 may employ various buffers to store order data, market view data, latency statistical data, individual records data, and corporate records data. The order entry optimization component first computes a vector of scores for each new order via a plurality of computation subcomponents 600, each associated with a channel. Each score in the vector represents a relative priority for an available channel. The channel selection subcomponent 602 selects the highest score and stores the order for transmission in the queue 604 for the channel associated with that selected highest score. The score associated with the order is also used to determine its insertion point into the queue 604. Thus, each queue 604 is associated with a channel and can be implemented as a priority queue that allows new entries to be inserted with a relative priority score, i.e., the order will be inserted ahead of items with a lower score.


A FIX encoder subcomponent 606 then services the queues 604 to generate the outgoing orders 332 in accordance with the selected channels and other optimizations.


An exemplary computation subcomponent 600 can score order channels as a simple weighted sum of antecedents: sum(W[i]*A[i]), where W[i] is a user-specified weight and A[i] is the corresponding antecedent value. Exemplary antecedents include:

    • Estimated intra-exchange latency for the channel, instrument, order-type combination
    • Number of outstanding orders on the channel by instrument
    • Number of outstanding orders on the channel by aggregate number
    • Price delta of order price to current best bid and best offer on target market
    • Liquidity depth, defined to be the total size available between the best bid/ask price and the order price


A score antecedent selection subcomponent 610 can be employed by the computation subcomponent 600 to select which data from the buffers is to be used for antecedent values.
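The sketch below ties these pieces together in illustrative C++. The weights, antecedent names, and per-channel priority queues are assumptions consistent with the description of subcomponents 600, 602, and 604; this is a minimal software analogue, not a definitive implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

// Antecedent values for one candidate channel (names assumed).
struct ChannelAntecedents {
    double est_latency_us;     // estimated intra-exchange latency for this channel
    double outstanding_orders; // outstanding orders on the channel
    double price_delta;        // order price vs current best bid/offer on the target market
    double liquidity_depth;    // size between top of book and the order price
};

// score = sum_i W[i] * A[i], with user-specified weights W (sign conventions
// are up to the practitioner, e.g. a negative weight on latency).
double score(const ChannelAntecedents& a, const std::array<double, 4>& w) {
    return w[0] * a.est_latency_us + w[1] * a.outstanding_orders
         + w[2] * a.price_delta    + w[3] * a.liquidity_depth;
}

struct QueuedOrder {
    uint64_t order_id;
    double   priority;                 // higher scores are serviced first
    bool operator<(const QueuedOrder& o) const { return priority < o.priority; }
};

// One priority queue per channel; the channel scoring highest for a given
// order receives it, inserted ahead of entries with a lower priority score.
struct OrderEntryOptimizer {
    std::vector<std::priority_queue<QueuedOrder>> channel_queues;

    void place(uint64_t order_id,
               const std::vector<ChannelAntecedents>& per_channel,
               const std::array<double, 4>& weights) {
        std::size_t best = 0;
        double best_score = score(per_channel[0], weights);
        for (std::size_t c = 1; c < per_channel.size(); ++c) {
            double s = score(per_channel[c], weights);
            if (s > best_score) { best_score = s; best = c; }
        }
        channel_queues[best].push({order_id, best_score});
    }
};
```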


As indicated above, the subcomponents of the order entry optimization component 308 shown in FIG. 6 can be implemented in hardware logic pipelines or other parallel processing-capable architectures to exploit parallelism internally in order to maximize throughput and minimize processing latency.


The position blotter update component 314 processes order entry response messages 334 from the various markets. The response messages notify the OME of which orders were placed, executed, cancelled, rejected, etc. The position blotter provides updates to the market view component when orders are placed, so that the views of the market can be updated with less latency than waiting for the update to arrive via the market data feed from the market center. Through a dedicated interconnect between the position blotter update component and the market view component, such updates can be passed with minimal overhead. Thus, when the OME 300 receives confirmation from a destination market that an order has been placed, the OME is able to modify its internal view of the state of the market to include the placed order. This provides the OME with a current view of the market, before the change is reported on the public market data feed. This latency advantage in the market view may then be leveraged by the OME and any trading strategies with access to such data.


The position blotter also tracks the current set of outstanding positions that the OME is managing. The component allows the order validation component and routing strategy component to incorporate a view of the outstanding positions when making validation and routing decisions.


The OME may be implemented on a high-performance computational platform, such as an offload engine or the like. Examples of a suitable computational platform for the OME include a reconfigurable logic device (e.g., a field programmable gate array (FPGA) or other programmable logic device (PLD)), a graphics processor unit (GPU), and a chip multiprocessor (CMP). However, it should be understood that the OME could also be deployed on one or more general purpose processors (GPPs) or other appropriately programmed processors if desired. It should also be understood that the OME may be partitioned across multiple reconfigurable logic devices (or multiple GPUs, CMPs, etc. if desired).


As used herein, the term “general-purpose processor” (or GPP) refers to a hardware device having a fixed form and whose functionality is variable, wherein this variable functionality is defined by fetching instructions and executing those instructions, of which a conventional central processing unit (CPU) is a common example. Exemplary embodiments of GPPs include an Intel Xeon processor and an AMD Opteron processor. As used herein, the term “reconfigurable logic” refers to any logic technology whose form and function can be significantly altered (i.e., reconfigured) in the field post-manufacture. This is to be contrasted with a GPP, whose function can change post-manufacture, but whose form is fixed at manufacture. Furthermore, as used herein, the term “software” refers to data processing functionality that is deployed on a GPP or other processing devices, wherein software cannot be used to change or define the form of the device on which it is loaded, while the term “firmware”, as used herein, refers to data processing functionality that is deployed on reconfigurable logic or other processing devices, wherein firmware may be used to change or define the form of the device on which it is loaded.


Thus, in embodiments where one or more components of the OME is implemented in reconfigurable logic such as an FPGA, hardware logic will be present on the device that permits fine-grained parallelism with respect to the different operations that such components perform, thereby providing such a component with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP.


Further, the OME may be hosted in a dedicated system with computer communications links providing the interfaces to the normalized market data, order entry interfaces of markets, and order flow from trading strategies. In a preferred embodiment, the OME is hosted in an integrated system where the full trading platform is hosted.


Integrated Trading Platform



FIG. 7 presents an exemplary block diagram of an integrated trading platform 700 that may be hosted on a single computing system. A single computing system may be a single server, appliance, “box”, etc. The system preferably uses intra-system interconnections to transfer data between the ticker plant engine(s) 702, trading strategies 704 and/or 712, and order management engine(s) 300. The integrated trading platform provides the following advantages over the state of the art (where it should be understood that this list is not exhaustive):

    • Reduced overall latency from market data receipt to order entry. Such an overall latency reduction can arise from lowered communication latency between components and lowered latency of component processing time by offloading to acceleration engines (e.g., reconfigurable logic).
    • Reduced space/power requirements for deploying a trading platform. This can be especially important for co-location in exchange datacenters.
    • Increased available bandwidth for data sharing among the trading platform components. This provides for tighter integration between components and allows components to make decisions based on additional data, thereby widening the scope of possible strategies and allowing for more complex and comprehensive processing.


The amount of general-purpose computing resources available in a single host system is fundamentally limited. This implies that pure software implementations of the trading platform or trading platform components will provide less capacity and latency performance relative to systems that leverage hardware-accelerated designs. In order to achieve a higher level of performance in a single system, trading platform components are preferably offloaded to engines that do not consume general purpose computing resources and leverage fine-grained parallelism.


Thus, as shown in FIG. 7, a host system for the trading platform can comprise a software sub-system 720 and a hardware sub-system 718, wherein the software sub-system may comprise one or more host processors and one or more associated host memories. Aspects of the trading platform such as one or more of the ticker plant engine(s) 702, strategy offload engine(s) 704, and OMEs 300 can be offloaded to the hardware sub-system for improved performance as described herein.


The ticker plant engine(s) 702 can normalize market data 714 from disparate feeds and present it to consuming applications (including consuming applications that are resident in the software sub-system 720). Examples of a suitable ticker plant engine 702 are the ticker plant engines described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and WO Pub. WO 2010/077829, which can leverage the parallelism provided by reconfigurable logic devices to provide dramatic acceleration over conventional ticker plants. Furthermore, as shown in FIG. 7 and described in the above-referenced and incorporated U.S. Pat. App. Pub. 2008/0243675 and U.S. Pat. App. Pub. 2007/0174841, the ticker plant engines can write normalized market data to shared system memory 708 (for consumption by trading strategies written in software and executing on the general purpose computing devices in the system) and to shared memory in other offload engines in the system via a peer-to-peer hardware interconnect 707. The peer-to-peer hardware interconnect allows data to be transferred between offload engines without the involvement of system software. Note that the peer-to-peer hardware interconnect may be implemented by dedicated links or system interconnection technologies like PCI Express.


Writing normalized market data to shared (system) memory allows multiple trading applications to view the current state of the market by simply issuing reads to the memory locations associated with the financial instruments of interest. This reduces the latency of data delivery to the trading applications by eliminating the need to receive and parse messages to extract data fields.
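As a hedged illustration of this access pattern, the sketch below maps a shared region and reads one instrument's record directly. The use of POSIX shared memory, the region name, and the record layout are assumptions made purely for exposition; the platform's actual shared system memory 708 mechanism is not specified here.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Assumed fixed-size record written by the ticker plant for each instrument.
struct SharedQuote {
    double   best_bid;
    double   best_offer;
    uint64_t bid_size;
    uint64_t offer_size;
    uint64_t update_seq;   // sequence number a reader could use to detect torn reads
};

int main() {
    // "/ticker_plant_books" is a hypothetical name for the shared region.
    int fd = shm_open("/ticker_plant_books", O_RDONLY, 0);
    if (fd < 0) { perror("shm_open"); return 1; }

    constexpr std::size_t kInstruments = std::size_t(1) << 20;
    auto* books = static_cast<const SharedQuote*>(
        mmap(nullptr, kInstruments * sizeof(SharedQuote), PROT_READ, MAP_SHARED, fd, 0));
    if (books == MAP_FAILED) { perror("mmap"); return 1; }

    uint32_t instrument_id = 42;            // index produced by the mapping step
    SharedQuote q = books[instrument_id];   // a plain memory read; no message parsing
    std::printf("bid %.2f x %llu / offer %.2f x %llu\n",
                q.best_bid,  (unsigned long long)q.bid_size,
                q.best_offer, (unsigned long long)q.offer_size);
    return 0;
}
```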


An exemplary embodiment of a peer-to-peer hardware interconnect is a PCI Express bus where endpoint devices are each assigned a portion of the addressable memory space. A Base Address Register (BAR) defines the address space assigned to a given device on the bus. If device A issues a write operation to an address within the BAR space associated with device B, data can be transferred directly from device A to device B without involving system software or utilizing host memory. A wide variety of protocols may be developed with this basic capability. Multiple BARs may be employed by a device to implement control structures. For example, specific BARs may be used to maintain read and write pointers for the implementation of a ring buffer or queue for data transfers between devices.
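The following conceptual sketch shows such a ring buffer over a BAR-mapped window. The register placement, pointer handling, and byte-wise copy are assumptions for illustration; a real design would also address write ordering, doorbells, and flow control.

```cpp
#include <cstdint>

// Device A writes records into device B's BAR-mapped buffer and advances a
// write pointer; device B consumes records and advances the read pointer,
// without host software touching the data path.
struct BarRing {
    volatile uint8_t*  buffer;     // BAR-mapped data region on the peer device
    volatile uint32_t* write_ptr;  // BAR-mapped register holding the producer offset
    volatile uint32_t* read_ptr;   // BAR-mapped register holding the consumer offset
    uint32_t           capacity;   // bytes in the data region

    // Producer side: copy 'len' bytes into the ring; returns false if full.
    bool write(const uint8_t* data, uint32_t len) {
        uint32_t w = *write_ptr, r = *read_ptr;
        uint32_t used = (w - r + capacity) % capacity;
        if (used + len >= capacity) return false;        // not enough room
        for (uint32_t i = 0; i < len; ++i)
            buffer[(w + i) % capacity] = data[i];         // PCIe posted writes to the peer BAR
        *write_ptr = (w + len) % capacity;                // publish the new producer offset
        return true;
    }
};
```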


Strategy offload engines 704 may also be hosted in the integrated system. Moreover, such strategy offload engines 704 can be resident in the hardware sub-system 718 as shown in FIG. 7. Like the OME, strategy offload engines may receive normalized market data directly over the peer-to-peer hardware interconnect. Examples of suitable strategy offload engines 704 include an options pricing engine such as that described in the above-referenced and incorporated U.S. Pat. App. Pub. 2007/0294157, a basket calculation engine as described in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0182683, engines for performing data cleansing and integrity checks, which can employ rules engines such as those described in the above-referenced and incorporated U.S. Pat. App. Pub. 2009/0287628, etc.


Note that a hardware-to-software interconnect channel 710 provides for low-latency, high-bandwidth communication between software and hardware components. An example of a suitable interconnect channel in this regard is described in the above-referenced and incorporated U.S. Pat. App. Pub. 2007/0174841. This facilitates the partitioning of trading strategies across general purpose processing and reconfigurable logic resources. Thus, the strategy offload engines 704 can also interact with the trading strategy applications 712 within the software sub-system of the host through the hardware-software channel 710, where a trading strategy application 712 can offload certain tasks to the hardware-accelerated strategy offload engine 704 for reduced latency processing.


The functions of a traditional OMS/EMS that are not performance-critical (e.g. are not performed on every order) may be hosted on general-purpose processing resources in the system if desired (although a practitioner may want to deploy all functions on high performance resources such as reconfigurable logic devices). These functions may include modification of routing parameters, modification of risk profiles, statistics gathering and monitoring. The software components of the OMS/EMS utilize the same hardware-to-software interconnection channel to communicate with the OME(s), update cached records, etc.


As noted above in connection with the OME, examples of a suitable computational platform for one or more of the engines 702, 704, and 300 include a reconfigurable logic device (e.g., a field programmable gate array (FPGA) or other programmable logic device (PLD)), a graphics processor unit (GPU), and a chip multiprocessor (CMP). However, it should be understood that one or more of the engines 702, 704, and 300 could also be deployed on one or more general purpose processors (GPPs) or other appropriately programmed processors if desired for parallel execution within the host. It should also be understood that the engines 702, 704, and 300 may be partitioned across multiple reconfigurable logic devices (or multiple GPUs, CMPs, etc. if desired).


Thus, in embodiments where one or more engines within the hardware sub-system 718 is implemented in reconfigurable logic such as an FPGA, hardware logic will be present on the platform that permits fine-grained parallelism with respect to the different operations that such engines perform, thereby providing such an engine with the ability to operate at hardware processing speeds that are orders of magnitude faster than would be possible through software execution on a GPP.


While the present invention has been described above in relation to its preferred embodiments, various modifications may be made thereto that still fall within the invention's scope as will be recognizable upon review of the teachings herein. As such, the full scope of the present invention is to be defined solely by the appended claims and their legal equivalents.

Claims
  • 1. An apparatus comprising: a member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processor unit (GPU), and (3) a chip multi-processor (CMP), wherein the member is configured as an order management engine, the order management engine configured to process a plurality of orders based on a plurality of inputs, the orders pertaining to a plurality of financial instruments traded on one or more financial markets, wherein the order management engine comprises a plurality of parallel components that (i) are integrated on the member via a plurality of dedicated interconnects and (ii) exploit parallelism internally within the member to operate in parallel with each other;wherein the parallel components include a mapping component and an order validation component that are integrated via the dedicated interconnects in a feed-forward orientation to process the orders;wherein the mapping component is configured to map the orders to applicable risk data and regulatory data for pre-fetching from memory to seed order validation checks by the order validation component; andwherein the order validation component is configured to (1) receive the pre-fetched applicable risk data and regulatory data and (2) perform a plurality of order validation checks on the orders against a plurality of rules in parallel to validate the orders, wherein the rules include rules based on the orders' applicable risk data and regulatory data, the validated orders for transmission to one or more financial markets.
  • 2. The apparatus of claim 1 wherein the inputs comprise at least one member of the group consisting of (1) financial market data from one or more financial market data sources, (2) a plurality of response events from one or more financial market data sources, (3) data from a plurality of individual risk profiles, (4) data from a plurality of corporate risk profiles, (5) data from an entitlement database, (6) data corresponding to a plurality of regulatory parameters, (7) data from a plurality of individual account profiles, (8) data from a plurality of corporate account profiles, and (9) data corresponding to a plurality of order routing parameters.
  • 3. The apparatus of claim 1 wherein the parallel components are deployed on the member as a processing pipeline configured for parallel operation such that each of the parallel components are configured to operate simultaneously.
  • 4. The apparatus of claim 1 wherein the member comprises the reconfigurable logic device.
  • 5. The apparatus of claim 4 wherein the reconfigurable logic device comprises a field programmable gate array (FPGA).
  • 6. The apparatus of claim 1 wherein the member comprises the GPU.
  • 7. The apparatus of claim 1 wherein the member comprises the CMP.
  • 8. A method comprising: processing a plurality of orders based on a plurality of inputs, the orders pertaining to a plurality of financial instruments traded on one or more financial markets, wherein the processing step is performed by a member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processor unit (GPU), and (3) a chip multi-processor (CMP), wherein the member is configured as an order management engine, the order management engine comprising a plurality of parallel components that (i) are integrated on the member via a plurality of dedicated interconnects and (ii) exploit parallelism internally within the member to operate in parallel with each other, wherein the parallel components include a mapping component and an order validation component that are integrated via the dedicated interconnects in a feed-forward orientation, and wherein the processing step comprises: the mapping component mapping the orders to applicable risk data and regulatory data for pre-fetching from memory to seed order validation checks by the order validation component; andthe order validation component (1) receiving the pre-fetched applicable risk data and regulatory data and (2) performing a plurality of order validation checks on the orders against a plurality of rules in parallel to validate the orders, wherein the rules include rules based on the orders' applicable risk data and regulatory data, the validated orders for transmission to one or more financial markets.
  • 9. The method of claim 8 wherein the parallel components further include a routing strategy component and an order entry optimization component that are integrated with the mapping component and the order validation component via the dedicated interconnects, the processing step further comprising: the routing strategy component (1) receiving validated orders from the order validation component and (2) performing a routing strategy operation on the validated orders to determine the financial markets in which to route the validated orders; andthe order entry optimization component (1) receiving the validated orders, (2) receiving a plurality of routing instructions from the routing strategy component that are associated with the validated orders, and (3) performing an order entry optimization operation on the validated orders based on the received routing instructions to generate a plurality of outgoing orders for a plurality of financial markets in accordance with the routing instructions.
  • 10. The method of claim 8 wherein the parallel components are deployed on the member as a processing pipeline, the parallel components operating simultaneously in parallel through the pipeline.
  • 11. The method of claim 8 wherein the member comprises the reconfigurable logic device.
  • 12. The method of claim 11 wherein the reconfigurable logic device comprises a field programmable gate array (FPGA).
  • 13. The method of claim 8 wherein the member comprises the GPU.
  • 14. The method of claim 8 wherein the member comprises the CMP.
  • 15. The apparatus of claim 1 wherein the parallel components further include a market view component, wherein the market view component is configured to maintain a current market view, the current market view comprising a current view of pricing and liquidity in one or more financial markets for one or more financial instruments; wherein the mapping component is further configured to map the orders to their applicable financial instruments to initiate a retrieval from the market view component of current pricing information within the current market view for the applicable financial instruments; andwherein the order validation component is further configured to receive the retrieved relevant current pricing information for the orders from the market view component via the dedicated interconnects, and wherein the rules include rules that are based on the orders' relevant current pricing information.
  • 16. The apparatus of claim 15 wherein the dedicated interconnects connect the mapping component with the market view component; wherein the mapping component is further configured to (1) resolve instrument identifiers for the orders and (2) provide the resolved instrument identifiers to the market view component via the dedicated interconnects to initiate retrieval of the relevant current pricing information; andwherein the market view component is further configured to (1) retrieve the current pricing information for the financial instruments applicable to the orders based on the resolved instrument identifiers and (2) provide the retrieved current pricing information to the order validation component via the dedicated interconnects.
  • 17. The apparatus of claim 1 wherein the parallel components further include a routing strategy component, the routing strategy component configured to (1) receive validated orders from the order validation component and (2) perform a routing strategy operation on the validated orders to determine a plurality of financial markets to which to route the validated orders, and wherein the dedicated interconnects connect the order validation component with the routing strategy component in the feed-forward orientation.
  • 18. The apparatus of claim 17 wherein the parallel components further include a market view component, wherein the market view component is configured to maintain a current market view, the current market view comprising a current view of pricing and liquidity in one or more financial markets for one or more financial instruments; wherein the mapping component is further configured to map the orders to their applicable financial instruments to initiate a retrieval from the market view component of current pricing information within the current market view for the applicable financial instruments; andwherein the routing strategy component is further configured to (1) receive the retrieved relevant current pricing information for the validated orders from the market view component via the dedicated interconnects and (2) determine which of the financial markets to route the validated orders to based on the received relevant current pricing information for the validated orders.
  • 19. The apparatus of claim 18 wherein the dedicated interconnects connect the mapping component with the market view component; wherein the mapping component is further configured to (1) resolve instrument identifiers for the orders and (2) provide the resolved instrument identifiers to the market view component via the dedicated interconnects to initiate retrieval of the relevant current pricing information;wherein the market view component is further configured to (1) retrieve the current pricing information for the financial instruments applicable to the orders based on the resolved instrument identifiers and (2) provide the retrieved current pricing information to the routing strategy component via the dedicated interconnects.
  • 20. The apparatus of claim 1 wherein the parallel components communicate via the dedicated interconnects without using general-purpose messaging buses.
  • 21. The apparatus of claim 1 wherein the risk data comprises risk data from a corporate account risk profile applicable to the orders, the apparatus further comprising a record cache configured to store the corporate account risk profile, wherein the mapping component initiates retrieval of the risk data from the record cache.
  • 22. The apparatus of claim 1 wherein the risk data comprises risk data from an individual account risk profile applicable to the orders, the apparatus further comprising a record cache configured to store the individual account risk profile, wherein the mapping component initiates retrieval of the risk data from the record cache.
  • 23. The apparatus of claim 1 wherein the risk data comprises (1) first risk data from a corporate account risk profile applicable to the orders and (2) second risk data from an individual account risk profile applicable to the orders, the apparatus further comprising record caches configured to store the corporate account and individual account risk profiles, wherein the mapping component initiates retrieval of the first and second risk data from the record caches.
  • 24. The apparatus of claim 1 wherein the order validation component comprises: a plurality of parallel logic instances that are configured to test the orders against the rules in parallel; and combinatorial logic that is configured to validate the orders if the parallel logic instances indicate that the orders satisfied all of the rules.
  • 25. The apparatus of claim 24 wherein the order validation component comprises a plurality of buffers that feed the parallel logic instances, the buffers including a first buffer that is configured to buffer data representing the orders, a second buffer that is configured to buffer data representing the risk data applicable to the orders, and a third buffer that is configured to buffer data representing the regulatory data applicable to the orders.
  • 26. The apparatus of claim 17 wherein the routing strategy component is further configured to select a destination market and handling directives for an order or part of an order based on at least one member of the group consisting of (1) a current view of pricing and liquidity available at one or more markets, (2) a current view of pricing and liquidity statistics computed for one or more markets, (3) a current view of last trade prices for one or more markets, (4) a current view of trade statistics computed for one or more markets, (5) an estimation of intra-market latency, (6) a plurality of routing parameters from a routing parameter record, (7) a plurality of individual account parameters from an individual account record, and (8) a plurality of corporate account parameters from a corporate account record.
  • 27. The apparatus of claim 15 wherein the order management engine comprises a memory configured to store the current market view.
  • 28. The apparatus of claim 19 wherein the market view component is further configured to provide the retrieved current pricing information to the order validation component via the dedicated interconnects; and wherein the order validation component is further configured to receive the retrieved relevant current pricing information for the orders from the market view component via the dedicated interconnects, and wherein the rules include rules that are based on the orders' relevant current pricing information.
  • 29. The apparatus of claim 17 wherein the parallel components further include: an order entry optimization component, wherein the dedicated interconnects connect the order entry optimization component with the routing strategy component in the feed-forward orientation, the order entry optimization component configured to (1) receive the validated orders, (2) receive a plurality of routing instructions from the routing strategy component that are associated with the validated orders, and (3) perform the order entry optimization operation on the validated orders based on the received routing instructions to generate a plurality of outgoing orders for a plurality of financial markets in accordance with the routing instructions.
  • 30. The apparatus of claim 18 wherein the parallel components further include: a position blotter update component configured to track a plurality of positions relating to the orders.
  • 31. The apparatus of claim 27 wherein the memory comprises a shared memory, wherein the shared memory is shared between the order management engine and a ticker plant engine; wherein the ticker plant engine is configured to write normalized financial market data to the shared memory via a peer-to-peer hardware interconnect; and wherein the market view component is further configured to generate the current market view based on the normalized financial market data in the shared memory.
  • 32. The apparatus of claim 30 wherein the position blotter update component is further configured to (1) receive a plurality of response events from the one or more financial markets that are responsive to a plurality of previous orders from the order management engine and (2) update the tracked positions based on the received response events.
  • 33. The apparatus of claim 32 wherein the dedicated interconnects connect the position blotter update component with the market view component, and wherein the market view component is further configured to further update the current market view for one or more financial instruments on one or more financial markets based on the positions tracked by the position blotter update component.
  • 34. The apparatus of claim 32 wherein the dedicated interconnects connect the position blotter update component with the order validation component, and wherein the rules include rules that are based on positions tracked by the position blotter update component.
  • 35. The apparatus of claim 32 wherein the dedicated interconnects connect the position blotter update component with the routing strategy component, and wherein the routing strategy component is further configured to determine which of the financial markets to route the validated orders to based on positions tracked by the position blotter update component.
  • 36. The apparatus of claim 29 wherein the parallel components further include a latency monitor component, the latency monitor component configured to estimate intra-market latency for one or more order entry channels.
  • 37. The apparatus of claim 36 wherein the latency monitor component is further configured to estimate intra-market latency for one or more financial instruments for one or more order entry channels.
  • 38. The apparatus of claim 36 wherein the latency monitor component is further configured to estimate intra-market latency for one or more order types for one or more order entry channels.
  • 39. The apparatus of claim 36 wherein the latency monitor component is further configured to estimate intra-market latency based on measurements of a round-trip-time from (1) a transmission of an outgoing order event to a market to (2) a receipt of a corresponding order entry response event from the market.
  • 40. The apparatus of claim 39 wherein the latency monitor component is further configured to measure round-trip-time by (1) recording a timestamp for a transmission of each of a plurality of outgoing order events, (2) recording a timestamp for a receipt of each of a plurality of order entry response events, (3) identifying the outgoing order events that correspond to order entry response events, and (4) computing the difference in timestamps as between the identified outgoing order events and their corresponding order entry response events.
  • 41. The apparatus of claim 36 wherein the dedicated interconnects connect the latency monitor component with the order entry optimization component, wherein the latency monitor component is further configured to communicate intra-market latency data to the order entry optimization component, and wherein the order entry optimization component is further configured to perform the order entry optimization operation based on the received routing instructions and the communicated intra-market latency data to generate the outgoing orders.
  • 42. The apparatus of claim 36 wherein the dedicated interconnects connect the latency monitor component with the routing strategy component, wherein the latency monitor component is further configured to communicate intra-market latency data to the routing strategy component, and wherein the routing strategy component is further configured to determine which of the financial markets to route the validated orders to based on the communicated intra-market latency data.
  • 43. The apparatus of claim 33 wherein the order management engine comprises a shared memory, wherein the shared memory is shared between the order management engine and a ticker plant engine; wherein the ticker plant engine is configured to write normalized financial market data to the shared memory via a peer-to-peer hardware interconnect; wherein the market view component is further configured to generate a book that provides pricing and liquidity information for a plurality of financial instruments based on the normalized financial market data in the shared memory and the positions tracked by the position blotter update component; and wherein the market view component is further configured to maintain a cache of updates triggered by the positions tracked by the position blotter update component to prevent redundant updates to the book.
  • 44. The apparatus of claim 29 wherein the order entry optimization component is further configured to perform the order entry optimization operation based on at least one member of the group consisting of (1) a plurality of estimates of intra-market latency and (2) a plurality of directives specified by the routing strategy component.
  • 45. The apparatus of claim 29 wherein the parallel components further include a market view component, wherein the market view component is configured to maintain a current market view, the current market view comprising a current view of pricing and liquidity in one or more financial markets for one or more financial instruments; wherein the mapping component is further configured to map the orders to their applicable financial instruments to initiate a retrieval from the market view component of current pricing information within the current market view for the applicable financial instruments; and wherein the order entry optimization component is further configured to (1) receive the retrieved relevant current pricing information for the orders from the market view component via the dedicated interconnects and (2) perform the order entry optimization operation on the validated orders based on the received routing instructions and the received relevant current pricing information for the orders to generate the outgoing orders.
  • 46. The apparatus of claim 31 wherein the market view component is further configured to update the current market view based on a plurality of order entry confirmation and order fill reports received from a plurality of the one or more financial markets.
  • 47. The apparatus of claim 45 wherein the market view component is further configured to generate the current market view from an input comprising financial market data relating to the one or more financial instruments.
  • 48. The apparatus of claim 45 wherein the current market view includes a pricing and liquidity statistics view relating to the one or more financial instruments.
  • 49. The apparatus of claim 45 wherein the current market view includes a last trade pricing view relating to the one or more financial instruments.
  • 50. The apparatus of claim 45 wherein the current market view includes a last trade statistics view relating to the one or more financial instruments.
  • 51. The apparatus of claim 45 wherein the current market view comprises a current composite view of pricing and liquidity across a plurality of financial markets for one or more financial instruments.
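The sketches that follow are illustrative only and are not part of the claims. First, claims 24 and 25 recite an order validation component built from parallel logic instances, fed by separate buffers for order data, risk data, and regulatory data, with combinatorial logic that validates an order only when every rule is satisfied. A minimal software analogue of that structure is sketched below; the struct fields and the three example rules are assumptions chosen for illustration, not elements of the claims.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct Order    { double price; long quantity; int instrumentId; };
struct RiskData { double maxNotional; long maxQuantity; };   // assumed fields
struct RegData  { double minPrice; };                        // assumed field

// Analogue of the three input buffers recited in claim 25.
struct ValidationInputs {
    Order    order;
    RiskData risk;
    RegData  regulatory;
};

using Rule = std::function<bool(const ValidationInputs&)>;

// In the claimed apparatus each rule is tested by a separate parallel logic instance;
// here the rules are evaluated in turn and their results combined with a logical AND,
// mirroring the combinatorial-logic stage that validates only fully compliant orders.
bool validateOrder(const ValidationInputs& in, const std::vector<Rule>& rules) {
    return std::all_of(rules.begin(), rules.end(),
                       [&](const Rule& r) { return r(in); });
}

int main() {
    std::vector<Rule> rules = {
        [](const ValidationInputs& v) { return v.order.quantity <= v.risk.maxQuantity; },
        [](const ValidationInputs& v) { return v.order.price * v.order.quantity <= v.risk.maxNotional; },
        [](const ValidationInputs& v) { return v.order.price >= v.regulatory.minPrice; },
    };
    ValidationInputs in{ {10.25, 500, 42}, {1000000.0, 10000}, {0.01} };
    return validateOrder(in, rules) ? 0 : 1;   // 0 if the order passes every rule
}
```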
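Claims 39 and 40 recite estimating intra-market latency by timestamping each outgoing order event, timestamping each order entry response event, matching responses to the orders that produced them, and computing the timestamp differences. A minimal sketch of that bookkeeping follows, assuming orders and responses can be matched by an order identifier; the exponential smoothing of the running estimate is an added assumption, not an element of the claims.

```cpp
#include <chrono>
#include <cstdint>
#include <unordered_map>

class LatencyMonitor {
    using Clock = std::chrono::steady_clock;
    std::unordered_map<std::uint64_t, Clock::time_point> pending_;  // order id -> send timestamp
    double rttEstimateUs_ = 0.0;                                    // smoothed round-trip time (microseconds)

public:
    // (1) Record a timestamp when an outgoing order event is transmitted.
    void onOrderSent(std::uint64_t orderId) {
        pending_[orderId] = Clock::now();
    }

    // (2) Record a timestamp when the order entry response arrives, (3) match it to the
    // outgoing order that produced it, and (4) compute the difference in timestamps.
    void onResponseReceived(std::uint64_t orderId) {
        const auto now = Clock::now();
        const auto it = pending_.find(orderId);
        if (it == pending_.end()) return;                 // no matching outgoing order recorded
        const double rttUs =
            std::chrono::duration<double, std::micro>(now - it->second).count();
        pending_.erase(it);
        // Fold the measurement into a running estimate; the smoothing factor is an assumption.
        rttEstimateUs_ = (rttEstimateUs_ == 0.0) ? rttUs : 0.9 * rttEstimateUs_ + 0.1 * rttUs;
    }

    double estimatedRoundTripMicros() const { return rttEstimateUs_; }
};
```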
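Claims 17, 18, 26, and 42 recite a routing strategy component that selects destination markets for validated orders using, among other inputs, the current view of pricing and liquidity and estimates of intra-market latency. The sketch below illustrates one such selection for a buy order under assumed inputs; the best-price-then-lowest-latency heuristic is an illustrative choice, not the claimed routing strategy.

```cpp
#include <limits>
#include <string>
#include <vector>

struct MarketQuote {
    std::string market;      // destination market identifier
    double      askPrice;    // best offered price for a buy order
    long        askSize;     // displayed liquidity at that price
    double      latencyUs;   // estimated round-trip latency to this market
};

// Choose the market with the best price among those showing enough liquidity;
// break price ties by preferring the market with the lower estimated latency.
std::string routeBuyOrder(const std::vector<MarketQuote>& view, long quantity) {
    std::string best;
    double bestPrice   = std::numeric_limits<double>::max();
    double bestLatency = std::numeric_limits<double>::max();
    for (const auto& q : view) {
        if (q.askSize < quantity) continue;               // not enough displayed liquidity
        if (q.askPrice < bestPrice ||
            (q.askPrice == bestPrice && q.latencyUs < bestLatency)) {
            best        = q.market;
            bestPrice   = q.askPrice;
            bestLatency = q.latencyUs;
        }
    }
    return best;   // empty string if no market can fill the full quantity
}
```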
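Claim 43 recites a market view component that maintains a cache of updates triggered by tracked positions so that redundant updates to the pricing and liquidity book are suppressed. A minimal sketch of such a deduplication cache follows; keying the cache by instrument identifier and sequence number is an assumption made for illustration.

```cpp
#include <cstdint>
#include <unordered_map>

struct BookLevel { double price; long size; };

class MarketViewBook {
    std::unordered_map<int, BookLevel>     book_;            // instrument id -> top of book
    std::unordered_map<int, std::uint64_t> lastAppliedSeq_;  // cache of already-applied updates

public:
    // Apply a position-driven update only if it is newer than the last update applied
    // for that instrument; otherwise drop it as redundant and leave the book unchanged.
    bool applyPositionUpdate(int instrumentId, std::uint64_t seqNo, const BookLevel& level) {
        const auto it = lastAppliedSeq_.find(instrumentId);
        if (it != lastAppliedSeq_.end() && seqNo <= it->second)
            return false;                                    // redundant update, skipped
        lastAppliedSeq_[instrumentId] = seqNo;
        book_[instrumentId] = level;
        return true;
    }
};
```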
CROSS-REFERENCE AND PRIORITY CLAIM TO RELATED PATENT APPLICATIONS

This patent application is a divisional of U.S. patent application Ser. No. 13/316,332, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2011, now U.S. Pat. No. 10,037,568, which claims priority to provisional patent application 61/421,545, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2010, the entire disclosures of each of which are incorporated herein by reference. This patent application is related to PCT patent application PCT/US2011/064269, entitled “Method and Apparatus for Managing Orders in Financial Markets”, filed Dec. 9, 2011, and published as WO Publication WO2012/079041, the entire disclosure of which is incorporated herein by reference. This patent application is also related to U.S. Pat. Nos. 7,840,482, 7,921,046, and 7,954,114 as well as the following published patent applications: U.S. Pat. App. Pub. 2007/0174841, U.S. Pat. App. Pub. 2007/0294157, U.S. Pat. App. Pub. 2008/0243675, U.S. Pat. App. Pub. 2009/0182683, U.S. Pat. App. Pub. 2009/0287628, U.S. Pat. App. Pub. 2011/0040701, U.S. Pat. App. Pub. 2011/0178911, U.S. Pat. App. Pub. 2011/0178912, U.S. Pat. App. Pub. 2011/0178917, U.S. Pat. App. Pub. 2011/0178918, U.S. Pat. App. Pub. 2011/0178919, U.S. Pat. App. Pub. 2011/0178957, U.S. Pat. App. Pub. 2011/0179050, U.S. Pat. App. Pub. 2011/0184844, and WO Pub. WO 2010/077829, the entire disclosures of each of which are incorporated herein by reference.

US Referenced Citations (633)
Number Name Date Kind
2046381 Hicks et al. Jul 1936 A
3082402 Scantlin Mar 1963 A
3296597 Scantlin et al. Jan 1967 A
3573747 Adams et al. Apr 1971 A
3581072 Nymeyer May 1971 A
3601808 Vlack Aug 1971 A
3611314 Pritchard, Jr. et al. Oct 1971 A
3729712 Glassman Apr 1973 A
3824375 Gross et al. Jul 1974 A
3848235 Lewis et al. Nov 1974 A
3906455 Houston et al. Sep 1975 A
4044334 Bachman et al. Aug 1977 A
4081607 Vitols et al. Mar 1978 A
4298898 Cardot Nov 1981 A
4300193 Bradley et al. Nov 1981 A
4314356 Scarbrough Feb 1982 A
4385393 Chaure et al. May 1983 A
4412287 Braddock, III Oct 1983 A
4464718 Dixon et al. Aug 1984 A
4550436 Freeman et al. Oct 1985 A
4674044 Kalmus et al. Jun 1987 A
4811214 Nosenchuck et al. Mar 1989 A
4823306 Barbic et al. Apr 1989 A
4868866 Williams, Jr. Sep 1989 A
4903201 Wagner Feb 1990 A
4941178 Chuang Jul 1990 A
5023910 Thomson Jun 1991 A
5038284 Kramer Aug 1991 A
5050075 Herman et al. Sep 1991 A
5063507 Lindsey et al. Nov 1991 A
5077665 Silverman et al. Dec 1991 A
5101353 Lupien et al. Mar 1992 A
5101424 Clayton et al. Mar 1992 A
5126936 Champion et al. Jun 1992 A
5140692 Morita Aug 1992 A
5161103 Kosaka et al. Nov 1992 A
5163131 Row et al. Nov 1992 A
5179626 Thomson Jan 1993 A
5208491 Ebeling et al. May 1993 A
5226165 Martin Jul 1993 A
5233539 Agrawal et al. Aug 1993 A
5243655 Wang Sep 1993 A
5249292 Chiappa Sep 1993 A
5255136 Machado et al. Oct 1993 A
5258908 Hartheimer et al. Nov 1993 A
5265065 Turtle Nov 1993 A
5267148 Kosaka et al. Nov 1993 A
5270922 Higgins Dec 1993 A
5297032 Trojan et al. Mar 1994 A
5313560 Maruoka et al. May 1994 A
5315634 Tanaka et al. May 1994 A
5319776 Hile et al. Jun 1994 A
5327521 Savic et al. Jul 1994 A
5339411 Heaton, Jr. Aug 1994 A
5361373 Gilson Nov 1994 A
5371794 Diffie et al. Dec 1994 A
5375055 Togher et al. Dec 1994 A
5388259 Fleischman et al. Feb 1995 A
5396253 Chia Mar 1995 A
5404488 Kerrigan et al. Apr 1995 A
5418951 Damashek May 1995 A
5432822 Kaewell, Jr. Jul 1995 A
5461712 Chelstowski et al. Oct 1995 A
5465353 Hull et al. Nov 1995 A
5481735 Mortensen et al. Jan 1996 A
5488725 Turtle et al. Jan 1996 A
5497317 Hawkins et al. Mar 1996 A
5497488 Akizawa et al. Mar 1996 A
5500793 Deming, Jr. et al. Mar 1996 A
5517642 Bezek et al. May 1996 A
5544352 Egger Aug 1996 A
5546578 Takada et al. Aug 1996 A
5596569 Madonna et al. Jan 1997 A
5619574 Johnson et al. Apr 1997 A
5651125 Witt et al. Jul 1997 A
5680634 Estes Oct 1997 A
5684980 Casselman Nov 1997 A
5701464 Aucsmith Dec 1997 A
5712942 Jennings et al. Jan 1998 A
5721898 Beardsley et al. Feb 1998 A
5740244 Indeck et al. Apr 1998 A
5740466 Geldman et al. Apr 1998 A
5774835 Ozawa et al. Jun 1998 A
5774839 Shlomot Jun 1998 A
5781772 Wilkinson, III et al. Jul 1998 A
5781921 Nichols Jul 1998 A
5802290 Casselman Sep 1998 A
5805832 Brown et al. Sep 1998 A
5809483 Broka et al. Sep 1998 A
5813000 Furlani Sep 1998 A
5819273 Vora et al. Oct 1998 A
5819290 Fujita et al. Oct 1998 A
5826075 Bealkowski et al. Oct 1998 A
5845266 Lupien et al. Dec 1998 A
5857176 Ginsberg Jan 1999 A
5864738 Kessler et al. Jan 1999 A
5870730 Furuya et al. Feb 1999 A
5873071 Ferstenberg et al. Feb 1999 A
5884286 Daughtery, III Mar 1999 A
5905974 Fraser et al. May 1999 A
5913211 Nitta Jun 1999 A
5930753 Potamianos et al. Jul 1999 A
5943421 Grabon Aug 1999 A
5943429 Händel Aug 1999 A
5963923 Garber Oct 1999 A
5978801 Yuasa Nov 1999 A
5987432 Zusman Nov 1999 A
5991881 Conklin et al. Nov 1999 A
5995963 Nanba et al. Nov 1999 A
6006264 Colby et al. Dec 1999 A
6016483 Rickard et al. Jan 2000 A
6023755 Casselman Feb 2000 A
6023760 Karttunen Feb 2000 A
6028939 Yin Feb 2000 A
6034538 Abramovici Mar 2000 A
6044407 Jones et al. Mar 2000 A
6058391 Gardner May 2000 A
6061662 Makivic May 2000 A
6064739 Davis May 2000 A
6067569 Khaki et al. May 2000 A
6070172 Lowe May 2000 A
6073160 Grantham et al. Jun 2000 A
6084584 Nahi et al. Jul 2000 A
6096091 Hartmann Aug 2000 A
6105067 Batra Aug 2000 A
6134551 Aucsmith Oct 2000 A
6138176 McDonald et al. Oct 2000 A
6147976 Shand et al. Nov 2000 A
6169969 Cohen Jan 2001 B1
6173270 Cristofich et al. Jan 2001 B1
6173276 Kant et al. Jan 2001 B1
6178494 Casselman Jan 2001 B1
6195024 Fallon Feb 2001 B1
6226676 Crump et al. May 2001 B1
6236980 Reese May 2001 B1
6243753 Machin et al. Jun 2001 B1
6247060 Boucher et al. Jun 2001 B1
6263321 Daughtery, III Jul 2001 B1
6272616 Fernando et al. Aug 2001 B1
6278982 Korhammer et al. Aug 2001 B1
6279113 Vaidya Aug 2001 B1
6279140 Slane Aug 2001 B1
6289440 Casselman Sep 2001 B1
6295530 Ritchie et al. Sep 2001 B1
6304858 Mosler et al. Oct 2001 B1
6309424 Fallon Oct 2001 B1
6317728 Kane Nov 2001 B1
6317795 Malkin et al. Nov 2001 B1
6321258 Stollfus et al. Nov 2001 B1
6336150 Ellis et al. Jan 2002 B1
6339819 Huppenthal et al. Jan 2002 B1
6370592 Kumpf Apr 2002 B1
6370645 Lee et al. Apr 2002 B1
6377942 Hinsley et al. Apr 2002 B1
6397259 Lincke et al. May 2002 B1
6397335 Franczek et al. May 2002 B1
6412000 Riddle et al. Jun 2002 B1
6415269 Dinwoodie Jul 2002 B1
6418419 Nieboer et al. Jul 2002 B1
6430272 Maruyama et al. Aug 2002 B1
6456982 Pilipovic Sep 2002 B1
6463474 Fuh et al. Oct 2002 B1
6484209 Momirov Nov 2002 B1
6499107 Gleichauf et al. Dec 2002 B1
6535868 Galeazzi et al. Mar 2003 B1
6546375 Pang et al. Apr 2003 B1
6578147 Shanklin et al. Jun 2003 B1
6581098 Kumpf Jun 2003 B1
6591302 Boucher et al. Jul 2003 B2
6594643 Freeny, Jr. Jul 2003 B1
6597812 Fallon et al. Jul 2003 B1
6601094 Mentze et al. Jul 2003 B1
6601104 Fallon Jul 2003 B1
6604158 Fallon Aug 2003 B1
6624761 Fallon Sep 2003 B2
6625150 Yu Sep 2003 B1
6691301 Bowen Feb 2004 B2
6704816 Burke Mar 2004 B1
6711558 Indeck et al. Mar 2004 B1
6765918 Dixon et al. Jul 2004 B1
6766304 Kemp, II et al. Jul 2004 B2
6772132 Kemp, II et al. Aug 2004 B1
6772136 Kant et al. Aug 2004 B2
6772345 Shetty Aug 2004 B1
6778968 Gulati Aug 2004 B1
6785677 Fritchman Aug 2004 B1
6804667 Martin Oct 2004 B1
6807156 Veres et al. Oct 2004 B1
6820129 Courey, Jr. Nov 2004 B1
6839686 Galant Jan 2005 B1
6847645 Potter et al. Jan 2005 B1
6850906 Chadha et al. Feb 2005 B1
6877044 Lo et al. Apr 2005 B2
6886103 Brustoloni et al. Apr 2005 B1
6901461 Bennett May 2005 B2
6931408 Adams et al. Aug 2005 B2
6944168 Paatela et al. Sep 2005 B2
6978223 Milliken Dec 2005 B2
6981054 Krishna Dec 2005 B1
7003488 Dunne et al. Feb 2006 B2
7024384 Daughtery, III Apr 2006 B2
7046848 Olcott May 2006 B1
7058735 Spencer Jun 2006 B2
7065475 Brundobler Jun 2006 B1
7089206 Martin Aug 2006 B2
7089326 Boucher et al. Aug 2006 B2
7093023 Lockwood et al. Aug 2006 B2
7099838 Gastineau et al. Aug 2006 B1
7103569 Groveman et al. Sep 2006 B1
7117280 Vasudevan Oct 2006 B2
7124106 Stallaert et al. Oct 2006 B1
7127424 Kemp, II et al. Oct 2006 B2
7130913 Fallon Oct 2006 B2
7139743 Indeck et al. Nov 2006 B2
7149715 Browne et al. Dec 2006 B2
7161506 Fallon Jan 2007 B2
7167980 Chiu Jan 2007 B2
7177833 Marynowski et al. Feb 2007 B1
7181437 Indeck et al. Feb 2007 B2
7181608 Fallon et al. Feb 2007 B2
7212998 Muller et al. May 2007 B1
7222114 Chan et al. May 2007 B1
7224185 Campbell et al. May 2007 B2
7225188 Gai et al. May 2007 B1
7228289 Brumfield et al. Jun 2007 B2
7249118 Sandler et al. Jul 2007 B2
7251629 Marynowski et al. Jul 2007 B1
7257842 Barton et al. Aug 2007 B2
7277887 Burrows et al. Oct 2007 B1
7287037 An et al. Oct 2007 B2
7305383 Kubesh et al. Dec 2007 B1
7305391 Wyschogrod et al. Dec 2007 B2
7321937 Fallon Jan 2008 B2
7356498 Kaminsky et al. Apr 2008 B2
7363277 Dutta et al. Apr 2008 B1
7378992 Fallon May 2008 B2
7386046 Fallon et al. Jun 2008 B2
7406444 Eng et al. Jul 2008 B2
7417568 Fallon et al. Aug 2008 B2
7454418 Wang Nov 2008 B1
7457834 Jung et al. Nov 2008 B2
7461064 Fontoura et al. Dec 2008 B2
7478431 Nachenberg Jan 2009 B1
7487327 Chang et al. Feb 2009 B1
7496108 Biran et al. Feb 2009 B2
7539845 Wentzlaff et al. May 2009 B1
7558753 Neubert et al. Jul 2009 B2
7558925 Bouchard et al. Jul 2009 B2
7565525 Vorbach et al. Jul 2009 B2
7580719 Karmarkar Aug 2009 B2
7587476 Sato Sep 2009 B2
7598958 Kelleher Oct 2009 B1
7603303 Kraus et al. Oct 2009 B1
7606267 Ho et al. Oct 2009 B2
7606968 Branscome et al. Oct 2009 B2
7617291 Fan et al. Nov 2009 B2
7636703 Taylor Dec 2009 B2
7660761 Zhou et al. Feb 2010 B2
7668849 Narancic et al. Feb 2010 B1
7685121 Brown et al. Mar 2010 B2
7698338 Hinshaw et al. Apr 2010 B2
7701945 Roesch et al. Apr 2010 B2
7714747 Fallon May 2010 B2
7715436 Eiriksson et al. May 2010 B1
7760733 Eiriksson et al. Jul 2010 B1
7761459 Zhang et al. Jul 2010 B1
7788293 Pasztor et al. Aug 2010 B2
7831720 Noureddine et al. Nov 2010 B1
7840482 Singla et al. Nov 2010 B2
7856545 Casselman Dec 2010 B2
7856546 Casselman et al. Dec 2010 B2
7908213 Monroe et al. Mar 2011 B2
7908259 Branscome et al. Mar 2011 B2
7917299 Buhler et al. Mar 2011 B2
7921046 Parsons et al. Apr 2011 B2
7945528 Cytron et al. May 2011 B2
7949650 Indeck et al. May 2011 B2
7953743 Indeck et al. May 2011 B2
7954114 Chamberlain et al. May 2011 B2
7991667 Kraus et al. Aug 2011 B2
8015099 Reid Sep 2011 B2
8024253 Peterffy et al. Sep 2011 B2
8027893 Burrows et al. Sep 2011 B1
8032440 Hait Oct 2011 B1
8046283 Burns et al. Oct 2011 B2
8069102 Indeck et al. Nov 2011 B2
8073763 Merrin et al. Dec 2011 B1
8095508 Chamberlain et al. Jan 2012 B2
8131697 Indeck et al. Mar 2012 B2
8140416 Borkovec et al. Mar 2012 B2
8156101 Indeck et al. Apr 2012 B2
8175946 Hamati et al. May 2012 B2
8224800 Branscome et al. Jul 2012 B2
8229918 Branscome et al. Jul 2012 B2
8234267 Branscome et al. Jul 2012 B2
8244718 Chamdani et al. Aug 2012 B2
8326819 Indeck et al. Dec 2012 B2
8407122 Parsons et al. Mar 2013 B2
8458081 Parsons et al. Jun 2013 B2
8478680 Parsons et al. Jul 2013 B2
8515682 Buhler et al. Aug 2013 B2
8549024 Indeck et al. Oct 2013 B2
8595104 Parsons et al. Nov 2013 B2
8600856 Parsons et al. Dec 2013 B2
8620881 Chamberlain et al. Dec 2013 B2
8626624 Parsons et al. Jan 2014 B2
8655764 Parsons et al. Feb 2014 B2
8660925 Borkovec et al. Feb 2014 B2
8751452 Chamberlain et al. Jun 2014 B2
8762249 Taylor et al. Jun 2014 B2
8768805 Taylor et al. Jul 2014 B2
8768888 Chamberlain et al. Jul 2014 B2
8843408 Singla et al. Sep 2014 B2
8880501 Indeck et al. Nov 2014 B2
8880551 Hinshaw et al. Nov 2014 B2
9020928 Indeck et al. Apr 2015 B2
9047243 Taylor et al. Jun 2015 B2
9166597 Denisenko et al. Oct 2015 B1
9176775 Chamberlain et al. Nov 2015 B2
9396222 Indeck et al. Jul 2016 B2
9582831 Parsons et al. Feb 2017 B2
9672565 Parsons et al. Jun 2017 B2
9961006 Sutardja et al. May 2018 B1
10037568 Taylor et al. Jul 2018 B2
10062115 Taylor et al. Aug 2018 B2
10121196 Parsons et al. Nov 2018 B2
10191974 Indeck et al. Jan 2019 B2
10229453 Taylor et al. Mar 2019 B2
10572824 Chamberlain et al. Feb 2020 B2
10650452 Parsons et al. May 2020 B2
10909623 Indeck et al. Feb 2021 B2
10929152 Chamberlain et al. Feb 2021 B2
10957423 Buhler et al. Mar 2021 B2
10963962 Parsons et al. Mar 2021 B2
20010003193 Woodring et al. Jun 2001 A1
20010004354 Jolitz Jun 2001 A1
20010005314 Farooq et al. Jun 2001 A1
20010013048 Imbert de Tremiolles et al. Aug 2001 A1
20010015753 Myers Aug 2001 A1
20010015919 Kean Aug 2001 A1
20010025315 Jolitz Sep 2001 A1
20010042040 Keith Nov 2001 A1
20010044770 Keith Nov 2001 A1
20010047473 Fallon Nov 2001 A1
20010056547 Dixon Dec 2001 A1
20020010825 Wilson Jan 2002 A1
20020019812 Board et al. Feb 2002 A1
20020023010 Rittmaster et al. Feb 2002 A1
20020038276 Buhannic et al. Mar 2002 A1
20020049841 Johnson et al. Apr 2002 A1
20020054604 Kadambi et al. May 2002 A1
20020069375 Bowen Jun 2002 A1
20020072893 Wilson Jun 2002 A1
20020080871 Fallon et al. Jun 2002 A1
20020082967 Kaminsky et al. Jun 2002 A1
20020091826 Comeau et al. Jul 2002 A1
20020095519 Philbrick et al. Jul 2002 A1
20020100029 Bowen Jul 2002 A1
20020101425 Hamid Aug 2002 A1
20020105911 Pruthi et al. Aug 2002 A1
20020119803 Bitterlich et al. Aug 2002 A1
20020129140 Peled et al. Sep 2002 A1
20020138376 Hinkle Sep 2002 A1
20020143521 Call Oct 2002 A1
20020150248 Kovacevic Oct 2002 A1
20020156998 Casselman Oct 2002 A1
20020162025 Sutton et al. Oct 2002 A1
20020166063 Lachman et al. Nov 2002 A1
20020169873 Zodnik Nov 2002 A1
20020180742 Hamid Dec 2002 A1
20020198813 Patterson et al. Dec 2002 A1
20020199173 Bowen Dec 2002 A1
20030009411 Ram et al. Jan 2003 A1
20030009693 Brock et al. Jan 2003 A1
20030014521 Elson et al. Jan 2003 A1
20030014662 Gupta et al. Jan 2003 A1
20030018630 Indeck et al. Jan 2003 A1
20030023653 Dunlop et al. Jan 2003 A1
20030023876 Bardsley et al. Jan 2003 A1
20030028408 RuDusky Feb 2003 A1
20030028690 Appleby-Alis et al. Feb 2003 A1
20030028864 Bowen Feb 2003 A1
20030033234 RuDusky Feb 2003 A1
20030033240 Balson et al. Feb 2003 A1
20030033450 Appleby-Alis Feb 2003 A1
20030033514 Appleby-Allis et al. Feb 2003 A1
20030033588 Alexander Feb 2003 A1
20030033594 Bowen Feb 2003 A1
20030035547 Newton Feb 2003 A1
20030037037 Adams et al. Feb 2003 A1
20030037321 Bowen Feb 2003 A1
20030041129 Appleby-Allis Feb 2003 A1
20030043805 Graham et al. Mar 2003 A1
20030046668 Bowen Mar 2003 A1
20030051043 Wyschogrod et al. Mar 2003 A1
20030055658 RuDusky Mar 2003 A1
20030055769 RuDusky Mar 2003 A1
20030055770 RuDusky Mar 2003 A1
20030055771 RuDusky Mar 2003 A1
20030055777 Ginsberg Mar 2003 A1
20030061409 RuDusky Mar 2003 A1
20030065607 Satchwell Apr 2003 A1
20030065943 Geis et al. Apr 2003 A1
20030069723 Hegde Apr 2003 A1
20030074177 Bowen Apr 2003 A1
20030074489 Steger et al. Apr 2003 A1
20030074582 Patel et al. Apr 2003 A1
20030078865 Lee Apr 2003 A1
20030079060 Dunlop Apr 2003 A1
20030086300 Noyes et al. May 2003 A1
20030093343 Huttenlocher et al. May 2003 A1
20030093347 Gray May 2003 A1
20030097481 Richter May 2003 A1
20030099254 Richter May 2003 A1
20030105620 Bowen Jun 2003 A1
20030105721 Ginter et al. Jun 2003 A1
20030110229 Kulig et al. Jun 2003 A1
20030115485 Milliken Jun 2003 A1
20030117971 Aubury Jun 2003 A1
20030120460 Aubury Jun 2003 A1
20030121010 Aubury Jun 2003 A1
20030126065 Eng et al. Jul 2003 A1
20030130899 Ferguson et al. Jul 2003 A1
20030140337 Aubury Jul 2003 A1
20030154284 Bernardin et al. Aug 2003 A1
20030154368 Stevens et al. Aug 2003 A1
20030163715 Wong Aug 2003 A1
20030167348 Greenblat Sep 2003 A1
20030172017 Feingold et al. Sep 2003 A1
20030177253 Schuehler et al. Sep 2003 A1
20030184593 Dunlop Oct 2003 A1
20030187662 Wilson Oct 2003 A1
20030191876 Fallon Oct 2003 A1
20030208430 Gershon Nov 2003 A1
20030217306 Harthcock et al. Nov 2003 A1
20030221013 Lockwood et al. Nov 2003 A1
20030233302 Weber et al. Dec 2003 A1
20040000928 Cheng et al. Jan 2004 A1
20040015502 Alexander et al. Jan 2004 A1
20040015633 Smith Jan 2004 A1
20040019703 Burton Jan 2004 A1
20040028047 Hou et al. Feb 2004 A1
20040034587 Amberson et al. Feb 2004 A1
20040049596 Schuehler et al. Mar 2004 A1
20040059666 Waelbroeck et al. Mar 2004 A1
20040062245 Sharp et al. Apr 2004 A1
20040064737 Milliken et al. Apr 2004 A1
20040073703 Boucher et al. Apr 2004 A1
20040111632 Halperin Jun 2004 A1
20040123258 Butts Jun 2004 A1
20040162826 Wyschogrod et al. Aug 2004 A1
20040170070 Rapp et al. Sep 2004 A1
20040177340 Hsu et al. Sep 2004 A1
20040186804 Chakraborty et al. Sep 2004 A1
20040186814 Chalermkraivuth et al. Sep 2004 A1
20040199448 Chalermkraivuth et al. Oct 2004 A1
20040199452 Johnston et al. Oct 2004 A1
20040205149 Dillon et al. Oct 2004 A1
20050005145 Teixeira Jan 2005 A1
20050027634 Gershon Feb 2005 A1
20050033672 Lasry et al. Feb 2005 A1
20050038946 Borden Feb 2005 A1
20050044344 Stevens Feb 2005 A1
20050074033 Chauveau Apr 2005 A1
20050080649 Alvarez et al. Apr 2005 A1
20050086520 Dharmapurikar et al. Apr 2005 A1
20050091142 Renton et al. Apr 2005 A1
20050097027 Kavanaugh May 2005 A1
20050111363 Snelgrove et al. May 2005 A1
20050131790 Benzschawel et al. Jun 2005 A1
20050135608 Zheng Jun 2005 A1
20050187844 Chalermkraivuth et al. Aug 2005 A1
20050187845 Eklund et al. Aug 2005 A1
20050187846 Subbu et al. Aug 2005 A1
20050187847 Bonissone et al. Aug 2005 A1
20050187848 Bonissone et al. Aug 2005 A1
20050187849 Bollapragada et al. Aug 2005 A1
20050190787 Kuik et al. Sep 2005 A1
20050195832 Dharmapurikar et al. Sep 2005 A1
20050197938 Davie et al. Sep 2005 A1
20050197939 Davie et al. Sep 2005 A1
20050197948 Davie et al. Sep 2005 A1
20050216384 Partlow et al. Sep 2005 A1
20050228735 Duquette Oct 2005 A1
20050229254 Singh et al. Oct 2005 A1
20050240510 Schweickert et al. Oct 2005 A1
20050243824 Abbazia et al. Nov 2005 A1
20050267836 Crosthwaite et al. Dec 2005 A1
20050283423 Moser et al. Dec 2005 A1
20050283743 Mulholland et al. Dec 2005 A1
20060020536 Renton et al. Jan 2006 A1
20060020715 Jungck Jan 2006 A1
20060026090 Balabon Feb 2006 A1
20060031154 Noviello et al. Feb 2006 A1
20060031156 Noviello et al. Feb 2006 A1
20060047636 Mohania et al. Mar 2006 A1
20060053295 Madhusudan et al. Mar 2006 A1
20060059064 Glinberg et al. Mar 2006 A1
20060059065 Glinberg et al. Mar 2006 A1
20060059066 Glinberg et al. Mar 2006 A1
20060059067 Glinberg et al. Mar 2006 A1
20060059068 Glinberg et al. Mar 2006 A1
20060059069 Glinberg et al. Mar 2006 A1
20060059083 Friesen et al. Mar 2006 A1
20060123425 Ramarao et al. Jun 2006 A1
20060129745 Thiel et al. Jun 2006 A1
20060143099 Partlow et al. Jun 2006 A1
20060146991 Thompson et al. Jul 2006 A1
20060215691 Kobayashi et al. Sep 2006 A1
20060242123 Williams Oct 2006 A1
20060259407 Rosenthal et al. Nov 2006 A1
20060259417 Marynowski et al. Nov 2006 A1
20060269148 Farber et al. Nov 2006 A1
20060282281 Egetoft Dec 2006 A1
20060282369 White Dec 2006 A1
20060294059 Chamberlain et al. Dec 2006 A1
20070011183 Langseth et al. Jan 2007 A1
20070011687 Ilik et al. Jan 2007 A1
20070025351 Cohen Feb 2007 A1
20070061231 Kim-E Mar 2007 A1
20070061241 Jovanovic et al. Mar 2007 A1
20070067108 Buhler et al. Mar 2007 A1
20070067481 Sharma et al. Mar 2007 A1
20070078837 Indeck et al. Apr 2007 A1
20070094199 Deshpande et al. Apr 2007 A1
20070112837 Houh et al. May 2007 A1
20070118457 Peterffy et al. May 2007 A1
20070118494 Jannarone et al. May 2007 A1
20070118500 Indeck et al. May 2007 A1
20070130140 Cytron et al. Jun 2007 A1
20070156574 Marynowski et al. Jul 2007 A1
20070174841 Chamberlain et al. Jul 2007 A1
20070179935 Lee et al. Aug 2007 A1
20070198523 Hayim Aug 2007 A1
20070209068 Ansari et al. Sep 2007 A1
20070237327 Taylor et al. Oct 2007 A1
20070244859 Trippe et al. Oct 2007 A1
20070260602 Taylor Nov 2007 A1
20070260814 Branscome et al. Nov 2007 A1
20070277036 Chamberlain et al. Nov 2007 A1
20070294157 Singla et al. Dec 2007 A1
20070294162 Borkovec Dec 2007 A1
20080077793 Tan et al. Mar 2008 A1
20080082502 Gupta Apr 2008 A1
20080084573 Horowitz et al. Apr 2008 A1
20080086274 Chamberlain et al. Apr 2008 A1
20080097893 Walsky Apr 2008 A1
20080104542 Cohen et al. May 2008 A1
20080109413 Indeck et al. May 2008 A1
20080114724 Indeck et al. May 2008 A1
20080114725 Indeck et al. May 2008 A1
20080114760 Indeck et al. May 2008 A1
20080126274 Jannarone et al. May 2008 A1
20080126320 Indeck et al. May 2008 A1
20080133453 Indeck et al. Jun 2008 A1
20080133519 Indeck et al. Jun 2008 A1
20080162378 Levine et al. Jul 2008 A1
20080175239 Sistanizadeh et al. Jul 2008 A1
20080183688 Chamdani et al. Jul 2008 A1
20080189251 Branscome et al. Aug 2008 A1
20080189252 Branscome et al. Aug 2008 A1
20080243675 Parsons et al. Oct 2008 A1
20080275805 Hecht Nov 2008 A1
20090019219 Magklis et al. Jan 2009 A1
20090182683 Taylor et al. Jul 2009 A1
20090262741 Jungck et al. Oct 2009 A1
20090287628 Indeck et al. Nov 2009 A1
20100005036 Kraus et al. Jan 2010 A1
20100027545 Gomes et al. Feb 2010 A1
20100082895 Branscome et al. Apr 2010 A1
20100106976 Aciicmez et al. Apr 2010 A1
20100198920 Wong et al. Aug 2010 A1
20100257537 Hinshaw et al. Oct 2010 A1
20100306479 Ezzat Dec 2010 A1
20110029471 Chakradhar et al. Feb 2011 A1
20110040701 Singla et al. Feb 2011 A1
20110040776 Najm et al. Feb 2011 A1
20110066832 Casselman et al. Mar 2011 A1
20110125960 Casselman May 2011 A1
20110145130 Glodjo et al. Jun 2011 A1
20110167083 Branscome et al. Jul 2011 A1
20110178911 Parsons et al. Jul 2011 A1
20110178912 Parsons et al. Jul 2011 A1
20110178917 Parsons et al. Jul 2011 A1
20110178918 Parsons et al. Jul 2011 A1
20110178919 Parsons et al. Jul 2011 A1
20110178957 Parsons et al. Jul 2011 A1
20110179050 Parsons et al. Jul 2011 A1
20110184844 Parsons et al. Jul 2011 A1
20110199243 Fallon et al. Aug 2011 A1
20110218987 Branscome et al. Sep 2011 A1
20110231446 Buhler et al. Sep 2011 A1
20110246353 Kraus et al. Oct 2011 A1
20110252008 Chamberlain et al. Oct 2011 A1
20110289230 Ueno Nov 2011 A1
20110295967 Wang et al. Dec 2011 A1
20120065956 Irturk Mar 2012 A1
20120089496 Taylor et al. Apr 2012 A1
20120089497 Taylor et al. Apr 2012 A1
20120095893 Taylor et al. Apr 2012 A1
20120109849 Chamberlain et al. May 2012 A1
20120110316 Chamberlain et al. May 2012 A1
20120116998 Indeck et al. May 2012 A1
20120130922 Indeck et al. May 2012 A1
20120179590 Borkovec et al. Jul 2012 A1
20120215801 Indeck et al. Aug 2012 A1
20120246052 Taylor et al. Sep 2012 A1
20130007000 Indeck et al. Jan 2013 A1
20130086096 Indeck et al. Apr 2013 A1
20130159449 Taylor et al. Jun 2013 A1
20130262287 Parsons et al. Oct 2013 A1
20130290163 Parsons et al. Oct 2013 A1
20140025656 Indeck et al. Jan 2014 A1
20140040109 Parsons et al. Feb 2014 A1
20140067830 Buhler et al. Mar 2014 A1
20140089163 Parsons et al. Mar 2014 A1
20140164215 Parsons et al. Jun 2014 A1
20140180903 Parsons et al. Jun 2014 A1
20140180904 Parsons et al. Jun 2014 A1
20140180905 Parsons et al. Jun 2014 A1
20140181133 Parsons et al. Jun 2014 A1
20140310148 Taylor et al. Oct 2014 A1
20140310717 Chamberlain et al. Oct 2014 A1
20160070583 Chamberlain et al. Mar 2016 A1
20160328470 Indeck et al. Nov 2016 A1
20170102950 Chamberlain et al. Apr 2017 A1
20170124255 Buhler et al. May 2017 A1
20190155831 Indeck et al. May 2019 A1
20190205975 Taylor et al. Jul 2019 A1
20190324770 Chamberlain et al. Oct 2019 A1
20210142218 Chamberlain et al. May 2021 A1
20210200559 Chamberlain et al. Jul 2021 A1
20210304848 Buhler et al. Sep 2021 A1
Foreign Referenced Citations (54)
Number Date Country
0573991 Dec 1993 EP
0880088 Nov 1996 EP
0851358 Jul 1998 EP
0887723 Dec 1998 EP
0911738 Apr 1999 EP
09145544 Jun 1997 JP
09-269901 Oct 1997 JP
11-259559 Sep 1999 JP
11282912 Oct 1999 JP
11316765 Nov 1999 JP
2000286715 Oct 2000 JP
2001268071 Sep 2001 JP
2001283000 Oct 2001 JP
2002101089 Apr 2002 JP
2002269343 Sep 2002 JP
2002352070 Dec 2002 JP
2003-036360 Feb 2003 JP
2003256660 Sep 2003 JP
2006059203 Mar 2006 JP
2006293852 Oct 2006 JP
1180644 Nov 2008 JP
2010-530591 Sep 2010 JP
199010910 Sep 1990 WO
199409443 Apr 1994 WO
199737735 Oct 1997 WO
2000041136 Jul 2000 WO
2001022425 Mar 2001 WO
0135216 May 2001 WO
200172106 Oct 2001 WO
2001080082 Oct 2001 WO
2001080558 Oct 2001 WO
0190890 Nov 2001 WO
2002061525 Aug 2002 WO
2003100650 Apr 2003 WO
2003036845 May 2003 WO
2003100662 Dec 2003 WO
2004017604 Feb 2004 WO
2004042560 May 2004 WO
2004042561 May 2004 WO
2004042562 May 2004 WO
2004042574 May 2004 WO
2005017708 Feb 2005 WO
2005026925 Mar 2005 WO
2005048134 May 2005 WO
2006023948 Mar 2006 WO
2006096324 Sep 2006 WO
2007064685 Jun 2007 WO
2007074903 Jul 2007 WO
2007087507 Aug 2007 WO
2007127336 Nov 2007 WO
2008022036 Feb 2008 WO
2009089467 Jul 2009 WO
2009140363 Nov 2009 WO
2010077829 Jul 2010 WO
Non-Patent Literature Citations (177)
Entry
Gokhale, Maya B. & Graham, Paul S. Reconfigurable Computing. Springer. 2005. pp. 1-209 (Year: 2005).
“A Reconfigurable Computing Model for Biological Research Application of Smith-Waterman Analysis to Bacterial Genomes” A White Paper Prepared by Star Bridge Systems, Inc. [retrieved Dec. 12, 2006], Retrieved from the Internet: <URL: http://www.starbridgesystems.com/resources/whitepapers/Smith%20Waterman%20Whitepaper.pdf.
“ACTIV Financial Announces Hardware Based Market Data Feed Processing Strategy”, For Release on Apr. 2, 2007, 2 pages.
“ACTIV Financial Delivers Accelerated Market Data Feed”, Apr. 6, 2007, byline of Apr. 2, 2007, downloaded from http://hpcwire.com/hpc.1346816.html on Jun. 19, 2007, 3 pages.
“DRC, Exegy Announce Joint Development Agreement”, Jun. 8, 2007, byline of Jun. 4, 2007; downloaded from http://www.hpcwire.com/hpc/1595644.html on Jun. 19, 2007, 3 pages.
“Lucent Technologies Delivers “PayloadPlus” Network Processors for Programmable, MultiProtocol, OC-48c Processing”, Lucent Technologies Press Release, downloaded from http://www.lucent.com/press/1000/0010320.meb.html on Mar. 21, 2002.
“Overview, Field Programmable Port Extender”, Jan. 2002 GigabitWorkshop Tutorial, Washington University, St. Louis, MO, Jan. 3-4, 2002, pp. 1-4.
“Payload Plus™ Agere System Interface”, Agere Systems Product Brief, Jun. 2001, downloaded from Internet, Jan. 2002, pp. 1-6.
“RFC793: Transmission Control Protocol, Darpa Internet Program, Protocol Specification”, Sep. 1981.
“Technology Overview”, Data Search Systems Incorporated, downloaded from the http://www.datasearchsystems.com/tech.htm on Apr. 19, 2004.
“The Field-Programmable Port Extender (FPX)”, downloaded from http://www.arl.wustl.edu/arl/ in Mar. 2002.
Aldwairi et al., “Configurable String Matching Hardware for Speeding up Intrusion Detection”, SIRARCH Comput. Archit. News, vol. 33, No. 1, pp. 99-107, Mar. 2005.
Amanuma et al., “A FPGA Architecture For High Speed Computation”, Proceedings of 60th Convention Architecture, Software Science, Engineering, Mar. 14, 2000, pp. 1-163-1-164, Information Processing Society, Japan.
Anerousis et al., “Using the AT&T Labs Packetscope for Internet Measurement, Design, and Performance Analysis”, Network and Distributed Systems Research Laboratory, AT&T Labs-Research, Florham, Park, NJ, Oct. 1997.
Anonymous, “Method for Allocating Computer Disk Space to a File of Known Size”, IBM Technical Disclosure Bulletin, vol. 27, No. 10B, Mar. 1, 1985, New York.
Arnold et al., “The Splash 2 Processor and Applications”, Proceedings 1993 IEEE International Conference on Computer Design: VLSI In Computers and Processors (ICCD '93), Oct. 3, 1993, pp. 482-485, IEEE Computer Society, Cambridge, MA USA.
Artan et al., “Multi-packet Signature Detection using Prefix Bloom Filters”, 2005, IEEE, pp. 1811-1816.
Asami et al., “Improvement of DES Key Search on FPGA-Based Parallel Machine “RASH””, Proceedings of Information Processing Society, Aug. 15, 2000, pp. 50-57, vol. 41, No. SIG5 (HPS1), Japan.
Baboescu et al., “Scalable Packet Classification,” SIGCOMM'01, Aug. 27-31, 2001, pp. 199-210, San Diego, California, USA; http://www.ecse.rpi.edu/homepages/shivkuma/teaching/sp2001/readings/baboescu-pkt-classification.pdf.
Baer, “Computer Systems Architecture”, 1980, pp. 262-265; Computer Science Press, Potomac, Maryland.
Baeza-Yates et al., “New and Faster Filters for Multiple Approximate String Matching”, Random Structures and Algorithms (RSA), Jan. 2002, pp. 23-49, vol. 20, No. 1.
Baker et al., “High-throughput Linked-Pattern Matching for Intrusion Detection Systems”, ANCS 2005: Proceedings of the 2005 Symposium on Architecture for Networking and Communications Systems, pp. 193-202, ACM Press, 2005.
Barone-Adesi et al., “Efficient Analytic Approximation of American Option Values”, Journal of Finance, vol. 42, No. 2 (Jun. 1987), pp. 301-320.
Batory, “Modeling the Storage Architectures of Commercial Database Systems”, ACM Transactions on Database Systems, Dec. 1985, pp. 463-528, vol. 10, issue 4.
Behrens et al., “BLASTN Redundancy Filter in Reprogrammable Hardware,” Final Project Submission, Fall 2003, Department of Computer Science and Engineering, Washington University.
Berk, “JLex: A lexical analyzer generator for Java™ ”, downloaded from http://www.cs.princeton.edu/˜appel/modern/ava/Jlex/ in Jan. 2002, pp. 1-18.
Bianchi et al., “Improved Queueing Analysis of Shared Buffer Switching Networks”, ACM, Aug. 1993, pp. 482-490.
Bloom, “Space/Time Trade-offs in Hash Coding With Allowable Errors”, Communications of the ACM, Jul. 1970, pp. 422-426, vol. 13, No. 7, Computer Usage Company, Newton Upper Falls, Massachusetts, USA.
Braun et al., “Layered Protocol Wrappers for Internet Packet Processing in Reconfigurable Hardware”, Proceedings of Hot Interconnects 9 (HotI-9) Stanford, CA, Aug. 22-24, 2001, pp. 93-98.
Braun et al., “Protocol Wrappers for Layered Network Packet Processing in Reconfigurable Hardware”, IEEE Micro, Jan.-Feb. 2002, pp. 66-74.
Brodie et al., “Dynamic Reconfigurable Computing”, in Proc. of 9th Military and Aerospace Programmable Logic Devices International Conference, Sep. 2006.
Cavnar et al., “N-Gram-Based Text Categorization”, Proceedings of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, pp. 161-175, 1994.
Celko, “Joe Celko's Data & Databases: Concepts in Practice”, 1999, pp. 72-74, Morgan Kaufmann Publishers.
Chamberlain et al., “Achieving Real Data Throughput for an FPGA Co-Processor on Commodity Server Platforms”, Proc. of 1st Workshop on Building Block Engine Architectures for Computers and Networks, Oct. 2004, Boston, MA.
Chamberlain et al., “The Mercury System: Embedding Computation Into Disk Drives”, 7th High Performance Embedded Computing Workshop, Sep. 2003, Boston, MA.
Chamberlain et al., “The Mercury System: Exploiting Truly Fast Hardware for Data Search”, Proc. of Workshop on Storage Network Architecture and Parallel I/Os, Sep. 2003, New Orleans, LA.
Cho et al., “Deep Packet Filter with Dedicated Logic and Read Only Memories”, 12th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, Apr. 2004.
Choi et al., “Design of a Flexible Open Platform for High Performance Active Networks”, Allerton Conference, 1999, Champaign, IL.
Cholleti, “Storage Allocation in Bounded Time”, MS Thesis, Dept. of Computer Science and Engineering, Washington University, St. Louis, MO (Dec. 2002). Available as Washington University Technical Report WUCSE-2003-2.
Clark et al., “Scalable Pattern Matching for High Speed Networks”, Proceedings of the 12th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 2004; FCCM 2004, Apr. 20-23, 2004; pp. 249-257; IEEE Computer Society; Cambridge, MA USA.
Cloutier et al., “VIP: An FPGA-Based Processor for Image Processing and Neural Networks”, Proceedings of Fifth International Conference on Microelectronics for Neural Networks, Feb. 12, 1996, pp. 330-336, Los Alamitos, California.
Compton et al., “Configurable Computing: A Survey of Systems and Software”, Technical Report, Northwestern University, Dept. of ECE, 1999.
Compton et al., “Reconfigurable Computing: A Survey of Systems and Software”, Technical Report, Northwestern University, Dept. of ECE, 1999, presented by Yi-Gang Tai.
Compton et al., “Reconfigurable Computing: A Survey of Systems and Software”, University of Washington, ACM Computing Surveys, Jun. 2, 2002, pp. 171-210, vol. 34 No. 2, <http://www.idi.ntnu.no/emner/tdt22/2011/reconfig.pdf>.
Cong et al., “An Optional Technology Mapping Algorithm for Delay Optimization in Lookup-Table Based FPGA Designs”, IEEE, 1992, pp. 48-53.
Corbet et al., Linux Device Drivers: Where the Kernel Meets the Hardware, O'Reilly, Feb. 2005, pp. 19-20, 412-414, and 441, 3rd Edition.
Crosman, “Who Will Cure Your Data Latency?”, Storage & Servers, Jun. 20, 2007, URL: http://www.networkcomputing.com/article/printFullArticleSrc.jhtml?article ID=199905630.
Cuppu and Jacob, “Organizational Design Trade-Offs at the DRAM, Memory Bus and Memory Controller Level Initial Results,” Technical Report UMB-SCA-1999-2, Univ. of Maryland Systems & Computer Architecture Group, Nov. 1999, pp. 1-10.
Currid, “TCP Offload to the Rescue”, Networks, Jun. 14, 2004, 16 pages, vol. 2, No. 3.
Shirazi et al., “Quantitative Analysis of FPGA-based Database Searching”, Journal of VLSI Signal Processing Systems For Signal, Image, and Video Technology, May 2001, pp. 85-96, vol. 28, No. 1/2, Kluwer Academic Publishers, Dordrecht, NL.
Sidhu et al., “Fast Regular Expression Matching Using FPGAs”, IEEE Symposium on Field Programmable Custom Computing Machines (FCCM 2001), Apr. 2001.
Sidhu et al., “String Matching on Multicontext FPGAs Using Self-Reconfiguration”, FPGA '99: Proceedings of the 1999 ACM/SIGDA 7th International Symposium on Field Programmable Gate Arrays, Feb. 1999, pp. 217-226.
Singh et al., “The EarlyBird System for Real-Time Detection of Unknown Worms”, Technical report CS2003-0761, Aug. 2003.
Skiena et al., “Programming Challenges: The Programming Contest Training Manual”, 2003, pp. 30-31, Springer.
Sourdis and Pnevmatikatos, “Fast, Large-Scale String Match for a 10Gbps FPGA-based Network Intrusion Detection System”, 13th International Conference on Field Programmable Logic and Applications, 2003.
Steinbach et al., “A Comparison of Document Clustering Techniques”, KDD Workshop on Text Mining, 2000.
Tan et al., “A High Throughput String Matching Architecture for Intrusion Detection and Prevention”, ISCA 2005: 32nd Annual International Symposium on Computer Architecture, pp. 112-122, 2005.
Taylor et al., “Dynamic Hardware Plugins (DHP): Exploiting Reconfigurable Hardware for High-Performance Programmable Routers”, Computer Networks, 38(3): 295-310 (16), Feb. 21, 2002, and online at http://www.cc.gatech.edu/classes/AY2007/cs8803hpc_fall/papers/phplugins.pdf.
Taylor et al., “Generalized RAD Module Interface Specification of the Field Programmable Port Extender (FPX) Version 2”, Washington University, Department of Computer Science, Technical Report, Jul. 5, 2001, pp. 1-10.
Taylor et al., “Modular Design Techniques for the FPX”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St Louis, MO, Jan. 3-4, 2002.
Taylor et al., “Scalable Packet Classification using Distributed Crossproducting of Field Labels”, Proceedings of IEEE Infocom, Mar. 2005, pp. 1-12, vol. 20, No. 1.
Taylor, “Models, Algorithms, and Architectures for Scalable Packet Classification”, doctoral thesis, Department of Computer Science and Engineering, Washington University, St. Louis, MO, Aug. 2004, pp. 1-201.
Thomson Reuters, “Mellanox InfiniBand Accelerates the Exegy Ticker Plant at Major Exchanges”, Jul. 22, 2008, URL: http://www.reuters.com/article/pressRelease/idUS125385+22-Jul-2008+BW20080722.
Uluski et al., “Characterizing Antivirus Workload Execution”, SIGARCH Comput. Archit. News, vol. 33, No. 1, pp. 90-98, Mar. 2005.
Villasenor et al., “Configurable Computing Solutions For Automatic Target Recognition”, FPGAs for Custom Computing Machines, 1996, Proceedings, IEEE Symposium on Napa Valley, CA, Apr. 17-19, 1996, pp. 70-79, 1996 IEEE, Napa Valley, CA, Los Alamitos, CA, USA.
Waldvogel et al., “Scalable High-Speed Prefix Matching”, ACM Transactions on Computer Systems, Nov. 2001, pp. 440-482, vol. 19, No. 4.
Ward et al., “Dynamically Reconfigurable Computing: A Novel Computation Technology with Potential to Improve National Security Capabilities”, May 15, 2003, A White Paper Prepared by Star Bridge Systems, Inc. [retrieved Dec. 12, 2006]. Retrieved from the Internet: <URL: http://www.starbridgesystems.com/resources/whitepapers/Dynamically%20Reconfigurable%20Computing.pdf.
Weaver et al., “Very Fast Containment of Scanning Worms”, Proc. USENIX Security Symposium 2004, San Diego, CA, Aug. 2004, located at http://www.icsi.berkely.edu/˜nweaver/containment/containment.pdf.
West et al., “An FPGA-Based Search Engine for Unstructured Database”, Proc. of 2nd Workshop on Application Specific Processors, Dec. 2003, San Diego, CA.
Wooster et al., “HTTPDUMP Network HTTP Packet Snooper”, Apr. 25, 1996.
Worboys, “GIS: A Computing Perspective”, 1995, pp. 245-247, 287, Taylor & Francis Ltd.
Yamaguchi et al., “High Speed Homology Search with FPGAs”, Proceedings Pacific Symposium on Biocomputing, Jan. 3-7, 2002, pp. 271-282, vol. 7, Online, Lihue, Hawaii, USA.
Yan et al., “Enhancing Collaborative Spam Detection with Bloom Filters”, 2006, IEEE, pp. 414-425.
Yoshitani et al., “Performance Evaluation of Parallel Volume Rendering Machine Re Volver/C40”, Study Report of Information Processing Society, Mar. 5, 1999, pp. 79-84, vol. 99, No. 21.
Ziv et al., “A Universal Algorithm for Sequential Data Compression”, IEEE Trans. Inform. Theory, IT-23(3): 337-343 (1977).
Denoyer et al., “HMM-based Passage Models for Document Classification and Ranking”, Proceedings of ECIR-01, 23rd European Colloquium on Information Retrieval Research, Darmstadt, DE, pp. 126-135, 2001.
Dharmapurikar et al., “Deep Packet Inspection Using Parallel Bloom Filters,” IEEE Micro, Jan.-Feb., 2004, vol. 24, Issue: 1, pp. 52-61.
Dharmapurikar et al., “Deep Packet Inspection Using Parallel Bloom Filters,” Symposium on High Performance Interconnects (HotI), Stanford, California, 2003, pp. 44-51.
Dharmapurikar et al., “Design and Implementation of a String Matching System for Network Intrusion Detection using FPGA-based Bloom Filters”, Proc. of 12th Annual IEEE Symposium on Field Programmable Custom Computing Machines, 2004, pp. 1-10.
Dharmapurikar et al., “Longest Prefix Matching Using Bloom Filters,” SIGCOMM, 2003, pp. 201-212.
Dharmapurikar et al., “Robust TCP Stream Reassembly in the Presence of Adversaries”, Proc. of the 14th Conference on USENIX Security Symposium—vol. 14, 16 pages, Baltimore, MD, 2005; http://www.icir.org/vern/papers/TcpReassembly/TCPReassembly.pdf.
Dharmapurikar, “Fast and Scalable Pattern Matching for Content Filtering”, ACM, ANCS 05, 2005, pp. 183-192.
Ebeling et al., “RaPiD—Reconfigurable Pipelined Datapath”, University of Washington, Dept. of Computer Science and Engineering, Sep. 23, 1996, Seattle, WA.
Exegy Inc., “Exegy and HyperFeed to Unveil Exelerate TP at SIA Conference”, Release Date: Jun. 20, 2006, downloaded from http://news.thomasnet.com/companystory/488004 on Jun. 19, 2007, 4 pages.
Exegy Inc., “First Exegy Ticker Plant Deployed”, Release Date: Oct. 17, 2006, downloaded from http://news.thomasnet.com/companystory/496530 on Jun. 19, 2007, 5 pages.
Extended European Search Report for EP Application 11847815.5 dated Apr. 4, 2014.
Feldman, “High Frequency Traders Get Boost From FPGA Acceleration”, Jun. 8, 2007, downloaded from http://www.hpcwire.com/hpc.1600113.html on Jun. 19, 2007, 4 pages.
Franklin et al., “An Architecture for Fast Processing of Large Unstructured Data Sets.” Proc. of 22nd Int'l Conf. on Computer Design, Oct. 2004, pp. 280-287.
Franklin et al., “Assisting Network Intrusion Detection with Reconfigurable Hardware”, Symposium on Field-Programmable Custom Computing Machines (FCCM 2002), Apr. 2002, Napa, California.
Fu et al., “The FPX KCPSM Module: An Embedded, Reconfigurable Active Processing Module for the Field Programmable Port Extender (FPX)”, Washington University, Department of Computer Science, Technical Report WUCS-01-14, Jul. 2001.
Gavrila et al., “Multi-feature Hierarchical Template Matching Using Distance Transforms”, IEEE, Aug. 16-20, 1998, vol. 1, pp. 439-444.
Gokhale et al., “Reconfigurable Computing: Accelerating Computation With Field-Programmable Gate Arrays”, 2005, pp. 1-3, 7, 11-15, 39, 92-93, Springer.
Gokhale et al., “Reconfigurable Computing: Accelerating Computation with Field-Programmable Gate Arrays”, Springer, 2005, pp. 1-36.
Gokhale et al., “Reconfigurable Computing: Accelerating Computation with Field-Programmable Gate Arrays”, Springer, 2005, pp. 1-54, 92-96.
Google Search Results Page for “field programmable gate array financial calculation stock market” over dates of Jan. 1, 1990-May 21, 2002, 1 page.
Gunther et al., “Assessing Document Relevance with Run-Time Reconfigurable Machines”, IEEE Symposium on FPGAs for Custom Computing Machines, 1996, pp. 10-17, Proceedings, Napa Valley, CA.
Gupta et al., “High-Speed Implementations of Rule-Based Systems,” ACM Transactions on Computer Systems, May 1989, pp. 119-146, vol. 7, Issue 2.
Gupta et al., “Packet Classification on Multiple Fields”, Computer Systems Laboratory, Stanford University, Stanford, CA.
Gupta et al., “PMM: A Parallel Architecture for Production Systems,” Proceedings of the IEEE, Apr. 1992, pp. 693-696, vol. 2.
Gyang, “NCBI BLASTN Stage 1 in Reconfigurable Hardware,” Technical Report WUCSE-2005-30, Aug. 2004, Department of Computer Science and Engineering, Washington University, St. Louis, MO.
Halaas et al., “A Recursive MISD Architecture for Pattern Matching”, IEEE Transactions on Very Large Scale Integration, vol. 12, No. 7, pp. 727-734, Jul. 2004.
Harris, “Pete's Blog: Can FPGAs Overcome the FUD?”, Low-Latency.com, May 14, 2007, URL: http://www.a-teamgroup.com/article/pete-blog-can-fpgas-overcome-the-fud/.
Hauck et al., “Software Technologies for Reconfigurable Systems”, Northwestern University, Dept. of ECE, Technical Report, 1996.
Hayes, “Computer Architecture and Organization”, Second Edition, 1988, pp. 448-459, McGraw-Hill, Inc.
Hezel et al., “FPGA-Based Template Matching Using Distance Transforms”, Proceedings of the 10th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, Apr. 22, 2002, pp. 89-97, IEEE Computer Society, USA.
Hirsch, “Tech Predictions for 2008”, Reconfigurable Computing, Jan. 16, 2008, URL: http://fpgacomputing.blogspot.com/2008_01_01_archive.html.
Hoinville, et al. “Spatial Noise Phenomena of Longitudinal Magnetic Recording Media”, IEEE Transactions on Magnetics, vol. 28, No. 6, Nov. 1992.
Hollaar, “Hardware Systems for Text Information Retrieval”, Proceedings of the Sixth Annual International ACM Sigir Conference on Research and Development in Information Retrieval, Jun. 6-8, 1983, pp. 3-9, Baltimore, Maryland, USA.
Howe, Data Analysis for Database Design Third Edition, 2001, 335 pages, Butterworth-Heinemann.
Hutchings et al., “Assisting Network Intrusion Detection with Reconfigurable Hardware”, FCCM 2002: 10th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 2002.
Ibrahim et al., “Lecture Notes in Computer Science: Database and Expert Systems Applications”, 2000, p. 769, vol. 1873, Springer.
International Preliminary Report on Patentability (Chapter I) for PCT/US2011/064269 dated Jun. 12, 2013.
International Search Report and Written Opinion for PCT/US2011/064269 dated Apr. 20, 2012.
Jacobson et al., “RFC 1072: TCP Extensions for Long-Delay Paths”, Oct. 1988.
Jacobson et al., “tcpdump—dump traffic on a network”, Jun. 30, 1997, online at www.cse.cuhk.edu.hk/˜cslui/CEG4430/tcpdump.ps.gz.
Johnson et al., “Pattern Matching in Reconfigurable Logic for Packet Classification”, College of Computing, Georgia Institute of Technology, Atlanta, GA.
Jones et al., “A Probabilistic Model of Information Retrieval: Development and Status”, Information Processing and Management, Aug. 1998, 76 pages.
Keutzer et al., “A Survey of Programmable Platforms—Network Proc”, University of California-Berkeley, pp. 1-29.
Koloniari et al., “Content-Based Routing of Path Queries in Peer-to-Peer Systems”, pp. 1-19, E. Bertino et al. (Eds.): EDBT 2004, LNCS 2992, pp. 29-47, 2004, copyright by Springer-Verlag, Germany.
Krishnamurthy et al., “Biosequence Similarity Search On The Mercury System”, Proceedings of the 15th IEEE International Conference on Application-Specific Systems, Architectures, and Processors (ASAP04), Sep. 2004, pp. 365-375.
Lancaster et al., “Acceleration of Ungapped Extension in Mercury BLAST”, Seventh (7th) Workshop on Media and Streaming Processors, Nov. 12, 2005, Thirty-Eighth (38th) International Symposium on Microarchitecture (MICRO-38), Barcelona, Spain.
Li et al., “Large-Scale IP Traceback in High-Speed Internet: Practical Techniques and Theoretical Foundation”, Proceedings of the 2004 IEEE Symposium on Security and Privacy, 2004, pp. 1-15.
Lin et al., “Real-Time Image Template Matching Based on Systolic Array Processor”, International Journal of Electronics; Dec. 1, 1992; pp. 1165-1176; vol. 73, No. 6; London, Great Britain.
Lockwood et al., “Field Programmable Port Extender (FPX) for Distributed Routing and Queuing”, ACM International Symposium on Field Programmable Gate Arrays (FPGA 2000), Monterey, CA, Feb. 2000, pp. 137-144.
Lockwood et al., “Hello, World: A Simple Application for the Field Programmable Port Extender (FPX)”, Washington University, Department of Computer Science, Technical Report WUCS-00-12, Jul. 11, 2000.
Lockwood et al., “Parallel FPGA Programming over Backplane Chassis”, Washington University, Department of Computer Science, Technical Report WUCS-00-11, Jun. 12, 2000.
Lockwood et al., “Reprogrammable Network Packet Processing on the Field Programmable Port Extender (FPX)”, ACM International Symposium on Field Programmable Gate Arrays (FPGA 2001), Monterey, CA, Feb. 2001, pp. 87-93.
Lockwood, “An Open Platform for Development of Network Processing Modules in Reprogrammable Hardware”, IEC DesignCon 2001, Santa Clara, CA, Jan. 2001, Paper WB-19.
Lockwood, “Building Networks with Reprogrammable Hardware”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St. Louis, MO, Jan. 3-4, 2002.
Lockwood, “Evolvable Internet Hardware Platforms”, NASA/DoD Workshop on Evolvable Hardware (EHW'01), Long Beach, CA, Jul. 12-14, 2001, pp. 271-279.
Lockwood, “Hardware Laboratory Configuration”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St Louis, MO, Jan. 3-4, 2002.
Lockwood, “Introduction”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St. Louis, MO, Jan. 3-4, 2002.
Lockwood, “Platform and Methodology for Teaching Design of Hardware Modules in Internet Routers and Firewalls”, IEEE Computer Society International Conference on Microelectronic Systems Education (MSE'2001), Las Vegas, NV, Jun. 17-18, 2001, pp. 56-57.
Lockwood, “Protocol Processing on the FPX”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St. Louis, MO, Jan. 3-4, 2002.
Lockwood, “Simulation and Synthesis”, Field Programmable Port Extender: Jan. 2002 Gigabit Workshop Tutorial, Washington University, St. Louis, MO, Jan. 3-4, 2002.
Lockwood, “Simulation of the Hello World Application for the Field-Programmable Port Extender (FPX)”, Washington University, Applied Research Lab, Spring 2001 Gigabits Kits Workshop.
Madhusudan, “Design of a System for Real-Time Worm Detection”, Hot Interconnects, pp. 77-83, Stanford, CA, Aug. 2004, found at http://www.hoti.org/hoti12/program/papers/2004/paper4.2.pdf.
Madhusudan, “Design of a System for Real-Time Worm Detection”, Power Point Presentation in Support of Master's Thesis, Washington Univ., Dept. of Computer Science and Engineering, St. Louis, MO, Aug. 2004.
Mosanya et al., “A FPGA-Based Hardware Implementation of Generalized Profile Search Using Online Arithmetic”, ACM/Sigda International Symposium on Field Programmable Gate Arrays (FPGA '99), Feb. 21-23, 1999, pp. 101-111, Monterey, CA, USA.
Moscola et al., “FPGrep and FPSed: Regular Expression Search and Substitution for Packet Streaming in Field Programmable Hardware”, Dept. of Computer Science, Applied Research Lab, Washington University, Jan. 8, 2002, unpublished, pp. 1-19, St. Louis, MO.
Moscola et al., “FPSed: A Streaming Content Search-and-Replace Module for an Internet Firewall”, Proc. of Hot Interconnects, 11th Symposium on High Performance Interconnects, pp. 122-129, Aug. 20, 2003.
Moscola, “FPGrep and FPSed: Packet Payload Processors for Managing the Flow of Digital Content on Local Area Networks and the Internet”, Master's Thesis, Sever Institute of Technology, Washington University, St. Louis, MO, Aug. 2003.
Motwani et al., “Randomized Algorithms”, 1995, pp. 215-216, Cambridge University Press.
Mueller, “Upgrading and Repairing PCs, 15th Anniversary Edition”, 2004, pp. 63-66, 188, Que.
Navarro, “A Guided Tour to Approximate String Matching”, ACM Computing Surveys, vol. 33, No. 1, Mar. 2001, pp. 31-88.
Nunez et al., “The X-MatchLITE FPGA-Based Data Compressor”, Euromicro Conference 1999, Proceedings, Italy, Sep. 8-10, 1999, pp. 126-132, Los Alamitos, CA.
Nwodoh et al., “A Processing System for Real-Time Holographic Video Computation”, Reconfigurable Technology: FPGAs for Computing and Application, Proceedings for the SPIE, Sep. 1999, Boston, pp. 129-140, vol. 3844.
Office Action for EP Application 11847815.5 dated Dec. 22, 2016.
Office Action for JP Application 2013-543394 dated Nov. 16, 2015.
Pramanik et al., “A Hardware Pattern Matching Algorithm on a Dataflow”; Computer Journal; Jul. 1, 1985; pp. 264-269; vol. 28, No. 3; Oxford University Press, Surrey, Great Britain.
Prosecution History for U.S. Appl. No. 11/765,306, now U.S. Pat. No. 7,921,046, filed Jun. 19, 2007.
Prosecution History for U.S. Appl. No. 13/076,968, filed Mar. 31, 2011 (Parsons et al.).
Ramakrishna et al., “A Performance Study of Hashing Functions for Hardware Applications”, Int. Conf. on Computing and Information, May 1994, pp. 1621-1636, vol. 1, No. 1.
Ramakrishna et al., “Efficient Hardware Hashing Functions for High Performance Computers”, IEEE Transactions on Computers, Dec. 1997, vol. 46, No. 12.
Ratha et al., “Convolution on Splash 2”, Proceedings of IEEE Symposium on FPGAS for Custom Computing Machines, Apr. 19, 1995, pp. 204-213, Los Alamitos, California.
Response to Extended European Search Report for EP Application 11847815.5 dated Apr. 4, 2014.
Roesch, “Snort—Lightweight Intrusion Detection for Networks”, Proceedings of LISA '99: 13th Systems Administration Conference; Nov. 7-12, 1999; pp. 229-238; USENIX Association, Seattle, WA USA.
Russ et al., “Non-Intrusive Built-In Self-Test for FPGA and MCM Applications”, Aug. 8-10, 1995, IEEE, pp. 480-485.
Tandon, “A Programmable Architecture for Real-Time Derivative Trading”, Master's Thesis, University of Edinburgh, 2003.
Schmerken, “With Hyperfeed Litigation Pending, Exegy Launches Low-Latency Ticker Plant”, in Wall Street & Technology Blog, Mar. 20, 2007, pp. 1-2.
Schmit, “Incremental Reconfiguration for Pipelined Applications”, FPGAs for Custom Computing Machines, Proceedings, The 5th Annual IEEE Symposium, Dept. of ECE, Carnegie Mellon University, Apr. 16-18, 1997, pp. 47-55, Pittsburgh, PA.
Schuehler et al., “Architecture for a Hardware Based, TCP/IP Content Scanning System”, IEEE Micro, 24(1):62-69, Jan.-Feb. 2004, USA.
Schuehler et al., “TCP-Splitter: A TCP/IP Flow Monitor in Reconfigurable Hardware”, Hot Interconnects 10 (Hotl-10), Stanford, CA, Aug. 21-23, 2002, pp. 127-131.
Seki et al., “High Speed Computation of Shogi With FPGA”, Proceedings of 58th Convention Architecture, Software Science, Engineering, Mar. 9, 1999, pp. 1-133 to 1-134.
Shah, “Understanding Network Processors”, Version 1.0, University of California-Berkeley, Sep. 4, 2001.
Shalunov et al., “Bulk TCP Use and Performance on Internet 2”, ACM SIGCOMM Internet Measurement Workshop, 2001.
Shasha et al., “Database Tuning”, 2003, pp. 280-284, Morgan Kaufmann Publishers.
Office Action for CA Application 2820898 dated Aug. 20, 2018.
Office Action for EP Application 11847815.5 dated Dec. 21, 2018.
Prosecution History for U.S. Appl. No. 13/316,332, now U.S. Pat. No. 10,037,568, filed Dec. 9, 2011.
Office Action for EP Application 11847815.5 dated Feb. 6, 2020.
“OrCAD unveils strategy for leadership of mainstream programmable logic design market; strategy includes partnerships and a next generation product, OrCAD Express for Windows: a shrink-wrapped 32-bit Windows application that includes VHDL simulation and synthesis”, Jun. 3, 1996, retrieved Sep. 16, 2020 (Year: 1996).
Smith, E., “QuickLogic QuickWorks guarantees fastest FPGA design cycle”, Business Wire, Oct. 10, 1994, retrieved Sep. 16, 2020 from https://dialog.proquest.com/professional/docview/447031280?accountid=131444 (Year: 1994).
Diniz et al., “Data Search and Reorganization Using FPGAs: Application to Spatial Pointer-Based Data Structures”, IEEE, 2003, 11 pgs.
Summons to Attend Oral Proceedings for EP Application 11847815.5 dated Sep. 29, 2021.
Villasenor et al., “The Flexibility of Configurable Computing”, IEEE, 1998, pp. 67-84.
Vuillemin et al., “Programmable Active Memories: Reconfigurable Systems Come of Age”, IEEE, 1996, pp. 56-69, vol. 4, No. 1.
Related Publications (1)
Number: 20180330444 A1, Date: Nov. 2018, Country: US

Provisional Applications (1)
Number: 61/421,545, Date: Dec. 2010, Country: US

Divisions (1)
Parent: 13/316,332, Date: Dec. 2011, Country: US
Child: 16/044,614, Country: US