The computing industry faces increasing challenges in its efforts to improve the speed and efficiency of software-driven computing devices, e.g., due to power limitations and other factors. Software-driven computing devices employ one or more central processing units (CPUs) that process machine-readable instructions in a conventional temporal manner. To address this issue, the computing industry has proposed using hardware acceleration components (such as field-programmable gate arrays (FPGAs)) to supplement the processing performed by software-driven computing devices. However, software-driven computing devices and hardware acceleration components are dissimilar types of devices having fundamentally different architectures, performance characteristics, power requirements, program configuration paradigms, interface features, and so on. It is thus a challenging task to integrate these two types of devices together in a manner that satisfies the various design requirements of a particular data processing environment.
A data processing system is described herein that includes two or more software-driven host components. The two or more host components collectively provide a software plane. The data processing system also includes two or more hardware acceleration components (such as FPGA devices) that collectively provide a hardware acceleration plane. In one implementation, a common physical network allows the host components to communicate with each other, and also allows the hardware acceleration components to communicate with each other. Further, the hardware acceleration components in the hardware acceleration plane include functionality that enables them to communicate with each other in a transparent manner without assistance from the software plane. Overall, the data processing system may be said to support two logical networks that share a common physical network substrate. The logical networks may interact with each other, but otherwise operate in an independent manner.
The above-summarized functionality can be manifested in various types of systems, devices, components, methods, computer readable storage media, data structures, graphical user interface presentations, articles of manufacture, and so on.
This Summary is provided to introduce a selection of concepts in a simplified form; these concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The same numbers are used throughout the disclosure and figures to reference like components and features. Series 100 numbers refer to features originally found in
This disclosure is organized as follows. Section A describes an illustrative data processing system that includes a hardware acceleration plane and a software plane. Section B describes management functionality that is used to manage the data processing system of Section A. Section C sets forth one implementation of an illustrative hardware acceleration component in the hardware acceleration plane.
As a preliminary matter, some of the figures describe concepts in the context of one or more structural components, variously referred to as functionality, modules, features, elements, etc. The various components shown in the figures can be implemented in any manner by any physical and tangible mechanisms, for instance, by software running on computer equipment, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof. In one case, the illustrated separation of various components in the figures into distinct units may reflect the use of corresponding distinct physical and tangible components in an actual implementation. Alternatively, or in addition, any single component illustrated in the figures may be implemented by plural actual physical components. Alternatively, or in addition, the depiction of any two or more separate components in the figures may reflect different functions performed by a single actual physical component.
Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are illustrative and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein (including a parallel manner of performing the blocks). The blocks shown in the flowcharts can be implemented in any manner by any physical and tangible mechanisms, for instance, by software running on computer equipment, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof.
As to terminology, the phrase “configured to” encompasses any way that any kind of physical and tangible functionality can be constructed to perform an identified operation. The functionality can be configured to perform an operation using, for instance, software running on computer equipment, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof.
The term “logic” encompasses any physical and tangible functionality for performing a task. For instance, each operation illustrated in the flowcharts corresponds to a logic component for performing that operation. An operation can be performed using, for instance, software running on computer equipment, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof. When implemented by computing equipment, a logic component represents an electrical component that is a physical part of the computing system, however implemented.
Any of the storage resources described herein, or any combination of the storage resources, may be regarded as a computer readable medium. In many cases, a computer readable medium represents some form of physical and tangible entity. The term computer readable medium also encompasses propagated signals, e.g., transmitted or received via physical conduit and/or air or other wireless medium, etc. However, the specific terms “computer readable storage medium” and “computer readable medium device” expressly exclude propagated signals per se, while including all other forms of computer readable media.
The following explanation may identify one or more features as “optional.” This type of statement is not to be interpreted as an exhaustive indication of features that may be considered optional; that is, other features can be considered as optional, although not explicitly identified in the text. Further, any description of a single entity is not intended to preclude the use of plural such entities; similarly, a description of plural entities is not intended to preclude the use of a single entity. Further, while the description may explain certain features as alternative ways of carrying out identified functions or implementing identified mechanisms, the features can also be combined together in any combination. Finally, the terms “exemplary” or “illustrative” refer to one implementation among potentially many implementations.
A. Overview
The term “hardware acceleration component” is also intended to broadly encompass different ways of leveraging a hardware device to perform a function, including, for instance, at least: a) a case in which at least some tasks are implemented in hard ASIC logic or the like; b) a case in which at least some tasks are implemented in soft (configurable) FPGA logic or the like; c) a case in which at least some tasks run as software on FPGA software processor overlays or the like; d) a case in which at least some tasks run on MPPAs of soft processors or the like; e) a case in which at least some tasks run as software on hard ASIC processors or the like, and so on, or any combination thereof. Likewise, the data processing system 102 can accommodate different manifestations of software-driven devices in the software plane 104.
To simplify repeated reference to hardware acceleration components, the following explanation will henceforth refer to these devices as simply “acceleration components.” Further, the following explanation will present a primary example in which the acceleration components correspond to FPGA devices, although, as noted, the data processing system 102 may be constructed using other types of acceleration components. Further, the hardware acceleration plane 106 may be constructed using a heterogeneous collection of acceleration components, including different types of FPGA devices having different respective processing capabilities and architectures, a mixture of FPGA devices and other devices, and so on.
A host component generally performs operations using a temporal execution paradigm, e.g., by using each of its CPU hardware threads to execute machine-readable instructions, one after the other. In contrast, an acceleration component may perform operations using a spatial paradigm, e.g., by using a large number of parallel logic elements to perform computational tasks. Thus, an acceleration component can perform some operations in less time compared to a software-driven host component. In the context of the data processing system 102, the “acceleration” qualifier associated with the term “acceleration component” reflects its potential for accelerating the functions that are performed by the host components.
In one example, the data processing system 102 corresponds to a data center environment that includes a plurality of computer servers. The computer servers correspond to the host components in the software plane 104 shown in
In one implementation, each host component in the data processing system 102 is coupled to at least one acceleration component through a local link. That fundamental unit of processing equipment is referred to herein as a “server unit component” because that equipment may be grouped together and maintained as a single serviceable unit within the data processing system 102 (although not necessarily so). The host component in the server unit component is referred to as the “local” host component to distinguish it from other host components that are associated with other server unit components. Likewise, the acceleration component(s) of the server unit component are referred to as the “local” acceleration component(s) to distinguish them from other acceleration components that are associated with other server unit components.
For example,
The local host component 108 may further indirectly communicate with any other remote acceleration component in the hardware acceleration plane 106. For example, the local host component 108 has access to a remote acceleration component 116 via the local acceleration component 110. More specifically, the local acceleration component 110 communicates with the remote acceleration component 116 via a link 118.
In one implementation, a common network 120 is used to couple host components in the software plane 104 to other host components, and to couple acceleration components in the hardware acceleration plane 106 to other acceleration components. That is, two host components communicate with each other over the same network 120 that two acceleration components use to communicate with each other. As another feature, the interaction among host components in the software plane 104 is independent of the interaction among acceleration components in the hardware acceleration plane 106. This means, for instance, that two or more acceleration components may communicate with each other in a transparent manner from the perspective of host components in the software plane 104, outside the direction of the host components, and without the host components being “aware” of the particular interactions that are taking place in the hardware acceleration plane 106. A host component may nevertheless initiate interactions that take place in the hardware acceleration plane 106 by issuing a request for a service that is hosted by the hardware acceleration plane 106.
According to one non-limiting implementation, the data processing system 102 uses the Ethernet protocol to transmit IP packets over the common network 120. In one implementation, each local host component in a server unit component is given a single physical IP address. The local acceleration component in the same server unit component may adopt the same IP address. The server unit component can determine whether an incoming packet is destined for the local host component as opposed to the local acceleration component in different ways. For example, packets that are destined for the local acceleration component can be formulated as user datagram protocol (UDP) packets specifying a specific port; host-destined packets, on the other hand, are not formulated in this way. In another case, packets belonging to the acceleration plane 106 can be distinguished from packets belonging to the software plane 104 based on the value of a status flag in each of the packets (e.g., in the header or body of a packet).
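The UDP-based discrimination rule described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the port number 9000 is an arbitrary placeholder for whatever port a server unit component might reserve for acceleration-plane traffic.

```python
# Hypothetical port reserved for packets destined for the local
# acceleration component; host-bound traffic uses any other port/protocol.
ACCEL_UDP_PORT = 9000

def route_incoming_packet(protocol: str, dst_port: int) -> str:
    """Decide which local component should receive an incoming packet.

    Both the local host component and the local acceleration component
    share a single physical IP address, so the server unit component
    demultiplexes on protocol and destination port instead.
    """
    if protocol == "UDP" and dst_port == ACCEL_UDP_PORT:
        return "local_acceleration_component"
    return "local_host_component"
```

The same effect could be achieved with the status-flag variant mentioned above, i.e., by testing a flag in the packet header or body rather than the protocol/port pair.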
In view of the above characteristic, the data processing system 102 may be conceptualized as forming two logical networks that share the same physical communication links. The packets associated with the two logical networks may be distinguished from each other by their respective traffic classes in the manner described above. But in other implementations (e.g., as described below with respect to
Finally, management functionality 122 serves to manage the operations of the data processing system 102. As will be set forth in greater detail in Section B (below), the management functionality 122 can be physically implemented using different control architectures. For example, in one control architecture, the management functionality 122 may include plural local management components that are coupled to one or more global management components.
By way of introduction to Section B, the management functionality 122 can include a number of sub-components that perform different respective logical functions (which can be physically implemented in different ways). A location determination component 124, for instance, identifies the current locations of services within the data processing system 102, based on current allocation information stored in a data store 126. As used herein, a service refers to any function that is performed by the data processing system 102. For example, one service may correspond to an encryption function. Another service may correspond to a document ranking function. Another service may correspond to a data compression function, and so on.
In operation, the location determination component 124 may receive a request for a service. In response, the location determination component 124 returns an address associated with the service, if that address is present in the data store 126. The address may identify a particular acceleration component that hosts the requested service.
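The lookup behavior of the location determination component 124 can be sketched as a simple map from service names to component addresses. The service names and addresses below are invented placeholders for illustration only.

```python
class LocationDeterminationComponent:
    """Sketch of component 124: resolves services to hosting addresses."""

    def __init__(self, data_store: dict):
        # data_store (126) maps a service name to the address(es) of the
        # acceleration component(s) currently configured to host it.
        self.data_store = data_store

    def lookup(self, service: str):
        """Return the address(es) hosting a service, or None if the
        service has no current allocation in the data store."""
        return self.data_store.get(service)

# Illustrative allocation state; "a1", "a2", "a5" are placeholder addresses.
ldc = LocationDeterminationComponent({"encryption": ["a1"],
                                      "ranking": ["a2", "a5"]})
```

A request for an unallocated service returns no address, which is the case the SMC 128 (described next) must then handle by mapping the service to a component.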
A service mapping component (SMC) 128 maps services to particular acceleration components. The SMC 128 may operate in at least two modes depending on the type of triggering event that invokes its operation. In a first case, the SMC 128 processes requests for services made by instances of tenant functionality. An instance of tenant functionality may correspond to a software program running on a particular local host component, or, more specifically, a program executing on a virtual machine that, in turn, is associated with the particular local host component. That software program may request a service in the course of its execution. The SMC 128 handles the request by determining an appropriate component (or components) in the data processing system 102 to provide the service. Possible components for consideration include: a local acceleration component (associated with the local host component from which the request originated); a remote acceleration component; and/or the local host component itself (whereupon the local host component will implement the service in software). The SMC 128 makes its determinations based on one or more mapping considerations, such as whether the requested service pertains to a line-rate service.
In another manner of operation, the SMC 128 generally operates in a background and global mode, allocating services to acceleration components based on global conditions in the data processing system 102 (rather than, or in addition to, handling individual requests from instances of tenant functionality). For example, the SMC 128 may invoke its allocation function in response to a change in demand that affects one or more services. In this mode, the SMC 128 again makes its determinations based on one or more mapping considerations, such as the historical demand associated with the services, etc.
The SMC 128 may interact with the location determination component 124 in performing its functions. For instance, the SMC 128 may consult the data store 126 when it seeks to determine the address of an already allocated service provided by an acceleration component. The SMC 128 can also update the data store 126 when it maps a service to one or more acceleration components, e.g., by storing the addresses of those acceleration components in relation to the service.
Although not shown in
Note that
In many cases, a requested service is implemented on a single acceleration component (although there may be plural redundant acceleration components from which to choose). But in the particular example of
In the particular case of
First, note that the operations that take place in the hardware acceleration plane 106 are performed independently of the operations performed in the software plane 104. In other words, the host components in the software plane 104 do not manage the operations in the hardware acceleration plane 106. However, the host components may invoke the operations in the hardware acceleration plane 106 by issuing requests for services that are hosted by the hardware acceleration plane 106.
Second, note that the hardware acceleration plane 106 performs its transactions in a manner that is transparent to a requesting host component. For example, the local host component 204 may be “unaware” of how its request is being processed in the hardware acceleration plane, including the fact that the service corresponds to a multi-component service.
Third, note that, in this implementation, the communication in the software plane 104 (e.g., corresponding to operation (1)) takes place using the same common network 120 as communication in the hardware acceleration plane 106 (e.g., corresponding to operations (3)-(6)). Operations (2) and (7) may take place over a local link, corresponding to the localH-to-localS coupling 114 shown in
The multi-component service shown in
For example,
Moreover, a multi-component service does not necessarily need to employ a single head component, or any head component. For example, a multi-component service can employ a cluster of acceleration components which all perform the same function. The data processing system 102 can be configured to invoke this kind of multi-component service by contacting any arbitrary member in the cluster. That acceleration component may be referred to as a head component because it is the first component to be accessed, but it otherwise has no special status. In yet other cases, a host component may initially distribute plural requests to plural members of a collection of acceleration components.
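The "any arbitrary member" invocation pattern described above can be sketched as follows. The random selection shown here merely stands in for whatever selection policy an implementation might use (round-robin, load-aware, etc.); it is not a policy taken from the disclosure.

```python
import random

def pick_head(cluster_members):
    """Pick an entry point into a cluster of acceleration components that
    all perform the same function.

    Any member can serve as the "head" component; it is simply the first
    component to be accessed and otherwise has no special status.
    """
    return random.choice(list(cluster_members))
```

Because every member implements the same function, the caller need not know which member it reached, preserving the transparency property described earlier.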
Finally, note that the local acceleration component 418 is coupled to the TOR switch 410. Hence, in this particular implementation, the local acceleration component 418 represents the sole path through which the host component 412 interacts with other components in the data center 402 (including other host components and other acceleration components). Among other effects, the architecture of
Note that the local host component 412 may communicate with the local acceleration component 418 through the local link 420 or via the NIC 422. Different entities may leverage these two paths in different respective circumstances. For example, assume that a program running on the host component 412 requests a service. In one implementation, assume that the host component 412 provides a local instantiation of the location determination component 124 and the data store 126. Or a global management component may provide the location determination component 124 and its data store 126. In either case, the host component 412 may consult the data store 126 to determine the address of the service. The host component 412 may then access the service via the NIC 422 and the TOR switch 410, using the identified address.
In another implementation, assume that local acceleration component 418 provides a local instantiation of the location determination component 124 and the data store 126. The host component 412 may access the local acceleration component 418 via the local link 420. The local acceleration component 418 can then consult the local data store 126 to determine the address of the service, upon which it accesses the service via the TOR switch 410. Still other ways of accessing the service are possible.
The routing infrastructure shown in
The data center 402 shown in
Generally note that, while
Also note that, in the examples set forth above, a server unit component may refer to a physical grouping of components, e.g., by forming a single serviceable unit within a rack of a data center. In other cases, a server unit component may include one or more host components and one or more acceleration components that are not necessarily housed together in a single physical unit. In that case, a local acceleration component may be considered logically, rather than physically, associated with its respective local host component.
Alternatively, or in addition, a local host component and one or more remote acceleration components can be implemented on a single physical component, such as a single MPSoC-FPGA die. The network switch may also be incorporated into that single component.
In other cases, local hard CPUs, and/or soft CPUs, and/or acceleration logic provided by a single processing component (e.g., as implemented on a single die) may be coupled via diverse networks to other elements on other processing components (e.g., as implemented on other dies, boards, racks, etc.). An individual service may itself utilize one or more recursively local interconnection networks.
Further note that the above description was framed in the context of host components which issue service requests that are satisfied by acceleration components. But alternatively, or in addition, any acceleration component can also make a request for a service which can be satisfied by any other component, e.g., another acceleration component and/or even a host component. The SMC 128 can address such a request in a similar manner to that described above. Indeed, certain features described herein can be implemented on a hardware acceleration plane by itself, without a software plane.
More generally stated, certain features can be implemented by any first component which requests a service, which may be satisfied by the first component, and/or by one or more local components relative to the first component, and/or by one or more remote components relative to the first component. To facilitate explanation, however, the description below will continue to be framed mainly in the context in which the entity making the request corresponds to a local host component.
Finally, other implementations can adopt different strategies for coupling the host components to the hardware components, e.g., other than the localH-to-localS coupling 114 shown in
In block 908, the associated local acceleration component may locally perform the service, assuming that the address that has been identified pertains to functionality that is locally implemented by the local acceleration component. Alternatively, or in addition, in block 910, the local acceleration component routes the request to a remote acceleration component. As noted above, the local acceleration component is configured to perform routing to the remote acceleration component without involvement of the local host component. Further, plural host components in the data processing system 102 communicate with each other over the same physical network as do plural acceleration components.
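The branch between blocks 908 and 910 can be sketched as a single decision made by the local acceleration component, without host involvement. The address strings below are illustrative placeholders.

```python
def handle_service_request(resolved_address: str, local_address: str) -> str:
    """Dispatch a service request at the local acceleration component.

    If the address resolved for the service is the component's own
    address, the service is performed locally (block 908); otherwise the
    request is routed onward to the remote acceleration component at
    that address (block 910), with no local-host involvement either way.
    """
    if resolved_address == local_address:
        return "performed_locally"          # block 908
    return f"routed_to:{resolved_address}"  # block 910
```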
In conclusion to Section A, the data processing system 102 has a number of useful characteristics. First, the data processing system 102 uses a common network 120 (except for the example of
B. Management Functionality
As described in the introductory Section A, the location determination component 124 identifies the current location of services within the data processing system 102, based on current allocation information stored in the data store 126. In operation, the location determination component 124 receives a request for a service. In response, it returns an address of the service, if present within the data store 126. The address may identify a particular acceleration component that implements the service.
The data store 126 may maintain any type of information which maps services to addresses. In the small excerpt shown in
In some implementations, the data store 126 may optionally also store status information which characterizes each current service-to-component allocation in any manner. Generally, the status information for a service-to-component allocation specifies the way that the allocated service, as implemented on its assigned component (or components), is to be treated within the data processing system 102, such as by specifying its level of persistence, specifying its access rights (e.g., “ownership rights”), etc. In one non-limiting implementation, for instance, a service-to-component allocation can be designated as either reserved or non-reserved. When performing a configuration operation, the SMC 128 can take into account the reserved/non-reserved status information associated with an allocation in determining whether it is appropriate to change that allocation, e.g., to satisfy a current request for a service, a change in demand for one or more services, etc. For example, the data store 126 indicates that the acceleration components having addresses a1, a6, and a8 are currently configured to perform service w, but that only the assignments to acceleration components a1 and a8 are considered reserved. Thus, the SMC 128 will view the allocation to acceleration component a6 as a more appropriate candidate for reassignment (reconfiguration), compared to the other two acceleration components.
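The reserved/non-reserved example above (service w on a1, a6, and a8, with only a1 and a8 reserved) can be sketched as a filter over allocation records. The record layout is an illustrative assumption, not the disclosed data-store format.

```python
# Illustrative allocation state for service w, matching the example above:
# a1 and a8 are reserved (locked against reconfiguration); a6 is not.
allocations = {
    "w": [
        {"address": "a1", "reserved": True},
        {"address": "a6", "reserved": False},
        {"address": "a8", "reserved": True},
    ],
}

def reassignment_candidates(service: str):
    """Return addresses whose allocation the SMC may reclaim, i.e., the
    non-reserved allocations for the given service."""
    return [a["address"] for a in allocations.get(service, [])
            if not a["reserved"]]
```

Under this sketch, a6 is the only allocation the SMC would consider reconfiguring, consistent with the reserved status acting as a lock on a1 and a8.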
In addition, or alternatively, the data store 126 can provide information which indicates whether a service-to-component allocation is to be shared by all instances of tenant functionality, or dedicated to one or more particular instances of tenant functionality (or some other indicated consumer(s) of the service). In the former (fully shared) case, all instances of tenant functionality vie for the same resources provided by an acceleration component. In the latter (dedicated) case, only those clients that are associated with a service allocation are permitted to use the allocated acceleration component.
The SMC 128 may also interact with a data store 1002 that provides availability information. The availability information identifies a pool of acceleration components that have free capacity to implement one or more services. For example, in one manner of use, the SMC 128 may determine that it is appropriate to assign one or more acceleration components as providers of a function. To do so, the SMC 128 draws on the data store 1002 to find acceleration components that have free capacity to implement the function. The SMC 128 will then assign the function to one or more of these free acceleration components. Doing so will change the availability-related status of the chosen acceleration components.
The SMC 128 also manages and maintains the availability information in the data store 1002. In doing so, the SMC 128 can use different rules to determine whether an acceleration component is available or unavailable. In one approach, the SMC 128 may consider an acceleration component that is currently being used to be unavailable, and an acceleration component that is not currently being used to be available. In other cases, the acceleration component may have different configurable domains (e.g., tiles), some of which are currently in use and others of which are not. Here, the SMC 128 can specify the availability of an acceleration component by expressing the fraction of its processing resources that are currently not being used. For example,
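The fractional-availability idea described above can be sketched with a simple tile count. The tile counts used in the test values are invented for illustration; real acceleration components may partition resources differently.

```python
def availability_fraction(total_tiles: int, used_tiles: int) -> float:
    """Fraction of a component's configurable domains (e.g., tiles) that
    are free, as the SMC might record it in the availability data store."""
    if used_tiles > total_tiles:
        raise ValueError("used tiles cannot exceed total tiles")
    return (total_tiles - used_tiles) / total_tiles
```

A component with no free tiles reports 0.0 (fully unavailable), matching the simpler used/unused rule as a special case.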
In other cases, the SMC 128 can take into consideration pending requests for an acceleration component in registering whether it is available or not available. For example, the SMC 128 may indicate that an acceleration component is not available because it is scheduled to deliver a service to one or more instances of tenant functionality, even though it may not be engaged in providing that service at the current time.
In other cases, the SMC 128 can also register the type of each acceleration component that is available. For example, the data processing system 102 may correspond to a heterogeneous environment that supports acceleration components having different physical characteristics. The availability information in this case can indicate not only the identities of processing resources that are available, but also the types of those resources.
In other cases, the SMC 128 can also take into consideration the status of a service-to-component allocation when registering an acceleration component as available or unavailable. For example, assume that a particular acceleration component is currently configured to perform a certain service, and furthermore, assume that the allocation has been designated as reserved rather than non-reserved. The SMC 128 may designate that acceleration component as unavailable (or some fraction thereof as being unavailable) in view of its reserved status alone, irrespective of whether the service is currently being actively used to perform a function at the present time. In practice, the reserved status of an acceleration component therefore serves as a lock which prevents the SMC 128 from reconfiguring the acceleration component, at least in certain circumstances.
Now referring to the core mapping operation of the SMC 128 itself, the SMC 128 allocates or maps services to acceleration components in response to triggering events. More specifically, the SMC 128 operates in different modes depending on the type of triggering event that has been received. In a request-driven mode, the SMC 128 handles requests for services by tenant functionality. Here, each triggering event corresponds to a request by an instance of tenant functionality that resides, at least in part, on a particular local host component. In response to each request by a local host component, the SMC 128 determines an appropriate component to implement the service. For example, the SMC 128 may choose from among: a local acceleration component (associated with the local host component that made the request), a remote acceleration component, or the local host component itself (whereupon the local host component will implement the service in software), or some combination thereof.
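The request-driven selection among the candidate components can be sketched as below. The preference order shown (local acceleration component if it already hosts the service, then a known remote component, else software on the local host) is one plausible policy for illustration, not the disclosed mapping logic, which weighs considerations such as line-rate requirements.

```python
def choose_provider(service: str,
                    local_accel_services: set,
                    remote_hosts: dict) -> str:
    """Pick a component to satisfy a tenant request, per one simple policy.

    local_accel_services: services hosted on the local acceleration
        component; remote_hosts: service -> address of a remote
        acceleration component hosting it (placeholder structures).
    """
    if service in local_accel_services:
        return "local_acceleration_component"
    if service in remote_hosts:
        return remote_hosts[service]  # remote acceleration component address
    return "local_host_software"      # host implements the service in software
```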
In a second background mode, the SMC 128 operates by globally allocating services to acceleration components within the data processing system 102 to meet overall anticipated demand in the data processing system 102 and/or to satisfy other system-wide objectives and other factors (rather than narrowly focusing on individual requests by host components). Here, each triggering event that is received corresponds to some condition in the data processing system 102 as a whole that warrants allocation (or reallocation) of a service, such as a change in demand for the service.
Note, however, that the above-described modes are not mutually exclusive domains of analysis. For example, in the request-driven mode, the SMC 128 may attempt to achieve at least two objectives. As a primary objective, the SMC 128 will attempt to find an acceleration component (or components) that will satisfy an outstanding request for a service, while also meeting one or more performance goals relevant to the data processing system 102 as a whole. As a second objective, the SMC 128 may optionally also consider the long term implications of its allocation of the service with respect to future uses of that service by other instances of tenant functionality. In other words, the second objective pertains to a background consideration that happens to be triggered by a request by a particular instance of tenant functionality.
For example, consider the following simplified case. An instance of tenant functionality may make a request for a service, where that instance of tenant functionality is associated with a local host component. The SMC 128 may respond to the request by configuring a local acceleration component to perform the service. In making this decision, the SMC 128 may first of all attempt to find an allocation which satisfies the request by the instance of tenant functionality. But the SMC 128 may also make its allocation based on a determination that many other host components have requested the same service, and that these host components are mostly located in the same rack as the instance of tenant functionality which has generated the current request for the service. In other words, this supplemental finding further supports the decision to place the service on an in-rack acceleration component.
In addition, an instance of tenant functionality (or a local host component) may specifically request that it be granted a reserved and dedicated use of a local acceleration component. The status determination logic 1004 can use different environment-specific rules in determining whether to honor this request. For instance, the status determination logic 1004 may decide to honor the request, providing that no other triggering event is received which warrants overriding the request. The status determination logic 1004 may override the request, for instance, when it seeks to fulfill another request that is determined, based on any environment-specific reasons, as having greater urgency than the tenant functionality's request.
In some implementations, note that an instance of tenant functionality (or a local host component or some other consumer of a service) may independently control the use of its local resources. For example, a local host component may pass utilization information to the management functionality 122 which indicates that its local acceleration component is not available or not fully available, irrespective of whether the local acceleration component is actually busy at the moment. In doing so, the local host component may prevent the SMC 128 from “stealing” its local resources. Different implementations can use different environment-specific rules to determine whether an entity is permitted to restrict access to its local resources in the above-described manner, and if so, in what circumstances.
In another example, assume that the SMC 128 determines that there has been a general increase in demand for a particular service. In response, the SMC 128 may find a prescribed number of free acceleration components, corresponding to a “pool” of acceleration components, and then designate that pool of acceleration components as reserved (but fully shared) resources for use in providing the particular service. Later, the SMC 128 may detect a general decrease in demand for the particular service. In response, the SMC 128 can decrease the pool of reserved acceleration components, e.g., by changing the status of one or more acceleration components that were previously registered as “reserved” to “non-reserved.”
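The pool-resizing behavior just described can be sketched as follows. The component names, set-based bookkeeping, and the grow/shrink interface are assumptions made for illustration; the text specifies only that the SMC 128 reserves free components when demand rises and returns them to non-reserved status when demand falls:

```python
# Illustrative sketch of a reserved (but fully shared) pool of
# acceleration components that grows and shrinks with demand.

class Pool:
    def __init__(self, free):
        self.free = set(free)       # components registered as non-reserved
        self.reserved = set()       # components reserved for the service

    def grow(self, n):
        # Demand increased: reserve up to n free components.
        for _ in range(min(n, len(self.free))):
            self.reserved.add(self.free.pop())

    def shrink(self, n):
        # Demand decreased: return up to n components to non-reserved status.
        for _ in range(min(n, len(self.reserved))):
            self.free.add(self.reserved.pop())

pool = Pool({"acc1", "acc2", "acc3", "acc4"})
pool.grow(3)    # general increase in demand for the service
pool.shrink(1)  # general decrease: release one component back
print(len(pool.reserved))  # -> 2
```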
Note that the particular dimensions of status described above (reserved vs. non-reserved, dedicated vs. fully shared) are cited by way of illustration, not limitation. Other implementations can adopt any other status-related dimensions, or may accommodate only a single status designation (and therefore omit use of the status determination logic 1004 functionality).
As a second component of analysis, the SMC 128 may use size determination logic 1006 to determine a number of acceleration components that are appropriate to provide a service. The SMC 128 can make such a determination based on a consideration of the processing demands associated with the service, together with the resources that are available to meet those processing demands.
As a third component of analysis, the SMC 128 can use type determination logic 1008 to determine the type(s) of acceleration components that are appropriate to provide a service. For example, consider the case in which the data processing system 102 has a heterogeneous collection of acceleration components having different respective capabilities. The type determination logic 1008 can determine the particular kind (or kinds) of acceleration component that is appropriate to provide the service.
As a fourth component of analysis, the SMC 128 can use placement determination logic 1010 to determine the specific acceleration component (or components) that are appropriate to address a particular triggering event. This determination, in turn, can have one or more aspects. For instance, as part of its analysis, the placement determination logic 1010 can determine whether it is appropriate to configure an acceleration component to perform a service, where that component is not currently configured to perform the service.
The above facets of analysis are cited by way of illustration, not limitation. In other implementations, the SMC 128 can provide additional phases of analysis.
Generally, the SMC 128 performs its various allocation determinations based on one or more mapping considerations. For example, one mapping consideration may pertain to historical demand information provided in a data store 1012.
Note, however, that the SMC 128 need not perform multi-factor analysis in all cases. In some cases, for instance, a host component may make a request for a service that is associated with a single fixed location, e.g., corresponding to the local acceleration component or a remote acceleration component. In those cases, the SMC 128 may simply defer to the location determination component 124 to map the service request to the address of the service, rather than assessing the costs and benefits of executing the service in different ways. In other cases, the data store 126 may associate plural addresses with a single service, each address associated with an acceleration component that can perform the service. The SMC 128 can use any mapping consideration(s) in allocating a request for a service to a particular address, such as a load balancing consideration.
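The simple lookup path described above can be sketched as a service-to-address store, with round-robin rotation as one possible load balancing consideration when plural addresses are associated with a single service. The store layout, service names, addresses, and the round-robin policy are all assumptions for illustration:

```python
# Sketch: deferring to a stored service-to-address mapping, with
# round-robin load balancing when a service has plural addresses.
import itertools

address_store = {
    "service_x": ["10.0.0.5"],              # single fixed location
    "service_y": ["10.0.1.7", "10.0.2.9"],  # plural capable components
}
_rotation = {s: itertools.cycle(a) for s, a in address_store.items()}

def resolve(service):
    # One address: simply return it. Several: rotate among them.
    return next(_rotation[service])

print(resolve("service_y"))  # first of the two registered addresses
```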
As a result of its operation, the SMC 128 can update the data store 126 with information that maps services to addresses at which those services can be found (assuming that this information has been changed by the SMC 128). The SMC 128 can also store status information that pertains to new service-to-component allocations.
To configure one or more acceleration components to perform a function (if not already so configured), the SMC 128 can invoke a configuration component 1014. In one implementation, the configuration component 1014 configures acceleration components by sending a configuration stream to the acceleration components. A configuration stream specifies the logic to be “programmed” into a recipient acceleration component. The configuration component 1014 may use different strategies to configure an acceleration component, several of which are set forth below.
A failure monitoring component 1016 determines whether an acceleration component has failed. The SMC 128 may respond to a failure notification by substituting a spare acceleration component for a failed acceleration component.
B.1. Operation of the SMC in a Request-Driven Mode
In operation (1), the local host component 1102 may send its request for the service to the SMC 128. In operation (2), among other analyses, the SMC 128 may determine at least one appropriate component to implement the service. In this case, assume that the SMC 128 determines that a remote acceleration component 1104 is the most appropriate component to implement the service. The SMC 128 can obtain the address of that acceleration component 1104 from the location determination component 124. In operation (3), the SMC 128 may communicate its answer to the local host component 1102, e.g., in the form of the address associated with the service. In operation (4), the local host component 1102 may invoke the remote acceleration component 1104 via its local acceleration component 1106. Other ways of handling a request by tenant functionality are possible. For example, the local acceleration component 1106 can query the SMC 128, rather than, or in addition to, the local host component 1102.
Path 1108 represents an example in which a representative acceleration component 1110 (and/or its associated local host component) communicates utilization information to the SMC 128. The utilization information may identify whether the acceleration component 1110 is available or unavailable for use, in whole or in part. The utilization information may also optionally specify the type of processing resources that the acceleration component 1110 possesses which are available for use. As noted above, the utilization information can also be chosen to purposively prevent the SMC 128 from later utilizing the resources of the acceleration component 1110, e.g., by indicating in whole or in part that the resources are not available.
Although not shown, any acceleration component can also make directed requests for specific resources to the SMC 128. For example, the host component 1102 may specifically ask to use its local acceleration component 1106 as a reserved and dedicated resource. As noted above, the SMC 128 can use different environment-specific rules in determining whether to honor such a request.
Further, although not shown, other components besides the host components can make requests. For example, a hardware acceleration component may run an instance of tenant functionality that issues a request for a service that can be satisfied by itself, another hardware acceleration component (or components), a host component (or components), etc., or any combination thereof.
Further assume that a local acceleration component 1208 is coupled to the local host component 1202, e.g., via a PCIe local link or the like. At the current time, the local acceleration component 1208 hosts A1 logic 1210 for performing the acceleration service A1, and A2 logic 1212 for performing the acceleration service A2.
According to one management decision, the SMC 128 assigns T1 to the A1 logic 1210, and assigns T2 to the A2 logic 1212. However, this decision by the SMC 128 is not a fixed rule; as will be described, the SMC 128 may make its decision based on plural factors, some of which may reflect conflicting considerations. As such, based on other factors (not described at this juncture), the SMC 128 may choose to assign jobs to acceleration logic in a different manner from that illustrated in
In the scenario of
In response to the above scenario, the SMC 128 may choose to assign T1 to the A1 logic 1310 of the acceleration component 1308. The SMC 128 may then assign T2 to the A2 logic 1312 of a remote acceleration component 1314, which is already configured to perform that service. Again, the illustrated assignment is set forth here in the spirit of illustration, not limitation; the SMC 128 may choose a different allocation based on another combination of input considerations. In one implementation, the local host component 1302 and the remote acceleration component 1314 can optionally compress the information that they send to each other, e.g., to reduce consumption of bandwidth.
Note that the host component 1302 accesses the A2 logic 1312 via the local acceleration component 1308. But in another case (not illustrated), the host component 1302 may access the A2 logic 1312 via the local host component (not illustrated) that is associated with the acceleration component 1314.
Generally, the SMC 128 can perform configuration in a full or partial manner to satisfy any request by an instance of tenant functionality. The SMC 128 performs full configuration by reconfiguring all of the application logic provided by an acceleration component. The SMC 128 can perform partial configuration by reconfiguring part (e.g., one or more tiles) of the application logic provided by an acceleration component, leaving other parts (e.g., one or more other tiles) intact and operational during reconfiguration. The same is true with respect to the operation of the SMC 128 in its background mode of operation, described below. Further note that additional factors may play a role in determining whether the A3 logic 1412 is a valid candidate for reconfiguration, such as whether or not the service is considered reserved, whether or not there are pending requests for this service, etc.
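The distinction between full and partial configuration can be sketched over a tile-indexed view of an acceleration component's application logic. The tile numbering and logic names below are assumptions for illustration only:

```python
# Sketch: full vs. partial reconfiguration of an acceleration component's
# application logic, modeled as a mapping from tile index to loaded logic.

tiles = {0: "A1", 1: "A2", 2: "A3"}

def partial_reconfigure(tiles, tile_id, new_logic):
    # Only the named tile changes; other tiles stay intact and operational.
    updated = dict(tiles)
    updated[tile_id] = new_logic
    return updated

def full_reconfigure(tiles, new_logic_by_tile):
    # All of the application logic is replaced at once.
    return dict(new_logic_by_tile)

print(partial_reconfigure(tiles, 2, "A4"))  # -> {0: 'A1', 1: 'A2', 2: 'A4'}
```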
Finally, the above examples were described in the context of instances of tenant functionality that run on host components. But as already noted above, the instances of tenant functionality may more generally correspond to service requestors, and those service requestors can run on any component(s), including acceleration components. Thus, for example, a requestor that runs on an acceleration component can generate a request for a service to be executed by one or more other acceleration components and/or by itself and/or by one or more host components. The SMC 128 can handle the requestor's request in any of the ways described above.
B.2. Operation of the SMC in a Background Mode
In the particular example of
The SMC 128 can also operate in the background mode to allocate one or more acceleration components, which implement a particular service, to at least one instance of tenant functionality, without necessarily requiring the tenant functionality to make a request for this particular service each time. For example, assume that an instance of tenant functionality regularly uses a compression function, corresponding to “service z” in
B.3. Physical Implementations of the Management Functionality
The architecture of
Further, the local management component 1804 can send utilization information to a global management component on any basis, such as periodic basis and/or an event-driven basis (e.g., in response to a change in utilization). The global management component can use the utilization information to update its master record of availability information in the data store 1002.
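The dual periodic/event-driven reporting policy above can be sketched as follows. The interval length, the scalar utilization value, and the recording of sent messages in a list are assumptions standing in for messages sent to the global management component:

```python
# Sketch: a local management component forwards utilization information
# either when a fixed interval elapses (periodic basis) or when the
# utilization value changes (event-driven basis).

class UtilizationReporter:
    def __init__(self, interval):
        self.interval = interval
        self.last_sent_at = 0
        self.last_value = None
        self.sent = []  # stands in for messages to the global management component

    def tick(self, now, utilization):
        changed = utilization != self.last_value   # event-driven trigger
        due = now - self.last_sent_at >= self.interval  # periodic trigger
        if changed or due:
            self.sent.append((now, utilization))
            self.last_sent_at = now
            self.last_value = utilization

r = UtilizationReporter(interval=10)
r.tick(0, 0.5)    # first report (value changed from None)
r.tick(4, 0.5)    # suppressed: no change, interval not elapsed
r.tick(6, 0.9)    # event-driven: utilization changed
r.tick(16, 0.9)   # periodic: interval elapsed
print(len(r.sent))  # -> 3
```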
In operation, the low-level management components (2004, 2012, . . . ) handle certain low-level management decisions that directly affect the resources associated with individual server unit components. The mid-level management components (2018, 2020) can make decisions which affect a relevant section of the data processing system 102, such as an individual rack or a group of racks. The top-level management component (2022) can make global decisions which broadly apply to the entire data processing system 102.
B.4. The Configuration Component
Finally,
C. Illustrative Implementation of a Hardware Acceleration Component
From a high-level standpoint, the acceleration component 2502 may be implemented as a hierarchy having different layers of functionality. At a lowest level, the acceleration component 2502 provides an “outer shell” which provides basic interface-related components that generally remain the same across most application scenarios. A core component 2504, which lies inside the outer shell, may include an “inner shell” and application logic 2506. The inner shell corresponds to all the resources in the core component 2504 other than the application logic 2506, and represents a second level of resources that remain the same within a certain set of application scenarios. The application logic 2506 itself represents a highest level of resources which are most readily subject to change. Note however that any component of the acceleration component 2502 can technically be reconfigured.
In operation, the application logic 2506 interacts with the outer shell resources and inner shell resources in a manner analogous to the way a software-implemented application interacts with its underlying operating system resources. From an application development standpoint, the use of common outer shell resources and inner shell resources frees a developer from having to recreate these common components for each application that he or she creates. This strategy also reduces the risk that a developer may alter core inner or outer shell functions in a manner that causes problems within the data processing system 102 as a whole.
Referring first to the outer shell, the acceleration component 2502 includes a bridge 2508 for coupling the acceleration component 2502 to the network interface controller (via a NIC interface 2510) and a local top-of-rack switch (via a TOR interface 2512). The bridge 2508 supports two modes. In a first mode, the bridge 2508 provides a data path that allows traffic from the NIC or TOR to flow into the acceleration component 2502, and traffic from the acceleration component 2502 to flow out to the NIC or TOR. The acceleration component 2502 can perform any processing on the traffic that it “intercepts,” such as compression, encryption, etc. In a second mode, the bridge 2508 supports a data path that allows traffic to flow between the NIC and the TOR without being further processed by the acceleration component 2502. Internally, the bridge may be composed of various FIFOs (2514, 2516) which buffer received packets, and various selectors and arbitration logic which route packets to their desired destinations. A bypass control component 2518 controls whether the bridge 2508 operates in the first mode or the second mode.
A memory controller 2520 governs interaction between the acceleration component 2502 and local memory 2522 (such as DRAM memory). The memory controller 2520 may perform error correction as part of its services.
A host interface 2524 provides functionality that enables the acceleration component to interact with a local host component (not shown in
Finally, the shell may include various other features 2526, such as clock signal generators, status LEDs, error correction functionality, and so on.
In one implementation, the inner shell may include a router 2528 for routing messages between various internal components of the acceleration component 2502, and between the acceleration component 2502 and external entities (via a transport component 2530). Each such endpoint is associated with a respective port. For example, the router 2528 is coupled to the memory controller 2520, host interface 2524, application logic 2506, and transport component 2530.
The transport component 2530 formulates packets for transmission to remote entities (such as remote acceleration components), and receives packets from those remote entities.
A 3-port switch 2532, when activated, takes over the function of the bridge 2508 by routing packets between the NIC and TOR, and between the NIC or TOR and a local port associated with the acceleration component 2502 itself.
Finally, an optional diagnostic recorder 2534 stores transaction information regarding operations performed by the router 2528, transport component 2530, and 3-port switch 2532 in a circular buffer. For example, the transaction information may include data about a packet's origin and destination IP addresses, host-specific data, timestamps, etc. A technician may study a log of the transaction information in an attempt to diagnose causes of failure or sub-optimal performance in the acceleration component 2502.
In some implementations, the data processing system 102 of
C.1. The Local Link
In operations (4) and (5), the application logic 2712 retrieves the data from the input buffer 2710, processes it to generate an output result, and places the output result in an output buffer 2714. In operation (6), the acceleration component 2704 copies the contents of the output buffer 2714 into an output buffer in the host logic's memory. In operation (7), the acceleration component notifies the host logic 2706 that the data is ready for it to retrieve. In operation (8), the host logic thread wakes up and consumes the data in the output buffer 2716. The host logic 2706 may then discard the contents of the output buffer 2716, which allows the acceleration component 2704 to reuse it in the next transaction.
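The buffer handshake above can be sketched as a simple in-memory simulation; a real implementation would exchange buffers in shared memory over the local PCIe link, with notification and thread wake-up in place of plain function calls. Buffer names mirror the text; the logic is illustrative only:

```python
# Sketch of one host/accelerator transaction over the local link.

def transaction(data, process):
    input_buffer = list(data)       # host logic fills the input buffer (earlier operations)
    output = process(input_buffer)  # operations (4)-(5): application logic processes the data
    output_buffer = output          # operation (6): result copied to the host logic's memory
    # Operations (7)-(8): host logic is notified, wakes up, and consumes the data.
    result = list(output_buffer)
    output_buffer.clear()           # host discards the contents, allowing buffer reuse
    return result

print(transaction([1, 2, 3], lambda xs: [x * 2 for x in xs]))  # -> [2, 4, 6]
```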
C.2. The Router
In one non-limiting implementation, the router 2528 supports a number of virtual channels (such as eight) for transmitting different classes of traffic over a same physical link. That is, the router 2528 may support multiple traffic classes for those scenarios in which multiple services are implemented by the application logic 2506, and those services need to communicate on separate classes of traffic.
The router 2528 may govern access to the router's resources (e.g., its available buffer space) using a credit-based flow technique. In that technique, the input units (2802-2808) provide upstream entities with credits, which correspond to the exact number of flits available in their buffers. The credits grant the upstream entities the right to transmit their data to the input units (2802-2808). More specifically, in one implementation, the router 2528 supports “elastic” input buffers that can be shared among multiple virtual channels. The output units (2810-2816) are responsible for tracking available credits in their downstream receivers, and provide grants to any input units (2802-2808) that are requesting to send a flit to a given output port.
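The credit-based flow technique can be sketched as follows: the receiver advertises credits equal to its free buffer slots, the sender spends a credit per flit, and draining a flit returns a credit upstream. The class shapes and single-channel simplification are assumptions; the real router shares elastic buffers among multiple virtual channels:

```python
# Sketch of credit-based flow control between an upstream sender and a
# router input unit (one virtual channel shown for simplicity).

class InputUnit:
    def __init__(self, slots):
        self.buffer = []
        self.slots = slots

    def drain(self, sender):
        # Consuming a flit frees a buffer slot and returns one credit upstream.
        if self.buffer:
            self.buffer.pop(0)
            sender.credits += 1

class Sender:
    def __init__(self, credits):
        self.credits = credits  # initially equals the receiver's free slots

    def try_send(self, unit, flit):
        if self.credits == 0:
            return False        # must wait for the receiver to free a slot
        self.credits -= 1
        unit.buffer.append(flit)
        return True

unit = InputUnit(slots=2)
sender = Sender(credits=2)
sender.try_send(unit, "flit0")
sender.try_send(unit, "flit1")
print(sender.try_send(unit, "flit2"))  # -> False (no credits left)
unit.drain(sender)                     # receiver frees a slot; credit returns
print(sender.try_send(unit, "flit2"))  # -> True
```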
C.3. The Transport Component
A packet processing component 2904 processes messages arriving from the router 2528 which are destined for a remote endpoint (e.g., another acceleration component). It does so by buffering and packetizing the messages. The packet processing component 2904 also processes packets that are received from some remote endpoint and are destined for the router 2528.
For messages arriving from the router 2528, the packet processing component 2904 matches each message request to a Send Connection Table entry in the Send Connection Table, e.g., using header information and virtual channel (VC) information associated with the message as a lookup item, as provided by router 2528. The packet processing component 2904 uses the information retrieved from the Send Connection Table entry (such as a sequence number, address information, etc.) to construct packets that it sends out to the remote entity.
More specifically, in one non-limiting approach, the packet processing component 2904 encapsulates packets in UDP/IP Ethernet frames, and sends them to a remote acceleration component. In one implementation the packets may include an Ethernet header, followed by an IPv4 header, followed by a UDP header, followed by transport header (specifically associated with the transport component 2530), followed by a payload.
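The header ordering described above can be sketched with placeholder field contents. Only the layering (Ethernet, then IPv4, then UDP, then the transport header, then the payload) follows the text; the port number, the 4-byte sequence-number transport header, and the zeroed header fields are assumptions for illustration:

```python
# Sketch: encapsulating a transport-component packet in a UDP/IP
# Ethernet frame (placeholder header contents).
import struct

def encapsulate(seq, payload, port=40000):
    eth = b"\x00" * 14                  # Ethernet header: 14 bytes (placeholder)
    ipv4 = b"\x45" + b"\x00" * 19       # minimal IPv4 header: 20 bytes (placeholder)
    udp = struct.pack("!HHHH", port, port, 8 + 4 + len(payload), 0)  # UDP header: 8 bytes
    xprt = struct.pack("!I", seq)       # transport header: assumed 4-byte sequence number
    return eth + ipv4 + udp + xprt + payload

pkt = encapsulate(7, b"hello")
print(len(pkt))  # -> 51 (14 + 20 + 8 + 4 + 5)
```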
For packets arriving from the network (e.g., as received on a local port of the 3-port switch 2532), the packet processing component 2904 matches each packet to a Receive Connection Table entry, e.g., using information provided in the packet header. If there is a match, the packet processing component retrieves a virtual channel field of the entry, and uses that information to forward the received message to the router 2528 (in accordance with the credit-flow technique used by the router 2528).
A failure handling component 2906 buffers all sent packets until it receives an acknowledgement (ACK) from the receiving node (e.g., the remote acceleration component). If an ACK for a connection does not arrive within a specified time-out period, the failure handling component 2906 can retransmit the packet. The failure handling component 2906 will repeat such retransmission for a prescribed number of times (e.g., 128 times). If the packet remains unacknowledged after all such attempts, the failure handling component 2906 can discard it and free its buffer.
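The retransmission loop above can be sketched as follows. The 128-attempt limit comes from the text; the callback-based transmit function (returning whether an ACK arrived within the time-out) is an assumption made so the sketch stays self-contained:

```python
# Sketch of the failure handling component: retransmit until an ACK
# arrives or a prescribed number of attempts is exhausted.

def send_reliably(packet, transmit, max_attempts=128):
    """transmit(packet) returns True if an ACK arrived within the time-out."""
    for _ in range(max_attempts):
        if transmit(packet):
            return True   # acknowledged: the buffered packet can be freed
    return False          # unacknowledged after all attempts: discard packet

# A link that drops the first two transmissions:
outcomes = iter([False, False, True])
print(send_reliably(b"pkt", lambda p: next(outcomes)))  # -> True
```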
C.4. The 3-Port Switch
The 3-port switch 2532 connects to the NIC interface 2510 (corresponding to a host interface), the TOR interface 2512, and a local interface associated with the local acceleration component 2502 itself. The 3-port switch 2532 may be conceptualized as including receiving interfaces (3002, 3004, 3006) for respectively receiving packets from the host component and TOR switch, and for receiving packets at the local acceleration component. The 3-port switch 2532 also includes transmitting interfaces (3008, 3010, 3012) for respectively providing packets to the TOR switch and host component, and receiving packets transmitted by the local acceleration component.
Packet classifiers (3014, 3016) determine the class of packets received from the host component or the TOR switch, e.g., based on status information specified by the packets. In one implementation, each packet is either classified as belonging to a lossless flow (e.g., remote direct memory access (RDMA) traffic) or a lossy flow (e.g., transmission control protocol/Internet Protocol (TCP/IP) traffic). Traffic that belongs to a lossless flow is intolerant to packet loss, while traffic that belongs to a lossy flow can tolerate some packet loss.
Packet buffers (3018, 3020) store the incoming packets in different respective buffers, depending on the class of traffic to which they pertain. If there is no space available in the buffer, the packet will be dropped. (In one implementation, the 3-port switch 2532 does not provide packet buffering for packets provided by the local acceleration component (via the local port) because the application logic 2506 can regulate the flow of packets through the use of “back pressuring.”) Arbitration logic 3022 selects among the available packets and transmits the selected packets.
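The classify-then-buffer stage just described can be sketched as follows. The packet representation, the RDMA/TCP marker field, and the buffer depth are assumptions for illustration; only the two-class split and the drop-when-full behavior follow the text:

```python
# Sketch: classify each incoming packet as lossless or lossy, buffer it
# per class, and drop it when the class buffer has no space available.

BUFFER_DEPTH = 2
buffers = {"lossless": [], "lossy": []}

def classify(packet):
    # e.g., RDMA traffic belongs to a lossless flow; TCP/IP traffic is lossy.
    return "lossless" if packet.get("rdma") else "lossy"

def enqueue(packet):
    buf = buffers[classify(packet)]
    if len(buf) >= BUFFER_DEPTH:
        return False   # no space available: the packet is dropped
    buf.append(packet)
    return True

enqueue({"rdma": True})
enqueue({"rdma": False})
enqueue({"rdma": False})
print(enqueue({"rdma": False}))  # -> False: the lossy buffer is full
```

Note that, as the text observes, no such buffering is needed on the local port, since the application logic can regulate flow by back pressuring.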
As described above, traffic that is destined for the local acceleration component is encapsulated in UDP/IP packets on a fixed port number. The 3-port switch 2532 inspects incoming packets (e.g., as received from the TOR) to determine if they are UDP packets on the correct port number. If so, the 3-port switch 2532 outputs the packet on the local RX port interface 3006. In one implementation, all traffic arriving on the local TX port interface 3012 is sent out of the TOR TX port interface 3008, but it could also be sent to the host TX port interface 3010. Further note that
PFC processing logic 3024 allows the 3-port switch 2532 to insert Priority Flow Control frames into either the flow of traffic transmitted to the TOR or host component. That is, for lossless traffic classes, if a packet buffer fills up, the PFC processing logic 3024 sends a PFC message to the link partner, requesting that traffic on that class be paused. If a PFC control frame is received for a lossless traffic class on either the host RX port interface 3002 or the TOR RX port interface 3004, the 3-port switch 2532 will cease sending packets on the port that received the control message.
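The PFC behavior above can be sketched as two halves: emitting a pause frame when a lossless-class buffer fills past a threshold, and ceasing transmission of a class when a pause frame is received. The threshold value and frame representation are assumptions; real PFC frames follow the IEEE 802.1Qbb format:

```python
# Sketch of Priority Flow Control handling in the 3-port switch.

PAUSE_THRESHOLD = 3
paused_classes = set()

def maybe_pause(buffer_occupancy):
    # Emit a PFC control message when the lossless buffer is nearly full.
    if buffer_occupancy >= PAUSE_THRESHOLD:
        return {"type": "PFC", "class": "lossless", "action": "pause"}
    return None

def on_control_frame(frame):
    # On receiving a pause frame, stop sending packets of that class.
    if frame and frame["type"] == "PFC" and frame["action"] == "pause":
        paused_classes.add(frame["class"])

on_control_frame(maybe_pause(3))
print("lossless" in paused_classes)  # -> True
```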
C.5. An Illustrative Host Component
The host component 3102 also includes an input/output module 3110 for receiving various inputs (via input devices 3112), and for providing various outputs (via output devices 3114). One particular output mechanism may include a presentation device 3116 and an associated graphical user interface (GUI) 3118. The host component 3102 can also include one or more network interfaces 3120 for exchanging data with other devices via one or more communication conduits 3122. One or more communication buses 3124 communicatively couple the above-described components together.
The communication conduit(s) 3122 can be implemented in any manner, e.g., by a local area network, a wide area network (e.g., the Internet), point-to-point connections, etc., or any combination thereof. The communication conduit(s) 3122 can include any combination of hardwired links, wireless links, routers, gateway functionality, name servers, etc., governed by any protocol or combination of protocols.
The following summary provides a non-exhaustive list of illustrative aspects of the technology set forth herein.
According to a first aspect, a data processing system is described that includes two or more host components, each of which uses one or more central processing units to execute machine-readable instructions, the two or more host components collectively providing a software plane. The data processing system also includes two or more hardware acceleration components that collectively provide a hardware acceleration plane. The data processing system also includes a common network for allowing the host components to communicate with each other, and for allowing the hardware acceleration components to communicate with each other. Further, the hardware acceleration components in the hardware acceleration plane have functionality that enables the hardware acceleration components to communicate with each other in a transparent manner without assistance from the software plane.
According to a second aspect, the above-referenced two or more hardware acceleration components in the hardware acceleration plane correspond to field-programmable gate array (FPGA) devices.
According to a third aspect, the above-referenced two or more host components in the software plane exchange packets over the common network via a first logical network, and the above-referenced two or more hardware acceleration components in the hardware acceleration plane exchange packets over the common network via a second logical network. The first logical network and the second logical network share physical links of the common network and are distinguished from each other based on classes of traffic to which their respective packets pertain.
According to a fourth aspect, packets sent over the second logical network use a specified protocol on an identified port, which constitutes a characteristic that distinguishes packets sent over the second logical network from packets sent over the first logical network.
According to a fifth aspect, the data processing system further includes plural server unit components. Each server unit component includes: a local host component; a local hardware acceleration component; and a local link for coupling the local host component with the local hardware acceleration component. The local hardware acceleration component is coupled to the common network, and serves as a conduit by which the local host component communicates with the common network.
According to a sixth aspect, at least one server unit component includes plural local host components and/or plural local hardware acceleration components.
According to a seventh aspect, the local hardware acceleration component is coupled to a top-of-rack switch in a data center.
According to an eighth aspect, the local hardware acceleration component is also coupled to a network interface controller, and the network interface controller is coupled to the local host component.
According to a ninth aspect, the local host component or the local hardware acceleration component is configured to: issue a request for a service; and receive a reply to the request which identifies an address of the service. The local hardware acceleration component is configured to: locally perform the service, when the address that has been identified pertains to functionality that is locally implemented by the local hardware acceleration component; and route the request to a particular remote hardware acceleration component via the common network, when the address that has been identified pertains to functionality that is remotely implemented by the remote hardware acceleration component. Again, the local hardware acceleration component is configured to perform routing without involvement of the local host component.
According to a tenth aspect, the data processing system further includes management functionality for identifying the address in response to the request.
According to an eleventh aspect, a method is described for performing a function in a data processing environment. The method includes performing the following operations in a local host component which uses one or more central processing units to execute machine-readable instructions, or in a local hardware acceleration component that is coupled to the local host component: (a) issue a request for a service; and (b) receive a reply to the request which identifies an address of the service. The method also includes performing the following operations in the local hardware acceleration component: (a) locally perform the service, when the address that has been identified pertains to functionality that is locally implemented by the local hardware acceleration component; and (b) route the request to a remote hardware acceleration component, when the address that has been identified pertains to functionality that is remotely implemented by the remote hardware acceleration component. Again, the local hardware acceleration component is configured to perform routing to the remote hardware acceleration component without involvement of the local host component. Further, plural host components communicate with each other in the data processing environment, and plural hardware acceleration components communicate with each other in the data processing environment, over a common network.
According to a twelfth aspect, each hardware acceleration component in the above-described method corresponds to a field-programmable gate array (FPGA) device.
According to a thirteenth aspect, the common network in the above-described method supports a first logical network and a second logical network that share physical links of the common network. The host components in the data processing environment use the first logical network to exchange packets with each other, and the hardware acceleration components in the data processing environment use the second logical network to exchange packets with each other. The first logical network and the second logical network are distinguished from each other based on classes of traffic to which their respective packets pertain.
According to a fourteenth aspect, the packets sent over the second logical network in the above-described method use a specified protocol on an identified port, which constitutes a characteristic that distinguishes packets sent over the second logical network from packets sent over the first logical network.
According to a fifteenth aspect, the local hardware acceleration component in the above-described method is coupled to the common network, and the local host component interacts with the common network via the local hardware acceleration component.
According to a sixteenth aspect, the local hardware acceleration component in the above-described method is coupled to a top-of-rack switch in a data center.
According to a seventeenth aspect, a server unit component in a data center is described. The server unit component includes: a local host component that uses one or more central processing units to execute machine-readable instructions; a local hardware acceleration component; and a local link for coupling the local host component with the local hardware acceleration component. The local hardware acceleration component is coupled to a common network, and serves as a conduit by which the local host component communicates with the common network. More generally, the data center includes plural host components and plural hardware acceleration components, provided in other respective server unit components, wherein the common network serves as a shared conduit by which the plural host components communicate with each other and the plural hardware acceleration components communicate with each other. Further, the local hardware acceleration component is configured to interact with remote hardware acceleration components of other respective server unit components without involvement of the local host component.
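The topology of the seventeenth aspect can be modeled with a short sketch. The class and component names below are hypothetical; the sketch shows only the structural relationship in which the host has no direct network attachment and all of its traffic passes through the local acceleration component.

```python
# Illustrative sketch (all names hypothetical): a server unit in which the
# local hardware acceleration component sits between the host component and
# the common network, serving as the host's conduit to that network.
from dataclasses import dataclass

@dataclass
class AccelerationComponent:
    name: str

    def send_to_network(self, payload: str) -> str:
        # The accelerator forwards traffic onto the common network (e.g.,
        # via a top-of-rack switch); host traffic passes through it.
        return f"{self.name} -> network: {payload}"

@dataclass
class HostComponent:
    name: str
    link: AccelerationComponent  # local link coupling host to accelerator

    def send(self, payload: str) -> str:
        # The host has no direct network attachment; it communicates
        # through its local acceleration component.
        return self.link.send_to_network(f"{self.name}: {payload}")

unit = HostComponent("host-0", AccelerationComponent("fpga-0"))
print(unit.send("query"))  # fpga-0 -> network: host-0: query
```

A data center would contain many such units, with the common network serving as the shared conduit among all hosts and all acceleration components.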
According to an eighteenth aspect, the server unit component includes plural local host components and/or plural local hardware acceleration components.
According to a nineteenth aspect, the local hardware acceleration component is coupled to a top-of-rack switch in a data center.
According to a twentieth aspect, the local hardware acceleration component is configured to receive a request for a service from the local host component, and the local hardware acceleration component is configured to: (a) locally perform the service when an address associated with the service pertains to functionality that is locally implemented by the local hardware acceleration component; and (b) route the request to a remote hardware acceleration component via the common network when the address pertains to functionality that is remotely implemented by the remote hardware acceleration component. The local hardware acceleration component is configured to perform routing to the remote hardware acceleration component without involvement of the local host component.
A twenty-first aspect corresponds to any combination (e.g., any permutation or subset) of the above-referenced first through twentieth aspects.
A twenty-second aspect corresponds to any method counterpart, device counterpart, system counterpart, means counterpart, computer readable storage medium counterpart, data structure counterpart, article of manufacture counterpart, graphical user interface presentation counterpart, etc. associated with the first through twenty-first aspects.
In closing, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
This application claims the benefit of U.S. Provisional Application No. 62/149,488 (the '488 application), filed Apr. 17, 2015. The '488 application is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5600845 | Gilson | Feb 1997 | A |
5684980 | Casselman | Nov 1997 | A |
5748979 | Trimberger | May 1998 | A |
5774668 | Choquier et al. | Jun 1998 | A |
5802290 | Casselman | Sep 1998 | A |
5828858 | Athanas et al. | Oct 1998 | A |
6096091 | Hartmann | Aug 2000 | A |
6104211 | Alfke | Aug 2000 | A |
6256758 | Abramovici et al. | Jul 2001 | B1 |
6326806 | Fallside et al. | Dec 2001 | B1 |
6462579 | Camilleri et al. | Oct 2002 | B1 |
6496971 | Lesea et al. | Dec 2002 | B1 |
6526557 | Young et al. | Feb 2003 | B1 |
6530049 | Abramovici et al. | Mar 2003 | B1 |
6573748 | Trimberger | Jun 2003 | B1 |
6754881 | Kuhlmann | Jun 2004 | B2 |
6874108 | Abramovici et al. | Mar 2005 | B1 |
6915338 | Hunt et al. | Jul 2005 | B1 |
6973608 | Abramovici et al. | Dec 2005 | B1 |
6996443 | Marshall et al. | Feb 2006 | B2 |
7020860 | Zhao et al. | Mar 2006 | B1 |
7036059 | Carmichael et al. | Apr 2006 | B1 |
7111224 | Trimberger | Sep 2006 | B1 |
7146598 | Horanzy | Dec 2006 | B2 |
7224184 | Levi et al. | May 2007 | B1 |
7240127 | Dubreuil | Jul 2007 | B2 |
7263631 | VanBuren | Aug 2007 | B2 |
7286020 | O et al. | Oct 2007 | B1 |
7340596 | Crosland et al. | Mar 2008 | B1 |
7382154 | Ramos et al. | Jun 2008 | B2 |
7389460 | Demara | Jun 2008 | B1 |
7444454 | Yancey | Oct 2008 | B2 |
7444551 | Johnson et al. | Oct 2008 | B1 |
7482836 | Levi et al. | Jan 2009 | B2 |
7500083 | Trivedi et al. | Mar 2009 | B2 |
7533256 | Walter | May 2009 | B2 |
7546572 | Ballagh et al. | Jun 2009 | B1 |
7584345 | Doering | Sep 2009 | B2 |
7620883 | Carmichael et al. | Nov 2009 | B1 |
7685254 | Pandya | Mar 2010 | B2 |
7734895 | Agarwal et al. | Jun 2010 | B1 |
7809982 | Rapp et al. | Oct 2010 | B2 |
7822958 | Allen et al. | Oct 2010 | B1 |
7899864 | Margulis | Mar 2011 | B2 |
7906984 | Montminy et al. | Mar 2011 | B1 |
7925863 | Hundley | Apr 2011 | B2 |
7953014 | Toda et al. | May 2011 | B2 |
8018249 | Koch et al. | Sep 2011 | B2 |
8018866 | Kasturi et al. | Sep 2011 | B1 |
8046727 | Solomon | Oct 2011 | B2 |
8054172 | Jung et al. | Nov 2011 | B2 |
8117497 | Lesea | Feb 2012 | B1 |
8117512 | Sorensen et al. | Feb 2012 | B2 |
8127113 | Sinha et al. | Feb 2012 | B1 |
8145894 | Casselman | Mar 2012 | B1 |
8159259 | Lewis et al. | Apr 2012 | B1 |
8166289 | Owens et al. | Apr 2012 | B2 |
8171099 | Malmskog et al. | May 2012 | B1 |
8250578 | Krishnamurthy et al. | Aug 2012 | B2 |
8368423 | Yancey | Feb 2013 | B2 |
8434087 | Degenaro et al. | Apr 2013 | B2 |
8453013 | Chen | May 2013 | B1 |
8516268 | Woodall | Aug 2013 | B2 |
8554953 | Sorensen et al. | Oct 2013 | B1 |
8635571 | Goldman | Jan 2014 | B1 |
8635675 | Kruglick | Jan 2014 | B2 |
8713574 | Creamer et al. | Apr 2014 | B2 |
8803876 | Bohan et al. | Aug 2014 | B2 |
8803892 | Urbach | Aug 2014 | B2 |
8863072 | Jahnke | Oct 2014 | B1 |
8867545 | Viens et al. | Oct 2014 | B2 |
8901960 | Takano et al. | Dec 2014 | B2 |
8910109 | Othner | Dec 2014 | B1 |
8924907 | Jahnke et al. | Dec 2014 | B1 |
8943352 | Warneke | Jan 2015 | B1 |
8997033 | Hew | Mar 2015 | B1 |
9032343 | Goldman | May 2015 | B1 |
9294097 | Vassiliev | Mar 2016 | B1 |
9313364 | Tanaka | Apr 2016 | B2 |
9361416 | Fine et al. | Jun 2016 | B2 |
9483291 | Chen et al. | Nov 2016 | B1 |
9576332 | Streete et al. | Feb 2017 | B1 |
9612900 | Anderson et al. | Apr 2017 | B2 |
9647731 | Ardalan | May 2017 | B2 |
9652327 | Heil et al. | May 2017 | B2 |
9774520 | Kasturi et al. | Sep 2017 | B1 |
9819542 | Burger | Nov 2017 | B2 |
9912517 | Ramalingam et al. | Mar 2018 | B1 |
9983938 | Heil et al. | May 2018 | B2 |
10027543 | Lanka et al. | Jul 2018 | B2 |
10452605 | Wang et al. | Oct 2019 | B2 |
20020161902 | McMahan et al. | Oct 2002 | A1 |
20020188832 | Mirsky et al. | Dec 2002 | A1 |
20030033450 | Appleby-Alis | Feb 2003 | A1 |
20040081104 | Pan et al. | Apr 2004 | A1 |
20040141386 | Karlsson | Jul 2004 | A1 |
20040267920 | Hydrie et al. | Dec 2004 | A1 |
20050097305 | Doering et al. | May 2005 | A1 |
20050120110 | Curran-Gray et al. | Jun 2005 | A1 |
20060015866 | Ang et al. | Jan 2006 | A1 |
20060143350 | Miloushev et al. | Jun 2006 | A1 |
20070038560 | Ansley | Feb 2007 | A1 |
20070200594 | Levi et al. | Aug 2007 | A1 |
20070210487 | Schroder | Sep 2007 | A1 |
20070283311 | Karoubalis et al. | Dec 2007 | A1 |
20080028187 | Casselman et al. | Jan 2008 | A1 |
20080120500 | Kimmery et al. | May 2008 | A1 |
20080164907 | Mercaldi-Kim et al. | Jul 2008 | A1 |
20080184042 | Parks et al. | Jul 2008 | A1 |
20080270411 | Sedukhin et al. | Oct 2008 | A1 |
20080276262 | Munshi et al. | Nov 2008 | A1 |
20080279167 | Cardei et al. | Nov 2008 | A1 |
20080285581 | Maiorana et al. | Nov 2008 | A1 |
20080307259 | Vasudevan et al. | Dec 2008 | A1 |
20090063665 | Bagepalli et al. | Mar 2009 | A1 |
20090085603 | Paul et al. | Apr 2009 | A1 |
20090102838 | Bullard et al. | Apr 2009 | A1 |
20090147945 | Doi et al. | Jun 2009 | A1 |
20090153320 | Jung et al. | Jun 2009 | A1 |
20090182814 | Tapolcai et al. | Jul 2009 | A1 |
20090187733 | El-Ghazawi | Jul 2009 | A1 |
20090187756 | Nollet et al. | Jul 2009 | A1 |
20090189890 | Corbett et al. | Jul 2009 | A1 |
20090210487 | Westerhoff et al. | Aug 2009 | A1 |
20090254505 | Davis et al. | Oct 2009 | A1 |
20090278564 | DeHon et al. | Nov 2009 | A1 |
20090287628 | Indeck et al. | Nov 2009 | A1 |
20100011116 | Thornton et al. | Jan 2010 | A1 |
20100042870 | Amatsubo | Feb 2010 | A1 |
20100046546 | Ram et al. | Feb 2010 | A1 |
20100057647 | Davis et al. | Mar 2010 | A1 |
20100058036 | Degenaro et al. | Mar 2010 | A1 |
20100076915 | Xu et al. | Mar 2010 | A1 |
20100083010 | Kern et al. | Apr 2010 | A1 |
20100106813 | Voutilainen et al. | Apr 2010 | A1 |
20100121748 | Handelman et al. | May 2010 | A1 |
20100174770 | Pandya | Jul 2010 | A1 |
20100251265 | Hodson et al. | Sep 2010 | A1 |
20100262882 | Krishnamurthy | Oct 2010 | A1 |
20110068921 | Shafer | Mar 2011 | A1 |
20110078284 | Bomel et al. | Mar 2011 | A1 |
20110080264 | Clare et al. | Apr 2011 | A1 |
20110088038 | Kruglick | Apr 2011 | A1 |
20110153824 | Chikando et al. | Jun 2011 | A1 |
20110161495 | Ratering et al. | Jun 2011 | A1 |
20110167055 | Branscome et al. | Jul 2011 | A1 |
20110178911 | Parsons et al. | Jul 2011 | A1 |
20110218987 | Branscome et al. | Sep 2011 | A1 |
20110238792 | Phillips et al. | Sep 2011 | A1 |
20120047239 | Donahue et al. | Feb 2012 | A1 |
20120054770 | Krishnamurthy et al. | Mar 2012 | A1 |
20120092040 | Xu et al. | Apr 2012 | A1 |
20120110192 | Lu et al. | May 2012 | A1 |
20120110274 | Rosales et al. | May 2012 | A1 |
20120150592 | Govrik et al. | Jun 2012 | A1 |
20120150952 | Beverly | Jun 2012 | A1 |
20120151476 | Vincent | Jun 2012 | A1 |
20120260078 | Varnum et al. | Oct 2012 | A1 |
20120324068 | Jayamohan et al. | Dec 2012 | A1 |
20130055240 | Gondi | Feb 2013 | A1 |
20130151458 | Indeck et al. | Jun 2013 | A1 |
20130152099 | Bass et al. | Jun 2013 | A1 |
20130159452 | Saldana de Fuentes et al. | Jun 2013 | A1 |
20130177293 | Mate et al. | Jul 2013 | A1 |
20130182555 | Raaf et al. | Jul 2013 | A1 |
20130205295 | Ebcioglu et al. | Aug 2013 | A1 |
20130226764 | Battyani | Aug 2013 | A1 |
20130227335 | Dake et al. | Aug 2013 | A1 |
20130249947 | Reitan | Sep 2013 | A1 |
20130285739 | Blaquiere et al. | Oct 2013 | A1 |
20130297043 | Choi et al. | Nov 2013 | A1 |
20130305199 | He et al. | Nov 2013 | A1 |
20130314559 | Kim | Nov 2013 | A1 |
20130318277 | Dalal et al. | Nov 2013 | A1 |
20140007113 | Collin et al. | Jan 2014 | A1 |
20140055467 | Bittner et al. | Feb 2014 | A1 |
20140067851 | Asaad et al. | Mar 2014 | A1 |
20140092728 | Alvarez-icaza rivera et al. | Apr 2014 | A1 |
20140095928 | Obasawara et al. | Apr 2014 | A1 |
20140108481 | Davis et al. | Apr 2014 | A1 |
20140115151 | Kruglick | Apr 2014 | A1 |
20140118026 | Aldragen | May 2014 | A1 |
20140140225 | Wala | May 2014 | A1 |
20140208322 | Sasaki et al. | Jul 2014 | A1 |
20140215424 | Fine et al. | Jul 2014 | A1 |
20140245061 | Kobayashi | Aug 2014 | A1 |
20140258360 | Hebert et al. | Sep 2014 | A1 |
20140267328 | Banack et al. | Sep 2014 | A1 |
20140280499 | Basavaiah et al. | Sep 2014 | A1 |
20140282056 | Godsey | Sep 2014 | A1 |
20140282506 | Cadigan et al. | Sep 2014 | A1 |
20140282586 | Shear et al. | Sep 2014 | A1 |
20140310555 | Schulz et al. | Oct 2014 | A1 |
20140351811 | Kruglick | Nov 2014 | A1 |
20140380025 | Kruglick | Dec 2014 | A1 |
20150026450 | Adiki et al. | Jan 2015 | A1 |
20150058614 | Degenaro et al. | Feb 2015 | A1 |
20150089204 | Henry | Mar 2015 | A1 |
20150100655 | Pouzin et al. | Apr 2015 | A1 |
20150103837 | Dutta et al. | Apr 2015 | A1 |
20150169376 | Chang et al. | Jun 2015 | A1 |
20150186158 | Yalamanchili et al. | Jul 2015 | A1 |
20150199214 | Lee et al. | Jul 2015 | A1 |
20150261478 | Obayashi | Sep 2015 | A1 |
20150271342 | Gupta et al. | Sep 2015 | A1 |
20150339130 | Kruglick | Nov 2015 | A1 |
20150371355 | Chen | Dec 2015 | A1 |
20150373225 | Tanaka | Dec 2015 | A1 |
20150379099 | Vermeulen et al. | Dec 2015 | A1 |
20150379100 | Vermeulen | Dec 2015 | A1 |
20160087849 | Balasubramanian et al. | Mar 2016 | A1 |
20160147709 | Franke et al. | May 2016 | A1 |
20160154694 | Anderson et al. | Jun 2016 | A1 |
20160202999 | Van Den Heuvel et al. | Jul 2016 | A1 |
20160210167 | Bobo et al. | Jul 2016 | A1 |
20160306667 | Burger et al. | Oct 2016 | A1 |
20160306668 | Heil et al. | Oct 2016 | A1 |
20160306674 | Chiou et al. | Oct 2016 | A1 |
20160306700 | Heil et al. | Oct 2016 | A1 |
20160306701 | Heil et al. | Oct 2016 | A1 |
20160308649 | Burger et al. | Oct 2016 | A1 |
20160308718 | Lanka et al. | Oct 2016 | A1 |
20160308719 | Putnam et al. | Oct 2016 | A1 |
20160328222 | Arumugam et al. | Nov 2016 | A1 |
20160378460 | Chiou et al. | Dec 2016 | A1 |
20160380819 | Burger | Dec 2016 | A1 |
20160380912 | Burger et al. | Dec 2016 | A1 |
20170039089 | Xia et al. | Feb 2017 | A1 |
20170126487 | Xie et al. | May 2017 | A1 |
20170351547 | Burger et al. | Dec 2017 | A1 |
20190007263 | Lahiri et al. | Jan 2019 | A1 |
20190155669 | Chiou et al. | May 2019 | A1 |
20190190847 | Douglas et al. | Jun 2019 | A1 |
Number | Date | Country |
---|---|---|
1890998 | Jan 2007 | CN |
101276298 | Oct 2008 | CN |
101783812 | Jul 2010 | CN |
101794222 | Aug 2010 | CN |
101802789 | Aug 2010 | CN |
102023932 | Apr 2011 | CN |
102024048 | Apr 2011 | CN |
102117197 | Jul 2011 | CN |
102282542 | Dec 2011 | CN |
101545933 | Jan 2012 | CN |
102377778 | Mar 2012 | CN |
102662628 | Sep 2012 | CN |
102724478 | Oct 2012 | CN |
103034295 | Apr 2013 | CN |
103034536 | Apr 2013 | CN |
103220371 | Jul 2013 | CN |
103238305 | Aug 2013 | CN |
103246582 | Aug 2013 | CN |
103270492 | Aug 2013 | CN |
103493009 | Jan 2014 | CN |
103645950 | Mar 2014 | CN |
103677916 | Mar 2014 | CN |
104038570 | Sep 2014 | CN |
104040491 | Sep 2014 | CN |
104239088 | Dec 2014 | CN |
104299466 | Jan 2015 | CN |
104699508 | Jun 2015 | CN |
105824706 | Aug 2016 | CN |
107426138 | Dec 2017 | CN |
2199910 | Jun 2010 | EP |
2650786 | Oct 2013 | EP |
2722767 | Apr 2014 | EP |
2005235074 | Sep 2005 | JP |
2013062566 | Apr 2013 | JP |
2013049079 | Apr 2013 | WO |
2013049079 | May 2013 | WO |
2013158707 | Oct 2013 | WO |
2013177316 | Nov 2013 | WO |
2013167326 | Nov 2013 | WO |
2014019428 | Feb 2014 | WO |
2014088967 | Jun 2014 | WO |
2014094821 | Jun 2014 | WO |
2015026373 | Feb 2015 | WO |
2015042684 | Apr 2015 | WO |
Entry |
---|
Knodel etal; Integration of a Highly Scalable, Multi-FPGA-Based Hardware Accelerator in Common Cluster Infrastructures, IEEE, 2013 (Year: 2013). |
Saldana et al, TMD-MPI: An MPI Implementation for Multiple Processors Across Multiple FPGAs, IEEE 2006 (Year: 2006). |
Kwok et al, On the design of a self-reconfigurable SoPC cryptographic engine; IEEE 2004 (Year: 2004). |
Fox et al; Reliably Prototyping Large SoCs Using FPGA Clusters; IEEE 2014 (Year: 2014). |
Cilardo et al; Automated synthesis of FPGA-based heterogeneous interconnect topologies; IEEE 2013 (Year: 2013). |
“Notice of Allowance Issued in U.S. Appl. No. 14/752,785”, dated Sep. 21, 2018, 10 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 14/752,785”, dated Oct. 18, 2018, 9 Pages. |
“Final Office Action Issued in U.S. Appl. No. 14/752,807”, dated Oct. 18, 2018, 7 Pages. |
Bolchini, Cristiana, et al., “TMR and Partial Dynamic Reconfiguration to mitigate SEU faults in FPGAs”, In proceedings of the 22nd IEEE International Symposium on in Defect and Fault-Tolerance in VLSI Systems, Sep. 26, 2007, pp. 87-95. |
Danek, et al., “Increasing the Level of Abstraction in FPGA-Based Designs”, In Proceedings of International Conference on Field Programmable Logic and Applications, Sep. 23, 2008, pp. 5-10. |
Emmert, et al., “Dynamic Fault Tolerance in FPGAs via Partial Reconfiguration”, In Proceedings of IEEE Symposium on Field-Programmable Custom Computing Machines, Apr. 17, 2000, pp. 165-174. |
Heiner, Jonathan, et al., “FPGA Partial Reconfiguration via Configuration Scrubbing”, In Proceedings of the International Conference on in Field Programmable Logic and Applications, Aug. 31, 2009, pp. 99-104. |
Horta, Edson L.., et al., “Dynamic Hardware Plugins in an FPGA with Partial Run-time Reconfiguration”, In Proceedings of the 39th annual Design Automation Conference, Jun. 2002, pp. 343-348. |
Li, et al. “Configuration Prefetching Techniques for Partial Reconfigurable Coprocessor”, In Proceedings of the ACM/SIGDA tenth international symposium on Field-programmable gate arrays, Feb. 24, 2002, pp. 187-195. |
Lie, et al., “Dynamic partial reconfiguration in FPGAs”, In Proceedings of Third International Symposium on Intelligent Information Technology Application, Nov. 21, 2009, pp. 445-448. |
Lysaght, Patrick, et al., “Invited Paper: Enhanced Architectures, Design Methodologies and Cad Tools for Dynamic Reconfiguration of Xilinx FPGAs”, In International Conference on in Field Programmable Logic and Applications, Aug. 28, 2006, pp. 1-6. |
Rani, Sheeba J.., et al., “FPGA Based Partial Reconfigurable Fir Filter Design”, In Proceedings of the IEEE International Conference on in Advance Computing, Feb. 21, 2014, pp. 789-792. |
Steiger, et al., “Operating Systems for Reconfigurable Embedded Platforms”, In Journal of IEEE Transactions on Computers, vol. 53, Issue 11, Nov. 2004, pp. 1393-1407. |
International Search Report and Written Opinion dated Jun. 20, 2016 from PCT Patent Application No. PCT/US2016/026291, 11 pages. |
“Secure Computing Architecture”, retrieved at <<http://www.syprisresearch.com/home/secure-computing-architecture>> on Feb. 23, 2015, 4 pages. |
Abel et al., “Increasing Design Changeability using Dynamical Partial Reconfiguration”, Proceedings of the 16th IEEE NPSS Real Time Conference, May 10, 2009, 7 pages. |
Bharathi et al., “A Reconfigurable Framework for Cloud Computing Architecture”, Journal of Artificial Intelligence, vol. 6, Issue 1, Jan. 14, 2013, 4 pages. |
Conger et al., “FPGA Design Framework for Dynamic Partial Reconfiguration”, Proceedings of the 15th Reconfigurable Architecture Workshop, Apr. 14, 2008, 8 pages. |
Corbetta et al., “Two Novel Approaches to Online Partial Bitstream Relocation in a Dynamically Reconfigurable System”, Proceedings of IEEE Computer Society Annual Symposium on VLSI, Mar. 9, 2007, 2 pages. |
Eguro et al., “FPGAs for Trusted Cloud Computing”, Proceedings of the International Conference on Field-Programmable Logic and Applications, Aug. 2012, 8 pages. |
Emmert et al., “Online Fault Tolerance for FPGA Logic Blocks”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 15, Issue 2, Feb. 2007, pp. 216-226, 11 pages. |
Hammad et al, “Highly Expandable Reconfigurable Platform using Multi-FPGA based Boards”, International Journal of Computer Applications, vol. 51, No. 12, Aug. 2012, pp. 15-20, 6 pages. |
Harikrishna et al., “A Novel online Fault Reconfiguration of FPGA”, Proceedings of the Indian Journal of Applied Research, vol. 3, Issue 8, Aug. 2013, pp. 195-198, 4 pages. |
Jamuna et al., “Fault Tolerant Tecniques for Reconfigurable Devices: a brief Survey,” International Journal of Application or Innovation in Engineering & Management, vol. 2, Issue 1, Jan. 2013, 6 pages. |
Kearney et al., “Using Simulated Partial Dynamic Run-Time Reconfiguration to Share Embedded FPGA Compute and Power Resources across a Swarm of Unpiloted Airborne Vehicles”, Proceedings of EURASIP Journal of Embedded Systems, vol. 2007, Feb. 21, 2007, 12 pages. |
Kohn, Christian, “Partial Reconfiguration of a Hardware Accelerator on Zynq-7000 All Programmable SoC Devices”, Application Note: Zynq-7000 All Prgrammable SoC, vol. XAPP1159, No. UG1159, Jan. 21, 2013, 19 pages. |
Krieg et al., “Run-Time FPGA Health Monitoring using Power Emulation Techniques,” Proceedings of the IEEE 54th International Midwest Symposium on Circuits and Systems, Aug. 7, 2011, 4 pages. |
Machidon et al., “Cloud Perspective on Reconfigurable Hardware”, Review of the Air Force Academy, 2013, 6 pages. |
Madhavapeddy et al., “Reconfigurable Data Processing for Clouds,” Proceedings IEEE International Symposium on Field-Programmable Custom Computing Machines, May 1, 2011, 5 pages. |
Mamiit, Aaron, “Intel develops hybrid Xeon-FPGA chip for cloud services”, Jun. 20, 2014, retrieved at <<http://www.techtimes.com/articles/8794/20140620/intel-develops-hybrid-xeon-fpga-chip-for-cloud-services.html>>, 4 pages. |
Mcloughlin et al., “Achieving Low-cost High-reliability Computation Through Redundant Parallel Processing,” Proceedings of International Conference on Computing & Informatics, IEEE, Jun. 6, 2006, 6 pages. |
Mershad et al., “A Framework for Multi-cloud Cooperation with Hardware Reconfiguration Support”, Proceedings of IEEE Ninth World Congress on Services, Jun. 28, 2013, pp. 52-59, 8 pages. |
Mesquita et al., “Remote and Partial Reconfiguration of FPGAs: tools and trends”, Proceedings of the International Parallel and Distributed Processing Symposium, Apr. 22, 2003, 8 pages. |
Mysore et al., “PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric,” SIGCOMM '09, Aug. 17-21, 2009, 12 pages. |
Paulsson et al., “Exploitation of Run-Time Partial Reconfiguration for Dynamic Power Management in Xilinx Spartan III-based Systems”, Proceedings of the 3rd International Workshop on Reconfigurable Communication-centric Systems-on-Chip, Jun. 2007, 6 pages. |
Raaijmakers et al., “Run-Time Partial Reconfiguration for Removal, Placement and Routing on the Virtex-II Pro”, Proceedings of the International Conference on Field Programmable Logic and Applications, Aug. 27, 2007, 5 pages. |
Rana et al., “Partial Dynamic Reconfiguration in a Multi-FPGA Clustered Architecture Based on Linux”, Proceedings of the IEEE International Parallel and Distributed Processing Symposium, Mar. 26, 2007, 8 pages. |
Rath, John, “Microsoft Working on Re-configurable Processors to Accelerate Bing Search”, Jun. 27, 2014, retrieved at <<http://www.datacenterknowledge.com/archives/2014/06/27/programmable-fpga-chips-coming-to-microsoft-data-centers/>>, 3 pages. |
Rehman et al., “Test and Diagnosis of FPGA Cluster Using Partial Reconfiguration”, Proceedings of the 10th Conference on Ph.D. Research in Microelectronics and Electronics, Jun. 30, 2014, 4 pages. |
Saldana et al., “TMD-MPi: An MPI Implementation for Multiple Processors Across Multiple FPGAs”, Proceedings of the International Conference on Field Programmable Logic and Applications, Aug. 28, 2006, 6 pages. |
Singh, Satnam, “Computing without Processors,” Proceedings of ACM Computer Architecture, vol. 9, Issue 6, Jun. 27, 2011, 15 pages. |
Straka et al., “Modern Fault Tolerant Architectures Based on Partial Dynamic Reconfiguration in FPGAs”, IEEE 13th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, Apr. 16, 2010, pp. 173-176, 4 pages. |
Wilson, Richard, “Big FPGA design moves to the cloud”, Jun. 11, 2013, retrieved at <<http://www.electronicsweekly.com/news/components/programmable-logic-and-asic/big-fpga-design-moves-to-the-cloud-2013-06/>>, 6 pages. |
Wittig et al., “OneChip: An FPGA Processor With Reconfigurable Logic”, Department of Computer and Electrical Engineering, University of Toronto, IEEE, Apr. 17-19, 1996, 10 pages. |
Non-Final Office Action dated Jan. 27, 2017 from U.S. Appl. No. 14/717,721, 86 pages. |
Notice of Allowance dated Jan. 30, 2017 from U.S. Appl. No. 14/752,782, 13 pages. |
Amendment “A” and Response filed Jan. 6, 2017 to the Non-Final Office Action dated Aug. 11, 2016 from U.S. Appl. No. 14/752,785, 13 pages. |
Final Office Action dated Feb. 9, 2017 to U.S. Appl. No. 14/717,752, 26 pages. |
Non-Final Office Action dated Jan. 25, 2017 from U.S. Appl. No. 14/717,788, 81 pages. |
Non-Final Office Action dated Feb. 2, 2017 from U.S. Appl. No. 14/752,778, 23 pages. |
Non-Final Office Action dated Feb. 10, 2017 from U.S. Appl. No. 14/752,802, 28 pages. |
International Preliminary Report on Patentability dated May 24, 2017 from PCT Patent Application No. PCT/US2016/026286, 11 pages. |
Final Office Action and Examiner-Initiated Interview Summary dated May 16, 2017 from U.S. Appl. No. 14/752,785, 22 pages. |
Notice of Allowability and Applicant-Initiated Interview Summary dated May 12, 2017 from U.S. Appl. No. 14/717,752, 18 pages. |
Non-Final Office Action dated May 9, 2017 from U.S. Appl. No. 14/752,793, 21 pages. |
Amendment “A” and Response filed May 19, 2017 to the Non-Final Office Action dated Feb. 10, 2017 from U.S. Appl. No. 14/752,802, 12 pages. |
Inta et al., “The ‘Chimera’: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform”, International Journal of Reconfigurable Computing, vol. 2012, Article ID 241439, 2012, 10 pages. |
Supplemental Amendment/Response filed May 1, 2017 to the Advisory Action dated Apr. 5, 2017 from U.S. Appl. No. 14/717,752, 12 pages. |
Final Office Action dated May 2, 2017 from U.S. Appl. No. 14/717,788, 36 pages. |
Response filed Mar. 13, 2017 to the Non-Final Office Action dated Jan. 27, 2017 from U.S. Appl. No. 14/717,721, 15 pages. |
Final Office Action dated Apr. 5, 2017 from U.S. Appl. No. 14/717,721, 33 pages. |
After Final Consideration Pilot Program Request filed Mar. 13, 2017 with Response to the Final Office Action dated Feb. 9, 2017 from U.S. Appl. No. 14/717,752, 17 pages. |
Advisory Action and After Final Consideration Pilot Program Decision dated Apr. 5, 2017 from U.S. Appl. No. 14/717,752, 4 pages. |
Response filed Mar. 13, 2017 to the Non-Final Office Action dated Jan. 25, 2017 from U.S. Appl. No. 14/717,788, 13 pages. |
Amendment and Response filed Dec. 22, 2016 to the Notice of Allowance dated Oct. 27, 2016 from U.S. Appl. No. 14/752,782, 20 pages. |
Response filed Dec. 12, 2016 to the Non-Final Office Action dated Nov. 7, 2016 to U.S. Appl. No. 14/717,752, 17 pages. |
U.S. Appl. No. 62/149,311 titled “Reassinging Service Functionality Between Acceleration Components” filed Apr. 17, 2015 by Inventors Heil et al., 62 pages. |
Notice of Allowance dated Oct. 27, 2016 from U.S. Appl. No. 14/752,782, 20 pages. |
U.S. Appl. No. 62/149,308 titled “Reconfiguring Acceleration Components of a Composed Service” filed Apr. 17, 2015 by Inventors Lanka et al., 57 pages. |
U.S. Appl. No. 62/149,305 titled “Restoring Service Functionality at Acceleration Components” filed Apr. 17, 2015 by Inventors Heil et al., 66 pages. |
U.S. Appl. No. 62/149,303 titled “Changing Between Difference Programmed Functionalities at an Acceleration Component” filed Apr. 17, 2015 by Inventors Putnam et al., 63 pages. |
Notice of Allowance dated Aug. 30, 2017 from U.S. Appl. No. 14/717,752, 17 pages. |
Non-Final Office Action dated Aug. 22, 2017 from U.S. Appl. No. 14/717,788, 40 pages. |
Non-Final Office Action dated Aug. 11, 2017 from U.S. Appl. No. 14/752,793, 23 pages. |
Notice of Allowance and Examiner-Initiated Interview Summary dated Aug. 25, 2017 from U.S. Appl. No. 14/752,778, 15 pages. |
Non-Final Office Action dated Jan. 11, 2017 from U.S. Appl. No. 14/717,680, 68 pages. |
Response filed Mar. 13, 2017 to the Non-Final Office Action dated Jan. 11, 2017 from U.S. Appl. No. 14/717,680, 13 pages. |
Notice of Allowance and Examiner-Initiated Interview Summary dated May 18, 2017 from U.S. Appl. No. 14/717,680, 45 pages. |
Supplemental Notice of Allowability dated Jun. 22, 2017 from U.S. Appl. No. 14/717,680, 6 pages. |
Supplemental Notice of Allowability dated Aug. 21, 2017 from U.S. Appl. No. 14/717,680, 10 pages. |
Markettos et al., “Interconnect for commodity FPGA clusters: standardized or customized?,” in Proceedings of the 24th International Conference on Field Programmable Logic and Applications, Sep. 2014, 8 pages.
Vaz et al., “Deferring Accelerator Offloading Decisions to Application Runtime,” in Proceedings of the International Conference on ReConFigurable Computing and FPGAs, Dec. 2014, 8 pages.
Jun et al., “Scalable Multi-Access Flash Store for Big Data Analytics,” in Proceedings of the 22nd ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb. 26, 2014, 10 pages.
Moorhead, Patrick, “Moving Beyond CPUs in the Cloud: Will FPGAs Sink or Swim?,” available at <<http://www.moorinsightsstrategy.com/wp-content/uploads/2014/12/Moving-Beyond-CPUs-in-the-Cloud-Will-FPGAs-Sink-or-Swim-by-Moor-Insights-and-Strategy.pdf>>, Moor Insights and Strategies, published on Dec. 2, 2014, 5 pages.
Morris, Kevin, “FPGAs Cool Off the Datacenter,” available at <<http://www.eejournal.com/archives/articles/20141118-datacenter/>>, in Electronic Engineering Journal, published on Nov. 18, 2014, 5 pages.
Wilson, Ron, “Heterogeneous Computing Meets the Data Center,” available at <<http://www.altera.com/technology/system-design/articles/2014/heterogeneous-computing.html>>, Altera Corporation, San Jose, CA, published on Aug. 4, 2014, 3 pages.
Macvittie, Lori, “Hardware Acceleration Critical Component for Cost-Conscious Data Centers,” available at <<https://devcentral.f5.com/articles/hardware-acceleration-critical-component-for-cost-conscious-data-centers>>, F5 DevCentral, published on Mar. 24, 2009, 10 pages.
Schadt et al., “Computational Solutions to Large-Scale Data Management and Analysis,” in Journal of Nature Reviews Genetics, vol. 11, Sep. 2010, 11 pages.
Pereira, Karl Savio Pimenta, “Characterization of FPGA-based High Performance Computers,” Masters Thesis, Virginia Polytechnic Institute and State University, Aug. 9, 2011, 134 pages.
Chalamalasetti et al., “Evaluating FPGA-Acceleration for Real-time Unstructured Search,” in Proceedings of the IEEE International Symposium on Performance Analysis of Systems & Software, Apr. 2012, 10 pages.
“Altera and IBM Unveil FPGA-Accelerated Power Systems,” available at <<http://www.hpcwire.com/off-the-wire/altera-ibm-unveil-fpga-accelerated-power-systems/>>, HPC Wire, published on Nov. 17, 2014, 5 pages.
“Altera and Baidu Collaborate on FPGA-Based Acceleration for Cloud Datacenters,” available at <<http://www.hpcwire.com/off-the-wire/altera-baidu-collaborate-fpga-based-acceleration-cloud-datacenters-2/>>, HPC Wire, published on Sep. 24, 2014, 5 pages.
Kachris et al., “A Reconfigurable MapReduce Accelerator for Multi-Core All-Programmable SoCs,” in Proceedings of the International Symposium on System-on-Chip, Oct. 28, 2014, 6 pages.
Adler et al., “Leap Scratchpads: Automatic Memory and Cache Management for Reconfigurable Logic,” in Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Feb. 2011, 4 pages.
“Nios II Processor Reference Handbook,” available at <<http://www.altera.com/literature/hb/nios2/n2cpu_nii5v1.pdf>>, Altera Corporation, San Jose, CA, Feb. 2014, 288 pages.
“Stratix V Device Handbook,” available at <<http://www.altera.com/literature/hb/stratix-v/stx5_core.pdf and http://www.altera.com/literature/hb/stratix-v/stx5_xcvr.pdf>>, vols. 1 and 2, Altera Corporation, San Jose, CA, Sep. 30, 2014, 563 pages.
Baxter et al., “Maxwell—a 64 FPGA Supercomputer,” in Proceedings of the Second NASA/ESA Conference on Adaptive Hardware and Systems, Aug. 2007, 8 pages.
“BEE4 Hardware Platform,” available at <<http://beecube.com/downloads/BEE42pages.pdf>>, BEEcube Inc., Fremont, CA, retrieved on Feb. 26, 2015, 2 pages.
Blott et al., “Dataflow Architectures for 10Gbps Line-Rate Key-Value Stores,” Proceedings of the Symposium on High Performance Chips, Aug. 25, 2013, 25 pages.
Hung et al., “CoRAM: An In-Fabric Memory Architecture for FPGA-based Computing,” Proceedings of the 19th ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Feb. 2011, 10 pages.
“The Convey HC-2 Computer: Architectural Overview,” available at <<http://www.conveycomputer.com/index.php/download_file/view/143/142/>>, Convey White Paper, Convey Computer Corporation, Richardson, Texas, 2012, 10 pages.
“Cray XD1 Datasheet,” available at <<http://www.carc.unm.edu/~tlthomas/buildout/Cray_XD1_Datasheet.pdf>>, Cray Inc., Seattle, WA, accessed on Mar. 4, 2015, 6 pages.
Estlick et al., “Algorithmic Transformations in the Implementation of K-Means Clustering on Reconfigurable Hardware,” in Proceedings of the ACM/SIGDA Ninth International Symposium on Field Programmable Gate Arrays, Feb. 2001, 8 pages.
George et al., “Novo-G: At the Forefront of Scalable Reconfigurable Supercomputing,” in Journal of Computing in Science & Engineering, vol. 13, Issue 1, Jan. 2011, 5 pages.
Hussain et al., “Highly Parameterized K-means Clustering on FPGAs: Comparative Results with GPPs and GPUs,” in Proceedings of the International Conference on Reconfigurable Computing and FPGAs, Nov. 2011, 6 pages.
“IBM PureData System for Analytics N2001,” available at <<http://public.dhe.ibm.com/common/ssi/ecm/wa/en/wad12353usen/WAD12353USEN.PDF>>, PureSystems, IBM Corporation, Armonk, NY, retrieved on Feb. 26, 2015, 8 pages.
“An Introduction to the Intel QuickPath Interconnect,” available at <<http://www.intel.in/content/dam/doc/white-paper/quick-path-interconnect-introduction-paper.pdf>>, White Paper, Intel Corporation, Santa Clara, CA, Jan. 2009, 22 pages.
Kirchgessner et al., “VirtualRC: A Virtual FPGA Platform for Applications and Tools Portability,” in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Feb. 2012, 4 pages.
Lavasani et al., “An FPGA-based In-line Accelerator for Memcached,” in IEEE Computer Architecture Letters, vol. 13, No. 2, Jul. 15, 2013, 4 pages.
Ling et al., “High-performance, Energy-efficient Platforms using In-socket FPGA Accelerators,” in Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Feb. 2009, 4 pages.
“How Microsoft Designs its Cloud-Scale Servers,” retrieved at <<http://download.microsoft.com/download/5/7/6/576F498A-2031-4F35-A156-BF8DB1ED3452/How_MS_designs_its_cloud_scale_servers_strategy_paper.pdf>>, Microsoft Corporation, Redmond, WA, retrieved on Feb. 26, 2015, 6 pages.
Pell et al., “Surviving the end of frequency scaling with reconfigurable dataflow computing,” in ACM SIGARCH Computer Architecture News, vol. 39, Issue 4, Sep. 2011, 6 pages.
Showerman et al., “QA: A Heterogeneous Multi-Accelerator Cluster,” in Proceedings of the 10th LCI International Conference on High-Performance Clustered Computing, Mar. 2009, 8 pages.
Slogsnat et al., “An Open-Source HyperTransport Core,” in Journal of ACM Transactions on Reconfigurable Technology and Systems, vol. 1, Issue 3, Sep. 2008, 21 pages.
So et al., “A Unified Hardware/Software Runtime Environment for FPGA-Based Reconfigurable Computers using BORPH,” in Journal of ACM Transactions on Embedded Computing Systems, vol. 7, Issue 2, Feb. 2008, 28 pages.
“SRC MAPstation Systems,” available at <<http://www.srccomp.com/sites/default/files/pdf/SRC7_MAPstation_70000-AG.pdf>>, SRC Computers, Colorado Springs, CO, retrieved on Feb. 26, 2015, 2 pages.
Vanderbauwhede et al., “FPGA-accelerated Information Retrieval: High-Efficiency Document Filtering,” in Proceedings of the International Conference on Field Programmable Logic and Applications, Aug. 2009, 6 pages.
“MicroBlaze Processor Reference Guide, Embedded Development Kit,” available at <<http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_2/mb_ref_guide.pdf>>, Version EDK 14.2, Xilinx, Inc., San Jose, CA, 2012, 256 pages.
Yan et al., “Efficient Query Processing for Web Search Engine with FPGAs,” in Proceedings of the IEEE 20th International Symposium on Field-Programmable Custom Computing Machines, Apr. 2012, 4 pages.
Putnam et al., “A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services,” in Proceedings of the ACM/IEEE 41st International Symposium on Computer Architecture, Jun. 14, 2014, 12 pages.
Chiou et al., “FPGA-Based Application Acceleration: Case Study with GZIP Compression/Decompression Streaming Engine,” ICCAD Special Session 7C, abstract only, Nov. 2013, 1 page.
Stuecheli, Jeff, “Next Generation POWER Microprocessor,” in Hot Chips: A Symposium on High Performance Chips, Aug. 2013, 20 pages.
“Accelium™ 3700 Coprocessor,” available at <<http://drccomputer.com/downloads/DRC%20Accelium%203700%20Datasheet%20-%20Oct%202013.pdf>>, DRC Computer Corporation, Santa Clara, CA, retrieved on Mar. 4, 2015, 1 page.
Response filed Jul. 26, 2017 to the Final Office Action dated Apr. 5, 2017 from U.S. Appl. No. 14/717,721, 16 pages.
Supplemental Notice of Allowability dated May 31, 2017 from U.S. Appl. No. 14/717,752, 13 pages.
Supplemental Notice of Allowability dated Jun. 15, 2017 from U.S. Appl. No. 14/717,752, 6 pages.
Response filed Jul. 31, 2017 to the Final Office Action dated May 2, 2017 from U.S. Appl. No. 14/717,788, 14 pages.
Amendment “A” and Response filed Jul. 20, 2017 to the Non-Final Office Action dated May 9, 2017 from U.S. Appl. No. 14/752,800, 15 pages.
Amendment “A” and Response filed Jun. 20, 2017 to the Non-Final Office Action dated Feb. 2, 2017 from U.S. Appl. No. 14/752,778, 13 pages.
Final Office Action dated Jul. 7, 2017 from U.S. Appl. No. 14/752,802, 18 pages.
Final Office Action dated Feb. 9, 2017 from U.S. Appl. No. 14/717,752, 26 pages.
Final Office Action dated May 16, 2017 from U.S. Appl. No. 14/752,785, 22 pages.
International Preliminary Report on Patentability dated May 24, 2017 from PCT Patent Application No. PCT/US2016/026286, 11 pages.
Non-Final Office Action dated Nov. 7, 2016 from U.S. Appl. No. 14/717,752, 51 pages.
Non-Final Office Action dated Feb. 2, 2017 from U.S. Appl. No. 14/752,778, 23 pages.
Non-Final Office Action dated Feb. 10, 2017 from U.S. Appl. No. 14/752,802, 28 pages.
Non-Final Office Action dated Jan. 24, 2018 from U.S. Appl. No. 14/717,721, 64 pages.
Final Office Action dated Apr. 5, 2017 from U.S. Appl. No. 14/717,721, 33 pages.
Non-Final Office Action dated Jan. 27, 2017 from U.S. Appl. No. 14/717,721, 86 pages.
Final Office Action dated May 2, 2017 from U.S. Appl. No. 14/717,788, 36 pages.
Non-Final Office Action dated Jan. 31, 2018 from U.S. Appl. No. 14/717,788, 18 pages.
Non-Final Office Action dated Jan. 25, 2017 from U.S. Appl. No. 14/717,788, 20 pages.
Final Office Action dated Jun. 29, 2018 from U.S. Appl. No. 14/717,721, 51 pages.
Non-Final Office Action dated Aug. 11, 2017 from U.S. Appl. No. 14/752,793, 23 pages.
Non-Final Office Action dated May 3, 2018 from U.S. Appl. No. 14/752,800, 13 pages.
Non-Final Office Action dated May 9, 2017 from U.S. Appl. No. 14/752,800, 12 pages.
Non-Final Office Action dated Jun. 21, 2018 from U.S. Appl. No. 14/752,807, 13 pages.
Final Office Action dated Jul. 12, 2018 from U.S. Appl. No. 14/752,785, 19 pages.
Office Action dated Aug. 9, 2018 from European Patent Application No. 16719599.9, 5 pages.
Office Action dated Aug. 9, 2018 from European Patent Application No. 16719604.7, 7 pages.
Office Action dated Aug. 9, 2018 from European Patent Application No. 16719605.4, 5 pages.
Bolchini et al., “A Reliable Reconfiguration Controller for Fault-Tolerant Embedded Systems on Multi-FPGA Platforms,” in Proceedings of the IEEE 25th International Symposium on Defect and Fault Tolerance in VLSI Systems, Oct. 6, 2010, 9 pages.
International Search Report and Written Opinion dated Sep. 28, 2016 from PCT Patent Application No. PCT/US2016/038841, 18 pages.
International Preliminary Report on Patentability dated Mar. 13, 2017 from PCT Patent Application No. PCT/US2016/026290, 8 pages.
Stott, “Degradation in FPGAs: Measurement and Modelling,” in Proceedings of the 18th Annual ACM/SIGDA International Symposium on Field Programmable Gate Arrays, Feb. 21, 2010, 10 pages.
Niu et al., “Reconfiguring Distributed Applications in FPGA Accelerated Cluster with Wireless Networking,” IEEE 21st International Conference on Field Programmable Logic and Applications, 2011, pp. 545-550, 6 pages.
International Preliminary Report on Patentability dated Oct. 26, 2017 from PCT Patent Application No. PCT/US2016/026285, 12 pages.
International Preliminary Report on Patentability dated Oct. 26, 2017 from PCT Patent Application No. PCT/US2016/026287, 12 pages.
Final Office Action dated Nov. 6, 2017 from U.S. Appl. No. 14/717,788, 24 pages.
Final Office Action dated Nov. 8, 2017 from U.S. Appl. No. 14/752,800, 14 pages.
Non-Final Office Action dated Sep. 22, 2017 from U.S. Appl. No. 14/752,807, 24 pages.
International Search Report and Written Opinion dated Jun. 20, 2016 from PCT Patent Application No. PCT/US2016/026284, 13 pages.
Demand and Response filed Aug. 3, 2016 from PCT Patent Application No. PCT/US2016/026284, 18 pages.
Non-Final Office Action dated Aug. 11, 2016 from U.S. Appl. No. 14/752,785, 29 pages.
Demand and Response filed Aug. 17, 2016 from PCT Patent Application No. PCT/US2016/026286, 19 pages.
International Search Report and Written Opinion dated Jun. 20, 2016 from PCT Patent Application No. PCT/US2016/026290, 12 pages.
Demand and Response filed Aug. 1, 2016 from PCT Patent Application No. PCT/US2016/026290, 19 pages.
International Search Report and Written Opinion dated Jun. 20, 2016 from PCT Patent Application No. PCT/US2016/026293, 10 pages.
Demand and Response filed Jul. 27, 2016 from PCT Patent Application No. PCT/US2016/026293, 13 pages.
Kachris et al., “A Configurable MapReduce Accelerator for Multi-core FPGAs,” abstract from FPGA'14, Proceedings of the 2014 ACM/SIGDA International Symposium on Field-programmable Gate Arrays, Feb. 26-28, 2014, 1 page.
Caulfield et al., “A Cloud-Scale Acceleration Architecture,” Microarchitecture (MICRO), 49th Annual IEEE/ACM International Symposium, Oct. 15-19, 2016, 13 pages.
Kim et al., “Polymorphic On-Chip Networks,” ISCA'08, 35th International Symposium on Computer Architecture, IEEE, 2008, 12 pages.
Papadimitriou et al., “Performance of Partial Reconfiguration in FPGA Systems: A Survey and a Cost Model,” ACM Transactions on Reconfigurable Technology and Systems (TRETS), vol. 4, No. 4, Article 36, Dec. 2011, 24 pages.
Tan et al., “A Case for FAME: FPGA Architecture Model Execution,” ACM SIGARCH Computer Architecture News, vol. 38, No. 3, Jun. 19-23, 2010, pp. 290-301, 12 pages.
International Search Report and Written Opinion dated Jun. 23, 2016 from PCT Patent Application No. PCT/US2016/026285, 16 pages.
International Search Report and Written Opinion dated Jul. 4, 2016 from PCT Patent Application No. PCT/US2016/026287, 17 pages.
U.S. Appl. No. 62/149,488 titled “Data Processing System having a Hardware Acceleration Plane and a Software Plane,” filed Apr. 17, 2015 by Inventors Douglas C. Burger, Adrian M. Caulfield and Derek T. Chiou, 156 pages.
International Search Report and Written Opinion dated Jul. 4, 2016 from PCT Patent Application No. PCT/US2016/026286, 15 pages.
Oden et al., “GGAS: Global GPU Address Spaces for Efficient Communication in Heterogeneous Clusters,” 2013 IEEE International Conference on Cluster Computing (CLUSTER), IEEE, Sep. 23, 2013, pp. 1-8, 8 pages.
Southard, Dale, “Best Practices for Deploying and Managing GPU Clusters,” Dec. 18, 2012 Internet Webinar retrieved from <<http://on-demand.gputechconf.com/gtc-express/2012/presentations/deploying-managing-gpu-clusters.pdf>> on Jun. 20, 2016, 17 pages.
Non-Final Office Action dated Feb. 23, 2018 from U.S. Appl. No. 14/752,785, 21 pages.
Notice of Allowance dated Feb. 7, 2018 from U.S. Appl. No. 14/752,802, 6 pages.
Non-Final Office Action dated Jan. 24, 2018 from U.S. Appl. No. 14/717,721, 62 pages.
International Search Report and Written Opinion dated Sep. 16, 2016 from PCT Patent Application No. PCT/US2016/038837, 18 pages.
International Search Report and Written Opinion dated Sep. 5, 2016 from PCT Patent Application No. PCT/US2016/038838, 12 pages.
Demand and Response filed Aug. 10, 2016 from PCT Patent Application No. PCT/US2016/026087, 7 pages.
Second Written Opinion dated Oct. 14, 2016 from PCT Patent Application No. PCT/US2016/026286, 9 pages.
Cervero et al., “A resource manager for dynamically reconfigurable FPGA-based embedded systems,” Proceedings of the Euromicro Conference on Digital System Design, Sep. 2013, 8 pages.
Unnikrishnan et al., “Reconfigurable Data Planes for Scalable Network Virtualization,” IEEE Transactions on Computers, vol. 62, No. 1, Jan. 2013, 14 pages.
Romoth et al., “Optimizing Inter-FPGA Communication by Automatic Channel Adaptation,” Proceedings of the International Conference on Reconfigurable Computing and FPGAs, Dec. 2012, 7 pages.
“An Introduction to the NI LabVIEW RIO Architecture,” retrieved at <<http://www.ni.com/white-paper/10894/en/>>, National Instruments Corporation, Austin, TX, published on Jan. 28, 2015, 4 pages.
Eshelman, DJ, “Think You Don't Need GPUs in the Datacenter? Think Again,” available at <<http://www.gtri.com/think-you-dont-need-gpus-in-the-datacenter-think-again/>>, Global Technologies Resources, Inc., Jul. 23, 2014, 9 pages.
Alachiotis et al., “Efficient PC-FPGA Communication Over Gigabit Ethernet,” Proceedings of the 10th IEEE International Conference on Computer and Information Technology, Jun. 2010, 8 pages.
Khalilzad et al., “FPGA implementation of Real-time Ethernet communication using RMII Interface,” Proceedings of the IEEE 3rd International Conference on Communication Software and Networks, May 2011, 7 pages.
Inoue et al., “20Gbps C-Based Complex Event Processing,” Proceedings of the 21st International Conference on Field Programmable Logic and Applications, 2011, 6 pages.
Tan et al., “Datacenter-Scale Network Research on FPGAs,” Proceedings of the Exascale Evaluation and Research Techniques Workshop, 2011, 6 pages.
Sverdlik, Yevgeniy, “Intel to Offer Hyper-Scale Operators Ability to Reconfigure CPUs on a Dime,” retrieved at <<http://www.datacenterknowledge.com/archives/2014/06/19/intel-offer-hyper-scale-operators-ability-reconfigure-cpusdime/>>, Data Center Knowledge, Jun. 19, 2014, 3 pages.
“Altera Programmable Logic is Critical DNA in Software Defined Data Centers,” retrieved at <<http://newsroom.altera.com/press-releases/altera-microsoft-datacenter.htm>>, Altera Corporation, San Jose, CA, Jun. 16, 2014, 2 pages.
“Cisco UCS C240-M3 Rack Server with NVIDIA GRID GPU cards on Citrix XenServer 6.2 and XenDesktop 7.5,” available at <<http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/whitepaper_C11-732283.pdf>>, White Paper, Cisco Systems, Inc., San Jose, CA, Jul. 2014, 38 pages.
Gazzano et al., “Integrating Reconfigurable Hardware-Based Grid for High Performance Computing,” Scientific World Journal, vol. 2015, accessed on Apr. 8, 2015, 15 pages.
Yin et al., “Customizing Virtual Networks with Partial FPGA Reconfiguration,” Proceedings of the Second ACM SIGCOMM Workshop on Virtualized Infrastructure Systems and Architectures, Sep. 2010, 8 pages.
Chen et al., “Enabling FPGAs in the Cloud,” Proceedings of the 11th ACM Conference on Computing Frontiers, May 2014, 10 pages.
Fahmy et al., “A Case for FPGA Accelerators in the Cloud,” Poster in Proceedings of the ACM Symposium on Cloud Computing, Nov. 2014, 1 page.
Chiou et al., “Handling Tenant Requests in a System that Uses Acceleration Components,” U.S. Appl. No. 14/717,752, filed May 20, 2015, 120 pages.
Summons to Attend Oral Proceedings dated Jun. 3 from European Patent Application No. 16719599.9, 6 pages.
Summons to Attend Oral Proceedings dated May 31 from European Patent Application No. 16719605.4, 6 pages.
Extended Search Report dated Jun. 12, 2019 from European Patent Application No. 11834944.8, 13 pages.
Notice of Allowance dated Dec. 12, 2019 from U.S. Appl. No. 16/283,878, 8 pages.
First Office Action dated Mar. 3, 2020 from Chinese Patent Application No. 201680022186.8, 6 pages (without English translation).
First Office Action dated Mar. 5, 2020 from Chinese Patent Application No. 201680022187.2, 5 pages (without English translation).
First Office Action dated Feb. 25, 2020 from Chinese Patent Application No. 201680022447.6, 7 pages (without English translation).
First Office Action dated Mar. 26, 2020 from Chinese Patent Application No. 201680022845.8, 10 pages (without English translation).
Non-Final Office Action dated May 12, 2020 from U.S. Appl. No. 16/100,110, 24 pages.
Notice of Allowance dated Apr. 24, 2020 from U.S. Appl. No. 16/258,181, 9 pages.
Tumeo et al., “A Reconfigurable Multiprocessor Architecture for a Reliable Face Recognition Implementation,” in Proceedings of the Design, Automation and Test in Europe Conference and Exhibition, Mar. 8, 2010, pp. 319-322.
First Office Action and Search Report dated Apr. 2020 from Chinese Patent Application No. 201680021987.2, 28 pages.
First Office Action and Search Report dated Apr. 2020 from Chinese Patent Application No. 201680022171.1, 8 pages.
Non-Final Office Action dated Jan. 15, 2020 from U.S. Appl. No. 16/258,181, 13 pages.
Second Office Action dated Aug. 28, 2020 from Chinese Patent Application No. 201680022447.6, 10 pages.
Non-Final Office Action dated Jun. 17, 2020 from U.S. Appl. No. 16/128,224, 21 pages.
Charitopoulos et al., “Hardware Task Scheduling for Partially Reconfigurable FPGAs,” in Proceedings of the International Symposium on Applied Reconfigurable Computing, Mar. 31, 2015, pp. 487-498.
First Office Action and Search Report dated May 28, 2020 from Chinese Patent Application No. 201680037389.4, 11 pages.
Notice of Allowance dated Jun. 23, 2020 from Chinese Patent Application No. 201680022845.8, 5 pages.
First Office Action and Search Report dated Jun. 2020 from Chinese Patent Application No. 201680037768.3, 8 pages.
Final Office Action dated Dec. 8, 2020 from U.S. Appl. No. 16/128,224, 33 pages.
Final Office Action dated Nov. 4, 2020 from U.S. Appl. No. 16/100,110, 30 pages.
Office Action dated Nov. 24, 2020 from European Patent Application No. 16716441.7, 5 pages.
Office Action dated Oct. 30, 2020 from European Patent Application No. 16719601.3, 8 pages.
Office Action dated Oct. 16, 2020 from European Patent Application No. 16719600.5, 9 pages.
First Office Action and Search Report dated Jan. 6, 2021 from Chinese Patent Application No. 201680035401.8, 18 pages (without English translation).
Notice of Allowance dated Jan. 20, 2021 from Chinese Patent Application No. 201680022447.6, 5 pages.
Number | Date | Country
---|---|---
20170351547 A1 | Dec. 2017 | US

Number | Date | Country
---|---|---
62149488 | Apr. 2015 | US

 | Number | Date | Country
---|---|---|---
Parent | 14717680 | May 2015 | US
Child | 15669652 | | US