Extended write combining using a write continuation hint flag

Information

  • Patent Grant
  • Patent Number
    8,458,282
  • Date Filed
    Tuesday, June 26, 2007
  • Date Issued
    Tuesday, June 4, 2013
Abstract
A computing apparatus reduces the amount of processing in a network computing system. A network system device of a receiving node receives electronic messages comprising data, the electronic messages being transmitted from a sending node. The network system device determines when more data of a specific electronic message is being transmitted. A memory device stores the electronic message data and communicates with the network system device. A memory subsystem communicates with the memory device and stores a portion of the electronic message when more data of the specific message will be received; a buffer in the memory subsystem combines the portion with the later received data and moves the combined data to the memory device for accessible storage.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present invention is related to the following commonly-owned, co-pending United States Patent Applications filed on even date herewith, the entire contents and disclosure of each of which is expressly incorporated by reference herein as if fully set forth herein. U.S. patent application Ser. No. 11/768,777, for “A SHARED PERFORMANCE MONITOR IN A MULTIPROCESSOR SYSTEM”; U.S. patent application Ser. No. 11/768,645, for “OPTIMIZED COLLECTIVES USING A DMA ON A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,781, for “DMA SHARED BYTE COUNTERS IN A PARALLEL COMPUTER”; U.S. patent application Ser. No. 11/768,784, for “MULTIPLE NODE REMOTE MESSAGING”; U.S. patent application Ser. No. 11/768,697, for “A METHOD AND APPARATUS OF PREFETCHING STREAMS OF VARYING PREFETCH DEPTH”; U.S. patent application Ser. No. 11/768,532, for “PROGRAMMABLE PARTITIONING FOR HIGH-PERFORMANCE COHERENCE DOMAINS IN A MULTIPROCESSOR SYSTEM”; U.S. patent application Ser. No. 11/768,857, for “METHOD AND APPARATUS FOR SINGLE-STEPPING COHERENCE EVENTS IN A MULTIPROCESSOR SYSTEM UNDER SOFTWARE CONTROL”; U.S. patent application Ser. No. 11/768,547, for “INSERTION OF COHERENCE EVENTS INTO A MULTIPROCESSOR COHERENCE PROTOCOL”; U.S. patent application Ser. No. 11/768,791, for “METHOD AND APPARATUS TO DEBUG AN INTEGRATED CIRCUIT CHIP VIA SYNCHRONOUS CLOCK STOP AND SCAN”; U.S. patent application Ser. No. 11/768,795, for “DMA ENGINE FOR REPEATING COMMUNICATION PATTERNS”; U.S. patent application Ser. No. 11/768,799, for “METHOD AND APPARATUS FOR A CHOOSE-TWO MULTI-QUEUE ARBITER”; U.S. patent application Ser. No. 11/768,800, for “METHOD AND APPARATUS FOR EFFICIENTLY TRACKING QUEUE ENTRIES RELATIVE TO A TIMESTAMP”; U.S. patent application Ser. No. 11/768,572, for “BAD DATA PACKET CAPTURE DEVICE”; U.S. patent application Ser. No. 11/768,805, for “A SYSTEM AND METHOD FOR PROGRAMMABLE BANK SELECTION FOR BANKED MEMORY SUBSYSTEMS”; U.S. patent application Ser. No. 11/768,905, for “AN ULTRASCALABLE PETAFLOP PARALLEL SUPERCOMPUTER”; U.S. patent application Ser. No. 11/768,810, for “SDRAM DDR DATA EYE MONITOR METHOD AND APPARATUS”; U.S. patent application Ser. No. 11/768,812, for “A CONFIGURABLE MEMORY SYSTEM AND METHOD FOR PROVIDING ATOMIC COUNTING OPERATIONS IN A MEMORY DEVICE”; U.S. patent application Ser. No. 11/768,559, for “ERROR CORRECTING CODE WITH CHIP KILL CAPABILITY AND POWER SAVING ENHANCEMENT”; U.S. patent application Ser. No. 11/768,552, for “STATIC POWER REDUCTION FOR MIDPOINT-TERMINATED BUSSES”; U.S. patent application Ser. No. 11/768,527, for “COMBINED GROUP ECC PROTECTION AND SUBGROUP PARITY PROTECTION”; U.S. patent application Ser. No. 11/768,669, for “A MECHANISM TO SUPPORT GENERIC COLLECTIVE COMMUNICATION ACROSS A VARIETY OF PROGRAMMING MODELS”; U.S. patent application Ser. No. 11/768,813, for “MESSAGE PASSING WITH A LIMITED NUMBER OF DMA BYTE COUNTERS”; U.S. patent application Ser. No. 11/768,619, for “ASYNCRONOUS BROADCAST FOR ORDERED DELIVERY BETWEEN COMPUTE NODES IN A PARALLEL COMPUTING SYSTEM WHERE PACKET HEADER SPACE IS LIMITED”; U.S. patent application Ser. No. 11/768,682, for “HARDWARE PACKET PACING USING A DMA IN A PARALLEL COMPUTER”; and U.S. patent application Ser. No. 11/768,752, for “POWER THROTTLING OF COLLECTIONS OF COMPUTING ELEMENTS”.


FIELD OF THE INVENTION

The present invention relates generally to data processing systems, and more particularly to write combining and pre-fetching in computer memory systems.


BACKGROUND OF THE INVENTION

Packet-based network devices receive electronic messages or streams as sequences of packets. A packet is a formatted block of data carried by a computer network. Data from packets may be aligned arbitrarily when stored in memory, causing fractions of cache memory lines to be written at packet boundaries. These fractions can cause expensive Read-Modify-Write (RMW) cycles to read the data, modify it, and then write the data back to memory. Further, write combining buffers may store these fractions and combine them with cache line fractions provided by subsequent packets from the same stream or message.
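
For illustration only (the invention is described in terms of hardware), the short C++ sketch below counts how many cache lines of a stored packet are only partially written; each such fraction is a candidate for an expensive Read-Modify-Write cycle unless it can be write-combined with later data. The 128-byte line size and the example addresses are assumptions, not values taken from this patent.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    constexpr uint64_t kLineBytes = 128;   // assumed cache-line size

    // Count the cache lines that a store of 'len' bytes at byte address 'addr'
    // covers only partially; each partial line would normally need a
    // Read-Modify-Write to preserve the bytes that are not overwritten.
    int partial_lines(uint64_t addr, uint64_t len) {
        if (len == 0) return 0;
        uint64_t first = addr / kLineBytes;
        uint64_t last  = (addr + len - 1) / kLineBytes;
        int partial = 0;
        for (uint64_t line = first; line <= last; ++line) {
            uint64_t lo = std::max(addr, line * kLineBytes);
            uint64_t hi = std::min(addr + len, (line + 1) * kLineBytes);
            if (hi - lo < kLineBytes) ++partial;   // line only partially written
        }
        return partial;
    }

    int main() {
        // A 240-byte payload stored at byte offset 48 leaves a leading and a
        // trailing fraction, i.e. two potential RMW cycles without combining.
        std::printf("partial lines: %d\n", partial_lines(48, 240));
        return 0;
    }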


However, packets of a stream or message may be interleaved with packets from other streams or messages, separating accesses that could be write-combined, and thus reducing the probability of write-combining due to premature eviction of fractions from the write-combining buffer. Also, other store traffic, e.g., stores from a local processor, may use the write combining buffers, separating write-combinable accesses even further.


Therefore, a need exists for a method and/or apparatus to reduce interleaving packets of a stream or message and reduce separating write-combinable accesses. Moreover, it would be desirable for a method and/or apparatus to reduce the amount of Read-Modify-Write cycles caused by the alignment of packet boundaries when storing data in memory.


SUMMARY OF THE INVENTION

In an aspect of the present invention, a computing apparatus is provided for reducing the amount of processing in a network computing system. The apparatus includes a network system device of a receiving node for receiving electronic messages including data. The electronic messages are transmitted from a sending node, and the network system device determines when more data of a specific electronic message is being transmitted. A memory device stores the electronic message data and communicates with the network system device. A memory subsystem communicates with the memory device, and the memory subsystem stores a portion of the electronic message when more data of the specific message is being transmitted. A buffer in the memory subsystem combines the portion with the later received data and moves the combined data to the memory device for accessible storage.


In a related aspect, the processor moves the data to the memory device using a Read-Modify-Write cycle.


In a related aspect, the memory subsystem includes a buffer.


In a related aspect, the memory subsystem includes a write combining buffer.


In a related aspect, the network system device includes a computer program for determining when more data of the specific electronic message is being transmitted.


In a related aspect, the network system device includes a hardware device for determining when more data of a specific electronic message is being transmitted.


In a related aspect, the electronic message includes an indicator communicating to the network system device that more data is being transmitted after the network system device receives the specific electronic message.


In a related aspect, the indicator is a write continuation flag indicating a write continuation.


In a related aspect, the flag tags a last portion of the electronic message to indicate to the memory subsystem to store the last portion longer than non-tagged portions.


In a related aspect, the apparatus further includes a pre-fetch device that, upon initiation from the network system device, executes a fetch of metadata for a next electronic message being stored in the memory device.


In a related aspect, the memory device includes cache memory.


In a related aspect, the electronic messages include data packets.


In a related aspect, the network system device of the receiving node communicates with a communication link communicating with the sending node.


In a related aspect, the network system device is a computer having a processor.


In another aspect, a method for producing a computing apparatus for reducing the amount of processing in a network computing system comprises receiving electronic messages including data on a receiving node; transmitting the electronic messages from a sending node; determining when more data of a specific electronic message is being transmitted; storing the electronic message data; storing a portion of the electronic message when more data of the specific message is being transmitted; and combining the portion with later received data and moving the combined data to the memory device for accessible storage.


In a related aspect, the method further includes fetching metadata for a next electronic message being stored in a memory device.





BRIEF DESCRIPTION OF THE DRAWINGS

These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings, in which:



FIG. 1 is a block diagram according to an embodiment of the invention depicting a receiving node including a computing apparatus having a communication link, a processor, a memory device and a memory subsystem; and



FIG. 2 is a block diagram of a representative data packet structure received by the computing apparatus depicted in FIG. 1.





DETAILED DESCRIPTION OF THE INVENTION

An illustrative embodiment of a computing apparatus 20 according to the present invention, shown in FIG. 1, includes a bi-directional communication link 24 connecting a network computing system including sending and receiving nodes. A node is defined herein as a point in a communication topology where data packets being carried through the bi-directional communication link are stored in a memory device for further processing, which may include, for example, reading, modifying, and writing to the memory device. A sending node or other nodes in the network computing system, which are not shown in the figures, are envisioned to be of like composition with the receiving node 10 shown in FIG. 1. A node may include, for example, a processor, a computer system, a server, or a router. A software protocol for the link 24 includes instructions for transmitting and receiving packets. The receiving node 10 includes a network interface/system device embodied as a processor 40 communicating with the link 24, and the network system device or processor 40 includes a hardware device 44 for determining when more data of a specific electronic message is being transmitted.


The processor 40 further communicates with a memory subsystem embodied as a write combining buffer 60. The write combining buffer is adapted to hold packet information including addresses 64, a write continuation flag 70, and data 80. The buffer 60 holds a data packet while waiting for more packet data of the same message to be received. The buffer 60 communicates 62 with a memory device embodied as cache memory 100 for storing the data transmitted. The buffer 60 can execute a Read-Modify-Write command to the cache memory when it cannot combine a packet fraction with further write data.
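
As a rough software model of such a buffer entry (the actual buffer is a hardware structure), the sketch below groups the address 64, the write continuation flag 70, and the data 80 into one record; the 128-byte line size, the per-byte valid mask, and the age field are illustrative assumptions.

    #include <array>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t kLineBytes = 128;   // assumed cache-line size

    struct WcBufferEntry {
        uint64_t                         line_addr;    // target address (64)
        bool                             write_cont;   // write continuation flag (70)
        std::array<uint8_t, kLineBytes>  data;         // partially filled line data (80)
        std::array<bool, kLineBytes>     byte_valid;   // which bytes have been written
        unsigned                         age;          // consulted by the replacement policy
    };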


An example of a data packet format is shown in FIG. 2. A packet 200 includes a packet header 201, a packet data payload 202 and a packet cyclic redundancy check (CRC) verification 203. A sending node implements the CRC verification by computing the packet CRC to verify that the data is valid or good before transmitting the data packet. Each packet has a link level sequence number 204 in the packet header 201. The sequence number 204 is incremented for every subsequent packet transmitted over the link 24.
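
A hedged C++ sketch of this packet layout follows; the field widths, the fixed payload size, and the placement of a write continuation hint in the header are illustrative assumptions rather than the actual wire format.

    #include <cstdint>

    struct PacketHeader {              // 201
        uint32_t sequence_number;      // 204: incremented for every packet on link 24
        uint32_t payload_bytes;        // assumed length field
        uint64_t destination_addr;     // assumed placement address in memory
        uint8_t  write_continuation;   // assumed hint: more data of this message follows
    };

    struct Packet {                    // 200
        PacketHeader header;           // 201
        uint8_t      payload[240];     // 202: assumed fixed-size data payload
        uint32_t     crc;              // 203: CRC computed by the sending node
    };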


In operation, referring to FIG. 1, a data packet 200 of a stream or message is transmitted by a sending network device (not shown) and received by the processor 40. The packet can either contain sender-provided information that more packets of the message or stream will be received, or the receiver can, based on its message completion detection method, determine if further packets are expected. When the data stream or message is not continuous, the information that more data packets of the message are being transmitted or intended to be transmitted is communicated or handed-off along with the packet data from the processor 40 as a write continuation flag 70 (or high bit) to a memory subsystem device embodied as a write combining buffer 60. The buffer 60 stores the packet data 80 into the cache memory 100 except for the last fraction or portion of the packet if more packet data will be received. In this case, the last fraction of the received data packet is stored into the write combining buffer 60 and held active for combination with later received packet data of the same message. The buffer 60 holds the data packet 200 components including the data 80, the address 64, and the write continuation flag 70. The flag 70 sent along with packet data 80 indicates a write continuation and is used to tag the write buffer entry. This causes the replacement policy of the buffer 60 to keep the data active longer than other line fragments, thereby allowing time for the buffer 60 to receive more data packets of the same message. Expensive Read-Modify-Write cycles are only required if the fragment cannot be combined with subsequent packets even with the extended active time in the buffer. This can occur for example if the delivery of the next packet is severely delayed by exceptional events including link errors and exception processing on the sending node.
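
The behavioral sketch below models only the handling of a packet's trailing fraction: the fraction is held in the buffer, the write continuation flag extends its hold time, a later fraction of the same message is combined with it, and a Read-Modify-Write is used only when combining fails. The hold times, the 128-byte line size, and all helper names are assumptions, and the aging and eviction machinery is only hinted at by the hold field.

    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <optional>

    constexpr std::size_t kLineBytes    = 128;   // assumed cache-line size
    constexpr unsigned    kNormalHold   = 4;     // assumed hold time, untagged entries
    constexpr unsigned    kExtendedHold = 64;    // assumed hold time, flag-tagged entries

    struct Fraction {
        uint64_t    addr;              // start address of the held trailing fraction
        uint8_t     bytes[kLineBytes]; // partially filled cache line
        std::size_t len;               // number of valid bytes held
        unsigned    hold;              // remaining hold time; aging loop omitted here
    };

    std::optional<Fraction> pending;   // single-entry buffer for this sketch

    void cache_write_line(uint64_t addr, const uint8_t*) {       // full-line store
        std::printf("write full line at 0x%llx\n", (unsigned long long)addr);
    }

    void cache_read_modify_write(uint64_t addr, const uint8_t*, std::size_t len) {
        std::printf("RMW of %zu bytes at 0x%llx\n", len, (unsigned long long)addr);
    }

    // Called with the last, partial cache line of a received packet; 'more' is the
    // write continuation flag handed off along with the packet data.
    void store_fraction(uint64_t addr, const uint8_t* part, std::size_t len, bool more) {
        assert(len > 0 && len <= kLineBytes);
        if (pending && pending->addr + pending->len == addr &&
            pending->len + len <= kLineBytes) {
            // Later data of the same message lines up with the held fraction:
            // combine them, and write the line out once it is complete.
            std::memcpy(pending->bytes + pending->len, part, len);
            pending->len += len;
            if (pending->len == kLineBytes) {
                cache_write_line(pending->addr, pending->bytes);
                pending.reset();
            }
            return;
        }
        if (pending) {
            // The held fraction cannot be combined: fall back to a Read-Modify-Write.
            cache_read_modify_write(pending->addr, pending->bytes, pending->len);
            pending.reset();
        }
        Fraction f{};
        f.addr = addr;
        std::memcpy(f.bytes, part, len);
        f.len  = len;
        f.hold = more ? kExtendedHold : kNormalHold;   // the flag extends the hold time
        pending = f;
    }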


The write continuation information is also useful when retrieving, upon initiation from the processor 40, metadata from the cache memory device 100 that is needed for the reception of the next packet. Metadata is descriptive information about a data packet, e.g., control information about whether to verify checksums of the packet, whether to discard a packet upon detection of an incorrect checksum, or whether to notify a processor about the arrival of the packet. The memory subsystem buffer 60 uses the write continuation information to direct pre-fetch hardware 110 to fetch the metadata for the next packet from main memory and store it in the cache memory 100. This is beneficial because it reduces the time to retrieve the metadata when the next packet arrives, as the metadata is then readily available in the cache memory 100, shortening overall packet processing time.
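
A minimal sketch of this use of the hint follows; the Metadata fields mirror the examples given above, while the prefetch interface, the metadata address calculation, and the function names are purely illustrative assumptions.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    struct Metadata {                 // descriptive information about a packet
        bool verify_checksum;         // verify the packet's CRC?
        bool discard_on_error;        // discard the packet on an incorrect checksum?
        bool notify_processor;        // notify a processor of the packet's arrival?
    };

    // Assumed interface to the pre-fetch hardware 110: pull a block from main
    // memory into the cache memory 100 ahead of its use.
    void prefetch_into_cache(uint64_t main_memory_addr, std::size_t bytes) {
        std::printf("prefetch %zu metadata bytes from 0x%llx\n",
                    bytes, (unsigned long long)main_memory_addr);
    }

    // Assumed lookup of where the metadata for a given sequence number resides.
    uint64_t metadata_addr_for(uint32_t sequence_number) {
        return 0x10000 + 64ull * sequence_number;
    }

    // Called after a packet has been handed off to the memory subsystem.
    void on_packet_stored(uint32_t sequence_number, bool write_continuation) {
        if (write_continuation) {
            // More data of this message is expected: fetch the next packet's
            // metadata now so it is already in cache when that packet arrives.
            prefetch_into_cache(metadata_addr_for(sequence_number + 1),
                                sizeof(Metadata));
        }
    }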


The illustrative embodiment of the apparatus 10 reduces the number of Read-Modify-Write cycles to a memory device. Numerous Read-Modify-Write (RMW) cycles are caused by the alignment of packet boundaries when storing the packet to the cache memory 100. The RMW cycles are reduced by communicating message continuation information along with the packet data, thus extending the active time of the fragment in the write combining buffer and increasing the probability of write combining. Absent such combining, the processor must initiate and execute a Read-Modify-Write command as new packets of data are received for the same message. The apparatus of the present invention reduces the number of Read-Modify-Write cycles by explicitly signaling or flagging to the write combining buffer 60 that a write continuation is likely to occur in the near future and that the buffer should wait for additional data packets 200 before writing the data associated with the flagged message to the cache memory 100, thereby changing the replacement policy decisions of the write combining buffer 60.


While the present invention has been particularly shown and described with respect to preferred embodiments thereof, it will be understood by those skilled in the art that changes in forms and details may be made without departing from the spirit and scope of the present application. It is therefore intended that the present invention not be limited to the exact forms and details described and illustrated herein, but falls within the scope of the appended claims.

Claims
  • 1. A computing apparatus for reducing the amount of processing in a network computing system, comprising: a network system device of a receiving node for receiving electronic messages including data, the electronic messages being transmitted from a sending node, and the network system device determining when more data of a specific electronic message is being transmitted; a memory device for storing the electronic message data and communicating with the network system device; a memory subsystem communicating with the memory device, and the memory subsystem storing a portion of the electronic message when more data of the specific message is being transmitted, a buffer included in the memory subsystem wherein the buffer includes a first holding time for storing non tagged data, the buffer combining the portion with later received data and moving the combined data to the memory device for accessible storage; a pre-fetch device for pre-fetching identified metadata stored in the memory device from a selected data packet of the electronic message using the network system device, the identified metadata being additionally stored in a cache memory, the identified metadata being stored in the cache memory and being used when the buffer combines the portion with the later received data; and a flag being included in the specific electronic message being transmitted from the sending node, the flag indicating that more data of the specific electronic message is being transmitted, the flag tagging the stored data in the buffer resulting in the tagged stored data being stored for a second holding time, and the second holding time being greater than the first holding time for non tagged stored data; wherein, after the second holding time has elapsed, a processor moves the tagged stored data from the buffer to the memory device using a Read-Modify-Write cycle.
  • 2. The apparatus of claim 1, wherein the network system device includes a computer program for determining when more data is being transmitted of the specific electronic message.
  • 3. The apparatus of claim 1, wherein the network system device includes a hardware device for determining when more data is being transmitted of a specific electronic message.
  • 4. The apparatus of claim 1, wherein the electronic message includes an indicator communicating to the network system device that more data is being transmitted after the network system device receives the specific electronic message.
  • 5. The apparatus of claim 4, wherein the indicator is a write continuation flag indicating a write continuation.
  • 6. The apparatus of claim 5, wherein the flag tags a last portion of the electronic message to indicate to the memory subsystem to store the last portion longer than non-tagged portions.
  • 7. The apparatus of claim 1, wherein the network system device of the receiving node communicates with a communication link communicating with the sending node.
  • 8. The apparatus of claim 1, wherein the network system device is a computer having a processor.
  • 9. A method for producing a computing apparatus for reducing the amount of processing in a network computing system, comprising: receiving electronic messages including data on a receiving node; transmitting the electronic messages from a sending node; determining when more data of a specific electronic message is being transmitted; storing the electronic message data; storing a portion of the electronic message when more data of the specific message is being transmitted using a buffer included in a memory device wherein the buffer includes a first holding time for storing non tagged data; combining the portion with later received data and moving the combined data to the memory device for accessible storage; identifying metadata stored in the memory device from a selected data packet of the electronic message; executing a pre-fetch of the identified metadata using a pre-fetch device and a network system device; storing the identified metadata in a cache memory for a predetermined amount of time for use when combining the portion with the later received data, the pre-fetch being executed before the step of combining the portion with later received data; transmitting a flag included in the specific electronic message transmitted from the sending node, the flag indicating that more data of the specific electronic message is being transmitted; tagging the stored data in the buffer using the flag; and storing the tagged stored data for a second holding time, and the second holding time being greater than the first holding time for the non tagged stored data; wherein, after the second holding time has elapsed, the tagged stored data is moved from the buffer to the memory device using a Read-Modify-Write cycle.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. B554331 awarded by the Department of Energy.

US Referenced Citations (116)
Number Name Date Kind
4777595 Strecker et al. Oct 1988 A
5063562 Barzilai et al. Nov 1991 A
5142422 Zook et al. Aug 1992 A
5349587 Nadeau-Dostie et al. Sep 1994 A
5353412 Douglas et al. Oct 1994 A
5452432 Macachor Sep 1995 A
5524220 Verma et al. Jun 1996 A
5526510 Akkary et al. Jun 1996 A
5634007 Calta et al. May 1997 A
5659710 Sherman et al. Aug 1997 A
5671444 Akkary et al. Sep 1997 A
5680572 Akkary et al. Oct 1997 A
5708779 Graziano et al. Jan 1998 A
5748613 Kilk et al. May 1998 A
5761464 Hopkins Jun 1998 A
5796735 Miller et al. Aug 1998 A
5809278 Watanabe et al. Sep 1998 A
5825748 Barkey et al. Oct 1998 A
5890211 Sokolov et al. Mar 1999 A
5917828 Thompson Jun 1999 A
6023732 Moh et al. Feb 2000 A
6061511 Marantz et al. May 2000 A
6072781 Feeney et al. Jun 2000 A
6122715 Palanca et al. Sep 2000 A
6185214 Schwartz et al. Feb 2001 B1
6205520 Palanca et al. Mar 2001 B1
6219300 Tamaki Apr 2001 B1
6263397 Wu et al. Jul 2001 B1
6295571 Scardamalia et al. Sep 2001 B1
6311249 Min et al. Oct 2001 B1
6324495 Steinman Nov 2001 B1
6356106 Greeff et al. Mar 2002 B1
6366984 Carmean et al. Apr 2002 B1
6442162 O'Neill et al. Aug 2002 B1
6466227 Pfister et al. Oct 2002 B1
6564331 Joshi May 2003 B1
6594234 Chard et al. Jul 2003 B1
6598123 Anderson et al. Jul 2003 B1
6601144 Arimilli et al. Jul 2003 B1
6631447 Morioka et al. Oct 2003 B1
6647428 Bannai et al. Nov 2003 B1
6662305 Salmon et al. Dec 2003 B1
6735174 Hefty et al. May 2004 B1
6775693 Adams Aug 2004 B1
6799232 Wang Sep 2004 B1
6874054 Clayton et al. Mar 2005 B2
6880028 Kurth Apr 2005 B2
6889266 Stadler May 2005 B1
6894978 Hashimoto May 2005 B1
6954887 Wang et al. Oct 2005 B2
6986026 Roth et al. Jan 2006 B2
7007123 Golla et al. Feb 2006 B2
7058826 Fung Jun 2006 B2
7065594 Ripy et al. Jun 2006 B2
7143219 Chaudhari et al. Nov 2006 B1
7191373 Wang et al. Mar 2007 B2
7239565 Liu Jul 2007 B2
7280477 Jeffries et al. Oct 2007 B2
7298746 De La Iglesia et al. Nov 2007 B1
7363629 Springer et al. Apr 2008 B2
7373420 Lyon May 2008 B1
7401245 Fischer et al. Jul 2008 B2
7454640 Wong Nov 2008 B1
7454641 Connor et al. Nov 2008 B2
7461236 Wentzlaff Dec 2008 B1
7463529 Matsubara Dec 2008 B2
7502474 Kaniz et al. Mar 2009 B2
7539845 Wentzlaff et al. May 2009 B1
7613971 Asaka Nov 2009 B2
7620791 Wentzlaff et al. Nov 2009 B1
7698581 Oh Apr 2010 B2
20010055323 Rowett et al. Dec 2001 A1
20020078420 Roth et al. Jun 2002 A1
20020087801 Bogin et al. Jul 2002 A1
20020100020 Hunter et al. Jul 2002 A1
20020129086 Garcia-Luna-Aceves et al. Sep 2002 A1
20020138801 Wang et al. Sep 2002 A1
20020156979 Rodriguez Oct 2002 A1
20020184159 Tadayon et al. Dec 2002 A1
20030007457 Farrell et al. Jan 2003 A1
20030028749 Ishikawa et al. Feb 2003 A1
20030050714 Tymchenko Mar 2003 A1
20030050954 Tayyar et al. Mar 2003 A1
20030074616 Dorsey Apr 2003 A1
20030105799 Khan et al. Jun 2003 A1
20030163649 Kapur et al. Aug 2003 A1
20030177335 Luick Sep 2003 A1
20030188053 Tsai Oct 2003 A1
20030235202 Van Der Zee et al. Dec 2003 A1
20040003174 Yamazaki Jan 2004 A1
20040003184 Safranek et al. Jan 2004 A1
20040019730 Walker et al. Jan 2004 A1
20040024925 Cypher et al. Feb 2004 A1
20040073780 Roth et al. Apr 2004 A1
20040103218 Blumrich et al. May 2004 A1
20040210694 Shenderovich Oct 2004 A1
20040243739 Spencer Dec 2004 A1
20050007986 Malladi et al. Jan 2005 A1
20050053057 Deneroff et al. Mar 2005 A1
20050076163 Malalur Apr 2005 A1
20050160238 Steely et al. Jul 2005 A1
20050216613 Ganapathy et al. Sep 2005 A1
20050251613 Kissell Nov 2005 A1
20050270886 Takashima Dec 2005 A1
20050273564 Lakshmanamurthy et al. Dec 2005 A1
20060050737 Hsu Mar 2006 A1
20060080513 Beukema et al. Apr 2006 A1
20060206635 Alexander et al. Sep 2006 A1
20060248367 Fischer et al. Nov 2006 A1
20070055832 Beat Mar 2007 A1
20070079044 Mandal et al. Apr 2007 A1
20070133536 Kim et al. Jun 2007 A1
20070168803 Wang et al. Jul 2007 A1
20070174529 Rodriguez et al. Jul 2007 A1
20070195774 Sherman et al. Aug 2007 A1
20080147987 Cantin et al. Jun 2008 A1
Non-Patent Literature Citations (14)
Entry
Information Sciences Institute University of Southern California. “RFC 791—Internet Protocol.” Published on Sep. 1981. Retrieved from the Internet on May 31, 2012. <URL: http://tools.ietf.org/html/rfc791>.
Definition of “mechanism”, Oxford English Dictionary, http://dictionary.oed.com/cgi/entry/00304337?query_type=word&queryword=mechanism&first=1&max_to_show=10&sort_type=alpha&result_place=2&search_id=y2atEIGc-11603&hilite=00304337.
Almasi, et al., “MPI on BlueGene/L: Designing an Efficient General Purpose Messaging Solution for a Large Cellular System,” IBM Research Report RC22851 (W037-150) Jul. 22, 2003.
Almasi, et al.,“Optimization of MPI Collective Communication on BlueGene/L Systems,” ICS'05, Jun. 20-22, 2005, Boston, MA.
Gara, et al., “Overview of the Blue Gene/L system architecture,” IBM J. Res. & Dev., vol. 49, No. 2/3, Mar./May 2005, pp. 195-212.
Huang, et al., “Performance Evaluation of Adaptive MPI,” PPoPP'06, Mar. 29-31, 2006, New York, New York.
MPI (Message Passing Interface) standards documents, errata, and archives http://www.mpi-forum.org visited Jun. 16, 2007 (Sections 4.2, 4.4 and 10.4).
David Chaiken, Craig Fields, Kiyoshi Kurihara, Anant Agarwal, Directory-Based Cache Coherence in Large-Scale Multiprocessors, Computer, v.23 n. 6, p. 49-58, Jun. 1990.
Michel Dubois, Christoph Scheurich, Faye A. Briggs, Synchronization, Coherence, and Event Ordering in Multiprocessors, Computer, v.21 n. 2, p. 9-21, Feb. 1988.
Giampapa, et al., “Blue Gene/L advanced diagnostics environment,” IBM J. Res. & Dev., vol. 49, No. 2/3, Mar./May 2005, pp. 319-331.
IBM Journal of Research and Development, Special Double Issue on Blue Gene, vol. 49, Nos. 2/3, Mar./May 2005 (“Preface”).
IBM Journal of Research and Development, Special Double Issue on Blue Gene, vol. 49, Nos. 2/3, Mar./May 2005 (“Intro”).
“Intel 870: A Building Block for Cost-Effective, Scalable Servers”, Faye Briggs, Michel et al., pp. 36-47, Mar.-Apr. 2002.
Pande, et al., Performance Evaluation and Design Trade-Offs for Network-On-Chip Interconnect Architectures, 2005, IEEE, pp. 1025-1040.
Related Publications (1)
Number Date Country
20090006605 A1 Jan 2009 US