Multi-chip module (MCM) with multi-port unified memory

Information

  • Patent Grant
  • Patent Number
    11,893,242
  • Date Filed
    Friday, November 25, 2022
  • Date Issued
    Tuesday, February 6, 2024
Abstract
Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a multi-chip module (MCM) is disclosed. The MCM includes a common substrate and a first integrated circuit (IC) chip disposed on the common substrate. The first IC chip includes a first memory interface. A second IC chip is disposed on the common substrate and includes a second memory interface. A first memory device is disposed on the common substrate and includes memory and a first port coupled to the memory. The first port is configured for communicating with the first memory interface of the first IC chip. A second port is coupled to the memory and communicates with the second memory interface of the second IC chip. In-memory processing circuitry is coupled to the memory and controls transactions between the first memory device and the first and second IC chips.
Description
TECHNICAL FIELD

The disclosure herein relates to semiconductor devices, packaging and associated methods.


BACKGROUND

As integrated circuit (IC) chips such as system on chips (SoCs) become larger, the yields realized in manufacturing the chips become smaller. Decreasing yields for larger chips increases overall costs for chip manufacturers. To address the yield problem, chiplet architectures have been proposed that favor a modular approach to SoCs. The solution employs smaller sub-processing chips, each containing a well-defined subset of functionality. Chiplets thus allow for dividing a complex design, such as a high-end processor or networking chip, into several small die instead of one large monolithic die.


When accessing memory, traditional chiplet architectures often provide for a given chip accessing data from a dedicated memory space, processing the data, then returning the data to that memory space or sending the processed data to a different memory space for access by a second chip. In some situations, this may result in considerable latency before the data is fully processed by the multiple chips.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:



FIG. 1 illustrates a high-level embodiment of a multi-chip module (MCM), including a memory device that is interconnected to two IC chips.



FIG. 2 illustrates a cross-sectional view of one embodiment of the MCM of FIG. 1.



FIG. 3 illustrates a block diagram of one embodiment of a logic die incorporated into a memory device of the MCM of FIG. 2.



FIG. 4 illustrates one embodiment of the network on chip (NoC) circuitry of FIG. 3.



FIG. 5 illustrates one embodiment of an interconnection topology for an MCM architecture that is similar to the MCM of FIG. 1.



FIG. 6 illustrates a further interconnection topology similar to that of FIG. 5.



FIG. 7 illustrates another interconnection topology similar to that of FIG. 5.



FIG. 8 illustrates a further interconnection topology similar to that of FIG. 5.



FIG. 9 illustrates another interconnection topology similar to that of FIG. 8.





DETAILED DESCRIPTION

Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a multi-chip module (MCM) is disclosed. The MCM includes a common substrate and a first integrated circuit (IC) chip disposed on the common substrate. The first IC chip includes a first memory interface. A second IC chip is disposed on the common substrate and includes a second memory interface. A first memory device is disposed on the common substrate and includes memory and a first port coupled to the memory. The first port is configured for communicating with the first memory interface of the first IC chip. A second port is coupled to the memory and communicates with the second memory interface of the second IC chip. In-memory processing circuitry is coupled to the memory and controls transactions between the first memory device and the first and second IC chips. By including the in-memory processing circuitry on the memory device, controlled accesses to the memory for operations associated with the first IC chip and the second IC chip may be carried out with lower latency and lower cost. For some embodiments, the in-memory processing circuitry takes the form of a co-processor or accelerator that is capable of carrying out a processing function that is off-loaded from the first IC chip or second IC chip on data retrieved from the memory. In other embodiments, the in-memory processing circuitry may include network-on-chip (NoC) circuitry to control the transactions between the memory and the first IC chip and the second IC chip.


Throughout the disclosure provided herein, the term multi-chip module (MCM) is used to represent a semiconductor device that incorporates multiple semiconductor die or sub-packages in a single unitary package. An MCM may also be referred to as a system-in-package (SiP). With reference to FIG. 1, a multi-chip module (MCM) is shown, generally designated 100. For one embodiment, the MCM includes a substrate 102 that serves as a common substrate for a first integrated circuit (IC) chip 104, a second IC chip 106 and a memory device 108. For some embodiments, the various chips are interconnected in a manner that allows for use of a relatively inexpensive non-silicon or organic substrate as the common substrate. The use of a non-silicon common substrate 102 avoids size and signaling constraints typically associated with silicon-based substrates. This allows the substrate 102 to be larger, incorporate a more relaxed bump pitch for external interface contacts, and provide low-loss traces.


With continued reference to FIG. 1, the first IC chip 104 is mounted to the common substrate 102 and may take the form of a central processing unit (CPU), graphics processing unit (GPU), artificial intelligence (AI) processing circuitry or the like. For one embodiment, the first IC chip 104 includes first interface circuitry 105 for communicating with the memory device 108. For one embodiment, the first interface circuitry 105 supports transactions with the first memory device 108 via a high-speed link 118. Various embodiments for compatible interface schemes are disclosed in U.S. patent application Ser. No. 17/973,905, titled “Method and Apparatus to Reduce Complexity and Cost For Multi-Chip Modules (MCMs)”, filed Oct. 26, 2022, incorporated by reference in its entirety, and assigned to the assignee of the instant application. The second IC chip 106 may be formed similarly to the first IC chip 104, including second interface circuitry 107 for communicating with the memory device 108. Like the first IC chip 104, the second IC chip 106 may take the form of a central processing unit (CPU), graphics processing unit (GPU), artificial intelligence (AI) processing circuitry or the like.


With continued reference to FIG. 1, one embodiment of the memory device 108 includes a first port 112 for interfacing with the first IC chip 104 via the first high-speed link 118, and a second port 114 for interfacing with the second IC chip 106 via a second link 120. Memory 110 is coupled to the first port 112 and the second port 114 and is configured with a unified memory space that, for one embodiment, is fully accessible to each of the first and second ports 112 and 114. While only two ports are shown for clarity, for some embodiments, three or more ports may be employed, corresponding to the edges of a standard IC chip and the available edge space for the interface circuitry.


Further referring to FIG. 1, in-memory processing circuitry 116 provides processing resources in the memory device 108 that support a variety of functions. For some embodiments, described more fully below, the in-memory processing circuitry 116 may take the form of a co-processor or accelerator that carries out functions offloaded from the first IC chip 104 or the second IC chip 106. In other embodiments, the in-memory processing circuitry 116 may instead (or additionally) include a router functionality in the form of network-on-chip (NoC) circuitry for controlling access between the memory device 108 and the first and second IC chips 104 and 106, and, in some embodiments, controlling forwarding and receiving operations involving other IC chips (not shown) that may be disposed on the MCM 100. Further detail regarding embodiments of the NoC circuitry is provided below.



FIG. 2 illustrates a cross-sectional view of one embodiment of the MCM 100 of FIG. 1 that employs one specific embodiment of the memory device 108. As shown, for one embodiment, the memory device 108 may be configured as a 3-dimensional (3D) packaging architecture with one or more memory die 202 stacked and assembled as a sub-package 203 that is vertically stacked with a logic base die 204. For some embodiments, the logic base die 204 is configured as an interface die for the stack of memory die 203 and may be compatible with various dynamic random access memory (DRAM) standards, such as high-bandwidth memory (HBM), or non-volatile memory standards such as Flash memory. The stack of memory die 203 and the logic base die 204 may be packaged together as a sub-package to define the memory device 108, with the logic base die 204 further formed with an external interface in the form of an array of contact bumps, at 206. Various alternative 3D embodiments for the memory device are disclosed in the above-referenced U.S. patent application Ser. No. 17/973,905. Additionally, while shown as a 3D stacked architecture, the memory device 108 may alternatively take the form of a 2.5D architecture, where the various die are positioned in a horizontal relationship. Such architectures are also described in U.S. patent application Ser. No. 17/973,905.


Referring now to FIG. 3, for one embodiment, the logic base die 204 incorporated in the memory device 108 is manufactured in accordance with a logic process that incorporates node feature sizes similar to those of the first IC chip and the second IC chip, but with a much smaller overall size and footprint. As a result, operations carried out by the logic base die 204 may be more power efficient than those carried out by the larger IC chips 104 and 106. In some embodiments, the logic base die 204 includes memory interface circuitry 302 that defines the first and second ports 112 and 114 (FIG. 1), allowing the first and second IC chips 104 and 106 to access the entirety of the memory space of the memory 110. For one embodiment, the first and second ports 112 and 114 take the form of spatial signaling path resources that access the memory via multiplexer or switch circuitry, such that either IC chip has access to any portion of the memory during a given time interval. In this manner, where both of the first and second IC chips share the entirety of the memory 110, the memory device 108 becomes unified, thereby avoiding many of the latency problems associated with separately disposed memory spaces dedicated to separate IC chips.
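
The shared-access behavior described above can be pictured with a small behavioral model. The sketch below is illustrative only; the names (UnifiedMemoryDevice, read, write, the port count) are hypothetical assumptions, not terms or interfaces from the patent.

```python
# Minimal behavioral sketch of a multi-port unified memory device.
# All names are hypothetical illustrations, not a real product API.

class UnifiedMemoryDevice:
    """One memory space shared by every port; a simple switch grants
    each transaction access to any address, one transaction per access."""

    def __init__(self, size_words: int, num_ports: int = 2):
        self.mem = [0] * size_words                    # the unified memory space
        self.num_ports = num_ports

    def write(self, port: int, addr: int, value: int) -> None:
        # Any port may write any address -- no per-chip partitioning.
        assert 0 <= port < self.num_ports
        self.mem[addr] = value

    def read(self, port: int, addr: int) -> int:
        # Likewise, any port may read any address.
        assert 0 <= port < self.num_ports
        return self.mem[addr]


# Usage: a "CPU" on port 0 writes data that a "GPU" on port 1 reads back
# in place, with no copy between separate memory spaces.
dev = UnifiedMemoryDevice(size_words=1024, num_ports=2)
dev.write(port=0, addr=64, value=42)     # first IC chip produces a value
assert dev.read(port=1, addr=64) == 42   # second IC chip consumes it directly
```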


Further referring to FIG. 3, for one embodiment, the logic base die 204 realizes at least a portion of the in-memory processing circuitry 116 as co-processing circuitry 304. The co-processing circuitry 304 provides co-processor or accelerator resources in the memory device 108 to allow for off-loading of one or more CPU/GPU/AI processing tasks involving data retrieved from the memory 110 without the need to transfer the data to either of the first or second IC chips 104 or 106. For example, in some embodiments, the co-processing circuitry 304 may be optimized to perform straightforward multiply-accumulate operations on data retrieved from the memory 110, thus avoiding the need for the larger and more power-hungry IC chips 104 or 106 to perform the same operations. The co-processing circuitry 304 may be accessed by providing application programming interfaces (APIs) in software frameworks (such as, for example, PyTorch, Spark, or TensorFlow) in a manner that avoids re-writing application software. By carrying out offloaded processing tasks in this manner, data transfer latencies may be reduced, while power efficiency associated with the processing tasks may be increased.
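
To make the off-loading idea concrete, the following sketch models a multiply-accumulate request being serviced beside the memory rather than on a host chip. The InMemoryCoprocessor class and its mac method are hypothetical; the patent only notes that such functionality could be exposed through existing framework APIs without rewriting application software.

```python
# Hypothetical sketch of off-loading a multiply-accumulate (MAC) reduction
# to in-memory co-processing circuitry; names are illustrative assumptions.

class InMemoryCoprocessor:
    def __init__(self, memory: list[float]):
        self.memory = memory  # data already resident in the memory device

    def mac(self, addrs_a: range, addrs_b: range) -> float:
        # The reduction happens next to the memory; only the scalar result
        # crosses the link back to the CPU/GPU, not the two input vectors.
        return sum(self.memory[i] * self.memory[j]
                   for i, j in zip(addrs_a, addrs_b))


memory = [float(i) for i in range(32)]           # contents of the unified memory
coproc = InMemoryCoprocessor(memory)
result = coproc.mac(range(0, 8), range(8, 16))   # dot product of two regions
print(result)   # a single scalar is returned instead of 16 operands
```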


For some embodiments, and with continued reference to FIG. 3, the logic base die 204 also provides network-on-chip (NoC) circuitry 306 for the memory device 108. The NoC circuitry 306 generally serves as a form of network router or switch for cooperating with other NoC circuits that may be disposed in various other IC chips or memory devices disposed on the MCM 100. Thus, the NoC circuitry 306 is generally capable of transferring and/or receiving data and/or control signals via a packet-switched protocol to any other nodes within the MCM 100 that also have NoC circuitry.



FIG. 4 illustrates one specific embodiment of the NoC circuitry 306 of FIG. 3. The NoC circuitry 306 includes input buffer circuitry 410 that receives data and/or control signals from a separate NoC circuit associated with another IC chip or node on the MCM 100. Depending on how many separate edge interfaces, or ports, are employed by the memory device 108, the input buffer circuitry 410 may include two (corresponding to, for example, “east” and “west” ports such as those shown in FIG. 1), three, or four queues (“N INPUT”, “S INPUT”, “E INPUT”, or “W INPUT”) to temporarily store signals received from the multiple ports. The memory interface 302 of the memory device 108 may also provide input data/control signals for transfer by the NoC circuitry 306 to another NoC node in the MCM 100.


Further referring to FIG. 4, the input buffer circuitry 410 feeds a crossbar switch 406 that is controlled by a control unit 408 in cooperation with a scheduler or arbiter 404. Output buffer circuitry 412 couples to the crossbar switch 406 to receive data/control signals from the memory device 108 or the data/control signals from the input buffer circuitry 410 for transfer to a selected output port/interface (“N OUTPUT”, “S OUTPUT”, “E OUTPUT”, or “W OUTPUT”). The crossbar switch 406 may also feed any of the signals from the input buffer circuitry 410 to the memory interface 302 of the memory device 108.
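
A cycle-by-cycle software model of the router pictured in FIG. 4 might look like the sketch below: one input queue per port, a round-robin arbiter, and a crossbar step that moves at most one flit to each output per cycle. The port names, the "MEM" local port, and the flit layout are assumptions for illustration, and the routing policy is deliberately simplified.

```python
from collections import deque

# Simplified NoC router sketch: per-port input queues, a round-robin
# arbiter, and a crossbar that forwards one flit per output per cycle.

PORTS = ["N", "S", "E", "W", "MEM"]   # MEM models the local memory interface


class Router:
    def __init__(self):
        self.inputs = {p: deque() for p in PORTS}
        self.outputs = {p: deque() for p in PORTS}
        self._rr = 0    # round-robin pointer for the arbiter

    def inject(self, in_port: str, dest_port: str, payload):
        self.inputs[in_port].append((dest_port, payload))

    def cycle(self):
        """One crossbar pass: each output accepts at most one flit."""
        granted_outputs = set()
        order = PORTS[self._rr:] + PORTS[:self._rr]   # rotate priority
        for p in order:
            q = self.inputs[p]
            if q and q[0][0] not in granted_outputs:
                dest, payload = q.popleft()
                self.outputs[dest].append((p, payload))
                granted_outputs.add(dest)
        self._rr = (self._rr + 1) % len(PORTS)


# Usage: traffic arriving on the west port destined for the local memory.
r = Router()
r.inject("W", "MEM", payload=b"read req 0x40")
r.cycle()
print(r.outputs["MEM"].popleft())   # ('W', b'read req 0x40')
```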



FIG. 5 illustrates a chip topology on an MCM, generally designated 500, that is similar to the architecture of FIG. 1, including a CPU as the first IC chip 104, a GPU as the second IC chip 106, and an HBM/NoC memory device as the first memory device 108. The MCM 500 also includes additional memory devices 504 and 506 that are configured as single-port memory devices and are disposed on the common substrate 102 in a distributed manner.



FIG. 6 illustrates an additional architecture that incorporates the topology of FIG. 5 and also includes further memory devices 602 and 604 coupled to the memory device 504. For one embodiment, the additional memory devices 602 and 604 provide additional memory capacity for the first IC chip 104 without the need for additional corresponding I/O interface circuitry at the edge of the first IC chip 104. The first IC chip 104 thus may access memory device 602 via the first and second ports of memory device 504. The first IC chip 104 accesses memory device 604 similarly, via the first and second ports of memory devices 504 and 602. The connection of the additional memory devices 602 and 604 through memory device 504 to the first IC chip 104 can serve purely to extend the total memory available to the first IC chip 104, and such memory extension does not necessarily require a NoC to connect these devices to other chips in the package. In some embodiments, the interconnected memory devices 504, 602 and 604 may, for example, provide different memory hierarchies for the first IC chip 104. As a result, for the first IC chip 104, the memory device 504 may serve as low-latency memory (such as cache memory) for frequently accessed data, while the second and third memory devices 602 and 604 may serve as backing store media and/or other forms of storage where additional latency may be tolerated. Further, the addition of the memory devices 602 and 604 has little to no electrical impact on the MCM due to the buffering nature of the memory device 504 (the aggregate load of the memory devices 504, 602 and 604 is seen as a single load from the perspective of the first IC chip 104). As a result, system software memory management tasks may be simplified as memory capacity is added to the MCM. Use of the unified memory architecture described above for each memory device contributes to a lower cost of use, since the unified architecture is able to provide a variety of storage functions for a myriad of applications.
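
The chained arrangement of FIG. 6 can be pictured as a simple address-routed hierarchy: requests hit the near device first and fall through to the devices behind it. The sketch below is a hypothetical illustration; the tier capacities, latency values, class names, and the fall-through policy are assumptions, not details from the patent.

```python
# Hypothetical sketch of chained memory devices behind a single CPU link
# (FIG. 6): device 504 sits in front of devices 602 and 604 and forwards
# requests it cannot satisfy. Names and capacities are illustrative.

class ChainedMemory:
    def __init__(self, capacity: int, backing=None, latency: int = 1):
        self.store = {}
        self.capacity = capacity
        self.backing = backing       # next device in the chain, if any
        self.latency = latency       # relative access cost for illustration

    def read(self, addr: int) -> tuple[int, int]:
        """Return (value, accumulated_latency), falling through the chain."""
        if addr in self.store:
            return self.store[addr], self.latency
        if self.backing is None:
            return 0, self.latency              # uninitialized address
        value, lat = self.backing.read(addr)
        return value, self.latency + lat

    def write(self, addr: int, value: int) -> None:
        # Write to the nearest device with room, else pass it down the chain.
        if addr in self.store or len(self.store) < self.capacity:
            self.store[addr] = value
        elif self.backing is not None:
            self.backing.write(addr, value)


# Build the chain: 504 (low latency) -> 602 -> 604 (higher latency).
dev_604 = ChainedMemory(capacity=1 << 20, latency=4)
dev_602 = ChainedMemory(capacity=1 << 18, backing=dev_604, latency=2)
dev_504 = ChainedMemory(capacity=1 << 16, backing=dev_602, latency=1)

dev_504.write(0x100, 7)
print(dev_504.read(0x100))   # (7, 1): satisfied by the near device
```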



FIG. 7 illustrates yet another topology that is similar to the MCM of FIG. 5, but further scales the architecture to include a further disaggregated second level of processing and memory resources that are straightforwardly interconnected. Such a topology enables complex application specific integrated circuit (ASIC) chips to be partitioned into smaller interconnected chiplets, such as at 702 and 704, that together form a virtual ASIC 706. Having the smaller processing chiplets 702 and 704 virtualized in this manner allows for beneficial pairing and size matching of memory device chiplet packages 708 to the smaller processing chiplets. Moreover, for embodiments where each memory device and processor chip includes NoC circuitry, any of the IC chips and memory devices of the MCM of FIG. 7 may communicate with any other of the IC chips and memory devices.



FIG. 8 illustrates one embodiment of an MCM 800 that is similar to the architecture of FIG. 6, with a CPU resource 104 coupled to a pair of inline memory devices 108 and 504 via a single link 802. This allows for memory capacity upgrades without requiring additional physical I/O space (multiple interfaces for coupling to multiple links) along the edge of the CPU 104. By adding an additional single-port memory device 504 and coupling it to the multi-port memory device 108, accesses to the added memory device 504 may be made by the CPU 104 via the in-memory processing circuitry, such as the NoC circuitry, that is disposed in the multi-port memory device 108. A similar configuration is shown at the far right of the MCM 800 with memory devices 110 and 506 that are in communication with a GPU 106 via a second link 804. FIG. 8 also shows a pair of multi-port memory devices 112 and 114 that are interconnected by a simultaneous bidirectional link, at 806. The simultaneous bidirectional link 806 allows for concurrent accesses to a given distal memory device by the CPU 104 (where it accesses memory device 114 via memory device 112) and the GPU 106 (where it accesses memory device 112 via memory device 114). Having the ability to perform concurrent accesses significantly increases the bandwidth of the system. As an example of scaling the architecture of FIG. 8 even larger, FIG. 9 illustrates an MCM 900 that adds a second row of devices, at 902, that interconnect to a first row of devices, at 904, essentially doubling the resources provided in the architecture of FIG. 8. Additional rows of devices may also be employed to scale the capacity even further, if desired.
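
A rough way to see the bandwidth claim for the simultaneous bidirectional link 806 is to count cycles for two opposing transfers on a one-direction-at-a-time link versus a simultaneous bidirectional one. The sketch below is a back-of-the-envelope model; the function names and transfer sizes are assumptions and nothing here describes the actual signaling.

```python
# Back-of-the-envelope model of the simultaneous bidirectional link (806).
# Two transfers cross the link in opposite directions: the CPU pulls data
# from the far memory device while the GPU pulls data the other way.

def cycles_half_duplex(words_a_to_b: int, words_b_to_a: int) -> int:
    # One direction at a time: the transfers are serialized on the link.
    return words_a_to_b + words_b_to_a

def cycles_full_duplex(words_a_to_b: int, words_b_to_a: int) -> int:
    # Simultaneous bidirectional signaling: both directions progress
    # every cycle, so the link is busy for the longer of the two streams.
    return max(words_a_to_b, words_b_to_a)

cpu_read = 4096   # words the CPU reads from the distal memory device
gpu_read = 4096   # words the GPU reads in the opposite direction

print(cycles_half_duplex(cpu_read, gpu_read))   # 8192 cycles
print(cycles_full_duplex(cpu_read, gpu_read))   # 4096 cycles -- roughly 2x bandwidth
```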


When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above-described circuits may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs including, without limitation, net-list generation programs, place and route programs and the like, to generate a representation or image of a physical manifestation of such circuits. Such representation or image may thereafter be used in device fabrication, for example, by enabling generation of one or more masks that are used to form various components of the circuits in a device fabrication process.


In the foregoing description and in the accompanying drawings, specific terminology and drawing symbols have been set forth to provide a thorough understanding of the present invention. In some instances, the terminology and symbols may imply specific details that are not required to practice the invention. For example, any of the specific numbers of bits, signal path widths, signaling or operating frequencies, component circuits or devices and the like may be different from those described above in alternative embodiments. Also, the interconnection between circuit elements or circuit blocks shown or described as multi-conductor signal links may alternatively be single-conductor signal links, and single conductor signal links may alternatively be multi-conductor signal links. Signals and signaling paths shown or described as being single-ended may also be differential, and vice-versa. Similarly, signals described or depicted as having active-high or active-low logic levels may have opposite logic levels in alternative embodiments. Component circuitry within integrated circuit devices may be implemented using metal oxide semiconductor (MOS) technology, bipolar technology or any other technology in which logical and analog circuits may be implemented. With respect to terminology, a signal is said to be “asserted” when the signal is driven to a low or high logic state (or charged to a high logic state or discharged to a low logic state) to indicate a particular condition. Conversely, a signal is said to be “deasserted” to indicate that the signal is driven (or charged or discharged) to a state other than the asserted state (including a high or low logic state, or the floating state that may occur when the signal driving circuit is transitioned to a high impedance condition, such as an open drain or open collector condition). A signal driving circuit is said to “output” a signal to a signal receiving circuit when the signal driving circuit asserts (or deasserts, if explicitly stated or indicated by context) the signal on a signal line coupled between the signal driving and signal receiving circuits. A signal line is said to be “activated” when a signal is asserted on the signal line, and “deactivated” when the signal is deasserted. Additionally, the prefix symbol “/” attached to signal names indicates that the signal is an active low signal (i.e., the asserted state is a logic low state). A line over a signal name (e.g., ‘<signal name>’) is also used to indicate an active low signal. The term “coupled” is used herein to express a direct connection as well as a connection through one or more intervening circuits or structures. Integrated circuit device “programming” may include, for example and without limitation, loading a control value into a register or other storage circuit within the device in response to a host instruction and thus controlling an operational aspect of the device, establishing a device configuration or controlling an operational aspect of the device through a one-time programming operation (e.g., blowing fuses within a configuration circuit during device production), and/or connecting one or more selected pins or other contact structures of the device to reference voltage lines (also referred to as strapping) to establish a particular device configuration or operation aspect of the device. The term “exemplary” is used to express an example, not a preference or requirement.


While the invention has been described with reference to specific embodiments thereof, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, features or aspects of any of the embodiments may be applied, at least where practicable, in combination with any other of the embodiments or in place of counterpart features or aspects thereof. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A multi-chip module (MCM), comprising: a common substrate; a first integrated circuit (IC) chip disposed on the common substrate and including a first memory interface; and a first memory device disposed on the common substrate and including a memory having a memory space; a first port for communicating with the first memory interface of the first IC chip; a second port; and in-memory processing circuitry to control transfers associated with the first and the second port.
  • 2. The MCM of claim 1, wherein: the in-memory processing circuitry is configured to allow at least one of the first port or the second port access to any portion of the memory space.
  • 3. The MCM of claim 1, wherein: the memory includes at least one memory IC die; and the first port, the second port, and the in-memory processing circuitry are formed on a logic IC chip that is coupled to the at least one memory IC die.
  • 4. The MCM of claim 3, wherein: the at least one memory die is disposed in a horizontal relationship with the logic IC chip.
  • 5. The MCM of claim 3, wherein: the at least one memory die is disposed in a vertical relationship with the logic IC chip.
  • 6. The MCM of claim 1, wherein the in-memory processing circuitry includes: coprocessing circuitry to perform a processing operation on data stored in the at least one memory die on behalf of the first IC chip.
  • 7. The MCM of claim 1, wherein the in-memory processing circuitry includes: first network-on-chip (NoC) circuitry to control transfers to and from the first memory device.
  • 8. The MCM of claim 1, further comprising: a second memory device having a third port coupled to the second port of the first memory device, the second memory device including second in-memory processing circuitry in communication with the first in-memory processing circuitry.
  • 9. The MCM of claim 8, wherein: the second in-memory processing circuitry comprises second NoC circuitry.
  • 10. The MCM of claim 8, wherein: the first memory device is configured to store first data consistent with a first memory hierarchy; and wherein the second memory device is configured to store second data consistent with a second memory hierarchy that is different than the first memory hierarchy.
  • 11. The MCM of claim 1, wherein: the first IC chip comprises a first processing unit.
  • 12. The MCM of claim 11, wherein: the first processing unit comprises a first central processing unit (CPU) or a first graphics processing unit (GPU).
  • 13. The MCM of claim 1, wherein the first memory device further comprises: a third port coupled to the memory for communicating with a second IC device.
  • 14. The MCM of claim 13, wherein the second IC device comprises: a third memory device.
  • 15. A multi-chip module (MCM), comprising: a common substrate; a first integrated circuit (IC) chip disposed on the common substrate and including a first memory interface; and a first memory device disposed on the common substrate and including at least one dynamic random access memory (DRAM) memory die having a memory space; a logic die coupled to the at least one memory die, the logic die including a first port for communicating with the first memory interface of the first IC chip; a second port; and in-memory processing circuitry to control transactions associated with the first port and the second port.
  • 16. The MCM of claim 15, wherein the in-memory processing circuitry includes: coprocessing circuitry to perform a processing operation on data stored in the at least one memory die on behalf of the first IC chip.
  • 17. The MCM of claim 15, wherein the in-memory processing circuitry includes: first network-on-chip (NoC) circuitry to control transfers to and from the first memory device.
  • 18. The MCM of claim 17, further comprising: a second memory device coupled to the first memory device, the second memory device including second NoC circuitry in communication with the first NoC circuitry.
  • 19. The MCM of claim 18, wherein: the first memory device is configured to store first data consistent with a first memory hierarchy; and wherein the second memory device is configured to store second data consistent with a second memory hierarchy that is different than the first memory hierarchy.
  • 20. A method of operation in a multi-chip module (MCM), the MCM including a common substrate, a first integrated circuit (IC) chip disposed on the common substrate, and a first memory device disposed on the common substrate and having memory with a memory space, a first port, a second port, and in-memory processing circuitry, the method comprising: controlling transactions associated with the first port and the second port with the in-memory processing circuitry.
  • 21. The method of claim 20, wherein the controlling of the transactions associated with the first port and the second port with the in-memory processing circuitry comprises: performing a processing operation on data stored in the memory on behalf of the first IC chip.
  • 22. The method of claim 20, wherein the controlling of the transactions between the first port and the second port with the in-memory processing circuitry comprises: controlling transfers to and from the first memory device via a packet-based networking protocol.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a Non-Provisional that claims priority to U.S. Provisional Application No. 63/283,265, filed Nov. 25, 2021, entitled ENABLING ADVANCE SYSTEM-IN-PACKAGE ARCHITECTURES AT LOW-COST USING HIGH-BANDWIDTH ULTRA-SHORT-REACH (USR) CONNECTIVITY IN MCM PACKAGES, which is incorporated herein by reference in its entirety.

Provisional Applications (1)
Number Date Country
63283265 Nov 2021 US