Multi-chip module (MCM) with multi-port unified memory

Information

  • Patent Grant
  • Patent Number
    12,190,038
  • Date Filed
    Tuesday, January 23, 2024
  • Date Issued
    Tuesday, January 7, 2025
  • CPC
    • G06F30/392
    • G06F2115/12
  • Field of Search
    • CPC
    • G06F30/392
    • G06F2115/12
  • International Classifications
    • G06F30/392
    • G06F115/12
  • Disclaimer
    This patent is subject to a terminal disclaimer.
  • Term Extension
    0
Abstract
Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a multi-chip module (MCM) is disclosed. The MCM includes a package substrate and an interposer disposed on a portion of the package substrate. A first integrated circuit (IC) chip is disposed on the interposer. A first memory device is disposed on the interposer and includes a first port interface including an interposer-compliant mechanical interface for coupling to the first IC chip via a first set of traces formed in the interposer. A second port interface includes a non-interposer-compliant mechanical interface for coupling to an off-interposer device. Transactions between the first IC chip and the off-interposer device pass through the first port interface and the second port interface of the first memory device.
Description
TECHNICAL FIELD

The disclosure herein relates to semiconductor devices, packaging and associated methods.


BACKGROUND

As integrated circuit (IC) chips such as system on chips (SoCs) become larger, the yields realized in manufacturing the chips become smaller. Decreasing yields for larger chips increases overall costs for chip manufacturers. To address the yield problem, chiplet architectures have been proposed that favor a modular approach to SoCs. The solution employs smaller sub-processing chips, each containing a well-defined subset of functionality. Chiplets thus allow for dividing a complex design, such as a high-end processor or networking chip, into several small die instead of one large monolithic die.


When accessing memory, traditional chiplet architectures often provide for a given chip accessing data from a dedicated memory space, processing the data, then returning the data to that memory space or sending the processed data to a different memory space for access by a second chip. In some situations, this may add considerable latency before the data is fully processed by the multiple chips.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:



FIG. 1 illustrates a high-level embodiment of a multi-chip module (MCM), including a memory device that is interconnected to two IC chips.



FIG. 2 illustrates a cross-sectional view of one embodiment of the MCM of FIG. 1.



FIG. 3 illustrates a block diagram of one embodiment of a logic die incorporated into a memory device of the MCM of FIG. 2.



FIG. 4 illustrates one embodiment of the network on chip (NoC) circuitry of FIG. 3.



FIG. 5 illustrates one embodiment of an interconnection topology for an MCM architecture that is similar to the MCM of FIG. 1.



FIG. 6 illustrates a further interconnection topology similar to that of FIG. 5.



FIG. 7 illustrates another interconnection topology similar to that of FIG. 5.



FIG. 8 illustrates a further interconnection topology similar to that of FIG. 5.



FIG. 9 illustrates another interconnection topology similar to that of FIG. 8.



FIG. 10 illustrates a further embodiment of an MCM similar to FIG. 1.



FIG. 11 illustrates an MCM configuration that is similar to the MCM of FIG. 10, and including alternative topologies for memory capacity expansion.



FIG. 12 illustrates an MCM configuration that is similar to the MCM of FIG. 11, and including alternative topologies for memory capacity expansion.



FIG. 13 illustrates a further MCM configuration that is similar to the MCM of FIG. 12, and including alternative topologies for off-interposer device expansion.





DETAILED DESCRIPTION

Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a multi-chip module (MCM) is disclosed. The MCM includes a common substrate and a first integrated circuit (IC) chip disposed on the common substrate. The first IC chip includes a first memory interface. A second IC chip is disposed on the common substrate and includes a second memory interface. A first memory device is disposed on the common substrate and includes memory and a first port coupled to the memory. The first port is configured for communicating with the first memory interface of the first IC chip. A second port is coupled to the memory and communicates with the second memory interface of the second IC chip. In-memory processing circuitry is coupled to the memory and controls transactions between the first memory device and the first and second IC chips. By including the in-memory processing circuitry on the memory device, controlled accesses to the memory for operations associated with the first IC chip and the second IC chip may be carried out with lower latency and lower cost. For some embodiments, the in-memory processing circuitry takes the form of a co-processor or accelerator that is capable of carrying out a processing function that is off-loaded from the first IC chip or second IC chip on data retrieved from the memory. In other embodiments, the in-memory processing circuitry may include network-on-chip (NoC) circuitry to control the transactions between the memory and the first IC chip and the second IC chip.


Throughout the disclosure provided herein, the term multi-chip module (MCM) is used to represent a semiconductor device that incorporates multiple semiconductor die or sub-packages in a single unitary package. An MCM may also be referred to as a system-in-package (SiP). With reference to FIG. 1, a multi-chip module (MCM) is shown, generally designated 100. For one embodiment, the MCM includes a substrate 102 that serves as a common substrate for a first integrated circuit (IC) chip 104, a second IC chip 106 and a memory device 108. For some embodiments, the various chips are interconnected in a manner that allows for use of a relatively inexpensive non-silicon or organic substrate as the common substrate. The use of a non-silicon common substrate 102 avoids size and signaling constraints typically associated with silicon-based substrates. This allows the substrate 102 to be larger, incorporate a more relaxed bump pitch for external interface contacts, and provide low-loss traces.


With continued reference to FIG. 1, the first IC chip 104 is mounted to the common substrate 102 and may take the form of a central processing unit (CPU), graphics processing unit (GPU), artificial intelligence (AI) processing circuitry or the like. For one embodiment, the first IC chip 104 includes first interface circuitry 105 for communicating with the memory device 108. For one embodiment, the first interface circuitry 105 supports transactions with the first memory device 108 via a high-speed link 118. Various embodiments for compatible interface schemes are disclosed in U.S. patent application Ser. No. 17/973,905, titled “Method and Apparatus to Reduce Complexity and Cost For Multi-Chip Modules (MCMs)”, filed Oct. 26, 2022, incorporated by reference in its entirety, and assigned to the assignee of the instant application. The second IC chip 106 may be formed similarly to the first IC chip 104, including second interface circuitry 107 for communicating with the memory device 108. Like the first IC chip 104, the second IC chip 106 may take the form of a central processing unit (CPU), graphics processing unit (GPU), artificial intelligence (AI) processing circuitry or the like.


With continued reference to FIG. 1, one embodiment of the memory device 108 includes a first port 112 for interfacing with the first IC chip 104 via the first high-speed link 118, and a second port 114 for interfacing with the second IC chip 106 via a second link 120. Memory 110 is coupled to the first port 112 and the second port 114 and is configured with a unified memory space that, for one embodiment, is fully accessible to each of the first and second ports 112 and 114. While only two ports are shown for clarity, for some embodiments, three or more ports may be employed, corresponding to the edges of a standard IC chip and the available edge space for the interface circuitry.
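
By way of a purely illustrative behavioral sketch (not part of the disclosed hardware, and with all names hypothetical), the following Python fragment models the unified memory space of memory 110 as seen through two ports: both ports resolve to the same backing store, so data written through one port is visible through the other without a copy between dedicated memory spaces.

```python
# Minimal behavioral sketch (hypothetical names): a unified memory space
# exposed through two ports. Both ports resolve to the same backing store,
# so a value written via one port is readable via the other without copying.

class UnifiedMemory:
    def __init__(self, size_words: int):
        self._words = [0] * size_words          # single shared memory space

    def port(self, name: str) -> "MemoryPort":
        return MemoryPort(name, self)           # each port sees the full space

class MemoryPort:
    def __init__(self, name: str, memory: UnifiedMemory):
        self.name = name
        self._memory = memory

    def write(self, addr: int, value: int) -> None:
        self._memory._words[addr] = value

    def read(self, addr: int) -> int:
        return self._memory._words[addr]

# Usage: the "CPU" writes through port 112 and the "GPU" reads the same
# location through port 114, mimicking the shared unified memory of FIG. 1.
mem = UnifiedMemory(size_words=1024)
port_112, port_114 = mem.port("first"), mem.port("second")
port_112.write(0x40, 0xBEEF)
assert port_114.read(0x40) == 0xBEEF
```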


Further referring to FIG. 1, in-memory processing circuitry 116 provides processing resources in the memory device 108 to support a variety of functions. For some embodiments, described more fully below, the in-memory processing circuitry 116 may take the form of a co-processor or accelerator that carries out functions offloaded from the first IC chip 104 or the second IC chip 106. In other embodiments, the in-memory processing circuitry 116 may instead (or additionally) include a router functionality in the form of network-on-chip (NoC) circuitry for controlling access between the memory device 108 and the first and second IC chips 104 and 106, and, in some embodiments, controlling forwarding and receiving operations involving other IC chips (not shown) that may be disposed on the MCM 100. Further detail regarding embodiments of the NoC circuitry is provided below.



FIG. 2 illustrates a cross-sectional view of one embodiment of the MCM 100 of FIG. 1 that employs one specific embodiment of the memory device 108. As shown, for one embodiment, the memory device 108 may be configured as a 3-dimensional (3D) packaging architecture with one or more memory die 202 stacked and assembled as a sub-package 203 that is vertically stacked with a logic base die 204. For some embodiments, the logic base die 204 is configured as an interface die for the stack of memory die 203 and may be compatible with various dynamic random access memory (DRAM) standards, such as high-bandwidth memory (HBM), or non-volatile memory standards such as Flash memory. The stack of memory die 203 and the logic base die 204 may be packaged together as a sub-package to define the memory device 108, with the logic base die 204 further formed with an external interface in the form of an array of contact bumps, at 206. Various alternative 3D embodiments for the memory device are disclosed in the above-referenced U.S. patent application Ser. No. 17/973,905. Additionally, while shown as a 3D stacked architecture, the memory device 108 may alternatively take the form of a 2.5D architecture, where the various die are positioned in a horizontal relationship. Such architectures are also described in U.S. patent application Ser. No. 17/973,905.


Referring now to FIG. 3, for one embodiment, the logic base die 204 incorporated in the memory device 108 is manufactured in accordance with a logic process that incorporates node feature sizes similar to those of the first IC chip and the second IC chip, but with a much smaller overall size and footprint. As a result, operations carried out by the logic base die 204 may be more power efficient than those carried out by the larger IC chips 104 and 106. In some embodiments, the logic base die 204 includes memory interface circuitry 302 that defines the first and second ports 112 and 114 (FIG. 1), allowing the first and second IC chips 104 and 106 to access the entirety of the memory space of the memory 110. For one embodiment, the first and second ports 112 and 114 take the form of spatial signaling path resources that access the memory via multiplexer or switch circuitry, such that either IC chip has access to any portion of the memory during a given time interval. In this manner, where both of the first and second IC chips share the entirety of the memory 110, the memory device 108 becomes unified, thereby avoiding many of the latency problems associated with separately disposed memory spaces dedicated to separate IC chips.
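
To further illustrate the multiplexer/switch behavior described above, the sketch below (again purely illustrative, with hypothetical names and a coarse round-robin policy) grants one port access to the shared memory array in each time interval, so that over successive intervals either IC chip can reach any portion of the memory.

```python
# Purely illustrative sketch (hypothetical names): a switch grants one port
# access to the shared memory array per time interval; requests from the other
# port wait for a later interval. Real hardware would use finer-grained bank
# arbitration; this only shows that either chip can reach any address.
from collections import deque

memory = [0] * 256                              # single unified memory space
pending = {"port_112": deque(), "port_114": deque()}

def issue(port, op, addr, value=None):
    pending[port].append((op, addr, value))

def tick(interval):
    # Round-robin grant: even intervals to port 112, odd intervals to port 114.
    port = "port_112" if interval % 2 == 0 else "port_114"
    if pending[port]:
        op, addr, value = pending[port].popleft()
        if op == "write":
            memory[addr] = value
        else:
            print(f"{port} read [{addr:#x}] -> {memory[addr]:#x}")

issue("port_112", "write", 0x10, 0xCAFE)        # CPU-side write
issue("port_114", "read", 0x10)                 # GPU-side read of the same word
for t in range(2):
    tick(t)
```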


Further referring to FIG. 3, for one embodiment, the logic base die 204 realizes at least a portion of the in-memory processing circuitry 116 as co-processing circuitry 304. The co-processing circuitry 304 provides co-processor or accelerator resources in the memory device 108 to allow for off-loading of one or more CPU/GPU/AI processing tasks involving data retrieved from the memory 110 without the need to transfer the data to either of the first or second IC chips 104 or 106. For example, in some embodiments, the co-processing circuitry 304 may be optimized to perform straightforward multiply-accumulate operations on data retrieved from the memory 110, thus avoiding the need for the larger and more power-hungry IC chips 104 or 106 to perform the same operations. The co-processing circuitry 304 may be accessed by providing application programming interfaces (APIs) in software frameworks (such as, for example, PyTorch, Spark, TensorFlow) in a manner that avoids re-writing application software. By carrying out offloaded processing tasks in this manner, data transfer latencies may be reduced, while power efficiency associated with the processing tasks may be increased.
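
As a software-level illustration of such offloading (a minimal sketch with hypothetical function and class names, not an actual framework API), the fragment below dispatches a multiply-accumulate to an in-memory accelerator when the operands already reside in the memory device, and otherwise falls back to the host processor, leaving the application's call unchanged.

```python
# Hedged sketch (hypothetical API names): a thin dispatch layer that offloads
# a multiply-accumulate (dot product) to in-memory co-processing circuitry
# when the data already resides in the memory device, and otherwise falls
# back to the host CPU/GPU. A real integration would hook a framework's
# operator dispatcher; this only illustrates the offload decision.

def host_mac(a, b):
    return sum(x * y for x, y in zip(a, b))        # baseline on the IC chip

class InMemoryAccelerator:
    """Stand-in for co-processing circuitry 304 next to the memory arrays."""
    def mac(self, a, b):
        # Data never crosses the chip-to-chip link; only the scalar result does.
        return sum(x * y for x, y in zip(a, b))

def mac(a, b, accelerator=None, data_resident_in_memory=False):
    if accelerator is not None and data_resident_in_memory:
        return accelerator.mac(a, b)                # offloaded path
    return host_mac(a, b)                           # unchanged application path

# Usage: the application calls mac() exactly as before; only the dispatch
# decision changes, which is the "no application rewrite" property.
acc = InMemoryAccelerator()
print(mac([1, 2, 3], [4, 5, 6], accelerator=acc, data_resident_in_memory=True))  # 32
```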


For some embodiments, and with continued reference to FIG. 3, the logic base die 204 also provides network-on-chip (NoC) circuitry 306 for the memory device 108. The NoC circuitry 306 generally serves as a form of network router or switch for cooperating with other NoC circuits that may be disposed in various other IC chips or memory devices disposed on the MCM 100. Thus, the NoC circuitry 306 is generally capable of transferring data and/or control signals to, and receiving them from, any other nodes within the MCM 100 that also have NoC circuitry, via a packet-switched protocol.



FIG. 4 illustrates one specific embodiment of the NoC circuitry 306 of FIG. 3. The NoC circuitry 306 includes input buffer circuitry 410 that receives data and/or control signals from a separate NoC circuit associated with another IC chip or node on the MCM 100. Depending on how many separate edge interfaces, or ports, are employed by the memory device 108, the input buffer circuitry 410 may include two (corresponding to, for example, “east” and “west” ports such as those shown in FIG. 1), three, or four queues (“N INPUT”, “S INPUT”, “E INPUT” or “W INPUT”) to temporarily store signals received from the multiple ports. The memory interface 302 of the memory device 108 may also provide input data/control signals for transfer by the NoC circuitry 306 to another NoC node in the MCM 100.


Further referring to FIG. 4, the input buffer circuitry 410 feeds a crossbar switch 406 that is controlled by a control unit 408 in cooperation with a scheduler or arbiter 404. Output buffer circuitry 412 couples to the crossbar switch 406 to receive data/control signals from the memory device 108 or the data/control signals from the input buffer circuitry 410 for transfer to a selected output port/interface (“N OUTPUT”, “S OUTPUT”, “E OUTPUT” or “W OUTPUT”). The crossbar switch 406 may also feed any of the signals from the input buffer circuitry 410 to the memory interface 302 of the memory device 108.
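
The following sketch (illustrative only; the port names and flit format are assumptions, not taken from the disclosure) models the router elements of FIG. 4: per-port input queues, a round-robin arbiter standing in for the scheduler/arbiter 404, and a crossbar that forwards one entry per cycle to the output queue named in its header.

```python
# Behavioral sketch of the router pieces in FIG. 4 (not the disclosed
# implementation): per-port input queues, a round-robin arbiter, and a
# crossbar that forwards one flit per cycle from a granted input queue to
# the output port named in the flit's header.
from collections import deque

PORTS = ["N", "S", "E", "W", "MEM"]   # four edge ports plus the memory interface

class NocRouter:
    def __init__(self):
        self.input_queues = {p: deque() for p in PORTS}
        self.output_queues = {p: deque() for p in PORTS}
        self._rr = 0                               # round-robin pointer (arbiter 404)

    def receive(self, in_port, dest_port, payload):
        self.input_queues[in_port].append((dest_port, payload))

    def cycle(self):
        """Arbiter grants one input; crossbar moves its head flit to an output."""
        for i in range(len(PORTS)):
            port = PORTS[(self._rr + i) % len(PORTS)]
            if self.input_queues[port]:
                dest, payload = self.input_queues[port].popleft()
                self.output_queues[dest].append((port, payload))   # crossbar 406
                self._rr = (PORTS.index(port) + 1) % len(PORTS)
                return

# Usage: a request arriving on the "W" port destined for the local memory
# interface, and a response from memory destined for the "E" port.
router = NocRouter()
router.receive("W", "MEM", "read 0x40")
router.receive("MEM", "E", "data 0xBEEF")
router.cycle(); router.cycle()
print(router.output_queues["MEM"], router.output_queues["E"])
```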



FIG. 5 illustrates a chip topology on an MCM, generally designated 500, that is similar to the architecture of FIG. 1, including a CPU as the first IC chip 104, a GPU as the second IC chip 106, and an HBM/NoC memory device as the first memory device 108. The MCM 500 also includes additional memory devices 504 and 506 that are configured as single-port memory devices and are disposed on the common substrate 102 in a distributed manner.



FIG. 6 illustrates an additional architecture that incorporates the topology of FIG. 5, and also includes further memory devices 602 and 604 coupled to the memory device 504. For one embodiment, the additional memory devices 602 and 604 provide additional memory capacity for the first IC chip 104 without the need for additional corresponding I/O interface circuitry at the edge of the first IC chip 104. The first IC chip 104 thus may access memory device 602 via the first and second ports of memory device 504. Accessing memory device 604 by the first IC chip 104 is performed similarly via the first and second ports of memory devices 504 and 602. The connection of additional memory devices 602 and 604 through memory device 504 to the first IC chip 104 can be purely for extending the total memory available to the first IC chip 104, and such memory extension does not necessarily need a NoC to connect them to other chips in the package. In some embodiments, the interconnected memory devices 504, 602 and 604 may, for example, provide different memory hierarchies for the first IC chip 104. As a result, for the first IC chip 104, the memory device 504 may serve as low-latency memory (such as cache memory) for frequently accessed data, while the second and third memory devices 602 and 604 may serve as backing store media and/or other forms of storage where additional latency may be tolerated. Further, the addition of the memory devices 602 and 604 has little to no electrical impact on the MCM due to the buffering nature of the memory device 504 (where the aggregate load of the memory devices 504, 602 and 604 is seen as a single load from the perspective of the first IC chip 104). Consequently, system software memory management tasks may be simplified as memory capacity is added to the MCM. Use of the unified memory architecture described above for each memory device contributes to a lower cost of use since the unified architecture is able to provide a variety of storage functions for a myriad of applications.
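
A minimal sketch of this hierarchy (illustrative only, with hypothetical names and arbitrary latency units) is shown below: reads that hit the front memory device return quickly, while misses are forwarded through its second port to the chained devices, yet the IC chip still sees a single address space and a single electrical load.

```python
# Illustrative sketch (hypothetical, not from the disclosure): the IC chip
# issues reads against one address space; a front memory device (504) answers
# hits with low latency, and misses are forwarded over its second port to the
# chained devices (602, 604) acting as higher-latency backing store.

LATENCY_NEAR, LATENCY_FAR = 1, 5        # arbitrary relative latency units

class ChainedMemory:
    def __init__(self, near_capacity):
        self.near = {}                   # device 504: low-latency tier
        self.far = {}                    # devices 602/604: backing store
        self.near_capacity = near_capacity

    def write(self, addr, value):
        self.far[addr] = value           # backing store always holds the data

    def read(self, addr):
        if addr in self.near:
            return self.near[addr], LATENCY_NEAR
        value = self.far[addr]           # forwarded via the second port
        if len(self.near) < self.near_capacity:
            self.near[addr] = value      # keep hot data in the near device
        return value, LATENCY_FAR

mem = ChainedMemory(near_capacity=2)
mem.write(0x100, 42)
print(mem.read(0x100))                  # (42, 5): first access goes far
print(mem.read(0x100))                  # (42, 1): now served by device 504
```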



FIG. 7 illustrates yet another topology that is similar to the MCM of FIG. 5, but further scales the architecture to include a further disaggregated second level of processing and memory resources that are straightforwardly interconnected. Such a topology enables complex application-specific integrated circuit (ASIC) chips to be partitioned into smaller interconnected chiplets, such as at 702 and 704, that together form a virtual ASIC 706. Having the smaller processing chiplets 702 and 704 virtualized in this manner allows for beneficial pairing and size matching of memory device chiplet packages 708 to the smaller processing chiplets. Moreover, for embodiments where each memory device and processor chip includes NoC circuitry, any of the IC chips and memory devices of the MCM of FIG. 7 may communicate with any other of the IC chips and memory devices.



FIG. 8 illustrates one embodiment of an MCM 800 that is similar to the architecture of FIG. 6, with a CPU resource 104 coupled to a pair of inline memory devices 108 and 504 via a single link 802. This allows for memory capacity upgrades without requiring additional physical I/O space (multiple interfaces for coupling to multiple links) along the edge of the CPU 104. By adding an additional single-port memory device 504 and coupling it to the multi-port memory device 108, accesses to the added memory device 504 may be made by the CPU 104 via the in-memory processing circuitry, such as the NoC circuitry, that is disposed in the multi-port memory device 108. A similar configuration is shown at the far right of the MCM 800 with memory devices 110 and 506 that are in communication with a GPU 106 via a second link 804.



FIG. 8 also shows a pair of multi-port memory devices 112 and 114 that are interconnected by a simultaneous bidirectional link, at 806. The simultaneous bidirectional link 806 allows for concurrent accesses to a given distal memory device by the CPU 104 (where it accesses memory device 114 via memory device 112) and the GPU 106 (where it accesses memory device 112 via memory device 114). Having the ability to perform concurrent accesses significantly increases the bandwidth of the system. As an example of scaling the architecture of FIG. 8 even larger, FIG. 9 illustrates an MCM 900 that adds a second row of devices, at 902, that interconnect to a first row of devices, at 904, essentially doubling the resources provided in the architecture of FIG. 8. Additional rows of devices may also be employed to scale the capacity even further, if desired.
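
A back-of-the-envelope comparison (with an assumed link rate and transfer sizes chosen only for illustration) shows why the simultaneous bidirectional link 806 roughly doubles effective bandwidth relative to a link that must serialize the two directions of traffic.

```python
# Illustrative numbers only (the link rate and transfer sizes are assumptions,
# not from the disclosure): with a simultaneous bidirectional link, the
# CPU-to-memory-114 traffic and the GPU-to-memory-112 traffic cross link 806
# at the same time instead of taking turns, so the aggregate transfer
# completes in roughly half the time of a half-duplex link.

link_bandwidth_gbps = 64          # assumed per-direction link rate
cpu_bytes = 8 * 2**30             # 8 GiB moving CPU -> distal memory device
gpu_bytes = 8 * 2**30             # 8 GiB moving GPU -> distal memory device

def seconds(nbytes, gbps):
    return nbytes * 8 / (gbps * 1e9)

half_duplex = seconds(cpu_bytes + gpu_bytes, link_bandwidth_gbps)   # transfers serialize
full_duplex = max(seconds(cpu_bytes, link_bandwidth_gbps),
                  seconds(gpu_bytes, link_bandwidth_gbps))           # transfers overlap
print(f"half-duplex: {half_duplex:.2f} s, simultaneous bidirectional: {full_duplex:.2f} s")
```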



FIG. 10 illustrates a further embodiment of an MCM, generally designated 1000, that employs a package substrate 1002 that may take the form of any of the package substrates described in the previous embodiments. A silicon interposer 1004 is mounted on or embedded in the package substrate 1002 to provide fine-pitch and high-density routing between devices that are mounted on the interposer 1004. In some situations, size constraints associated with the interposer 1004 may limit the number of additional devices that may be mounted on the interposer 1004.


Further referring to FIG. 10, a first IC chip 1006, such as an application-specific integrated circuit (ASIC) chip or processor chip, is mounted to the interposer 1004 and includes a first ASIC port interface 1008 for communicating with a first memory device 1010 that is also mounted on the interposer 1004. For one embodiment, the first ASIC port interface 1008 may be compatible with advanced standards-based interfaces such as Universal Chiplet Interconnect Express (UCIe), Bunch of Wires (BoW), Universal Memory Interconnect (UMI) or Joint Electron Device Engineering Council (JEDEC), among others, and incorporates an advanced-packaging mechanical interface such as a high-density micro-bump interface for contacting correspondingly formed contacts in the interposer 1004. A first width of 2N channels 1012 couples the first memory device 1010 to the first IC chip 1006 and is configured to support an aggregate bandwidth based on, among other things, the number of memory devices coupled to the first width of 2N channels 1012 and/or the generation of the memory devices. Additional ASIC port interfaces, such as at 1014, may be employed by the ASIC to provide additional memory channels for additional memory resources.


With continued reference to FIG. 10, for one embodiment, the first memory device 1010 takes the form of a multi-port memory device configured similar to embodiments described above. The first memory device 1010 includes a first memory port interface 1016 that is interposer-compliant and generally corresponds to the first ASIC port interface 1008. Memory 1018, such as high-bandwidth memory (HBM) or other form of dynamic random access memory (DRAM) such as DDR(N), GDDR(N), or LPDDR(N), where “N” represents a given generation, may be employed in the first memory device 1010 and couples to the first memory port interface 1016 over at least a first subset 1017 of the first width of 2N channels 1012.


As noted above, the silicon interposer 1004 may be limited in size due to its silicon nature. In an effort to provide additional memory capacity and/or memory bandwidth to the ASIC 1006 over the first set of 2N channels 1012, the first memory device 1010 includes a second port interface 1020 that is not necessarily interposer compliant, but rather compliant with an off-interposer technology, such as a standard bump interface or silicon bridge interface. In this way, off-interposer devices, such as an off-interposer memory device 1022 may be coupled to the first memory device 1010 in a daisy-chained fashion to increase the memory resources available to the ASIC 1006. For some embodiments, the off-interposer device(s) may be mounted on the package substrate 1002 as part of the MCM 1000, while in other embodiments, the off-interposer device(s) may reside remotely off of the MCM 1000.


Further referring to FIG. 10, for one embodiment, the off-interposer memory device 1022 includes a third memory port interface 1024 that generally corresponds to the second memory port interface 1020. Memory 1026 included in the off-interposer memory device 1022 couples to the third memory port interface 1024 over a width of N or 2N channels 1028. In situations where the first and second memory devices are of a legacy generation, such as HBM3, which supports sixteen channels per device, the first width of 2N channels corresponds to thirty-two channels. Half of the channels may be dedicated to the first memory device 1010, while the other half of the channels may be dedicated to the off-interposer memory device 1022. Not only is the memory capacity doubled for the ASIC 1006, but the memory bandwidth is also doubled. In the event that next-generation memory devices are employed, such as HBM4 devices with an expected channel count of thirty-two channels per device, the number of channels between the ASIC and the first memory device 1010 generally corresponds to the number of channels between the first memory device 1010 and the second memory device 1022, such that a doubling of memory capacity occurs while maintaining the available memory bandwidth. For one embodiment, where 2N channels are deployed between the second port interface 1020 and the third port interface 1024, a selector switch 1030 (in phantom) may be employed to allow for a memory stack selection between memory 1018 or memory 1026. In either memory device configuration, the first ASIC port interface 1008 may remain the same as originally designed, enabling use of the same ASIC design through multiple memory device generations, thereby reducing overall costs and increasing the number of applications available for the ASIC design.
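
The channel arithmetic in the preceding paragraph can be summarized with a small calculation. The channel counts (sixteen per HBM3 device, thirty-two per HBM4 device, and a 2N=32-channel ASIC port) follow the text above; the per-channel bandwidth and per-device capacity figures are placeholder assumptions used only to make the scaling visible.

```python
# Worked example of the channel arithmetic above. Channel counts follow the
# text; the per-channel bandwidth and per-device capacity are placeholder
# assumptions chosen only to illustrate the scaling.

GB_PER_S_PER_CHANNEL = 50   # assumed per-channel bandwidth, GB/s (illustrative)
GB_PER_DEVICE = 24          # assumed capacity per memory device, GB (illustrative)

def asic_view(asic_channels, daisy_chained_devices):
    capacity_gb = GB_PER_DEVICE * daisy_chained_devices
    bandwidth_gbs = GB_PER_S_PER_CHANNEL * asic_channels   # bounded by the ASIC port width
    return capacity_gb, bandwidth_gbs

# HBM3 generation (16 channels per device): the 2N = 32 ASIC channels are split
# 16/16 across the on-interposer and off-interposer devices, so capacity and
# bandwidth both double relative to a single 16-channel connection.
print("HBM3 single device :", asic_view(16, 1))
print("HBM3 daisy-chained :", asic_view(32, 2))

# HBM4 generation (32 channels per device): one device already fills all 32
# ASIC channels, so the chained device doubles capacity at constant bandwidth.
print("HBM4 single device :", asic_view(32, 1))
print("HBM4 daisy-chained :", asic_view(32, 2))
```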



FIG. 11 illustrates an MCM topology, generally designated 1100, that is similar to the MCM 1000 of FIG. 10, with a first IC chip 1102, such as an ASIC, mounted on an interposer 1104 which sits upon a package substrate 1106. While two separate rows of differently configured devices are shown at 1108 and 1110 and are coupled to the first IC chip 1102, actual embodiments do not necessarily include both types of device configurations; instead, the different configurations may be viewed as alternatives. For example, in one embodiment, the first IC chip 1102 employs on-chip memory control circuitry 1112 for controlling the first row 1108 of memory devices, while for another embodiment, memory controllers 1114 and 1116 may be included in each of memory devices 1118 and 1120 for the second row 1110 of devices. In yet another embodiment, the memory controller circuitry to control both memory devices 1118 and 1120 may be placed entirely in the memory device 1118, allowing the interface circuitry 1128 of the second memory device 1120 to be less complex.


Further referring to FIG. 11, for one embodiment, each port interface for all of the devices associated with the second row of devices 1110 is associated with network-on-chip (NoC) circuitry such that all of the devices cooperate to form a communications fabric. Thus, for one example, a first ASIC port interface associated with the second row of devices 1120, such as at 1122, may incorporate not only an interposer-compliant mechanical interface, but also first NoC circuitry. Similar port interfaces for memory devices 1118 and 1120 are shown at 1124, 1126 and 1128. In other embodiments, and consistent with the NoC circuitry 306 (FIG. 3), the NoC circuitry may be a block of logic separate from either port interface, yet shared by the first and second port interfaces to control transactions between the port interfaces.



FIG. 12 illustrates an MCM, generally designated 1200, that is configured similar to the MCM 1100 of FIG. 11, and includes silicon bridges 1202 and 1204 to connect adjacent memory devices together in a daisy-chained manner, such as at 1206 and 1208, with the memory devices being disposed on respective interposer and package substrates 1210 and 1212. In other embodiments, additional silicon bridges 1214 and 1216 (both in phantom) may be embedded in the interposer 1210 or provide a substitute for the interposer altogether. For embodiments where the silicon bridge substitutes for the interposer 1210, a first interface 1218 of the memory device 1206 may be compliant for a silicon bridge connection, while a second interface 1220 may be compliant for a standard package interface.


As noted above, for some embodiments, off-interposer devices (such as memory devices or non-memory devices) that are coupled to the on-interposer devices, may reside remotely off of the MCM. FIG. 13 illustrates an MCM 1300 with alternative device configuration schemes for separate rows of devices, at 1302 and 1304. In the configuration shown in the top row 1302 of devices, a first multi-port memory device 1306 is disposed on an interposer 1307 and couples to a first IC device 1308 via a set of channels 1310. For one specific embodiment, the first multi-port memory device 1306 takes the form of an HBM3 memory device configured with an on-chip memory controller 1312 and a first port interface 1313 that includes first NoC circuitry. A second port interface 1314 also includes NoC circuitry and an off-interposer compatible packaging interface for coupling to either a package substrate 1315, a silicon bridge (not shown), or any other form of off-interposer substrate. A second device 1316, such as a high-capacity GDDR, LPDDR, PCIe, SerDes, silicon photonic, or other device chiplet, couples to the first multi-port memory device 1306 via M channels 1317, and includes multiple port interfaces 1318 and 1320, similar to the first multi-port memory device 1306. A first one of the interfaces 1318 corresponds to the second port interface 1314 of the first multi-port memory device 1306, while the second port interface 1320 is free to take the form of any port interface compatible with a remotely-disposed and off-MCM device (not shown).


Further referring to FIG. 13, the second row 1304 of devices is configured similar to the first row 1302, but may omit the second device entirely, and instead provide a first multi-port memory device 1322 (of row 1304) with a first port interface 1323 that takes the form of a short reach interface similar to any of the interfaces described above, and a second port interface 1324 that is free to take the form of any port interface compatible with a remotely-disposed and off-MCM device (not shown). This flexible port interface 1324, often a long-reach interface, thus provides the first IC device 1308 with a connection, via the first memory device 1322, to external devices such as GDDR/LPDDR/DDR DRAM with higher memory capacities or to a CXL memory expansion card through a PCIe serial interface, or to another ASIC through a SerDes port or to a silicon photonics chiplet, to name but a few possibilities.


Further referring to FIG. 13, configuring devices to correspond to either the top row 1302 of devices or the bottom row of devices 1304 involves certain tradeoffs. For example, the top row configuration of devices 1302 allows for a straightforward base die design by replacing a larger and potentially more complex interface (that may provide a longer-reach connection) with a less-complex die-to-die (D2D) interface. Such D2D interfaces are often easier to employ and port into new processes. Thus, if a longer-reach interface is unavailable in the base die process node, one may instead employ a D2D interface on the base die to connect with another chiplet in a process node in which the complex longer-reach interface is available. In some embodiments, such as when silicon photonic interfaces are employed, the longer-reach complex interfaces are typically unavailable. Additionally, use of less-complex D2D interfaces typically consumes an order of magnitude less power and occupies less area. Cooling requirements may thus be relaxed for base die implementations where temperature-sensitive DRAM die reside on top. For the bottom configuration of devices 1304, by implementing a longer-reach interface in the base die, the additional interface chiplet may be eliminated, resulting in cost savings and a lower-latency connection to any external chips.


When received within a computer system via one or more computer-readable media, such data and/or instruction-based expressions of the above described circuits may be processed by a processing entity (e.g., one or more processors) within the computer system in conjunction with execution of one or more other computer programs including, without limitation, net-list generation programs, place and route programs and the like, to generate a representation or image of a physical manifestation of such circuits. Such representation or image may thereafter be used in device fabrication, for example, by enabling generation of one or more masks that are used to form various components of the circuits in a device fabrication process.


In the foregoing description and in the accompanying drawings, specific terminology and drawing symbols have been set forth to provide a thorough understanding of the present disclosure. In some instances, the terminology and symbols may imply specific details that are not required to practice embodiments of the disclosure. For example, any of the specific numbers of bits, signal path widths, signaling or operating frequencies, component circuits or devices and the like may be different from those described above in alternative embodiments. Also, the interconnection between circuit elements or circuit blocks shown or described as multi-conductor signal links may alternatively be single-conductor signal links, and single-conductor signal links may alternatively be multi-conductor signal links. Signals and signaling paths shown or described as being single-ended may also be differential, and vice-versa. Similarly, signals described or depicted as having active-high or active-low logic levels may have opposite logic levels in alternative embodiments. Component circuitry within integrated circuit devices may be implemented using metal oxide semiconductor (MOS) technology, bipolar technology or any other technology in which logical and analog circuits may be implemented. With respect to terminology, a signal is said to be “asserted” when the signal is driven to a low or high logic state (or charged to a high logic state or discharged to a low logic state) to indicate a particular condition. Conversely, a signal is said to be “deasserted” to indicate that the signal is driven (or charged or discharged) to a state other than the asserted state (including a high or low logic state, or the floating state that may occur when the signal driving circuit is transitioned to a high impedance condition, such as an open drain or open collector condition). A signal driving circuit is said to “output” a signal to a signal receiving circuit when the signal driving circuit asserts (or deasserts, if explicitly stated or indicated by context) the signal on a signal line coupled between the signal driving and signal receiving circuits. A signal line is said to be “activated” when a signal is asserted on the signal line, and “deactivated” when the signal is deasserted. Additionally, the prefix symbol “/” attached to signal names indicates that the signal is an active low signal (i.e., the asserted state is a logic low state). A line over a signal name is also used to indicate an active low signal. The term “coupled” is used herein to express a direct connection as well as a connection through one or more intervening circuits or structures. Integrated circuit device “programming” may include, for example and without limitation, loading a control value into a register or other storage circuit within the device in response to a host instruction and thus controlling an operational aspect of the device, establishing a device configuration or controlling an operational aspect of the device through a one-time programming operation (e.g., blowing fuses within a configuration circuit during device production), and/or connecting one or more selected pins or other contact structures of the device to reference voltage lines (also referred to as strapping) to establish a particular device configuration or operational aspect of the device. The term “exemplary” is used to express an example, not a preference or requirement.


While aspects of the disclosure have been described with reference to specific embodiments thereof, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure. For example, features or aspects of any of the embodiments may be applied, at least where practicable, in combination with any other of the embodiments or in place of counterpart features or aspects thereof. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A chiplet-based multi-chip module (MCM) to mount to a base substrate, the chiplet-based MCM comprising: a package substrate that is separate from the base substrate; a micro-bump advanced-package routing layer coupled to a portion of the package substrate; a first integrated circuit (IC) chiplet coupled to the micro-bump advanced-package routing layer; a first memory chiplet coupled to the micro-bump advanced-package routing layer and comprising a first port interface to couple to the first IC chiplet via a first set of traces formed in the micro-bump advanced-package routing layer; a second port interface to couple to an on-package chiplet device; and wherein a transaction between the first IC chiplet and the on-package chiplet device passes through the first port interface and the second port interface of the first memory chiplet.
  • 2. The chiplet-based MCM of claim 1, wherein: the first memory chiplet comprises a first high-bandwidth memory (HBM) chiplet.
  • 3. The chiplet-based MCM of claim 1, wherein: the second port interface comprises a standard bump-compliant mechanical interface that comprises a connection density that is less than a micro-bump-compliant mechanical interface.
  • 4. The chiplet-based MCM of claim 1, wherein: the first memory chiplet comprises a memory controller.
  • 5. The chiplet-based MCM of claim 1, wherein: the first memory chiplet comprises network on chip (NoC) circuitry.
  • 6. The chiplet-based MCM of claim 1, wherein: the on-package chiplet device is coupled to a non-micro-bump standard-package layer that has a first trace density that is less than a second trace density of the micro-bump advanced-package routing layer.
  • 7. The chiplet-based MCM of claim 1, further comprising the on-package chiplet device, and wherein: the on-package chiplet device comprises a non-memory chiplet.
  • 8. The chiplet-based MCM of claim 1, further comprising the on-package chiplet device, and wherein: the on-package chiplet device comprises a second memory chiplet that is coupled to the package substrate, the second memory chiplet including a third port interface that corresponds to the second port interface of the first memory chiplet.
  • 9. The chiplet-based MCM of claim 8, wherein: the first memory chiplet comprises a first high-bandwidth memory (HBM) chiplet; and the second memory chiplet comprises a second high-bandwidth memory (HBM) chiplet.
  • 10. The chiplet-based MCM of claim 9, wherein: the first port interface is configured with a first width and a first bandwidth; the second port interface is configured with a second width that corresponds to the first width and a second bandwidth that corresponds to the first bandwidth.
  • 11. The chiplet-based MCM of claim 1, wherein: the first port interface is configured with an aggregate width and an aggregate bandwidth, the aggregate bandwidth comprising a first bandwidth component for the first memory chiplet, the first bandwidth component being a portion of the aggregate bandwidth; the second port interface is configured with a second width that is equal to or less than the aggregate width, the second port interface comprising a second bandwidth component, the first bandwidth component and the second bandwidth component cooperating to form the aggregate bandwidth.
  • 12. The chiplet-based MCM of claim 1, further comprising: first network-on-chip (NoC) circuitry formed in the first IC chiplet; and second NoC circuitry formed in the first memory chiplet.
  • 13. The chiplet-based MCM of claim 1, wherein: the transaction passes through the first port interface and the second port interface in accordance with a packet-switched protocol.
  • 14. The chiplet-based MCM of claim 1, further comprising the on-package chiplet device, and wherein: the second port interface comprises a first die-to-die interface; the on-package chiplet device coupled to the package substrate and comprising a third port interface that includes a second die-to-die interface that corresponds to the first die-to-die interface; and wherein the on-package chiplet device comprises a fourth port interface to communicate with an off-package device, the fourth port interface comprises at least a link controller and physical interface that is compliant with at least one data packet-switched protocol.
  • 15. The chiplet-based MCM of claim 14, wherein: the link controller and the physical interface that is compliant with at least one data packet-switched protocol is compliant with at least one of a PCIe interface, a SerDes interface, or an optical interface.
  • 16. The chiplet-based MCM of claim 1, realized as a system-in-package (SiP), and wherein: the package substrate comprises a standard package substrate; the micro-bump advanced-package routing layer comprises a higher first routing density than a second routing density of the standard package substrate; and the first set of traces comprises a chiplet interconnect.
  • 17. The chiplet-based MCM of claim 1, wherein: the on-package chiplet device comprises a SerDes chiplet or a photonic chiplet.
  • 18. A memory chiplet, comprising: a semiconductor base die, including: a memory interface to couple to a memory array of storage cells; a first port interface comprising a micro-bump-compliant mechanical interface to couple to a first integrated circuit (IC) chiplet; a second port interface to couple to an on-package chiplet device; and wherein a transaction between the first IC chiplet and the on-package chiplet device passes through the first port interface and the second port interface in accordance with a packet-switched protocol.
  • 19. The memory chiplet of claim 18, wherein: the second port interface comprises a non-micro-bump-compliant mechanical interface.
  • 20. The memory chiplet of claim 18, wherein: the first port interface is configured to support a first bandwidth of the memory interface and a second bandwidth of the on-package chiplet device.
  • 21. The memory chiplet of claim 18, wherein: the first port interface is configured with a first interface width and a first bandwidth; the second port interface is configured with a second interface width that corresponds to the first interface width and a second bandwidth that corresponds to the first bandwidth.
  • 22. The memory chiplet of claim 18, further comprising: network-on-chip (NoC) circuitry to control the transaction passing through the first port interface and the second port interface.
  • 23. The memory chiplet of claim 18, wherein: the memory interface comprises a memory controller.
  • 24. The memory chiplet of claim 18, wherein: the second port interface is configured to communicate with an off-package device and comprises a link controller and a physical interface of at least one data packet-switched protocol.
  • 25. The memory chiplet of claim 24, wherein: the link controller and the physical interface of the at least one data packet-switched protocol is compliant with at least one of a PCIe interface, a SerDes interface, or an optical interface.
  • 26. The memory chiplet of claim 18, wherein: the micro-bump-compliant mechanical interface comprises a chiplet interconnect to couple to the first IC chiplet.
  • 27. The memory chiplet of claim 26, realized as a high-bandwidth memory (HBM) chiplet.
  • 28. A chiplet-based multi-chip module (MCM) to mount to a base substrate, the chiplet-based MCM comprising: a package substrate that is separate from the base substrate; a first integrated circuit (IC) chiplet coupled to the package substrate; a second chiplet device coupled to the package substrate; and a first memory chiplet coupled to the package substrate and comprising a first port interface to couple to the first IC chiplet and configured with an aggregate interface width and an aggregate bandwidth, the aggregate bandwidth comprising a first bandwidth component for the first memory chiplet; a second port interface configured with a second interface width that is equal to or less than the aggregate interface width, the second port interface comprising a second bandwidth component, the first bandwidth component and the second bandwidth component cooperating to form the aggregate bandwidth; and wherein a transaction between the first IC chiplet and the second chiplet device passes through the first port interface and the second port interface.
  • 29. The chiplet-based MCM of claim 28, further comprising: a micro-bump advanced-package routing layer coupled to the package substrate, the first IC chiplet coupled to the package substrate via the micro-bump advanced-package routing layer.
  • 30. The chiplet-based MCM of claim 28, wherein: the transaction passes through the first port interface and the second port interface in accordance with a packet-switched protocol.
  • 31. The chiplet-based MCM of claim 28, wherein: the first memory chiplet comprises a first high-bandwidth memory (HBM) chiplet.
  • 32. The chiplet-based MCM of claim 28, realized as a system-in-package (SiP), and wherein: the package substrate comprises a standard package substrate; the micro-bump advanced-package routing layer comprises a higher first routing density than a second routing density of the standard package substrate; and wherein the micro-bump advanced-package routing layer comprises a chiplet interconnect.
  • 33. The chiplet-based MCM of claim 28, wherein: the first memory chiplet comprises network-on-chip (NoC) circuitry to control the transaction passing through the first port interface and the second port interface.
  • 34. The chiplet-based MCM of claim 28, wherein: the second chiplet device comprises a SerDes chiplet or a photonic chiplet.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application No. 63/532,908, filed Aug. 16, 2023, entitled HBM/DRAM EXPANSION WITHOUT CHANGE TO ASIC ARCHITECTURE/DESIGN, and is a Continuation-In-Part of U.S. patent application Ser. No. 17/994,123, filed Nov. 25, 2022, entitled MULTI-CHIP MODULE (MCM) WITH MULTI-PORT UNIFIED MEMORY, which is a Non-Provisional application that claims priority to U.S. Provisional Application No. 63/283,265, filed Nov. 25, 2021, entitled ENABLING ADVANCE SYSTEM-IN-PACKAGE ARCHITECTURES AT LOW-COST USING HIGH-BANDWIDTH ULTRA-SHORT-REACH (USR) CONNECTIVITY IN MCM PACKAGES, which is incorporated herein by reference in its entirety.

Provisional Applications (2)
Number Date Country
63532908 Aug 2023 US
63283265 Nov 2021 US
Continuation in Parts (1)
Number Date Country
Parent 17994123 Nov 2022 US
Child 18420006 US