The present invention relates generally to computer systems and memory technology. More specifically, the present invention relates to a multi-processor computing system with a memory-centric architecture around a multi-ported shared memory. The memory may appear to the processors as a random-access memory (RAM) without regard to the underlying implementing technology.
In a system with non-uniform memory access (NUMA), the memory is not shared but is specific to its computing environment.
Better data sharing is a long-felt need, as the overhead in existing systems becomes increasingly intolerable. A centralized memory with low-latency, high-throughput CPU attachments is desired. One example of a memory shared among many processors is a HADOOP-style system, in which each processor has its own memory but shares it over a network of clustered memory servers (e.g., over ethernet). In one implementation, one server in each cluster is designated “master” and keeps a master record of all files within that cluster. The master server in each cluster receives client memory access requests, locates the slave servers with control over the data desired in each client request, and divides service of the client request among those servers. In a HADOOP system, each file is typically spread out in data blocks (e.g., of 64 or 128 MB each) among numerous working servers in the cluster, and each block may be operated on by the processor having control of the block. In this manner, substantial parallel processing is possible, achieving very fast operation. HADOOP systems are widely used in “data analytics” (also known as “Big Data”), social media, and other large enterprise applications. The large block sizes put a heavy burden on the communication channel, however, such that high-speed channels are necessary for performance. Some HADOOP systems suffer from long access times.
In a Big Data HADOOP system, capability expansion is achieved by adding servers and memory units. Very often, an update consists mainly of increasing the size of a server's memory unit, based on a desire to take better advantage of the local computational capability rather than to further distribute the data.
Many conventional systems (e.g., systems not large enough for HADOOP data structures) also use clustered servers running software to achieve parallel operations and to provide backup and recovery. Many such systems increase memory capacity by adding accelerator boards to the processors. To enable data sharing, the accelerator boards communicate over a fast local-area network (LAN) to allow large file transfers, which are time-consuming and intensive in both power and bandwidth. To achieve better file sharing, an additional layer of software control may be implemented, which may not be desirable in a non-HADOOP type system.
Conventional mass data storage is achieved using hard drives, which have notably slow access times. Even solid-state drives (SSDs) do not qualitatively alleviate the access-time bottleneck in many applications. One example of such applications is a server or a cluster of servers running virtual machines (VMs). A VM of the prior art is typically scalable. At run time, each instance of the VM is loaded as a separate process from hard disk into memory (e.g., dynamic random-access memory (DRAM)). The process may be swapped out to hard disk or reloaded into memory numerous times during its lifetime; these are very inefficient operations that represent substantial overhead on system performance. Recent memory interface standards (e.g., Gen-Z, CXL, and CCIX) have been developed specifically to address this system performance issue. These standards provide high-speed connections to accelerators for caching, memory buffering, and input/output (I/O) expansion.
Social media and Big Data applications require performance for which conventional system solutions are inadequate. A method for quickly transferring data from mass storage (e.g., flash memory) to RAM without the communication-medium bottleneck (i.e., with reduced latency and high throughput) is needed, for example.
According to one embodiment of the present invention, a memory channel controller for a multi-ported shared memory includes: (a) multiple host interface circuits each configured to receive memory access requests from one or more host processors to the shared memory; (b) a priority circuit which prioritizes the memory access requests to avoid a memory access conflict and which designates each prioritized memory access request to one of the memory ports for carrying out the memory access specified in that prioritized request; (c) a switch circuit; and (d) multiple memory interface circuits, each coupled to an associated one of the memory ports. The switch circuit routes to each memory interface circuit the prioritized memory access requests designated for its associated memory port.
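By way of illustration and not of limitation, the cooperation of these four elements may be sketched in a few lines of Python. The names used below (MemoryRequest, MemoryChannelController, port_for_address) and the equal-partition address-to-port assignment are assumptions of this sketch, not features of any particular embodiment.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MemoryRequest:
    host_id: int        # host interface circuit that received the request
    address: int        # memory address specified by the host processor
    arrival_time: int

class MemoryChannelController:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports

    def port_for_address(self, address: int) -> int:
        # Assumed equal-size partitioning of the shared memory across ports.
        return address % self.num_ports

    def route(self, requests: List[MemoryRequest]) -> Dict[int, List[MemoryRequest]]:
        """(b) Prioritize the requests, then (c), (d) switch each
        prioritized request to the memory interface circuit of its
        designated memory port."""
        routed: Dict[int, List[MemoryRequest]] = {p: [] for p in range(self.num_ports)}
        for req in sorted(requests, key=lambda r: (r.arrival_time, r.host_id)):
            routed[self.port_for_address(req.address)].append(req)
        return routed
```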
The present invention is better understood upon consideration of the detailed description below in conjunction with the drawings.
Although the drawings depict numerous examples of the invention, the invention is not limited by the depicted examples. In the drawings, like reference numerals designate like elements. Also, elements in the figures are not necessarily depicted to scale.
Various embodiments or examples may be implemented in numerous ways, including as a system, a process, an apparatus, or a series of program instructions on a computer readable medium such as a computer readable storage medium or a computer network where the program instructions are sent over optical, electronic, or wireless communication links. In general, operations of disclosed processes may be performed in an arbitrary order, unless otherwise stated.
The detailed description below is provided along with accompanying figures in connection with examples of the present invention but the invention is not limited by any of the examples. Numerous alternatives, modifications, and equivalents are possible within the scope of the present invention, which is set forth in the claims. For clarity, some technical material that is known in the art has not been described in detail to avoid unnecessarily distracting from the description.
According to one embodiment of the present invention,
Taking the shared memory systems of
Solely for illustrative purposes, this detailed description uses as an example a memory channel controller that is accessed by 3 host processors (i.e., servers) and that accesses 5 memory ports of a partitioned QV memory module. A practical implementation (e.g., for a HADOOP Big Data application) may have 16 or 32 host channels and a shared memory of a few terabytes (TB) to hundreds of TB. On one hand, numerous partitions of relatively small capacity provide efficiency because of a correspondingly smaller probability of access contention. On the other hand, the resulting large number of memory channels increases the complexity of memory channel controller 205. The optimal trade-off between partition size and memory channel controller complexity depends on the specific application to which the memory system is put.
Under the “memory-centric” approach, each server may directly connect to memory channel controller 205 to access any part of shared memory 206. For example, based on the specified memory address in an incoming memory request and the data in configuration address registers, main channel logic circuit 392 in memory channel controller 205 selects the appropriate memory channel, generates the physical address and submits a corresponding memory access request to the selected memory channel. The memory channel logic circuits handle the different control requirements of the memory channels and their respective timing domains.
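By way of illustration, the channel-selection step may be sketched as follows, assuming five memory channels, each serving one contiguous 4-TB partition described by a (base, limit) pair held in the configuration address registers. The register layout and function name are hypothetical.

```python
PARTITION_SIZE = 4 << 40   # 4 TB per partition, as in the running example

# One (base, limit) pair per memory channel, as might be held in the
# configuration address registers.
CONFIG_ADDRESS_REGISTERS = [
    (i * PARTITION_SIZE, (i + 1) * PARTITION_SIZE) for i in range(5)
]

def select_channel(specified_address: int):
    """Select the memory channel and generate the physical address."""
    for channel, (base, limit) in enumerate(CONFIG_ADDRESS_REGISTERS):
        if base <= specified_address < limit:
            return channel, specified_address - base  # offset within partition
    raise ValueError("address not mapped to any memory channel")

assert select_channel(5 << 40) == (1, 1 << 40)  # an address at 5 TB maps to channel 1
```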
Priority resolution circuit 605 then issues the memory requests to the memory ports based on a priority scheme that considers (i) arrival time, (ii) host channel, and (iii) specified memory address. The specified memory address is used to determine the memory partition (i.e., memory channel) to which access is requested. In one embodiment, priority resolution circuit 605 allows access requests to different memory partitions to proceed in parallel. When two host channels request access to the same partition, priority resolution circuit 605 allows the memory request that has an earlier arrival time or the one assigned a higher priority to proceed first. When the two memory requests arrive substantially simultaneously, priority resolution circuit 605 allows the memory request of the host channel that has the higher assigned priority to proceed first. When a host channel is granted access to a memory channel, the memory channel is locked out to the other host channels. The host thus having exclusive access relinquishes the memory channel when its memory access request is complete, making the memory channel available again for bidding. Until the memory channel is released from the lockout, further arbitration for that memory channel is disabled.
Priority resolution circuit 605 may also implement an equitable scheme in which even the lowest-priority host channel gets a minimal amount of access to each memory channel. Priority mode circuit 604 configures priority resolution circuit 605 with a priority scheme (e.g., host channel assigned priorities) and a memory partition map.
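The arbitration described in the preceding two paragraphs may be illustrated by the following minimal sketch, in which a lower priority value denotes a higher assigned priority; the class and method names are inventions of this description.

```python
class PriorityResolutionCircuit:
    def __init__(self, host_priorities):
        self.host_priorities = host_priorities  # lower value = higher priority
        self.locked = {}                        # memory channel -> granted host

    def resolve(self, requests):
        """requests: (arrival_time, host_channel, memory_channel) tuples.
        Earlier arrival wins; simultaneous arrivals are broken by the
        assigned host-channel priority; a granted memory channel is
        locked out to the other host channels until released."""
        granted = []
        ordered = sorted(requests,
                         key=lambda r: (r[0], self.host_priorities[r[1]]))
        for arrival_time, host, channel in ordered:
            if channel not in self.locked:  # arbitration disabled while locked
                self.locked[channel] = host
                granted.append((host, channel))
        return granted

    def release(self, channel):
        """The host relinquishes the channel when its access completes."""
        self.locked.pop(channel, None)

prc = PriorityResolutionCircuit(host_priorities={0: 0, 1: 1, 2: 2})
# Hosts 0 and 2 collide on channel 3 at the same time; host 0 wins.
assert prc.resolve([(5, 2, 3), (5, 0, 3), (4, 1, 1)]) == [(1, 1), (0, 3)]
```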
The memory access requests issued from priority circuit 403 are converted in memory interfaces 405 into channel requests to be executed in their corresponding memory channels 406-1 to 406-5, using suitable memory interface protocols and formats. When shared memory 206 is implemented by memories of different memory types with differing signaling protocols, the memory channels may require more than one type of memory interface. These memory interfaces may be implemented by modular units. For example, a memory channel may initially be populated by a memory module having a DDR4 memory interface. In an upgrade, the memory module may be replaced by another memory module that has a DDR5 or an HBM interface. To accommodate such upgrades, memory channel controller 205 has an architecture generic enough to be agnostic to any specific memory interface. The same approach is beneficial with respect to the host interfaces in the host channels.
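The modular, interface-agnostic arrangement may be sketched as follows; the class names and the representation of a channel request as a string are purely illustrative.

```python
from abc import ABC, abstractmethod

class MemoryInterface(ABC):
    """The controller depends only on this abstraction, so a memory
    module with one interface type may later be swapped for a module
    with another without changing the controller logic."""

    @abstractmethod
    def issue(self, channel_request: str) -> str:
        """Convert a prioritized memory access request into this memory
        type's signaling protocol and format."""

class DDR4Interface(MemoryInterface):
    def issue(self, channel_request: str) -> str:
        return f"DDR4 command sequence for {channel_request}"

class HBMInterface(MemoryInterface):
    def issue(self, channel_request: str) -> str:
        return f"HBM command sequence for {channel_request}"

# A mixed configuration: the controller addresses all five channels
# identically, regardless of the interface behind each one.
memory_channels = [DDR4Interface() for _ in range(4)] + [HBMInterface()]
print(memory_channels[4].issue("read @ 0x1000"))
```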
Address configuration registers 607 may be set at power-up or at installation. The configuration bits in address configuration registers 607 map the specified address in a memory access request to the address and command structure specific to each memory port and memory request type. For example, many DRAMs are organized as blocks and banks, which are incorporated into the signaling protocols used in their accesses. For a QV memory, memory channel controller 205 may take advantage of its organization, which may include bank groups, banks, or tile structures. Descriptions of these features in a QV memory may be found, for example, in Provisional Applications III-V, incorporated by reference above.
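A minimal sketch of such a mapping follows, assuming hypothetical field widths; in an actual controller, the widths and ordering would come from the configuration bits loaded into address configuration registers 607.

```python
# Hypothetical per-port field widths, low-order fields first.
FIELDS = [("column", 10), ("bank", 2), ("bank_group", 2), ("row", 17)]

def map_address(specified_address: int) -> dict:
    """Split a flat specified address into the command fields of the
    memory port (e.g., bank group, bank, row, column)."""
    fields = {}
    for name, width in FIELDS:
        fields[name] = specified_address & ((1 << width) - 1)
        specified_address >>= width
    return fields

print(map_address(0x00AB_CDEF))
```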
In some embodiments, host channel priorities may be assigned according to a round-robin scheme, in which a ring counter selects one of the host channels at a given time, changing the selected host channel at a regular interval. Under that scheme, only the selected host channel may request access to a memory channel. Some embodiments use a combination of a strict hierarchical scheme and a rotating priority scheme, in which a selected group of host channels under the round-robin scheme bids for memory channel access against host channels in another group that are allowed to bid at all times or more frequently.
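The basic ring-counter selection may be sketched as follows; the tick-per-interval model is an assumption of this illustration.

```python
class RingCounter:
    """Selects one host channel at a time, advancing the selection at a
    regular interval (modeled here as one call to tick())."""

    def __init__(self, num_host_channels: int):
        self.num_host_channels = num_host_channels
        self.selected = 0

    def tick(self) -> None:
        self.selected = (self.selected + 1) % self.num_host_channels

    def may_bid(self, host_channel: int) -> bool:
        """Only the currently selected host channel may request access."""
        return host_channel == self.selected

rc = RingCounter(3)
assert rc.may_bid(0) and not rc.may_bid(1)
rc.tick()                    # the selection advances to the next channel
assert rc.may_bid(1)
```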
In extreme circumstances, a strict hierarchical scheme may result in some host channels being constantly blocked from requesting memory access. In most applications, however, such extreme circumstances seldom occur, and a strict hierarchical scheme may be acceptable or even preferred. Returning to
In some applications, an efficient address-based conflict resolution may be a more significant design parameter than the host channel-based priority scheme. With suitable partitioning, many if not most memory accesses may proceed in parallel.
The priority resolution circuit (e.g., priority resolution circuit 605 of
In the current example, DMA circuits are present in both a host interface circuit (e.g., DMA circuit 503 of
For some applications, data buffers (e.g., SRAM buffers) may be optimized for large data packets, and the DMA circuits may support remote direct memory access (RDMA) transfers. In addition, host interface circuit 1402 includes archival port 1408 and network port 1409. Archival port 1408 allows memory channel controller 1420 to boot from storage device 1410 (e.g., a high-speed hard disk drive or a solid-state disk drive), to store data to the storage device, or to transfer data among the storage device, shared memory 1421, and an external device (e.g., any of servers 1401-1 to 1401-n, or another device over network port 1409).
Archival port 1408 may be a PCIe port. Memory channel controller 1420 may log data write activities and updates to the storage device to provide reliable data backup and recovery by replay. Through archival port 1408, memory channel controller 1420 enables data transfers between the storage device and shared memory 1421 without intervention by servers 1401-1 to 1401-n, thus providing both performance and power efficiency. Each of servers 1401-1 to 1401-n may issue a high-level command that initiates such transfers over archival port 1408. Archival port 1408 also performs conventional disk integrity tasks (e.g., encryption and RAID error encoding) under the control of memory channel controller 1420.
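The backup-and-recovery mechanism described above is, in essence, a write log that is replayed in order. A minimal sketch follows, assuming a one-JSON-object-per-write log format; the actual log format is not specified here.

```python
import io
import json

def log_write(log_file, address: int, data) -> None:
    """Append each data write activity to the storage device's log."""
    log_file.write(json.dumps({"addr": address, "data": data}) + "\n")

def recover_by_replay(log_file) -> dict:
    """Rebuild the memory image by re-applying every logged write in order."""
    memory = {}
    for line in log_file:
        entry = json.loads(line)
        memory[entry["addr"]] = entry["data"]
    return memory

log = io.StringIO()                 # stands in for the archival storage device
log_write(log, 0x100, "A")
log_write(log, 0x100, "B")
log.seek(0)
assert recover_by_replay(log) == {0x100: "B"}  # replay yields the latest update
```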
Network port 1409 (e.g., an ethernet port to a local-area or wide-area network) allows access to shared memory 1421 from anywhere on the network. For example, network port 1409 may handle connections to a server cluster (e.g., as is customary in a HADOOP system), offering the server cluster a shared large-capacity memory bank. Network port 1409 may also provide automatic remote backup to another system without involvement by servers 1401-1 to 1401-n. Through network port 1409, memory channel controller 1420 may act as a web server. In some embodiments, network port 1409 may include data packet buffers and a high-speed command queue that support RDMA transfers.
In the present example, each memory device 1501 is a QV DIMM built using 256-Gb QV memory dies. Each DIMM includes 8 QV memory dies, so that each QV DIMM provides 256 GB (gigabytes) of memory. Each DIMM group includes four QV DIMMs; therefore, each DIMM group has 1 TB (terabyte) of memory. Each partition consists of a row of 4 DIMM groups, for a total of 16 DIMMs. Thus, each partition or DIMM row has 4 TB of memory. Memory array 1500 includes 5 DIMM rows and thus has 20 TB of memory.
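For clarity, the capacity arithmetic of this example may be restated in a few lines of Python:

```python
dimm_gb = 256 * 8 // 8          # eight 256-Gb dies -> 256 GB per QV DIMM
group_tb = dimm_gb * 4 // 1024  # four DIMMs per group -> 1 TB per DIMM group
partition_tb = group_tb * 4     # four groups (16 DIMMs) per row -> 4 TB
array_tb = partition_tb * 5     # five DIMM rows -> 20 TB in memory array 1500
assert (dimm_gb, group_tb, partition_tb, array_tb) == (256, 1, 4, 20)
```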
In the present example, the memory array 1550 is constructed in a manner similar to memory array 1500 of
In some instances, where address-based conflicts cannot be completely avoided, attempts by multiple memory ports to access the same memory partition may be detected at switch circuit 1602 in shared memory 1600. Upon detection of such a conflict, an error signal may be generated to initiate recovery actions in the memory interface circuits of the conflicting memory ports. In some embodiments, an arbitration (e.g., using a channel-based priority scheme) may determine which of the conflicting accesses is allowed to proceed. In that case, recovery action need be taken only at the losing memory port. A simple arbitration may be based, for example, on which memory request arrives first. The recovery action for losing the arbitration may be resubmission of the memory access at a later time. A wait or queuing mechanism may be provided for resubmission timing efficiency. When a conflict arises and the error signal is activated for all ports attempting similar access, the error signal stays active until the winning port is done. A dynamically adjusted priority scheme prevents any memory port from being shut out over an extended period.
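One simple form of such a dynamically adjusted priority scheme is an aging rule, sketched below; the longest-waiting-wins rule and the tie-break by port number are assumptions of this illustration.

```python
def arbitrate(waiting_cycles: dict) -> int:
    """waiting_cycles: memory port -> cycles spent waiting after losing
    earlier arbitrations. The longest-waiting port wins, so no port is
    shut out over an extended period; ties go to the lower port number."""
    return min(waiting_cycles, key=lambda port: (-waiting_cycles[port], port))

assert arbitrate({0: 3, 1: 7, 2: 7}) == 1  # port 1 has waited longest (ties with 2)
```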
As discussed in Provisional Applications IV and VI, a memory module may include multiple memory dies stacked one over another on top of a controller die. The controller die may have multiple memory ports formed on it to allow parallel accesses to the memory dies in the memory module. Such a memory module thus has a very efficient footprint but a large capacity. Even higher densities can be achieved by interconnecting a number of such memory modules over an interposer substrate.
A memory channel controller of the present invention may also include, for example, error detection and correction circuits, a diagnostic port allowing access to configuration and other registers, error logging circuits for monitoring and probing device integrity, and circuits for dynamically mapping out and removing defective memory elements in the shared memory. Memory interface circuits (e.g., memory interface circuits 405-1 to 405-5 of
The same type of memory interface circuits may also be used in a host interface. By assigning a suitable priority to each host interface or channel, a server may have access to a high-capacity memory or a virtual storage device. Because of the high capacity in the shared memory, the physical memory may be used directly in some applications without mediation by a virtual memory system.
In one embodiment, the refresh circuits in the memory channel controller are implemented as a host port that bids for memory access in the same manner as other hosts (e.g., host interface circuits 391-1 to 391-n of
The above detailed description is provided to illustrate specific embodiments of the present invention and is not intended to be limiting. Numerous modifications and variations within the scope of the present invention are possible. The present invention is set forth in the following claims.
The present application relates to and claims priority of U.S. provisional application (“Provisional Application I”), Ser. No. 62/980,571, entitled “Channel Controller For Shared Memory Access,” filed on Feb. 24, 2020. This application also claims priority to U.S. provisional application (“Provisional Application II”), Ser. No. 63/040,347, entitled “Channel Controller For Shared Memory Access,” filed on Jun. 17, 2020. Provisional Application I and Provisional Application II are hereby incorporated by reference in their entireties. The present application is also related to (i) U.S. provisional patent application (“Provisional Application III”), Ser. No. 62/971,859, entitled “Quasi-volatile Memory System,” filed on Feb. 7, 2020; (ii) U.S. provisional patent application (“Provisional Application IV”), Ser. No. 62/980,596, entitled “Quasi-volatile Memory System-Level Memory,” filed on Feb. 24, 2020; (iii) U.S. provisional patent application (“Provisional Application V”), Ser. No. 62/971,720, entitled “High-Capacity Memory Circuit with Low Effective Latency,” filed on Feb. 7, 2020; Provisional Application V is now U.S. Patent application Ser. No. 17/169,87, filed Feb. 5, 2021, and published as U.S. Publication No. 2021/0247910 A1; (iv) U.S. provisional patent application (“Provisional Application VI”), Ser. No. 63/027,850, entitled “Quasi-volatile Memory System-Level Memory,” filed on May 20, 2020; Provisional Applications III, IV and VI are now U.S. patent application Ser. No. 17/169,212, filed Feb. 5, 2021, and published as U.S. Publication No. 2021/0248094 A1; and (v) U.S. provisional application (“Provisional Application VII”), Ser. No. 62/980,600, entitled “Memory Modules or Memory-Centric Structures,” filed on Feb. 24, 2020; Provisional Application VII is now U.S. patent application Ser. No. 17/176,860, filed Feb. 16, 2021, and published as U.S. Publication No. 2021/0263673 A1; Provisional Applications III-VII (collectively, the “Provisional Applications”) are hereby incorporated by reference in their entireties.