Embodiments of the present disclosure generally relate to data error detection and correction, and more particularly, the use of data statistics for content aware error detection and correction.
As data storage product requirements increase, the performance limits of a single decoder decoding stored data for transmission to a host have been reached. In response, many data storage products have employed pools of decoders. To improve quality of service (QoS), an individual decoder may maintain statistical information about content characteristics, as blocks of data decoded together typically have similar data characteristics. By maintaining statistical information, a decoder may be configured to decode more efficiently by predictively configuring its decoding strategy based on data statistics.
However, because the data statistics are based only on the data decoded by an individual decoder, their predictive ability may be limited, resulting in, for example, missed predictions that cause “head of line blocking” scenarios that may degrade QoS.
What is needed are systems and methods that overcome these and other deficiencies.
The present disclosure generally relates to content aware decoding using shared data statistics. Each decoder generates statistical data of content it decodes and provides these statistics to a joint statistics pool. As codewords arrive at the decoder pool, the joint statistics are utilized to estimate or predict any corrupted or missing bit values. Codewords may be assigned to a specific decoder, such as a tier 1 decoder, a tier 2 decoder, or a tier 3 decoder, based on a syndrome weight or a bit error rate. The assigned decoder updates the joint statistics pool after processing the codeword. In some embodiments, each decoder may additionally maintain local statistics regarding codewords, and use the local statistics when there is a statistically significant mismatch between the local statistics and the joint statistics pool.
In one embodiment, a data storage device is disclosed that includes a non-volatile memory (NVM), and a controller coupled to the NVM that includes a plurality of decoders, a first decoder configured to receive a first codeword, the first decoder configured to generate first data statistics for the first codeword, and a second decoder configured to receive a second codeword, the second decoder configured to generate second data statistics for the second codeword. The data storage device further includes a joint data statistics module configured to receive the first and second data statistics.
In another embodiment, a controller for a data storage device is disclosed. The controller includes an I/O to one or more NVMs, and a processor configured to perform a method for content aware decoding. The method includes receiving a codeword from the one or more NVMs at a first decoder, generating data statistics for the codeword, and providing the data statistics to a joint statistics module, the joint statistics module coupled to a plurality of decoders that include the first decoder.
In another embodiment, a system for storing data is disclosed, including an NVM means, and a controller means for executing a method for content aware decoding. The method includes receiving from the NVM means, at a first decoder means of a plurality of decoder means, a first codeword, decoding the first codeword at the first decoder means, and generating a first data statistic based on decoding the first codeword. The method further includes updating a joint data statistics module coupled to each of the plurality of decoder means with the first data statistic, receiving a second codeword from the NVM means, and assigning the second codeword to a second decoder means of the plurality of decoder means based on the joint data statistics module.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specifically described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments, and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
The present disclosure generally relates to content aware decoding using shared data statistics. Each decoder generates statistical data of content it decodes and provides these statistics to a joint statistics pool. As codewords arrive at the decoder pool, the joint statistics are utilized to estimate or predict any corrupted or missing bit values. Codewords may be assigned to a specific decoder, such as a tier 1 decoder, a tier 2 decoder, or a tier 3 decoder, based on a syndrome weight or a bit error rate. The assigned decoder updates the joint statistics pool after processing the codeword. In some embodiments, each decoder may additionally maintain local statistics regarding codewords, and use the local statistics when there is a statistically significant mismatch between the local statistics and the joint statistics pool.
The host device 104 may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device 106. As illustrated in
The data storage device 106 includes a controller 108, NVM 110, a power supply 111, volatile memory 112, an interface 114, and a write buffer 116. In some embodiments, the data storage device 106 may include additional components not shown in
The interface 114 of the data storage device 106 may include one or both of a data bus for exchanging data with the host device 104 and a control bus for exchanging commands with the host device 104. The interface 114 may operate in accordance with any suitable protocol. For example, the interface 114 may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serially attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, GenZ, cache coherent interconnect for accelerators (CCIX), Open Channel SSD (OCSSD), or the like. The electrical connection of the interface 114 (e.g., the data bus, the control bus, or both) is electrically connected to the controller 108, providing an electrical connection between the host device 104 and the controller 108 and allowing data to be exchanged between the host device 104 and the controller 108. In some embodiments, the electrical connection of the interface 114 may also permit the data storage device 106 to receive power from the host device 104. For example, as illustrated in
The NVM 110 may include a plurality of memory devices or memory units. NVM 110 may be configured to store and/or retrieve data. For instance, a memory unit of NVM 110 may receive data and a message from the controller 108 that instructs the memory unit to store the data. Similarly, the memory unit of NVM 110 may receive a message from the controller 108 that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die. In some embodiments, a single physical chip may include a plurality of dies (i.e., a plurality of memory units). In some embodiments, each memory unit may be configured to store relatively large amounts of data (e.g., 128 MB, 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, etc.).
In some embodiments, each memory unit of NVM 110 may include any type of NVM devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magnetoresistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of NVM devices.
The NVM 110 may comprise a plurality of flash memory devices or memory units. NVM flash memory devices may include NAND or NOR based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller 108 may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level.
The data storage device 106 includes a power supply 111, which may provide power to one or more components of the data storage device 106. When operating in a standard mode, the power supply 111 may provide power to one or more components using power provided by an external device, such as the host device 104. For instance, the power supply 111 may provide power to the one or more components using power received from the host device 104 via the interface 114. In some embodiments, the power supply 111 may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply 111 may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, supercapacitors, batteries, and the like. In some embodiments, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases.
The data storage device 106 also includes volatile memory 112, which may be used by controller 108 to store information. Volatile memory 112 may include one or more volatile memory devices. In some embodiments, the controller 108 may use volatile memory 112 as a cache. For instance, the controller 108 may store cached information in volatile memory 112 until cached information is written to NVM 110. As illustrated in
The data storage device 106 includes a controller 108, which may manage one or more operations of the data storage device 106. For instance, the controller 108 may manage the reading of data from and/or the writing of data to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 may initiate a data storage command to store data to the NVM 110 and monitor the progress of the data storage command. The controller 108 may determine at least one operational characteristic of the storage system 100 and store the at least one operational characteristic to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 temporarily stores the data associated with the write command in the internal memory or write buffer 116 before sending the data to the NVM 110.
The controller 108 includes a decoder pool 150. The decoder pool 150 may be part of a low-density parity-check (LDPC) engine of the controller 108. The decoder pool 150 may include one or more decoders, where each of the one or more decoders has one or more gears. Each of the one or more gears may be a tier 1, a tier 2, or a tier 3 decoder. The exemplification of the different tiers of decoders is not intended to be limiting, but to provide an example of a possible embodiment. For example, the term “tier” may be utilized as a placeholder for different decoders specialized for different cases. Furthermore, more or fewer tiers of decoders than those exemplified are contemplated.
The tier 2 decoder may be utilized for less intensive decoding tasks, such as for low bit error rate (BER) codewords, and the tier 3 decoder may be utilized for more intensive decoding tasks, such as for higher BER codewords. In other embodiments, the selected decoder may be based on whether the received codeword exceeds a certain syndrome weight threshold of the tier 1 decoder, the tier 2 decoder, or the tier 3 decoder. The decoder utilized may depend on the decoding operation as well as the resources currently utilized, such as the current power consumption of the other components of the data storage device. The various decoders may trade off latency and power against correction capability, such that the tradeoff operates as a gear shifting scheme. For example, the tier 1 decoder may be a bit flipping decoder, while the tier 2 and the tier 3 decoders may be message passing decoders. In this context, the tier 2 decoder would be a faster message passing decoder, while the tier 3 decoder would be a stronger message passing decoder.
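For illustration only, the gear-shifting selection described above might be sketched as follows. The syndrome weight thresholds, the tier names, and the routing function are assumptions made for the sketch, not values taken from this disclosure:

```python
from enum import Enum

class Tier(Enum):
    TIER_1 = 1  # e.g., a fast, low-power bit flipping decoder
    TIER_2 = 2  # e.g., a faster message passing decoder
    TIER_3 = 3  # e.g., a stronger message passing decoder

# Assumed thresholds; real values depend on the code and the hardware.
TIER_1_MAX_SYNDROME_WEIGHT = 50
TIER_2_MAX_SYNDROME_WEIGHT = 200

def select_decoder_tier(syndrome_weight: int) -> Tier:
    """Route a codeword to the weakest (fastest) tier expected to succeed."""
    if syndrome_weight <= TIER_1_MAX_SYNDROME_WEIGHT:
        return Tier.TIER_1
    if syndrome_weight <= TIER_2_MAX_SYNDROME_WEIGHT:
        return Tier.TIER_2
    return Tier.TIER_3
```

Routing each codeword to the weakest sufficient tier keeps the stronger, more power-hungry decoders free for high-BER codewords, which reflects the latency/power versus correction-capability tradeoff noted above.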
Furthermore, the controller, such as the controller 108, may be configured to determine which decoder of the plurality of decoders 204a-n, 206a-n, 208a-n will decode the received codeword. The received codeword may be from volatile memory, such as the volatile memory 112 of
An Iterative Content Aware Decoder (ICAD) may be embedded with each decoder of the plurality of decoders 204a-n, 206a-n, 208a-n of the decoder pool 202. The ICAD allows the host data statistics to be embedded in the decoder computation logic, such that the decoder performs better when better host data statistics are stored. When host data statistics are not available, the data statistics are estimated for the relevant codeword. Furthermore, the ICAD works in an iterative manner, iterating between decoding the codeword and re-estimating the host data statistics.
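A minimal sketch of that iterative loop, assuming the decoding pass, the statistics estimator, and the parity check are supplied as callables (all names here are hypothetical and not part of the disclosure):

```python
from typing import Callable, List, Optional, Tuple

Bits = List[int]
Stats = dict

def icad_decode(
    codeword: Bits,
    stats: Stats,
    decode_pass: Callable[[Bits, Stats], Bits],
    estimate_stats: Callable[[Bits, Stats], Stats],
    checks_pass: Callable[[Bits], bool],
    max_iterations: int = 10,
) -> Tuple[Optional[Bits], Stats]:
    """Alternate between a decoding pass and re-estimating data statistics."""
    candidate = codeword
    for _ in range(max_iterations):
        candidate = decode_pass(candidate, stats)  # statistics act as a prior
        if checks_pass(candidate):                 # all parity checks satisfied
            return candidate, stats
        stats = estimate_stats(candidate, stats)   # refine from partial result
    return None, stats                             # hand off to a stronger tier
```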
Because the host minimum data size may be larger than a flash memory unit (FMU) size, data may be stored across multiple sequential FMUs of the NVM, such as the NVM 110 of
The histogram 350 is a graphical representation of the frequency of each value of the 4 bit words of the set 300. For example, 0000 (i.e., 0) occurs 4 times and 1111 (i.e., 15) occurs 10 times in the set 300. The histogram 350 may be dynamically updated for each new 4 bit word of the set 300. In the histogram 350, the value 15 has the highest frequency and the value 5 has the lowest frequency. When the decoder or the joint data statistics module receives a 4 bit word where one or more of the bits are corrupted, the decoder or the joint data statistics module may utilize the histogram 350 to determine what the one or more bit values may be. For example, suppose the 4 bit word x110 is received by the decoder or the joint data statistics module, where “x” refers to the bit that is unknown. The 4 bit word may either be 6, where the 4 bit word is 0110, or 14, where the 4 bit word is 1110. According to the histogram 350, the value 6 has a higher probability of occurring than the value 14, so the decoder or the joint data statistics module may estimate that the unknown bit is “0”. Without the histogram 350 statistics, however, the best estimate of the unknown bit would be 50% “0” and 50% “1”.
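A short sketch of this estimation step follows. The histogram counts are assumed for illustration; the passage states only that the value 6 is more frequent than the value 14 in the histogram 350:

```python
from itertools import product

def complete_word(pattern: str, histogram: dict) -> str:
    """Fill each unknown bit 'x' with the completion of highest frequency."""
    unknown_positions = [i for i, bit in enumerate(pattern) if bit == "x"]
    best_word, best_count = pattern, -1
    for bits in product("01", repeat=len(unknown_positions)):
        candidate = list(pattern)
        for position, bit in zip(unknown_positions, bits):
            candidate[position] = bit
        value = int("".join(candidate), 2)
        count = histogram.get(value, 0)
        if count > best_count:
            best_word, best_count = "".join(candidate), count
    return best_word

histogram_350 = {0: 4, 6: 9, 14: 2, 15: 10}  # assumed counts for illustration
print(complete_word("x110", histogram_350))   # -> "0110" (value 6), since 9 > 2
```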
In one embodiment, bits in the data are dependent. For example, in a text file, the bits are organized in bytes where each byte represents a character. In the example of a text file, the most used characters are alphanumeric, spaces and punctuation marks, while the rest of the characters are less common. This indicates that bits from the same byte are statistically dependent and knowing part of the bits within a byte increases the reliability of the other bits within the same byte. The data may be divided into groups such that all bits in a group are statistically dependent. Each group is considered as a symbol or a symbol node.
In
The probabilities may be learned during the encoding procedure, where the data is obtained without errors. However, learning the statistics during the encoding procedure may be costly as the learned statistics will need to be stored in the memory, such as the NVM 110 of
The data statistics may be jointly estimated for each decoder of the plurality of decoders 502a-n by keeping a joint representation of the data statistics for all codewords of “k” bit length, where “k” refers to a numerical value of the bit length. The data statistics may be stored as a histogram, such as the histogram 350 of
At each step that an individual decoder, such as the first decoder 502a, passes updated data statistics to the joint data statistics estimation module 504, the decoder passes only the difference from the previous estimation. For example, the difference may be expressed as “bin1: +5, bin2: −7,” and so forth. When the decoding of the codeword is completed, the host data associated with the decoded codeword is transferred to the target location, such as the host device, at which point the data statistics reflect a noiseless version of the data.
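The delta exchange described above might look like the following sketch; the class and method names are hypothetical:

```python
class JointStatisticsPool:
    """Joint representation of the data statistics shared by all decoders."""
    def __init__(self, num_bins: int):
        self.bins = [0] * num_bins

    def apply_delta(self, delta: dict) -> None:
        # Only per-bin differences are received, e.g. {1: +5, 2: -7}.
        for bin_index, change in delta.items():
            self.bins[bin_index] += change

class LocalDecoderStats:
    """Per-decoder statistics; reports only changes since the last report."""
    def __init__(self, num_bins: int):
        self.reported = [0] * num_bins
        self.current = [0] * num_bins

    def observe(self, value: int) -> None:
        self.current[value] += 1

    def flush_delta(self) -> dict:
        delta = {i: c - r
                 for i, (c, r) in enumerate(zip(self.current, self.reported))
                 if c != r}
        self.reported = self.current.copy()
        return delta

pool = JointStatisticsPool(num_bins=16)   # 16 bins for 4 bit symbols
local = LocalDecoderStats(num_bins=16)
for value in (6, 6, 15):
    local.observe(value)
pool.apply_delta(local.flush_delta())     # transmits {6: +2, 15: +1}
```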
The data from each decoder may be taken using a weight that reflects its reliability, such that codewords at the beginning of the decoding operation or with a high bit error rate are given a lower weight than codewords that are almost or fully decoded. Furthermore, past decoded codewords may also be taken into account with a certain weight, which may decrease with time to reflect changes in the data and to allow the data statistics to change with respect to time. The weights may account for the probability of receiving a certain bit of the codeword, as described in
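One way to realize such weighting is sketched below, with an assumed decay factor and assumed reliability weights:

```python
DECAY = 0.95  # assumed per-update decay; lets the pool track changing data

def weighted_update(joint_bins, contribution_bins, reliability_weight):
    """Decay past statistics, then add a reliability-weighted contribution."""
    return [DECAY * joint + reliability_weight * contribution
            for joint, contribution in zip(joint_bins, contribution_bins)]

joint = [0.0] * 16
# A high-BER codeword early in decoding contributes with low weight...
noisy_stats = [1.0 if i == 14 else 0.0 for i in range(16)]
joint = weighted_update(joint, noisy_stats, reliability_weight=0.3)
# ...while an almost fully decoded codeword contributes with high weight.
clean_stats = [1.0 if i == 6 else 0.0 for i in range(16)]
joint = weighted_update(joint, clean_stats, reliability_weight=1.0)
```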
Referring to the flowchart 700 of
A second decoder 502b receives the second codeword from the NVM at block 702. The second codeword and the first codeword are substantially similar, such that the data statistics of the two codewords are similar. The second decoder 502b decodes the second codeword and generates a second data statistic associated with the second codeword at block 704. The generated second data statistic is transferred to the joint data statistics estimation module 504 at block 706.
The joint data statistics estimation module 504 receives both the generated first data statistic and the generated second data statistic at block 708. Because the first codeword and the second codeword are substantially similar, the generated first data statistic is updated by the generated second data statistic, such that the global statistics include both the generated first data statistic and the generated second data statistic at block 710. Furthermore, the joint data statistics estimation module 504 generates a histogram utilizing the global statistics at block 710.
A third codeword is retrieved from the NVM by the controller, such as the controller 108 of
If there is a mismatch between the data statistics of a decoder, such as the first decoder, and the global statistics of the joint data statistics estimation module 604, the decoder may decide to continue using the local data statistics rather than the global statistics. Similarly, the global statistics may not be updated using the local data statistics. The mismatch may be measured by correlation or by some other form of distance, such as a Kullback-Leibler (KL) or Jensen-Shannon (JS) divergence, and different thresholds may be held for either side. For example, there may be a case where the global statistics are updated, but not the local data statistics, or vice-versa.
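A minimal sketch of that mismatch test, assuming histogram-shaped statistics and assumed threshold values (the two sides hold different thresholds, as stated above):

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """D_KL(p || q) between two normalized histograms."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def normalize(bins):
    total = sum(bins) or 1
    return [b / total for b in bins]

LOCAL_THRESHOLD = 0.10   # assumed: above this, the decoder keeps local stats
GLOBAL_THRESHOLD = 0.20  # assumed: above this, the pool rejects the update

local_stats = normalize([4, 0, 1, 9, 2])
global_stats = normalize([1, 5, 5, 1, 4])
divergence = js_divergence(local_stats, global_stats)
use_local_stats = divergence > LOCAL_THRESHOLD       # decoder-side decision
skip_global_update = divergence > GLOBAL_THRESHOLD   # pool-side decision
```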
Expanding on the example of
However, due to a mismatch between the generated third data statistic and the global statistics provided to the first data statistics estimation module 606a at block 716, the generated third data statistic may be stored locally at block 720. The first data statistics estimation module 606a may then utilize the locally stored aggregated data statistic and the associated histogram, rather than the global statistics and the associated histogram, at block 720. Each decoder of the plurality of decoders 602a-n may maintain a local data statistic, such that in the case of a mismatch, the more accurate aggregated data statistic and the associated histogram may be utilized.
Furthermore, each of the generated data statistics may be associated with a weight, such that a newly generated data statistic is weighted as more relevant than an older generated data statistic. The weight may allow for better flexibility and adaptability of the global statistics or the local data statistics to the received codewords, such that the data statistics and the associated histogram place more value on recently received codewords. For example, a second generated data statistic may have a greater weight than a first generated data statistic, where the first generated data statistic was generated prior to the second generated data statistic.
Referring to
In another example, even though each decoder group mentioned above may hold a different book of data statistics (i.e., decoders utilizing “k” bit histograms and decoders utilizing “n” bit histograms), data from the same distribution may be passed between groups. In order to pass the data between the groups, adjustments may be made to the data. For example, an 8 bit histogram may be transformed into a 4 bit histogram by accumulating over all combinations that share the same 4 MSBs, without loss of information, as shown in the sketch below. In another example, 4 bit histograms may be combined with the results of an 8 bit histogram, where a certain factor is given to each group of 8 bits. In each group of 8 bits, the relevant 4 LSBs or 4 MSBs may correspond to a certain value in the 4 bit histogram.
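The MSB accumulation described above can be sketched directly; the sample counts are arbitrary:

```python
def fold_to_4bit_msb(hist8):
    """Collapse a 256-bin (8 bit) histogram into a 16-bin (4 bit) histogram
    by accumulating all 8 bit values that share the same 4 MSBs."""
    hist4 = [0] * 16
    for value, count in enumerate(hist8):
        hist4[value >> 4] += count  # values with equal 4 MSBs share a bin
    return hist4

hist8 = [0] * 256
hist8[0x6A] = 3   # 0110 1010 -> MSB nibble 0110 (6)
hist8[0x6F] = 2   # 0110 1111 -> also MSB nibble 6
hist8[0xE1] = 5   # 1110 0001 -> MSB nibble 14
hist4 = fold_to_4bit_msb(hist8)
assert hist4[6] == 5 and hist4[14] == 5  # counts accumulate without loss
```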
By adapting the architecture of the decoder pool to account for content aware decoding using shared data statistics, the latency of the decoding may decrease, the QoS may be improved, and the correction capability of the decoder may be improved.
In one embodiment, a data storage device is disclosed that includes an NVM and a controller coupled to the NVM. The controller includes a plurality of decoders, including a first decoder configured to receive a first codeword and to generate first data statistics for the first codeword, and a second decoder configured to receive a second codeword and to generate second data statistics for the second codeword. The data storage device further includes a joint data statistics module configured to receive the first and second data statistics.
The joint data statistics module is further configured to create a data histogram based on the first and second data statistics. The controller is configured to receive a third codeword, the third codeword being assigned to one of the plurality of decoders based on the data histogram. The first codeword and the second codeword are substantially similar, and the first data statistics are updated by the second data statistics in the joint data statistics module. The first decoder stores the first data statistics. The controller is configured to receive a third codeword, where the first codeword, the second codeword, and the third codeword are each assigned to the first decoder based on a syndrome weight or a bit error rate. The first decoder provides the third codeword to one of the plurality of decoders based on the first data statistics. Each decoder of the plurality of decoders includes one of a tier 1 decoder, a tier 2 decoder, and a tier 3 decoder. The controller is configured to receive a third codeword, where the third codeword is assigned to one of the tier 1 decoder, the tier 2 decoder, and the tier 3 decoder based on the syndrome weight or the bit error rate.
In another embodiment, a controller for a data storage device is disclosed. The controller includes an I/O to one or more NVMs and a processor configured to perform a method for content aware decoding. The method includes receiving a codeword from the one or more NVMs at a first decoder, generating data statistics for the codeword, and providing the data statistics to a joint statistics module, the joint statistics module coupled to a plurality of decoders that include the first decoder.
Each decoder of the plurality of decoders includes one of a tier 1 decoder, a tier 2 decoder, and a tier 3 decoder. The method further includes receiving a second codeword from one or more of the NVMs and assigning the second codeword to one of the plurality of decoders based on the joint statistics module. Each one of the plurality of decoders locally maintains data statistics and provides data statistics to the joint statistics module. The method further includes assigning the second codeword to one of the plurality of decoders based on locally maintained data statistics. The method further includes detecting a statistical mismatch between locally maintained data statistics and the joint statistics module. One of the joint statistics module and the locally maintained data statistics includes a histogram. Newer ones of the data statistics are weighted as more relevant than older ones of the data statistics.
In another embodiment, a system for storing data is disclosed, including an NVM means, and a controller means for executing a method for content aware decoding. The method includes receiving from the NVM means, at a first decoder means of a plurality of decoder means, a first codeword, decoding the first codeword at the first decoder means, and generating a first data statistic based on decoding the first codeword. The method further includes updating a joint data statistics module coupled to each of the plurality of decoder means with the first data statistic, receiving a second codeword from the NVM means, and assigning the second codeword to a second decoder means of the plurality of decoder means based on a syndrome weight or a bit error rate.
The method further includes generating a second data statistic at the second decoder means and updating the joint data statistics module based on the second data statistic. The second data statistic is weighted as more relevant than the first data statistic. Each one of the plurality of decoder means is configured to provide data statistics to the joint data statistics module. The assignment of subsequent codewords from the NVM means to one of the plurality of decoder means is based on the syndrome weight or the bit error rate.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
This application claims benefit of U.S. Provisional Patent Application Ser. No. 63/110,738, filed Nov. 6, 2020, which is herein incorporated by reference.