The exemplary and non-limiting embodiments of this invention relate generally to memory storage systems, methods, devices and computer programs and, more specifically, relate to sharing resources among mass memory devices and the host devices to which they are operatively attached.
This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived, implemented or described. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Various types of flash-based mass storage memories currently exist. A basic premise of managed mass storage memory is to hide the flash technology complexity from the host system. A technology such as embedded multimedia card (eMMC) is one example. A managed NAND type of memory can be, for example, an eMMC, solid state drive (SSD), universal flash storage (UFS) or a mini or micro secure digital (SD) card.
One non-limiting example of a flash memory controller construction is described in “A NAND Flash Memory Controller for SD/MMC Flash Memory Card”, Chuan-Sheng Lin and Lan-Rong Dung, IEEE Transactions on Magnetics, Vol. 43, No. 2, February 2007, pp. 933-935 (hereafter referred to as Lin et al.).
Performance of the mass memory device, and of the host device utilizing the mass memory device, is highly dependent on the amount of resources that are available for the memory functions. Such resources have traditionally been the central processing unit (CPU), random access memory (RAM) and also non-volatile memory such as, for example, the non-volatile execution memory type (NOR) or the non-volatile mass memory type (NAND). Resource availability also affects reliability and usability of the mass memory device. Most host/mass memory systems currently in commerce utilize a fixed allocation of resources. In traditional memory arrangements the CPU has some means to connect to the RAM and to the non-volatile memory, and these memories themselves have the resources needed for their own internal operations. But since that paradigm became prevalent the variety of resources has greatly increased; for example, it is now common for there to be multi-core CPUs, main/slave processors, graphics accelerators, and the like.
Co-owned U.S. patent application Ser. No. 12/455,763 (filed Jun. 4, 2009) details an example in which there is one NAND where the NAND flash translation layer (FTL, a specification by the Personal Computer Memory Card International Association (PCMCIA) which provides for a P2L mapping table, wear leveling, etc.) runs side by side with the main CPU. Co-owned U.S. patent application Ser. No. 13/358,806 (filed Jan. 26, 2012) details examples in which eMMC and UFS components could also use system dynamic random access memory (DRAM) for various purposes, in which case the system CPU would not do any relevant memory-processing.
In a first aspect thereof the exemplary embodiments of this invention provide a method that comprises: in response to determining that a memory operation has been initiated which involves a mass memory device, dynamically checking for available processing resources of a host device that is operatively coupled to the mass memory device; and putting at least one of the available processing resources into use for performing the memory operation.
In a second aspect thereof the exemplary embodiments of this invention provide an apparatus that comprises a) at least one memory controller embodied in a mass memory device; and b) an interface operatively coupling the mass memory device to a host device. In this second aspect the memory controller is configured to cause the apparatus to at least: in response to determining that a memory operation has been initiated which involves a mass memory device, dynamically check for available processing resources of a host device that is operatively coupled to the mass memory device; and put at least one of the available processing resources into use for performing the memory operation. In this aspect the memory controller may utilize executable software stored in the mass memory device for causing the mass memory device to behave as noted above. For example, such software that is tangibly embodied in a memory is recited below at the third aspect of these teachings.
In yet a third aspect thereof the exemplary embodiments of this invention provide a memory storing a program of computer readable instructions which when executed by a memory processor of a mass memory device causes the mass memory device to perform at least: in response to determining that a memory operation has been initiated which involves a mass memory device, dynamically checking for available processing resources of a host device that is operatively coupled to the mass memory device; and putting at least one of the available processing resources into use for performing the memory operation.
Embodiments of these teachings relate to architectures that are particularly advantageous for mobile host devices, but they can be implemented to advantage in non-mobile environments, such as other consumer electronics including personal computers (PCs). Mass memories have evolved to include some complex logic, for purposes of wear leveling, error correction, physical-logical transformation and interface protocol handling, to name a few. At the same time the amount of system RAM (typically dynamic RAM or DRAM) is very high compared to earlier mobile devices. The inventors have concluded that in general today's mobile devices have plenty of resources and not all of them are in use at the same time.
Like other prior memory systems, a common feature of the two co-owned US patent applications referenced in the background section is that the amount of resources outside of the memories is fixed. But a fixed split of resources between the host device and the mass memory device might not be optimum in all cases, for example if the same engine is used for two different end products or even in case of different use cases of one product.
Another problem with a fixed split of resources is that in some cases the resources integrated into, for example, a mass memory are not available in a sufficient amount, as in the case of DRAM, or those resources are not in effective use. To design a mass memory module for peak performance would be costly, and would forego some of the efficiencies in manufacturing scale if different versions of a mass memory device needed to be designed for different end-products.
Embodiments of these teachings break with that paradigm of fixed divisions of the memory device and the host device so as to make system resources, which are available and suitable for memory usage, dynamically available for use with memory functions. As non-limiting examples, such resources can include system RAM (including DRAM, SRAM and other variants), CPU core or cores, digital signal processor (DSP), graphics processor, and so forth. In one particularly flexible embodiment the system and mass memory can agree about usage of the available resources and also about the manner in which the mass memory module can offer resources to the system.
In one embodiment the resources that can be made available for memory functionalities can be defined by the end user, the user of a mobile handset or PC for example. In other implementations the host system has ways to add resources to the “available” list from which those available resources can be taken into the memory usage dynamically.
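The dynamic “available” list described above can be sketched as follows. This is a minimal illustration, not the claimed implementation; the class and method names (`HostResourcePool`, `publish`, `claim`) are hypothetical, chosen only to show how a host might publish resources and how a memory controller might claim one for the duration of a memory operation.

```python
class HostResourcePool:
    """Hypothetical registry of host resources the mass memory may borrow."""

    def __init__(self):
        self._available = set()

    def publish(self, resource):
        # The host (or an end-user configuration) adds a resource
        # to the "available" list.
        self._available.add(resource)

    def withdraw(self, resource):
        # The host reclaims a resource, e.g. when a foreground task needs it.
        self._available.discard(resource)

    def claim(self, wanted):
        # The memory controller dynamically checks the list and claims
        # the first suitable resource, if any.
        for resource in wanted:
            if resource in self._available:
                self._available.discard(resource)
                return resource
        return None  # fall back to the mass memory's own minimal logic


pool = HostResourcePool()
pool.publish("dsp")
pool.publish("cpu_core_3")
claimed = pool.claim(["gpu", "dsp"])  # GPU was never offered, the DSP was
```

In this sketch a failed `claim` simply returns `None`, mirroring the document's point that the memory device can still fall back to its own fixed resources when nothing suitable is offered.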
In order that some of the host device/system resources can be used efficiently, communications between the memory module/device 20 and these dynamically reserved resources occur over the virtual interface 18B. This interface can be embedded in an external memory interface 18A such as, for example, a mass storage memory bus shown in the eMMC of
The mass storage memory 20 includes a microcontroller, or more simply a controller 22 that is connected via at least one internal bus 27 with a volatile RAM 24, a non-volatile mass memory 26 (e.g., a multi-gigabyte flash memory mass storage) and a MSMB interface (I/F) 28. The controller 22 operates in accordance with stored program instructions. The program instructions may be stored in the RAM 24 or in a ROM or in the mass memory 26. The mass storage memory 20 may be embodied as an MMC, UFS, eMMC or a SD device, as non-limiting examples, and may be external to (plugged into) the host device 10 or installed within the host device 10. Note that the mass memory 26 may in some embodiments additionally store a file system, in which case the RAM 24 may store file system-related metadata such as one or more data structures comprised of bit maps, file allocation table data and/or other file system-associated information.
The embodiments of the invention described in commonly-assigned U.S. patent application Ser. No. 12/455,763 provide a technique to share the RAM 14 of the host device 10 with the mass storage memory device 20. It can be assumed that the host device 10 (e.g., a mobile computer, a cellular phone, a digital camera, a gaming device, a PDA, etc.) has the capability to allocate and de-allocate the RAM 14. The allocation of the RAM 14 may be performed dynamically or it may be performed statically. The allocation of a portion of the RAM may be performed in response to a request received at the host device 10, or at the initiative of the host device 10.
For the case in which the CPU 12 is a multi-core processor, one straightforward implementation of these teachings is for the mass memory module 20 to partially or fully utilize one of the multi-core engines of the system 10 for complex mass memory operation, such as for example the wear-leveling or bad block management noted above. In this case the system DRAM (e.g., shown as RAM 14 at
In another embodiment the host device 10 can indicate resource availability information in corresponding preset registers that are in the mass memory device 20, which the memory controller 22 checks. The host device can initially store this information when initializing the mass memory device 20 itself, and dynamically update the relevant registers when initializing some memory operation such as a write operation that involves the mass memory device 20. In a still further embodiment the host device 10 can indicate the resource availability information to the mass memory device 20 with the access command itself, for example indicating with the write command to the mass memory device 20 that the DSP 13 of the host device 10 is available for this write operation. In a further non-limiting embodiment the mass memory device 20 can request the resource availability list from the host device 10, at any time (periodic) or along with some specific memory access request.
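The register-based signaling just described can be illustrated with a small sketch. The bit assignments and names below are hypothetical (the document does not define a register map); the point is only that the host writes an availability mask at initialization or with each operation, and the memory controller 22 reads it back before acting.

```python
# Hypothetical bit assignments for a resource-availability register kept in
# the mass memory device; an actual register map would be implementation-defined.
RES_DSP      = 1 << 0
RES_GPU      = 1 << 1
RES_CPU_CORE = 1 << 2
RES_DRAM     = 1 << 3


class AvailabilityRegister:
    """Preset register the host updates and the memory controller checks."""

    def __init__(self):
        self.value = 0

    def host_update(self, mask):
        # The host writes this when initializing the device, and again
        # dynamically when initiating a memory operation.
        self.value = mask

    def controller_check(self, needed):
        # The memory controller reads the register before the operation:
        # are ALL of the needed resources currently offered?
        return (self.value & needed) == needed


reg = AvailabilityRegister()
reg.host_update(RES_DSP | RES_DRAM)
```

The same mask could equally ride along with the access command itself, as the alternative embodiment in the paragraph above notes.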
The host device 10 writes a chunk of data to the memory module 20, which as above may be a managed NAND such as a UFS. This write operation involves an interim step of writing to the host system DRAM 14, where the data resides until the controller 22 of the mass memory 20 can process it.
Further details of the write data being resident in the DRAM 14 of the host 10 prior to final writing to the mass memory device 20 may be seen in co-owned U.S. patent application Ser. No. 13/358,806 which was referenced in the background section above, and highlights of those teachings are summarized following the description of
At block 406 the memory module controller 22 recognizes that this particular write access requires DSP processing, and so it checks the virtual interface 18B to see if that or some substitute is available from the host system 10. Recall that in this example the mass memory device is designed to have minimal functionality, so the memory module controller 22 is incapable of fully handling this write process on its own. The virtual interface check is dynamic, based on the specific needs of this particular write operation. The DSP 13 in this instance is available, so the mass memory device 20/controller 22 sends to the DSP 13 a request that it process the write data, which at this juncture is resident in the DRAM 14 of the host 10. The DSP 13 complies with this request at block 412 and, after completing its processing, at block 414 informs the memory module controller 22 that the processed data is available again in the system DRAM 14. Then finally at block 416 the memory module controller 22 writes the processed data from the DRAM 14 of the host device 10 to the mass memory 26 of the mass memory device 20.
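The write flow above can be condensed into a short simulation. The function below is a hypothetical sketch, not the patented method: the data is staged in a stand-in for host DRAM, the borrowed DSP transform is applied only if the host offered the DSP, and the result is then committed to a stand-in for the flash array.

```python
def handle_write(write_data, dsp_available, dsp_process):
    """Sketch of the described write flow: data sits in host DRAM, the
    controller borrows the host DSP if it is offered, then commits to flash.
    All names here are illustrative, not taken from the specification."""
    host_dram = {"buffer": write_data}  # interim copy resident in system DRAM
    if dsp_available:
        # Dynamic check succeeded over the virtual interface: ask the host
        # DSP to process the data in place in host DRAM.
        host_dram["buffer"] = dsp_process(host_dram["buffer"])
    # Final step: controller writes the (possibly processed) data to flash.
    mass_memory = host_dram["buffer"]
    return mass_memory


# Example: a trivial stand-in "DSP" transform (uppercasing the bytes).
committed = handle_write(b"raw", dsp_available=True, dsp_process=lambda d: d.upper())
```

When the DSP is withdrawn by the host, the same call with `dsp_available=False` commits the unprocessed data, which matches the fallback behavior a fuller controller would need.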
A read process according to these teachings can follow similar process steps as shown at
A similar process as
As mentioned above, commonly-assigned U.S. patent application Ser. No. 12/455,763 provides a model in which the mass storage memory 20 is provided with read/write access to the system (host) DRAM. Commonly assigned U.S. patent application Ser. No. 13/358,806 extends that concept to enable the mass storage memory 20 to move data within the system DRAM, either logically (by the use of pointers) or physically. The actual move could occur within the DRAM, or the data could travel back and forth over the system DRAM bus between the system DRAM and a Mass Memory Host Controller DMA buffer. The Mass Memory Host Controller can be considered to function in this regard as a DMA master and thus can include its own associated DMA data buffers for this purpose.
Commonly assigned U.S. patent application Ser. No. 13/358,806 provides several specific example embodiments. In a first embodiment a separate physical address space in the system DRAM is reserved for the mass storage memory, or a logical space is reserved if the system DRAM operates in a logical address space. The mass storage memory can utilize this address space freely, and is responsible for the management functions of this address space such as allocation/de-allocation functions and other functions.
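Since the mass storage memory alone manages allocation and de-allocation inside its reserved window, that responsibility can be pictured as a small allocator. The sketch below is hypothetical (a naive first-fit allocator without coalescing); the base address and size are arbitrary illustration values.

```python
class ReservedRegion:
    """Hypothetical allocator the mass storage memory could run over its
    reserved window of system DRAM; it alone manages allocate/free there."""

    def __init__(self, base, size):
        self.free = [(base, size)]  # list of (start, length) free extents

    def allocate(self, length):
        # First-fit: return the start address of the first extent that fits.
        for i, (start, extent) in enumerate(self.free):
            if extent >= length:
                self.free[i] = (start + length, extent - length)
                return start
        return None  # reserved window exhausted

    def deallocate(self, start, length):
        # Return an extent to the free list (no coalescing in this sketch).
        self.free.append((start, length))


region = ReservedRegion(base=0x8000_0000, size=4096)
buf = region.allocate(1024)
```

A real controller would also need coalescing and alignment handling; the point here is only that this bookkeeping lives in the memory module, not in the host OS.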
There is some source of data such as an application or a file system cache or a file cache entity (as non-limiting examples) which has data to be stored into the mass memory module. The data is moved to the transfer buffer as the transfer data by a file system/driver for subsequent delivery to the mass memory module. Optionally the data could be moved directly from its original location thereby bypassing the transfer buffer. An access list is created in the system DRAM for the application such as by an OS utility and points to the location of the data. Such an “application” (if understood in a conventional sense as a third party application) cannot itself create any access lists but instead creates read/write accesses and functions as an initiator. The access lists are created typically by some OS services/memory subsystem (e.g. some driver layer or some OS utility) based on accesses coming through the file system layer. In effect the access lists are constructed or built for the application. An initiator may be, as non-limiting examples, an application, a file system, a driver or an OS utility.
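The access-list construction described above can be sketched briefly. Per the paragraph, the list is built by an OS service or driver layer on behalf of the initiator, never by a third-party application itself; the entry layout and function name below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AccessEntry:
    """Hypothetical access-list entry, built by an OS utility/driver layer
    on behalf of the initiator; it points at where transfer data resides."""
    address: int  # location of the data in system DRAM
    length: int   # number of bytes at that location


def build_access_list(chunks):
    # The OS service constructs the list from accesses coming through the
    # file system layer; the application only initiated the read/write.
    return [AccessEntry(addr, len(data)) for addr, data in chunks]


alist = build_access_list([(0x2000, b"hello"), (0x3000, b"world!")])
```

The mass memory module can then walk such a list to find the transfer data without the data itself having been copied anywhere special.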
In the commonly owned application Ser. No. 13/358,806 an access may take place by the host device as follows (assuming that the host device has already correctly initiated the mass storage memory).
Note that the UFS memory module 20 need not process received access commands sequentially. For example, if, before processing the write command from the second initiator, another write command having a higher indicated priority arrives from a third initiator (where the write data has also been stored in the allocated portion 14G), the UFS memory module 20 could process the write command from the third initiator first and then process the write command from the second initiator.
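That out-of-order, priority-driven handling can be modeled with an ordinary priority queue. This is an illustrative sketch only; the class name and priority scale are invented, and a real UFS module would order commands per its own task-management rules.

```python
import heapq


class CommandQueue:
    """Hypothetical command queue: pending commands are served by indicated
    priority rather than strictly in arrival order."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order within a priority

    def submit(self, priority, initiator):
        # Negate priority: heapq is a min-heap, we want highest priority first.
        heapq.heappush(self._heap, (-priority, self._seq, initiator))
        self._seq += 1

    def next_command(self):
        return heapq.heappop(self._heap)[2]


q = CommandQueue()
q.submit(priority=1, initiator="second")
q.submit(priority=5, initiator="third")  # higher priority, arrives later
```

As in the example above, the later but higher-priority command from the third initiator is served before the earlier command from the second initiator.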
In another embodiment detailed at commonly owned U.S. patent application Ser. No. 13/358,806 there need be no specific separate memory addresses reserved in the system DRAM for the mass memory module. Instead the mass memory module can have access to any (or almost any) location in the system DRAM. In this case, instead of moving data physically in the system DRAM, the mass memory module can control a list of memory pointers created by the host CPU. By modifying the lists of pointers (one list of pointers for the host and another for the mass memory module) the mass memory module can virtually “move” data from host CPU-controlled logical memory space to space controlled by the mass memory module. Note that in this case the transfer buffer may still be present; however, there is no need for the physical portion allocated for the mass memory module.
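The logical “move” just described can be sketched as two pointer lists with an ownership transfer between them; no bytes are copied. The class and method names below are hypothetical illustrations of the mechanism, not the claimed structure.

```python
class PointerLists:
    """Hypothetical sketch of the logical 'move': data stays in place in
    system DRAM; only the owning pointer list changes."""

    def __init__(self):
        self.host_owned = []    # pointers controlled by the host CPU
        self.memory_owned = []  # pointers controlled by the mass memory module

    def virtual_move_to_memory(self, pointer):
        # Transfer ownership of the pointed-to data without copying it.
        self.host_owned.remove(pointer)
        self.memory_owned.append(pointer)


lists = PointerLists()
lists.host_owned.append(0x1000)       # host has buffered write data here
lists.virtual_move_to_memory(0x1000)  # ownership changes; data does not move
```

This is the sense in which the mass memory module “moves” data it never physically touches, saving a traversal of the system DRAM bus.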
In all of these read-related embodiments from the commonly owned U.S. patent application Ser. No. 13/358,806, if previously cached data (logical address) is read then the read data is copied from the cache to the system memory side (host device 10, specifically the DRAM 14), and the cached data remains cached and will finally be written to the non-volatile memory 26 of the memory module 20 as soon as the memory controller 22 has the resources to perform the write operation.
Certain of the above embodiments of these teachings are summarized at the process flow diagram of
As was detailed above by various non-limiting examples, the memory controller 22 can determine that the memory operation of block 502 has been initiated via the virtual interface 18B, or by checking the preset registers that the host device 10 updates when initializing some memory operation that involves the mass memory device 20, or from the host device's access command itself, or in response to its request of the host device 10 for the availability list.
In the above examples the operative coupling of block 502 was via the physical MSMB 18A in which the virtual interface 18B was embedded. The memory controller 22 or memory host controller is able to determine that a memory operation has been initiated which involves the mass memory device, as block 402 states, because it is the controller of that same mass memory device 20, so of course it will know if it initiates that operation/function directly or if some other processor or entity in the host device 10 initiated that memory operation. And in another embodiment the memory host controller of the host device 10 itself initiates the memory operation. In the above examples the initiated memory operation was a read or write operation, but as further non-limiting examples it could also be a wear leveling analysis, bad memory block management, and even data encryption or decryption (depending on where the data is being stored before or after the encryption/decryption), to name but a few.
Also in the above examples, non-limiting embodiments of those available processing resources that are put into use at block 404 include any one or more of: a core engine of a multi-core CPU; a DSP; different types of sensors (such as a temperature, acceleration, movement, location or light sensor); and a graphics processor.
The specific DRAM example that above was detailed with respect to
In a specific but non-limiting embodiment above the host device provided an availability list to the mass memory device over the virtual interface, wherein the virtual interface is embedded to a physical mass storage memory bus that operatively couples the host device and the mass memory device. In other specific but non-limiting embodiments the mass memory device was a MultiMediaCard MMC; an embedded MultiMediaCard eMMC; a microSD card; a solid state drive SSD; or a universal flash storage UFS device; or more generally it is a mass memory on-board the host device or a mass memory removable from the host device. Also in the above embodiments the host device was a mobile terminal (or any number of other consumer electronics, mobile or otherwise).
From the above detailed description and non-limiting examples it is clear that at least certain embodiments of these teachings provide the technical effect that the same memory module can provide different performance and functionality depending on the available system/host resources. These teachings overcome the prior limitation of fixed resources, in that the memory controller can dynamically check what resources are available. Cost savings may be realized, for example, where the mass memory device need not have logic defined for its maximum use case, but can instead rely on requesting resources from the host device as tasks from the system are initiated and ongoing. Cost savings can similarly be obtained by reducing the size of the RAM in the mass memory module, since it would no longer need to account for maximum performance in the mass memory device itself. Implementation may be more complex in that resources are coordinated in real time between the mass memory device and the host device, but this is seen to be far overshadowed by the potential cost savings in manufacturing the mass memory devices for something less than maximum performance, yet without foregoing that maximum performance as noted above.
We note that the various blocks shown in
In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
It should thus be appreciated that at least some aspects of the exemplary embodiments of the inventions may be practiced in various components such as integrated circuit chips and modules, and that the exemplary embodiments of this invention may be realized in an apparatus that is embodied as an integrated circuit. The integrated circuit, or circuits, may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor or data processors, a digital signal processor or processors, baseband circuitry and radio frequency circuitry that are configurable so as to operate in accordance with the exemplary embodiments of this invention.
Various modifications and adaptations to the foregoing exemplary embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this invention.
It should be further noted that the terms “connected,” “coupled,” “operatively coupled”, or any variant thereof, mean any connection or coupling, either direct or indirect, between two or more elements, and may encompass the presence of one or more intermediate elements between two elements that are “connected” or “coupled” together. The coupling or connection between the elements can be physical, logical, or a combination thereof. As employed herein two elements may be considered to be “connected” or “coupled” together by the use of one or more wires, cables and/or printed electrical connections, as well as by the use of electromagnetic energy, such as electromagnetic energy having wavelengths in the radio frequency region, the microwave region and the optical (both visible and invisible) region, as several non-limiting and non-exhaustive examples.
Furthermore, some of the features of the various non-limiting and exemplary embodiments of this invention may be used to advantage without the corresponding use of other features. As such, the foregoing description should be considered as merely illustrative of the principles, teachings and exemplary embodiments of this invention, and not in limitation thereof.
Number | Name | Date | Kind |
---|---|---|---|
5586291 | Lasker et al. | Dec 1996 | A |
5701516 | Cheng et al. | Dec 1997 | A |
5802069 | Coulson | Sep 1998 | A |
5924097 | Hill et al. | Jul 1999 | A |
6067300 | Baumert et al. | May 2000 | A |
6115785 | Estakhri et al. | Sep 2000 | A |
6373768 | Woo et al. | Apr 2002 | B2 |
6513094 | Magro | Jan 2003 | B1 |
6522586 | Wong | Feb 2003 | B2 |
6665747 | Nazari | Dec 2003 | B1 |
6842829 | Nichols et al. | Jan 2005 | B1 |
7136963 | Ogawa et al. | Nov 2006 | B2 |
7181574 | Lele | Feb 2007 | B1 |
7233538 | Wu et al. | Jun 2007 | B1 |
7321958 | Hofstee et al. | Jan 2008 | B2 |
7395176 | Chung et al. | Jul 2008 | B2 |
7450456 | Jain et al. | Nov 2008 | B2 |
7480749 | Danilak | Jan 2009 | B1 |
7571295 | Sakarda et al. | Aug 2009 | B2 |
7760569 | Ruf et al. | Jul 2010 | B2 |
8190803 | Hobson et al. | May 2012 | B2 |
8218137 | Noh et al. | Jul 2012 | B2 |
20020093913 | Brown et al. | Jul 2002 | A1 |
20020108014 | Lasser | Aug 2002 | A1 |
20030028737 | Kaiya et al. | Feb 2003 | A1 |
20030137860 | Khatri et al. | Jul 2003 | A1 |
20040010671 | Sampsa et al. | Jan 2004 | A1 |
20040203670 | King et al. | Oct 2004 | A1 |
20040221124 | Beckert et al. | Nov 2004 | A1 |
20050010738 | Stockdale et al. | Jan 2005 | A1 |
20050071570 | Takasugi et al. | Mar 2005 | A1 |
20050097280 | Hofstee et al. | May 2005 | A1 |
20060041888 | Radulescu et al. | Feb 2006 | A1 |
20060069899 | Schoinas et al. | Mar 2006 | A1 |
20060075147 | Schoinas et al. | Apr 2006 | A1 |
20060075395 | Lee et al. | Apr 2006 | A1 |
20070088867 | Cho et al. | Apr 2007 | A1 |
20070207854 | Wolf et al. | Sep 2007 | A1 |
20070234006 | Radulescu et al. | Oct 2007 | A1 |
20070283078 | Li et al. | Dec 2007 | A1 |
20080104291 | Hinchey | May 2008 | A1 |
20080127131 | Gao et al. | May 2008 | A1 |
20080162792 | Wu et al. | Jul 2008 | A1 |
20080228984 | Yu et al. | Sep 2008 | A1 |
20080281944 | Vorne et al. | Nov 2008 | A1 |
20090106503 | Lee et al. | Apr 2009 | A1 |
20090157950 | Selinger | Jun 2009 | A1 |
20090164705 | Gorobets | Jun 2009 | A1 |
20090182940 | Matsuda et al. | Jul 2009 | A1 |
20090182962 | Khmelnitsky et al. | Jul 2009 | A1 |
20090198871 | Tzeng | Aug 2009 | A1 |
20090198872 | Tzeng | Aug 2009 | A1 |
20090210615 | Struk et al. | Aug 2009 | A1 |
20090216937 | Yasufuku | Aug 2009 | A1 |
20090222629 | Yano et al. | Sep 2009 | A1 |
20090307377 | Anderson et al. | Dec 2009 | A1 |
20090327584 | Tetrick et al. | Dec 2009 | A1 |
20100005281 | Buchmann et al. | Jan 2010 | A1 |
20100030961 | Ma et al. | Feb 2010 | A9 |
20100037012 | Yano et al. | Feb 2010 | A1 |
20100100648 | Madukkarumukumana et al. | Apr 2010 | A1 |
20100106886 | Marcu et al. | Apr 2010 | A1 |
20100106901 | Higeta et al. | Apr 2010 | A1 |
20100115193 | Manus et al. | May 2010 | A1 |
20100161882 | Stern et al. | Jun 2010 | A1 |
20100169558 | Honda et al. | Jul 2010 | A1 |
20100172180 | Paley et al. | Jul 2010 | A1 |
20100250836 | Sokolov et al. | Sep 2010 | A1 |
20100293420 | Kapil et al. | Nov 2010 | A1 |
20100312947 | Luukkainen et al. | Dec 2010 | A1 |
20110082967 | Deshkar et al. | Apr 2011 | A1 |
20110087804 | Okaue et al. | Apr 2011 | A1 |
20110099326 | Jung et al. | Apr 2011 | A1 |
20110264860 | Hooker et al. | Oct 2011 | A1 |
20110296088 | Duzly et al. | Dec 2011 | A1 |
20120102268 | Smith et al. | Apr 2012 | A1 |
20120131263 | Yeh | May 2012 | A1 |
20120131269 | Fisher et al. | May 2012 | A1 |
20120151118 | Flynn et al. | Jun 2012 | A1 |
20120210326 | Torr et al. | Aug 2012 | A1 |
20130138840 | Kegel et al. | May 2013 | A1 |
20130145055 | Kegel et al. | Jun 2013 | A1 |
20130339635 | Amit et al. | Dec 2013 | A1 |
20140068140 | Mylly | Mar 2014 | A1 |
20150039819 | Luukkainen et al. | Feb 2015 | A1 |
Number | Date | Country |
---|---|---|
2005200855 | Sep 2004 | AU |
0481716 | Apr 1992 | EP |
59135563 | Aug 1984 | JP |
WO9965193 | Dec 1999 | WO |
WO2004084231 | Sep 2004 | WO |
WO2005088468 | Jun 2005 | WO |
Entry |
---|
Lin et al., “A NAND Flash Memory Controller for SD/MMC Flash Memory Card”, IEEE Transactions on Magnetics, vol. 43, No. 2, (Feb. 2007), pp. 933-935. |
“How to Boot an Embedded System from an eMMC Equipped with a Microsoft FAT File System”, AN2539 Numonyx Application Note, Nov. 2008, pp. 1-25. |
Embedded MultiMediaCard (eMMC) Mechanical Standard, JESD84-C43, JEDEC Standard, JEDEC Solid State Technology Association, Jun. 2007. |
Embedded MultiMediaCard (eMMC) Product Standard, High Capacity, JEDEC Solid State Technology Association, JEDEC Standard, JESD84-A42, Jun. 2007. |
Li et al., “A Method for Improving Concurrent Write Performance by Dynamic Mapping Virtual Storage System Combined with Cache Management”, 2011 IEEE 7th International Conference on Parallel and Distributed Systems, Dec. 7-8, 2011. |
The PCT Search Report and Written Opinion mailed Apr. 16, 2014 for PCT application No. PCT/US13/49434, 8 pages. |
Apostolakis, et al., “Software-Based Self Testing of Symmetric Shared-Memory Multiprocessors”, IEEE Transactions on Computers, vol. 58, No. 12, Dec. 2009, 13 pages. |
JEDEC Standard, “Embedded MultiMediaCard (eMMC) Product Standard, High Capacity,” JESD84-A42, Jun. 2007, 29 pages. |
JEDEC Standard, “Embedded MultiMediaCard (eMMC) eMMC/Card Product Standard, High Capacity, Including Reliable Write, Boot, and Sleep Modes,” (MMCA, 4.3), JESD84-A43, Nov. 2007, 166 pages. |
JEDEC Standard, “Embedded MultiMediaCard (eMMC) Mechanical Standard,” JESD84-C43, Jun. 2007, 13 pages. |
Numonyx, “How to boot an embedded system from an eMMC™ equipped with a Microsoft FAT file system,” Application note AN2539, Nov. 2008, pp. 1-25. |
Office Action for U.S. Appl. No. 13/358,806, mailed on Nov. 27, 2013, Kimmo J. Mylly, “Apparatus and Method to Provide Cache Move With Non-Volatile Mass Memory System”, 26 pages. |
Office Action for U.S. Appl. No. 14/520,030, mailed on Dec. 4, 2014, Olli Luukkainen, “Apparatus and Method to Share Host System RAM with Mass Storage Memory RAM”, 6 pages. |
Office Action for U.S. Appl. No. 13/596,480, mailed on Mar. 13, 2014, Kimmo J. Mylly, “Dynamic Central Cache Memory”, 15 pages. |
Office Action for U.S. Appl. No. 12/455,763, mailed on Mar. 4, 2014, Luukkainen et al., “Apparatus and method to share host system ram with mass storage memory ram”, 6 pages. |
Office Action for U.S. Appl. No. 12/455,763, mailed on Aug. 1, 2013, Luukkainen et al., “Apparatus and method to share host system ram with mass storage memory ram”, 28 pages. |
Final Office Action for U.S. Appl. No. 13/358,806, mailed on Sep. 10, 2014, Kimmo J. Mylly, “Apparatus and Method to Provide Cache Move With Non-Volatile Mass Memory System”, 27 pages. |
The PCT Search Report mailed Feb. 25, 2015 for PCT application No. PCT/US2014/069616, 10 pages. |
The PCT Search Report and Written Opinion mailed Mar. 6, 2014 for PCT application No. PCT/US13/56980, 11 pages. |
Tanenbaum, “Structured Computer Organization”, Prentice-Hall, Inc., 1984, 5 pages. |
Office Action for U.S. Appl. No. 13/358,806, mailed on Apr. 30, 2015, Kimmo J. Mylly, “Apparatus and Method to Provide Cache Move With Non-Volatile Mass Memory System”, 42 pages. |
Final Office Action for U.S. Appl. No. 14/520,030, mailed on May 20, 2015, Olli Luukkainen, “Apparatus and Method to Share Host System RAM with Mass Storage Memory RAM”, 6 pages. |
Number | Date | Country | |
---|---|---|---|
20130346668 A1 | Dec 2013 | US |