In an embedded microcontroller (MCU) system, because cache memory (e.g. static random access memory (SRAM)) is expensive, a larger, slower, higher-power memory such as dynamic random access memory (DRAM) is generally used together with the cache to provide a large memory space; however, this arrangement consumes considerable power. In some ultra-low-power applications, such as a sensor hub or a pedometer, the DRAM power consumption is too high, so the DRAM must be powered down during some periods to save power. However, because a conventional cache line is small, the DRAM tends to be accessed randomly, so that the DRAM cannot sleep long enough to save power, and the frequent power on/off further increases the DRAM overhead.
It is therefore an objective of the present invention to provide a cache management method which can manage the objects loaded from the DRAM to the cache in an object-oriented manner. By using the cache management method of the present invention, the DRAM may be powered down for longer periods with less overhead, thereby solving the above-mentioned problems.
According to one embodiment of the present invention, a microcontroller comprises a processor, a first memory and a cache controller. The first memory comprises at least a working space. The cache controller is coupled to the first memory, and is arranged for managing the working space of the first memory, and dynamically loading at least one object from a second memory to the working space of the first memory in an object-oriented manner.
According to another embodiment of the present invention, a cache management method is provided, and the cache management method comprises the steps of: providing a first memory having a working space; and using a cache controller to dynamically load at least one object from a second memory to the working space of the first memory in an object-oriented manner.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” The terms “couple” and “couples” are intended to mean either an indirect or a direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
The microcontroller 100 is coupled to a DRAM 150 having a power unit 152 and an object pool 154, and the microcontroller 100 loads objects from the object pool 154 to execute the corresponding functions, where the term "object" refers to a collection of code or data for a specific functionality, which can be linked and loaded into the SRAM 120 for execution. In detail, the dynamic loader 136 of the cache controller 130 loads the objects from the object pool 154 to the working space 122 of the SRAM 120, and the CPU 110 reads and executes the loaded objects in the working space 122.
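The role of the dynamic loader can be illustrated with a minimal C sketch. All names here (`object_t`, `dynamic_load`, the 4 KB working-space size) are hypothetical stand-ins rather than details of the embodiment, and the DRAM object pool is simulated by an ordinary buffer:

```c
#include <stdint.h>
#include <string.h>

#define WORKSPACE_SIZE 4096           /* hypothetical SRAM working-space size */

/* Hypothetical descriptor for an "object": a unit of code or data that
 * stays in the DRAM object pool until the dynamic loader copies it in. */
typedef struct {
    int         id;
    const void *pool_addr;            /* location in the DRAM object pool */
    size_t      size;                 /* object size in bytes */
    void       *ws_addr;              /* SRAM address once loaded, else NULL */
} object_t;

static uint8_t workspace[WORKSPACE_SIZE];  /* stands in for the SRAM working space */
static size_t  ws_used;

/* Copy one object from the (simulated) DRAM pool into the working space;
 * returns 0 on success, -1 when no room is available. */
static int dynamic_load(object_t *obj)
{
    if (ws_used + obj->size > WORKSPACE_SIZE)
        return -1;
    obj->ws_addr = &workspace[ws_used];
    memcpy(obj->ws_addr, obj->pool_addr, obj->size);
    ws_used += obj->size;
    return 0;
}
```

After a successful `dynamic_load`, the CPU would execute or read the object at `ws_addr`, while the original copy remains in the pool so the SRAM block can later be reclaimed without a write-back.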
In this embodiment, the hardware cache 140 serves as an optimization tool for the operations of the cache controller 130. In detail, when the microcontroller 100 has a low-power feature, the hardware cache 140 may be used to group tasks to increase cache locality, and to dispatch tasks that have no ultra-low-power requirement to the DRAM 150, to reduce the usage of the cache controller 130. When the microcontroller 100 has an ultra-low-power feature, the hardware cache 140 may not be used to store the objects corresponding to the ultra-low-power requirement, to prevent the DRAM 150 from being accessed more frequently.
In order to save the power of a system comprising the microcontroller 100 and the DRAM 150, the cache controller 130 can manage the objects in the SRAM 120 and the DRAM 150 in an object-oriented manner, so that the DRAM 150 is not powered on/off too frequently. In detail, the object-oriented policy manager 132 is arranged to provide at least one cache policy indicating relationships among a plurality of objects. The object allocator 134 is arranged to refer to the cache policy provided by the object-oriented policy manager 132 to determine at least one object, and the dynamic loader 136 loads the at least one object, determined by the object allocator 134, from the DRAM 150 to the SRAM 120. By using the cache controller 130 to load the objects in the object-oriented manner, the object allocator 134 can control the power unit 152 of the DRAM 150 more efficiently to save power, making the system suitable for ultra-low-power applications such as the sensor hub or the pedometer.
Specifically, the cache policies are programmable by the user or according to a user scenario analyzed by a processor; for example, the cache policy may be a group policy and/or an exclusive policy. The group policy may indicate a plurality of specific objects that are always used together: if one of the specific objects is to be loaded from the object pool 154 to the working space 122, the other specific objects are loaded to the working space 122 simultaneously. The exclusive policy may indicate a plurality of specific objects that are never used together: if one of the specific objects is to be loaded from the object pool 154 to the working space 122, the dynamic loader 136 moves the other specific objects, if any, from the working space 122 back to the object pool 154.
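One plausible way to encode such policies is as small lookup tables that the allocator consults before each load. The following C sketch is illustrative only: the table contents, the `expand_group` and `exclusive_partner` names, and the fixed table sizes are all assumptions, not part of the embodiment.

```c
#include <stddef.h>

#define MAX_OBJS 8

/* Hypothetical group policy: each row lists object ids that are always
 * used together, terminated by -1. */
static const int group_table[][3] = {
    { 0, 1, -1 },                      /* objects 0 and 1 form a group */
};

/* Hypothetical exclusive policy: each row pairs two object ids that are
 * never resident in the working space at the same time. */
static const int excl_pairs[][2] = {
    { 2, 3 },
};

/* Expand a requested object id into the full set to load (group policy).
 * Returns the number of ids written to `out`. */
static int expand_group(int id, int out[MAX_OBJS])
{
    for (size_t g = 0; g < sizeof group_table / sizeof group_table[0]; g++) {
        for (int i = 0; group_table[g][i] >= 0; i++) {
            if (group_table[g][i] == id) {
                int n = 0;
                for (int j = 0; group_table[g][j] >= 0; j++)
                    out[n++] = group_table[g][j];
                return n;              /* the whole group is loaded together */
            }
        }
    }
    out[0] = id;                       /* not in any group: load alone */
    return 1;
}

/* Return the exclusive partner of `id`, or -1 if none (exclusive policy). */
static int exclusive_partner(int id)
{
    for (size_t p = 0; p < sizeof excl_pairs / sizeof excl_pairs[0]; p++) {
        if (excl_pairs[p][0] == id) return excl_pairs[p][1];
        if (excl_pairs[p][1] == id) return excl_pairs[p][0];
    }
    return -1;
}
```

With tables like these, a load request for object 0 would be expanded to {0, 1}, while a load of object 2 would first trigger an unload of object 3 if it is resident.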
Step 602: The cache controller 130 tries to load an object A from the DRAM 150 (Step 602-1), or tries to load a group G comprising a plurality of objects from the DRAM 150 (Step 602-2).
Step 604: The cache controller 130 determines whether the exclusive policy is applied and the object A has an exclusive pair. If yes, the flow enters Step 608; if not, the flow enters Step 606.
Step 606: The cache controller 130 allocates a block within the working space 122 for storing the object A or the group G.
Step 608: The cache controller 130 determines whether one of the exclusive pairs has been loaded into the working space 122. If yes, the flow enters Step 610; if not, the flow enters Step 606.
Step 610: The cache controller 130 unloads the one of the exclusive pairs to release its block(s).
Step 612: The cache controller 130 assigns the released block(s) to the object A.
Step 614: The cache controller 130 determines whether the block assignment succeeds. If yes, the flow enters Step 618; if not, the flow enters Step 616.
Step 616: The cache controller 130 uses the least recently used (LRU) mechanism to release block(s) of the working space 122, and the flow goes back to Step 606.
Step 618: The cache controller 130 loads the object A or the group G from the DRAM 150 to the working space 122.
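The flow of Steps 602 to 618 can be condensed into the following toy C model. It assumes a fixed array of four blocks for the working space 122, one block per object, a single hard-coded exclusive pair, and a trivial stand-in for the LRU mechanism; none of these simplifications come from the embodiment, and the actual DRAM-to-SRAM copy is omitted.

```c
#include <stdbool.h>

#define NBLOCKS 4
static int blocks[NBLOCKS] = { -1, -1, -1, -1 };   /* -1 marks a free block */

static bool is_loaded(int id)
{
    for (int i = 0; i < NBLOCKS; i++)
        if (blocks[i] == id) return true;
    return false;
}

static void unload(int id)                  /* Step 610: release block(s) */
{
    for (int i = 0; i < NBLOCKS; i++)
        if (blocks[i] == id) blocks[i] = -1;
}

static bool block_alloc(int id)             /* Steps 606/612: assign a block */
{
    for (int i = 0; i < NBLOCKS; i++)
        if (blocks[i] == -1) { blocks[i] = id; return true; }
    return false;
}

static void lru_release(void)               /* Step 616: stand-in for LRU */
{
    blocks[0] = -1;                         /* evict an arbitrary block */
}

/* Hypothetical exclusive pair: objects 2 and 3 (exclusive policy). */
static int exclusive_partner(int id)
{
    return id == 2 ? 3 : id == 3 ? 2 : -1;
}

/* Steps 602-618 in order: check the exclusive pair, unload it if resident,
 * allocate block(s), fall back to LRU eviction on failure, then load. */
static void cache_load(int id)              /* Step 602: load request */
{
    int partner = exclusive_partner(id);    /* Step 604 */
    if (partner >= 0 && is_loaded(partner)) /* Step 608 */
        unload(partner);                    /* Step 610 */
    while (!block_alloc(id))                /* Steps 606/612/614 */
        lru_release();                      /* Step 616, then retry */
    /* Step 618: the DRAM-to-SRAM copy itself is omitted; only the block
     * bookkeeping of the working space 122 is simulated here. */
}
```

In this model, loading object 2 while object 3 is resident first evicts object 3, and a load into a full working space repeatedly frees blocks until the assignment succeeds, mirroring the 614→616→606 loop of the flowchart.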
Briefly summarized, in the cache management method and the microcontroller of the present invention, a cache controller having the object-oriented policy is used to manage the objects of the SRAM and the DRAM. The cache management method of the embodiments can access the DRAM efficiently and lower the DRAM overhead, making the microcontroller suitable for ultra-low-power applications.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
This is a continuation of the co-pending U.S. application Ser. No. 16/402,242 (filed on May 3, 2019). The entire content of the related applications is incorporated herein by reference.
U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
5752272 | Tanabe | May 1998 | A |
5784548 | Liong | Jul 1998 | A |
7451225 | Todd | Nov 2008 | B1 |
8270196 | Norman | Sep 2012 | B2 |
8467244 | Jeon | Jun 2013 | B2 |
8843435 | Trefler | Sep 2014 | B1 |
20030120357 | Battistutto | Jun 2003 | A1 |
20040225843 | Vaidya | Nov 2004 | A1 |
20050172049 | Kamei | Aug 2005 | A1 |
20070294496 | Goss | Dec 2007 | A1 |
20080256302 | Maron | Oct 2008 | A1 |
20100180081 | Bose | Jul 2010 | A1 |
20110296095 | Su | Dec 2011 | A1 |
20120072643 | Thill | Mar 2012 | A1 |
20120084497 | Subramaniam | Apr 2012 | A1 |
20130077382 | Cho | Mar 2013 | A1 |
20130215069 | Lee | Aug 2013 | A1 |
20140244960 | Ise | Aug 2014 | A1 |
20140304475 | Ramanujan | Oct 2014 | A1 |
20160253264 | Bose | Sep 2016 | A1 |
20160321183 | Govindan | Nov 2016 | A1 |
20160371187 | Roberts | Dec 2016 | A1 |
20170068304 | Lee | Mar 2017 | A1 |
20170132144 | Solihin | May 2017 | A1 |
20180107399 | Hsiao | Apr 2018 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
2015097751 | Jul 2015 | WO |
Other Publications

Entry |
---|
Carter, John, et al. “Impulse: Building a smarter memory controller.” Proceedings Fifth International Symposium on High-Performance Computer Architecture. IEEE, 1999. (Year: 1999). |
Mittal, Sparsh. “A survey of recent prefetching techniques for processor caches.” ACM Computing Surveys (CSUR) 49.2 (2016): 1-35. (Year: 2016). |
Drepper, Ulrich. “What every programmer should know about memory.” Red Hat, Inc 11 (2007) p. 14-15, 36. (Year: 2007). |
Hennessy, John L. and Patterson, David A. "Computer Architecture: A Quantitative Approach". 3rd ed. Morgan Kaufmann Publishers. San Francisco. 2003. p. 398-399. (Year: 2003). |
Prior Publication Data

Number | Date | Country | |
---|---|---|---|
20210056032 A1 | Feb 2021 | US |
Related U.S. Application Data

Number | Date | Country | |
---|---|---|---|
Parent | 16402242 | May 2019 | US |
Child | 17090895 | US |