Low Power Control for Multiple Coherent Masters

Abstract
Systems and methods are provided for efficiently managing power among system components. In an embodiment, a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. Systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem. Thus, disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.
Description
FIELD OF THE INVENTION

This invention relates to power efficiency and more specifically to power management of a system having a cache memory.


BACKGROUND

Many electronic systems use power management schemes to efficiently allocate and manage power among various system components. Some systems include a power management unit (PMU) to monitor power supplied to different system components (e.g., to memories, processors, various hardware subsystems, and/or software). The PMU can receive information regarding which system components will need power to perform tasks and which system components can be powered down without negatively impacting system performance. Based on this information, the PMU can allocate power to the system so that the system can perform necessary tasks while making efficient use of available power.


Power management schemes used by some conventional PMUs have significant disadvantages. For example, some conventional PMUs power down and power on entire subsystems as power needs of the overall system change. Powering up an entire subsystem can be inefficient, for example, when only a single subsystem component needs power to perform a task.


Embodiments of the present disclosure provide systems and methods for more efficiently managing power among components of a system.





BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate embodiments of the disclosure and, together with the general description given above and the detailed descriptions of embodiments given below, serve to explain the principles of the present disclosure. In the drawings:



FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure.



FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure.



FIG. 1C is a more detailed block diagram of a system for managing power in accordance with an embodiment of the present disclosure.



FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure.



FIG. 3 is a flowchart of a method for powering down components of a subsystem in accordance with an embodiment of the present disclosure.



FIG. 4 is a flowchart of a method for processing a request to power down a component of a subsystem in accordance with an embodiment of the present disclosure.



FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure.



FIG. 6 is a block diagram illustrating an example computer system that can be used to implement embodiments of the present disclosure.





Features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.


DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to provide a thorough understanding of the disclosure. However, it will be apparent to those skilled in the art that the disclosure, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the disclosure.


References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.


For purposes of this discussion, the term “module” shall be understood to include one of software, or firmware, or hardware (such as circuits, microchips, processors, or devices, or any combination thereof), or any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.


1. OVERVIEW

Embodiments of the present disclosure provide systems and methods to efficiently manage power among system components. In an embodiment, a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. By powering up individual subsystem components instead of powering up an entire subsystem, the power manager can conserve power while still supplying enough power so that the upcoming tasks can be performed.


Embodiments of the present invention provide systems and methods for power-efficient use of cache memory (“cache”) across multiple subsystems. For example, systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem. Thus, disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.


2. POWER MANAGER


FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure. FIG. 1A includes a power manager 102 coupled to two subsystems 108. Power manager 102 can be implemented using hardware, software, or a combination of hardware and software. In an embodiment, power manager 102 includes a dedicated processor (not shown) or hardware logic to process instructions for determining when to supply power to subsystems 108. In another embodiment, power manager 102 accesses another processor (e.g., a host processor) to process instructions for determining when to supply power to subsystems 108.


In an embodiment, subsystems 108 communicate with power manager 102 using control signals 106. In FIG. 1A, each of subsystems 108a and 108c includes a plurality of subsystem components. For example, in an embodiment, these subsystem components comprise caches 115 and processor cores (“cores”) 118. Caches 115a and 115b can be used to temporarily store data for subsystems 108a and 108c, respectively. In an embodiment, cores 118 are individual cores of a multi-core processor. In another embodiment, each of cores 118 is a separate processor.


As shown in FIG. 1A, subsystems can have differing numbers of cores. For example, subsystem 108a includes four cores (cores 118a, 118b, 118c, and 118d), and subsystem 108c includes two cores (cores 118e and 118f). Because subsystem 108a has more cores than subsystem 108c, subsystem 108a is more powerful than subsystem 108c but also consumes more power than subsystem 108c. While only cores 118 and caches 115 are shown as components of subsystems 108 in FIG. 1A, it should be understood that subsystems can have other components in accordance with embodiments of the present disclosure.


2.1. Powering Up System Components

Power manager 102 manages power supplied to subsystems 108 based on received information about power needs of the system of FIG. 1A. For example, in an embodiment, power manager 102 can receive a notification whenever a system component (e.g., one of cores 118 or caches 115) will be needed to perform a task. For example, in an embodiment, power manager 102 can receive information regarding pending interrupts (such as hardware wakeup events) for cores 118. If, for example, power manager 102 determines that there is a pending interrupt for core 118a, power manager 102 can initiate a power-up of core 118a using control signal 106b. Alternatively, in an embodiment, power manager 102 can receive an instruction from a host processor (not shown) to power on one or more of cores 118 or one or more of caches 115.
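The interrupt-driven power-up described above can be sketched in software as follows. This is a minimal illustrative model, not the disclosed hardware; all class, method, and identifier names (e.g., `PowerManager`, `notify_interrupt`) are assumptions made for the example.

```python
# Illustrative sketch: a power manager records pending interrupts (such as
# hardware wakeup events) and powers on each core that has one pending.
# All names here are hypothetical, chosen to mirror FIG. 1A's numerals.

class PowerManager:
    def __init__(self):
        self.powered = set()          # identifiers of powered-on components
        self.pending_interrupts = []  # cores with pending wakeup events

    def notify_interrupt(self, core_id):
        """Record a pending interrupt for a core."""
        self.pending_interrupts.append(core_id)

    def service_interrupts(self):
        """Power on each core that has a pending interrupt."""
        while self.pending_interrupts:
            core_id = self.pending_interrupts.pop(0)
            self.power_on(core_id)

    def power_on(self, component_id):
        # In hardware, this would assert a control signal (e.g., 106b).
        self.powered.add(component_id)

pm = PowerManager()
pm.notify_interrupt("core_118a")
pm.service_interrupts()
```

After servicing, `pm.powered` contains `"core_118a"` and the pending-interrupt queue is empty.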


In an embodiment, subsystems 108 (or individual components of subsystems 108) can send a power-up request to power manager 102. For example, in an embodiment, subsystem 108a can determine that one of its system components will be needed to perform a task, and subsystem 108a can send a request to power manager 102 (e.g., by sending control signal 106a to power manager 102 using a powered-up core) for the system component to be powered on. For example, an interrupt can be input into core 118a of subsystem 108a. After receiving the interrupt, subsystem 108a can use a powered-up core to send a request via control signal 106a to power manager 102 to power on core 118a.


2.2. Powering Down System Components

Power manager 102 can also initiate a powering down of subsystem components to conserve power when subsystem components are not needed to perform tasks. For example, in an embodiment, if core 118a is finished performing a task, core 118a can send a message to power manager 102 (e.g., via control signal 106a) informing power manager 102 that core 118a has finished performing a task. In an embodiment, this message can include a request for core 118a to be shut down. It should be understood that, in an embodiment, power manager 102 can be informed that a subsystem component has finished performing a task from a source other than control signals 106.


After power manager 102 determines that a subsystem component has finished performing a task, power manager 102 can then determine, based on available information, whether the subsystem component should be shut down. For example, after receiving a shutdown request from core 118a, power manager 102 can determine whether core 118a will be needed to perform additional tasks in the near future or whether core 118a can be shut down to conserve power without negatively impacting system performance. For example, in an embodiment, power manager 102 can determine whether it is aware of any pending tasks that are scheduled to be processed using core 118a. If no such tasks exist, power manager 102 can initiate a shutdown of core 118a via control signal 106b. Subsystem 108a can receive control signal 106b and can initiate the shutdown of core 118a.
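The shutdown decision described above can be sketched as a simple check against pending tasks. This is an illustrative model under assumed names (`handle_shutdown_request`, the task-dictionary shape); the disclosure does not specify a software interface.

```python
# Illustrative sketch: grant a core's shutdown request only when no
# pending task is scheduled for that core. Names are hypothetical.

def handle_shutdown_request(core_id, pending_tasks, powered_cores):
    """Return True and power the core down only if no task targets it."""
    if any(task["core"] == core_id for task in pending_tasks):
        return False                   # keep the core on; work is pending
    powered_cores.discard(core_id)     # e.g., via control signal 106b
    return True

powered = {"core_118a", "core_118b"}
tasks = [{"core": "core_118b", "name": "pending_work"}]
granted = handle_shutdown_request("core_118a", tasks, powered)
denied = handle_shutdown_request("core_118b", tasks, powered)
```

Here the request for core 118a is granted (no pending work), while the request for core 118b is denied because a task still targets it.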


In an embodiment, power manager 102 can determine that a subsystem component should be shut down even if a task is pending for the subsystem component. For example, in an embodiment, power manager 102 can determine that a core (e.g., core 118a) can be powered down to conserve power and powered back up before the pending task is scheduled to be processed. Alternatively, in an embodiment, power manager 102 can reassign the task to a different subsystem component (e.g., to another powered-up core, such as core 118b).


3. CACHE SNOOPING

As discussed above, caches 115 can be used to temporarily store data for subsystems 108a and 108c. Subsystems 108 can access data stored in caches 115 faster than data stored in an external memory (not shown). In an embodiment, one subsystem can request to access data stored in a cache of another subsystem. Such requests can be referred to as “cache snooping.” For example, a component of subsystem 108c may request to snoop into cache 115a of subsystem 108a to access data because accessing data from cache 115a is faster than accessing data from an external memory. Additionally, in an embodiment, accessing data from caches 115 causes less latency than accessing data from an external memory. For example, in an embodiment, core 118e can send a request (e.g., via control signal 106e) to access data stored in cache 115a. Power manager 102 can then determine whether to power on cache 115a.


In an embodiment, power manager 102 can initiate a power-on of cache 115a without powering up additional components of subsystem 108a (e.g., without powering up one of cores 118a, 118b, 118c, or 118d) to enable subsystem 108c to snoop into cache 115a. By using this limited powering-up technique, the system of FIG. 1A can conserve power. After subsystem 108c has finished accessing cache 115a, subsystem 108c can notify power manager 102 that it has finished accessing cache 115a and that cache 115a can be powered down. For example, in an embodiment, core 118e can send a request (e.g., via control signal 106e) to power down cache 115a. If power manager 102 determines that cache 115a is not needed to perform additional tasks, power manager 102 can initiate powering down cache 115a via control signal 106b.
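The limited power-up for a snoop can be sketched as follows: only the target cache is powered, and the cores of its subsystem remain off. The function name and power-state representation are assumptions for illustration.

```python
# Illustrative sketch: power on only the snooped cache, leaving the
# owning subsystem's cores powered down. Names mirror FIG. 1A numerals
# but the API is hypothetical.

def snoop(requester, target_cache, powered):
    """Power on target_cache alone (not its cores) before the snoop."""
    if target_cache not in powered:
        powered.add(target_cache)      # limited power-up: cache only
    return f"{requester} read {target_cache}"

powered = {"core_118e"}                # subsystem 108a fully powered down
result = snoop("core_118e", "cache_115a", powered)
```

After the snoop, `cache_115a` has power while cores 118a through 118d of subsystem 108a remain off, reflecting the power saving described above.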


By powering up and powering down individual components of a subsystem instead of powering up and powering down an entire subsystem, embodiments of the present disclosure advantageously enable caches to remain powered even when other subsystem components have been shut down. For example, if cores 118a, 118b, 118c, and 118d have been powered down, power manager 102 can still supply cache 115a with power, enabling subsystem 108c to snoop into cache 115a to access data while core 118e or core 118f is being used to perform a task.


3.1 Cache Coherency

Systems and methods according to embodiments of the present disclosure can be configured to ensure cache coherency among subsystems. For example, if copies of the same data are stored in both caches 115a and 115b, systems and methods according to embodiments of the present disclosure can ensure that changes to data are uniformly made to all copies of the data stored in caches.



FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure. In FIG. 1B, CCM subsystem 108b includes CCM 114, which ensures cache coherency among caches 115. As shown in FIG. 1B, CCM subsystem 108b is coupled to subsystems 108a and 108c and also to power manager 102. CCM subsystem 108b can communicate with power manager 102 using control signals 106c and 106d.


CCM 114 arbitrates requests to access data stored in caches 115. In an embodiment, CCM 114 includes a dedicated processor (not shown) or hardware logic to process instructions for arbitrating requests to access data stored in caches 115. In an embodiment, CCM 114 is notified when data is written to or read from caches 115, and CCM 114 records (or has access to) information regarding what data is stored in caches coupled to CCM subsystem 108b (e.g., caches 115). Thus, in an embodiment, subsystems 108 are not required to know which data is stored in which cache before requesting access to stored data. Instead, subsystems 108 can send a request to access data to CCM 114, and CCM 114 can determine whether the data is stored in one of caches 115 or whether it should access the data from external memory. In an embodiment, if CCM 114 is not powered on, subsystems 108 can send a request to power manager 102 to power on CCM 114, and then subsystems 108 can send a request to access data to CCM 114.


For example, in an embodiment, a component of subsystem 108c (e.g., core 118e) sends a request to CCM 114 to access data. CCM 114 receives the request and determines whether the data is stored in a cache (e.g., in cache 115a or 115b). If the data is not stored in a cache, CCM 114 initiates a retrieval of the data from external memory. If the data is stored in a cache, CCM 114 initiates a retrieval of the data from the cache (e.g., from cache 115a). If the cache storing the data is not supplied with power, CCM 114 can send a request to power manager 102 to power on the cache so that the data can be read from the cache.
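The lookup flow above can be sketched as a directory check followed by a conditional power-up. The directory structure and function name are illustrative assumptions; the disclosure does not prescribe how CCM 114 records cache contents.

```python
# Illustrative sketch of CCM arbitration: check whether the requested
# address resides in a tracked cache; if so, power that cache on if
# needed and read from it; otherwise fall back to external memory.

def ccm_read(address, cache_directory, powered, external_memory):
    """Return (source, value) for a read arbitrated by the CCM."""
    for cache_id, contents in cache_directory.items():
        if address in contents:
            if cache_id not in powered:
                powered.add(cache_id)  # request a power-up of the cache
            return cache_id, contents[address]
    return "external", external_memory[address]

directory = {"cache_115a": {0x10: "A"}, "cache_115b": {}}
powered = set()
hit = ccm_read(0x10, directory, powered, {0x20: "B"})
miss = ccm_read(0x20, directory, powered, {0x20: "B"})
```

The hit is served from `cache_115a` (which is powered on as a side effect), while the miss falls through to external memory.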


In an embodiment, CCM 114 is notified when data is written to a cache (e.g., to cache 115a or 115b). For example, if core 118e wants to write data to cache 115b, core 118e first notifies CCM 114 that it is planning to write data to cache 115b. In an embodiment, CCM 114 notifies other subsystems accessing the data that the data is going to be updated, and CCM 114 can also update copies of the data stored in other caches. Additionally, in an embodiment, CCM 114 can be required to approve the request to write data to a cache before the data is written. For example, in an embodiment, CCM 114 may determine that a task using the data should be allowed to finish before the data is updated. Alternatively, CCM 114 can be configured to notify a process in progress that it is using stale data that is being updated. The process may then complete using the updated data (or the process may restart using the updated data).
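One way to realize the coherency behavior above is a write-update policy, in which the CCM propagates the new value to every cached copy before the write completes. This is a sketch of that one policy under assumed names; the disclosure also permits alternatives such as notifying in-progress processes.

```python
# Illustrative sketch of a write-update coherency step: before core 118e's
# write takes effect, every cache holding a copy of the data is updated so
# all copies remain coherent. Names are hypothetical.

def ccm_write(address, value, cache_directory):
    """Update every cache that holds a copy of the addressed data."""
    updated = []
    for cache_id, contents in cache_directory.items():
        if address in contents:
            contents[address] = value
            updated.append(cache_id)
    return updated

directory = {"cache_115a": {0x10: "old"}, "cache_115b": {0x10: "old"}}
touched = ccm_write(0x10, "new", directory)
```

Both caches end up holding the updated value, so no subsystem can subsequently read a stale copy.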


4. POWER MANAGER INCLUDING SWITCHES AND SWITCHING REGULATORS


FIG. 1C is a more detailed block diagram of a system for managing power in accordance with an embodiment of the present disclosure. As shown in FIG. 1C, in an embodiment, power manager 102 can include sub-power managers for subsystems coupled to power manager 102. For example, as shown in FIG. 1C, power manager 102 includes sub-power managers 104a, 104b, and 104c for subsystems 108a, 108b, and 108c, respectively. In an embodiment, sub-power managers 104 can receive control signals 106a, 106c, and 106e from subsystems 108 and can send control signals 106b, 106d, and 106f to subsystems 108.


In FIG. 1C, subsystems 108 include switching regulators 110, phase-locked loops (PLLs) 112, and switches 116. For example, subsystem 108a includes an adjustable switching regulator (ASR) 110a coupled to PLL 112a and switches 116a, 116b, 116c, and 116d. PLL 112a provides a clock signal for subsystem 108a. In an embodiment, ASR 110a supplies power to cache 115a and to cores 118a, 118b, 118c, and 118d via switches 116a, 116b, 116c, and 116d. As shown in FIG. 1C, each of switches 116a, 116b, 116c, and 116d is coupled to a respective core 118a, 118b, 118c, and 118d.


When sub-power manager 104a determines that a core (e.g., core 118a) should be powered down, sub-power manager 104a can send a control signal (e.g., control signal 106b) to the subsystem (e.g., subsystem 108a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110a) to toggle a switch coupled to the core (e.g., ASR 110a can toggle switch 116a coupled to core 118a) to cut off power from the core. If the sub-power manager determines that an entire subsystem should be powered down, the sub-power manager can stop supplying power to the switching regulator of the subsystem. For example, sub-power manager 104a can stop supplying power to ASR 110a to cut off power from subsystem 108a.


When sub-power manager 104a determines that a core (e.g., core 118a) should be powered on, sub-power manager 104a can send a control signal (e.g., control signal 106b) to the subsystem (e.g., subsystem 108a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110a) to toggle a switch coupled to the core (e.g., ASR 110a can toggle switch 116a coupled to core 118a so that switch 116a connects ASR 110a to core 118a) to supply power to the core. If sub-power manager 104a determines that an entire subsystem should be powered on, sub-power manager 104a can supply power to the switching regulator of the subsystem. For example, sub-power manager 104a can supply power to ASR 110a to supply power to subsystem 108a.
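The two-level gating described in the preceding paragraphs (a per-core switch behind a subsystem-wide regulator) can be sketched as follows. The class and method names are illustrative assumptions, not part of the disclosed circuitry.

```python
# Illustrative sketch of FIG. 1C's hierarchy: each core's switch gates
# power from the subsystem's regulator, and cutting the regulator powers
# down everything behind it. All names here are hypothetical.

class Subsystem:
    def __init__(self, cores):
        self.regulator_on = True                        # e.g., ASR 110a
        self.switches = {core: True for core in cores}  # True = closed

    def core_powered(self, core):
        # A core has power only if its switch is closed AND the
        # subsystem's regulator is supplying power.
        return self.regulator_on and self.switches[core]

    def toggle_switch(self, core, closed):
        self.switches[core] = closed   # e.g., ASR 110a toggling switch 116a

    def set_regulator(self, on):
        self.regulator_on = on         # powers the whole subsystem on/off

sub = Subsystem(["118a", "118b"])
sub.toggle_switch("118a", False)       # cut power to one core
one_off = (sub.core_powered("118a"), sub.core_powered("118b"))
sub.set_regulator(False)               # cut power to the entire subsystem
all_off = (sub.core_powered("118a"), sub.core_powered("118b"))
```

Toggling a single switch removes power from one core while its sibling stays on; cutting the regulator removes power from every core at once, matching the two granularities described above.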


In an embodiment, a cache of a subsystem is powered down when the subsystem is powered down, and a cache of a subsystem is powered on when the subsystem powers on. For example, in an embodiment, cache 115a is powered down when subsystem 108a is powered down, and cache 115a is powered on when subsystem 108a is powered on. However, it should be understood that in an embodiment, caches can be powered down and powered on without requiring a power down or power on of the entire subsystem. For example, in an embodiment, cache 115a can be coupled to a dedicated switch (not shown), and ASR 110a can toggle this dedicated switch to cut off power from cache 115a or supply power to cache 115a without requiring the entire subsystem 108a to be powered down or powered on.


As shown in FIG. 1C, CCM subsystem 108b includes a cache switching regulator (CSR) 110b coupled to a switch 116e. CSR 110b toggles switch 116e on or off to supply power to CCM 114, and PLL 112b supplies a clock signal for CCM 114. When sub-power manager 104b determines that CCM subsystem 108b should be powered down, sub-power manager 104b can send a control signal (e.g., control signal 106c) to CCM subsystem 108b. The control signal instructs CCM subsystem 108b and/or CSR 110b to toggle switch 116e coupled to CCM 114 to cut off power from CCM 114. When sub-power manager 104b determines that CCM subsystem 108b should be powered on, sub-power manager 104b can send a control signal (e.g., control signal 106c) to CCM subsystem 108b instructing CSR 110b to toggle switch 116e coupled to CCM 114 to supply power to CCM 114.


In an embodiment, subsystem components and/or subsystems can send a message to power manager 102 and/or respective sub-power managers 104 when the subsystem components and/or subsystems have finished performing tasks. These messages can optionally include requests to power down the subsystem components and/or subsystems. For example, in an embodiment, cores 118a, 118b, 118c, and 118d can send a message to sub-power manager 104a when cores 118a, 118b, 118c, and 118d have finished performing tasks. If, after receiving this message, sub-power manager 104a determines that any of cores 118a, 118b, 118c, and/or 118d should be powered down, sub-power manager 104a can initiate a powering down of cores 118a, 118b, 118c, and/or 118d by sending a control signal (e.g., control signal 106b) to ASR 110a to instruct ASR 110a to toggle switches 116a, 116b, 116c, and/or 116d to cut off power to cores 118a, 118b, 118c, and/or 118d. In an embodiment, sub-power manager 104a can determine whether any other system components need to access any of cores 118a, 118b, 118c, and/or 118d before powering down any of cores 118a, 118b, 118c, and/or 118d.


Additionally, for example, subsystem 108a can send a message to power manager 102 when subsystem 108a has finished performing tasks. For example, if cache 115a is no longer being used, subsystem 108a can send a message to sub-power manager 104a requesting that subsystem 108a be powered down. If, after receiving this message, sub-power manager 104a determines that subsystem 108a should be powered down, sub-power manager 104a can initiate a powering down of subsystem 108a by sending a control signal (e.g., control signal 106b) to ASR 110a to cut off power from ASR 110a to power down subsystem 108a. In an embodiment, sub-power manager 104a can determine whether any other system components need to access subsystem 108a before powering down subsystem 108a.


In an embodiment, subsystems can also send a message to power manager 102 informing power manager 102 that they have finished performing tasks using components of other subsystems. For example, if subsystem 108a finished accessing cache 115b of subsystem 108c, subsystem 108a can send a message to power manager 102 informing power manager 102 that it is no longer accessing cache 115b. In an embodiment, subsystem 108a can send this message to sub-power manager 104a, and sub-power manager 104a can forward the message to sub-power manager 104c. However, it should be understood that sub-power manager 104a or power manager 102 can process this message in accordance with embodiments of the present disclosure. If, after receiving this message, power manager 102 determines that subsystem 108c should be powered down (e.g., to cut off power from cache 115b), power manager 102 can initiate a powering down of subsystem 108c by sending a control signal (e.g., control signal 106f) to ASR 110c to cut off power from ASR 110c to power down subsystem 108c (and thus power down cache 115b). In an embodiment, power manager 102 can determine whether any other system components need to access cache 115b and/or other components of subsystem 108c before powering down subsystem 108c.


In an embodiment, CCM subsystem 108b can also send a message to power manager 102 when CCM subsystem 108b has finished performing tasks. For example, CCM subsystem 108b can send a message to sub-power manager 104b when CCM subsystem is no longer being used to arbitrate access to caches 115. If, after receiving this message, sub-power manager 104b determines that CCM subsystem 108b should be powered down, sub-power manager 104b can initiate a powering down of CCM subsystem 108b by sending a control signal (e.g., control signal 106d) to CSR 110b to cut off power from CSR 110b to power down CCM subsystem 108b (and thus power down CCM 114). In an embodiment, sub-power manager 104b can determine whether any other system components need to access CCM 114 and/or other components of CCM subsystem 108b before powering down CCM subsystem 108b.


5. SYSTEM LAYERING

Systems and methods according to embodiments of the present disclosure enable subsystems and/or subsystem components to be powered on in layers so that unused system components are not supplied with power. This layering concept provides an efficient, flexible approach to supplying power to various subsystem components. For example, in an embodiment, power manager 102 will not attempt to power down an entire subsystem while a subsystem component is still being used to perform a task. Instead, power manager 102 adopts a layered approach by first attempting to power down unused subsystem components. Then, once all subsystem components have finished performing tasks, power manager 102 determines whether to power down the subsystem. Finally, if all subsystems have finished performing tasks, power manager 102 determines whether to power down CCM subsystem 108b (and thus power down CCM 114).


For example, in an embodiment, power manager 102 does not power down ASR 110a (which, in an embodiment, supplies power to entire subsystem 108a including cache 115a) until all of cores 118a, 118b, 118c, and 118d have been powered down (e.g., via switches 116a, 116b, 116c, and 116d, respectively). Additionally, in an embodiment, power manager 102 does not power down CSR 110b (which, in an embodiment, supplies power to entire CCM subsystem 108b including CCM 114) until both subsystems 108a and 108c have been powered down (e.g., via ASR 110a and ASR 110c, respectively).
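The layered power-down ordering can be sketched as two guard conditions: a subsystem's regulator is cut only when every core behind it is off, and the CCM's regulator is cut only when every subsystem is off. The function names and state representation are illustrative assumptions.

```python
# Illustrative sketch of the layering guards: power-down proceeds from
# cores, to subsystem regulators, to the CCM regulator. Names are
# hypothetical; the state dicts map component id -> powered (True/False).

def can_power_down_subsystem(core_states):
    """A subsystem regulator may be cut only when every core is off."""
    return not any(core_states.values())

def can_power_down_ccm(subsystem_states):
    """The CCM regulator may be cut only when every subsystem is off."""
    return not any(subsystem_states.values())

cores = {"118a": False, "118b": True}
blocked = can_power_down_subsystem(cores)   # a core is still powered
cores["118b"] = False
allowed = can_power_down_subsystem(cores)   # all cores are now off
ccm_ok = can_power_down_ccm({"108a": False, "108c": False})
```

The first check fails while a core is still on, enforcing the layer order; once every inner layer is off, the outer layer may be powered down.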


In an embodiment, this layering concept can also extend to powering up subsystems and subsystem components. For example, in an embodiment, power manager 102 does not power on subsystem 108a or subsystem 108c until CCM subsystem 108b has been powered on (e.g., by supplying power to CSR 110b). Additionally, in an embodiment, power manager 102 does not power on any of cores 118a, 118b, 118c, or 118d until subsystem 108a has been powered on (e.g., by supplying power to ASR 110a).


In an embodiment, caches in accordance with embodiments of the present disclosure (e.g., caches 115a and/or 115b) can be partitioned into multiple portions, and each portion of a cache can be powered down when not in use to conserve power and powered up when needed. For example, in an embodiment, power manager 102 can send a message instructing a portion of cache 115a to be powered down when this portion of cache 115a is not needed. While a portion of cache 115a is powered down, other portions of cache 115a can still be powered on and accessed. When power manager 102 determines that a powered-down portion of cache 115a needs to be used to perform a task, power manager 102 can send a message instructing the powered-down portion of cache 115a to be powered on again.


For example, in an embodiment, cache 115a can be split into a first portion and a second portion. If, for example, core 118e has finished accessing the first portion of cache 115a, core 118e can send a message to power manager 102 informing power manager 102 that it has finished using the first portion of cache 115a and that the first portion of cache 115a can be powered down. If power manager 102 determines that no other subsystems need to access the first portion of cache 115a, power manager 102 can send a message to ASR 110a instructing ASR 110a to cut off power to the first portion of cache 115a. While the first portion of cache 115a is powered down, the second portion of cache 115a can still receive power from ASR 110a and can still be accessed by other subsystem components. If, for example, core 118f needs to access the first portion of cache 115a, core 118f can send a message to power manager 102 requesting that the first portion of cache 115a be powered on. Power manager 102 can then send a message to ASR 110a instructing ASR 110a to supply power to the first portion of cache 115a.
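The per-portion gating above can be sketched as an independently switchable power state per cache partition. The class and method names are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch: each portion of a partitioned cache can be gated
# independently, so one portion stays accessible while another is off.
# All names are hypothetical.

class PartitionedCache:
    def __init__(self, portions):
        self.power = {p: True for p in portions}

    def set_portion(self, portion, on):
        self.power[portion] = on       # e.g., ASR 110a gating one portion

    def accessible(self, portion):
        return self.power[portion]

cache = PartitionedCache(["first", "second"])
cache.set_portion("first", False)      # core 118e is finished with it
first_off = cache.accessible("first")
second_on = cache.accessible("second") # second portion remains usable
cache.set_portion("first", True)       # core 118f needs it again
first_back = cache.accessible("first")
```

While the first portion is gated off, the second remains accessible, and the first can be restored on demand, mirroring the sequence in the paragraph above.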


In an embodiment, the components of the system of FIG. 1A, the components of the system of FIG. 1B and/or the components of the system of FIG. 1C can be implemented on a single integrated circuit (IC). In another embodiment, some components of the systems of FIG. 1A, 1B and/or 1C are implemented using multiple ICs. For example, in an embodiment, power manager 102 and subsystems 108 are implemented on different ICs. Additionally, it should be understood that the components of the systems of FIGS. 1A, 1B, and/or 1C can be implemented using hardware, software, or a combination of hardware and software in accordance with embodiments of the present disclosure.


6. METHODS


FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure. In step 200, the CCM is powered on first. For example, sub-power manager 104b can send a control signal (e.g., control signal 106d) to CCM subsystem 108b if power manager 102 determines that CCM subsystem 108b is powered down. In step 202, a subsystem is powered on. For example, once power manager 102 determines that CCM subsystem 108b has power, power manager 102 can then power on a subsystem (e.g., subsystem 108a or 108c) so that the subsystem can be accessed. For example, in an embodiment, if subsystem 108a is powered on, cache 115a can be accessed. In step 204, a subsystem component is powered on. For example, in an embodiment, if sub-power manager 104a determines that subsystem 108a has power, sub-power manager 104a can send control signal 106b to ASR 110a to instruct ASR 110a to toggle switch 116a to supply power to core 118a so that core 118a can be used to perform a task.
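The three steps of FIG. 2 can be sketched as an ordered power-up sequence: CCM first, then the subsystem, then the individual component. The function name and the flat set of power states are assumptions made for illustration.

```python
# Illustrative sketch of FIG. 2's ordering: layers receive power in the
# order CCM -> subsystem -> component, skipping any layer that already
# has power. Names are hypothetical.

def power_up_sequence(powered, subsystem, component):
    """Power on layers in order, returning the layers actually powered."""
    order = []
    for layer in ("CCM", subsystem, component):
        if layer not in powered:       # skip layers that already have power
            powered.add(layer)
            order.append(layer)
    return order

powered = set()
order = power_up_sequence(powered, "subsystem_108a", "core_118a")
```

Starting from a fully powered-down system, the CCM is powered first, then the subsystem, then the core, matching steps 200, 202, and 204.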



FIG. 3 is a flowchart of a method for powering down components of a system in accordance with an embodiment of the present disclosure. In step 300, a subsystem component is powered down. For example, sub-power manager 104a can send control signal 106b to ASR 110a to instruct ASR 110a to toggle switch 116a to power down core 118a. In step 302, a subsystem is powered down. For example, once sub-power manager 104a has powered down cores 118a, 118b, 118c, and 118d, sub-power manager 104a can determine to power down subsystem 108a when sub-power manager 104a receives a request to power down subsystem 108a. In an embodiment, cache 115a is also powered down when subsystem 108a is powered down. In step 304, CCM 114 is powered down. For example, once sub-power manager 104a has powered down subsystems 108a and 108c, sub-power manager 104a can determine to power down CCM subsystem 108b (and thus CCM 114) when sub-power manager 104a receives a request to power down CCM subsystem 108b.
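The power-down ordering of FIG. 3 is the reverse of FIG. 2: a subsystem comes down only after all of its cores are down, and the CCM only after all subsystems are down. A minimal sketch of that gating condition, with hypothetical names:

```python
# Sketch of the FIG. 3 gating rule: a containing domain (subsystem or CCM)
# may be powered down only when an explicit power-down request has been
# received AND none of its children remain powered.
def may_power_down(children_powered, request_received):
    """Return True if the containing domain can be powered down."""
    return request_received and not any(children_powered.values())


cores = {"118a": False, "118b": False, "118c": False, "118d": True}
assert not may_power_down(cores, request_received=True)   # core 118d still powered
cores["118d"] = False
assert may_power_down(cores, request_received=True)       # subsystem may now come down
```

The same predicate applies one level up: once subsystems 108a and 108c report as unpowered, the CCM subsystem 108b becomes eligible for power-down.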



FIG. 4 is a flowchart of a method for processing a request to power down a component of a system in accordance with an embodiment of the present disclosure. In step 400, a request to power down a system component is received. For example, sub-power manager 104a can receive a request to power down core 118a. In step 402, a determination is made regarding whether other system components need to access the system component. For example, sub-power manager 104a can determine whether other system components need to access core 118a (e.g., by determining whether an instruction is pending for core 118a). If the power manager (e.g., power manager 102) determines that other system components need to access the system component, the method proceeds to step 404, and the system component is left on. For example, sub-power manager 104a may determine to leave core 118a powered on if sub-power manager 104a determines that other system components need to access core 118a. If the power manager (e.g., power manager 102) determines that other system components do not need to access the system component, the method proceeds to step 406, and the system component is powered down. For example, sub-power manager 104a may determine to power down core 118a if sub-power manager 104a determines that other system components do not need to access core 118a.


In an embodiment, if sub-power manager 104a receives a request to power down cache 115a and/or subsystem 108a in step 400, sub-power manager 104a determines whether other subsystem components need to access cache 115a and/or subsystem 108a in step 402. If sub-power manager 104a determines that other system components need to access cache 115a and/or subsystem 108a, the method proceeds to step 404, and cache 115a and/or subsystem 108a is left on. If sub-power manager 104a determines that other system components do not need to access cache 115a and/or subsystem 108a, the method proceeds to step 406, and cache 115a and/or subsystem 108a are powered down (e.g., by powering down ASR 110a).
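The FIG. 4 decision (steps 400 through 406) can be sketched as a single check: on a power-down request, the power manager leaves the target on if any other component still needs it. The function and parameter names are illustrative assumptions.

```python
# Sketch of the FIG. 4 flow: a power-down request for a component (a core,
# a cache, or a whole subsystem) is honored only if no other system
# component still needs access to the target.
def handle_power_down_request(target, pending_accesses):
    """Return the action taken for a power-down request on `target`.
    `pending_accesses` maps each component to the set of outstanding users."""
    if pending_accesses.get(target):
        return "left_on"       # step 404: another component still needs it
    return "powered_down"      # step 406: safe to cut power


pending = {"core_118a": {"instruction_queue"}, "cache_115a": set()}
assert handle_power_down_request("core_118a", pending) == "left_on"
assert handle_power_down_request("cache_115a", pending) == "powered_down"
```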



FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure. In step 500, a request to access data is received. For example, CCM 114 can receive a request from subsystem 108a to access data. In step 502, a determination is made regarding whether the data is stored in a cache. For example, CCM 114 can determine whether the data is stored in cache 115a or cache 115b. If the CCM (e.g., CCM 114) determines that the data is not stored in cache, the method proceeds to step 506, and the data is accessed from external memory. For example, CCM 114 can send a request to external memory to access the data. If the CCM (e.g., CCM 114) determines that the data is stored in cache, the method proceeds to step 504, and a determination is made regarding whether the cache is powered on. For example, CCM 114 can determine that the data is stored in cache 115b and can then determine whether cache 115b is powered on.


In an embodiment, the CCM can send a request to power manager 102 to determine whether the cache is powered on. For example, in an embodiment, CCM 114 sends a request to power manager 102 via control signal 106c to determine whether cache 115b is powered on. In an embodiment, power manager 102 can respond to the CCM via control signal 106d. If the CCM (e.g., CCM 114) determines that the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache. For example, CCM 114 can retrieve the data from cache 115b. If the CCM (e.g., CCM 114) determines that the cache is not powered on, the method proceeds to step 508, and a request to power on the cache is sent. For example, CCM 114 can send a request to power on cache 115b to power manager 102 via control signal 106c. In an embodiment, sub-power manager 104c can then power on ASR 110c to supply power to cache 115b. Once the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache (e.g., from cache 115b).
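The FIG. 5 lookup path can be sketched end to end: the CCM checks whether the requested data is held in any cache; if the holding cache is powered down, it first asks the power manager to power it on, and on a miss it falls back to external memory. All names are illustrative assumptions.

```python
# Sketch of the FIG. 5 flow as seen by the CCM. `power_on_cache` stands in
# for the request the CCM sends to the power manager (step 508).
def access_data(addr, cache_contents, cache_powered, power_on_cache):
    """Return (source, value) for a data request handled by the CCM."""
    for cache_id, contents in cache_contents.items():
        if addr in contents:                   # step 502: data found in a cache
            if not cache_powered[cache_id]:    # step 504: is that cache powered?
                power_on_cache(cache_id)       # step 508: request a power-up
                cache_powered[cache_id] = True
            return (cache_id, contents[addr])  # step 510: read from the cache
    return ("external_memory", None)           # step 506: miss, go to external memory


powered = {"cache_115a": True, "cache_115b": False}
contents = {"cache_115a": {}, "cache_115b": {0x40: "data"}}
requests = []
src, val = access_data(0x40, contents, powered, requests.append)
assert (src, val) == ("cache_115b", "data")
assert requests == ["cache_115b"]  # the CCM had to request a power-up first
assert access_data(0x80, contents, powered, requests.append)[0] == "external_memory"
```

This is what allows one subsystem to snoop another subsystem's cache without a full power-up of that subsystem: only the cache domain is brought up on demand.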


7. EXAMPLE COMPUTER SYSTEM ENVIRONMENT

It will be apparent to persons skilled in the relevant art(s) that various elements and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.


The following description of a general purpose computer system is provided for the sake of completeness. Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system. An example of such a computer system 600 is shown in FIG. 6. Modules depicted in FIGS. 1A-1C may execute on one or more computer systems 600. Furthermore, each of the steps of the processes depicted in FIGS. 2-5 can be implemented on one or more computer systems 600.


Computer system 600 includes one or more processors, such as processor 604. Processor 604 can be a special purpose or a general purpose digital signal processor. Processor 604 is connected to a communication infrastructure 602 (for example, a bus or network). Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the disclosure using other computer systems and/or computer architectures.


Computer system 600 also includes a main memory 606, preferably random access memory (RAM), and may also include a secondary memory 608. Secondary memory 608 may include, for example, a hard disk drive 610 and/or a removable storage drive 612, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like. Removable storage drive 612 reads from and/or writes to a removable storage unit 616 in a well-known manner. Removable storage unit 616 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 612. As will be appreciated by persons skilled in the relevant art(s), removable storage unit 616 includes a computer usable storage medium having stored therein computer software and/or data.


In alternative implementations, secondary memory 608 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 600. Such means may include, for example, a removable storage unit 618 and an interface 614. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 618 and interfaces 614 which allow software and data to be transferred from removable storage unit 618 to computer system 600.


Computer system 600 may also include a communications interface 620. Communications interface 620 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 620 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 620 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 620. These signals are provided to communications interface 620 via a communications path 622. Communications path 622 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels.


As used herein, the terms “computer program medium” and “computer readable medium” are used to generally refer to tangible storage media such as removable storage units 616 and 618 or a hard disk installed in hard disk drive 610. These computer program products are means for providing software to computer system 600.


Computer programs (also called computer control logic) are stored in main memory 606 and/or secondary memory 608. Computer programs may also be received via communications interface 620. Such computer programs, when executed, enable the computer system 600 to implement the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor 604 to implement the processes of the present disclosure, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 600. Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 612, interface 614, or communications interface 620.


In another embodiment, features of the disclosure are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).


8. CONCLUSION

It is to be appreciated that the Detailed Description, and not the Abstract, is intended to be used to interpret the claims. The Abstract may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, is not intended to limit the present disclosure and the appended claims in any way.


The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.


The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.


Any representative signal processing functions described herein can be implemented in hardware, software, or some combination thereof. For instance, signal processing functions can be implemented using computer processors, computer logic, application-specific integrated circuits (ASICs), digital signal processors, etc., as will be understood by those skilled in the art based on the discussion given herein. Accordingly, any processor that performs the signal processing functions described herein is within the scope and spirit of the present disclosure.


The above systems and methods may be implemented as a computer program executing on a machine, as a computer program product, or as a tangible and/or non-transitory computer-readable medium having stored instructions. For example, the functions described herein could be embodied by computer program instructions that are executed by a computer processor or any one of the hardware devices listed above. The computer program instructions cause the processor to perform the signal processing functions described herein. The computer program instructions (e.g., software) can be stored in a tangible non-transitory computer usable medium, computer program medium, or any storage medium that can be accessed by a computer or processor. Such media include a memory device such as a RAM or ROM, or other type of computer storage medium such as a computer disk or CD ROM. Accordingly, any tangible non-transitory computer storage medium having computer program code that causes a processor to perform the signal processing functions described herein is within the scope and spirit of the present disclosure.


While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims
  • 1. A system, comprising: a first subsystem, comprising: a cache memory, and a processor core configured to initiate sending a message indicating that the processor core has finished performing a task; a cache coherency module (CCM) coupled to the cache memory; and a power manager coupled to the first subsystem, wherein the power manager is configured to: receive the message, determine whether the processor core is needed to perform an additional task, and in response to determining that the processor core is not needed to perform the additional task, initiate powering down of the processor core without powering down the cache memory.
  • 2. The system of claim 1, wherein the first subsystem further comprises: a switch coupled to the processor core, wherein the power manager is configured to initiate toggling of the switch to initiate powering down the processor core.
  • 3. The system of claim 1, wherein the cache memory is partitioned into a plurality of portions, wherein the plurality of portions includes a first portion and a second portion, and wherein the power manager is further configured to: initiate powering down of the first portion without powering down the second portion.
  • 4. The system of claim 1, further comprising: a second subsystem coupled to: the power manager, and the CCM.
  • 5. The system of claim 4, wherein the second subsystem comprises: a switching regulator; a phase-locked loop (PLL) coupled to the switching regulator; a second cache memory coupled to: the switching regulator, and the PLL; a first switch coupled to the switching regulator; a second processor core coupled to the first switch; a second switch coupled to the switching regulator; and a third processor core coupled to the second switch.
  • 6. The system of claim 1, wherein the power manager comprises: a first sub-power manager coupled to the first subsystem; and a second sub-power manager coupled to the CCM.
  • 7. The system of claim 1, wherein the power manager is further configured to: in response to initiating the powering down of the processor core, determine whether the first subsystem is needed to perform the additional task; and in response to determining that the first subsystem is not needed to perform the additional task, initiate powering down of the first subsystem.
  • 8. The system of claim 1, wherein the power manager is further configured to: in response to initiating the powering down of the first subsystem, determine whether the CCM is needed to perform the additional task; and in response to determining that the CCM is not needed to perform the additional task, initiate powering down of the CCM.
  • 9. The system of claim 1, wherein the power manager is further configured to: receive a request to power on the cache memory; and in response to receiving the request to power on the cache memory, initiate powering up the first subsystem.
  • 10. The system of claim 1, wherein the power manager is further configured to: receive a request to power on the processor core; in response to receiving the request to power on the processor core, determine whether the first subsystem is powered on; in response to determining that the first subsystem is powered on: initiate powering up the processor core; and in response to determining that the first subsystem is not powered on: initiate powering up the first subsystem, and initiate powering up the processor core.
  • 11. The system of claim 1, wherein the CCM is configured to: receive a request to access data; determine whether the data is stored in the cache memory; in response to determining that the data is not stored in the cache memory, access data from external memory; and in response to determining that the data is stored in the cache memory, initiate accessing the data.
  • 12. The system of claim 11, wherein the CCM is further configured to: determine whether the cache memory is powered on; and in response to determining that the cache memory is not powered on, initiate sending a request to the power manager to power on the cache memory.
  • 13. The system of claim 11, further comprising: a second subsystem, comprising a second cache memory coupled to the CCM, wherein the CCM is further configured to: determine whether the data is stored in the second cache memory.
  • 14. The system of claim 13, wherein the CCM is further configured to: determine whether at least a portion of the second cache memory is powered on; and in response to determining that at least a portion of the second cache memory is not powered on, initiate sending a request to the power manager to power on at least a portion of the second cache memory.
  • 15. A system, comprising: a first subsystem, comprising: a first subsystem component configured to send a message indicating that the first subsystem component has finished performing a task, and a second subsystem component; a power manager coupled to the first subsystem, wherein the power manager is configured to: receive the message, determine whether the first subsystem component is needed to perform an additional task, and in response to determining that the first subsystem component is not needed to perform the additional task, initiate powering down of the first subsystem component without powering down the second subsystem component.
  • 16. The system of claim 15, wherein the first subsystem component is a processor core.
  • 17. The system of claim 15, further comprising: a cache coherency module (CCM) coupled to the second subsystem component, wherein the second subsystem component is a cache memory.
  • 18. A method, comprising: receiving, using a power managing device, a request to power down a first component of a first subsystem; determining, using the power managing device, whether the first component is needed to perform a task for a second subsystem; and in response to determining that the first component is not needed to perform the task for the second subsystem, initiating, using the power managing device, powering down the first component without powering down a cache memory of the first subsystem.
  • 19. The method of claim 18, further comprising: determining whether the cache memory is needed to perform the task; and in response to determining that the cache memory is not needed to perform the task, initiating powering down the first subsystem.
  • 20. The method of claim 19, further comprising: determining whether the second subsystem is needed to perform a second task; and in response to determining that the second subsystem is not needed to perform the second task, initiating powering down the second subsystem.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 61/757,947, filed on Jan. 29, 2013.

Provisional Applications (1)
Number Date Country
61757947 Jan 2013 US