This invention relates to power efficiency and more specifically to power management of a system having a cache memory.
Many electronic systems use power management schemes to efficiently allocate and manage power among various system components. Some systems include a power management unit (PMU) to monitor power supplied to different system components (e.g., to memories, processors, various hardware subsystems, and/or software). The PMU can receive information regarding which system components will need power to perform tasks and which system components can be powered down without negatively impacting system performance. Based on this information, the PMU can efficiently allocate power to the system so that the system can perform necessary tasks while efficiently using available power.
Power management schemes used by some conventional PMUs have significant disadvantages. For example, some conventional PMUs power down and power on entire subsystems as power needs of the overall system change. Powering up an entire subsystem can be inefficient, for example, when only a single subsystem component needs power to perform a task.
Embodiments of the present disclosure provide systems and methods for more efficiently managing power among components of a system.
The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate embodiments of the disclosure and, together with the general description given above and the detailed descriptions of embodiments given below, serve to explain the principles of the present disclosure. In the drawings:
Features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
In the following description, numerous specific details are set forth to provide a thorough understanding of the disclosure. However, it will be apparent to those skilled in the art that the disclosure, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the disclosure.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
For purposes of this discussion, the term “module” shall be understood to include one of software, or firmware, or hardware (such as circuits, microchips, processors, or devices, or any combination thereof), or any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
Embodiments of the present disclosure provide systems and methods to efficiently manage power among system components. In an embodiment, a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. By powering up individual subsystem components instead of powering up an entire subsystem, the power manager can conserve power while still supplying enough power so that the upcoming tasks can be performed.
Embodiments of the present invention provide systems and methods for power-efficient use of cache memory (“cache”) across multiple subsystems. For example, systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem. Thus, disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.
In an embodiment, subsystems 108 communicate with power manager 102 using control signals 106. In
As shown in
Power manager 102 manages power supplied to subsystems 108 based on received information about power needs of the system of
In an embodiment, subsystems 108 (or individual components of subsystems 108) can send a power-up request to power manager 102. For example, in an embodiment, subsystem 108a can determine that one of its system components will be needed to perform a task, and subsystem 108a can send a request to power manager 102 (e.g., by sending control signal 106a to power manager 102 using a powered-up core) for the system component to be powered on. For example, subsystem 108a can receive an interrupt input for core 118a. After receiving the interrupt, subsystem 108a can use a powered-up core to send a request via control signal 106a to power manager 102 to power on core 118a.
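The interrupt-driven power-up request above can be sketched as follows. This is an illustrative sketch only; the function and signal names are assumptions for illustration, not the disclosed interfaces.

```python
# Hypothetical sketch: a powered-up core relays a power-up request for an
# interrupted core. All names here are illustrative, not from the disclosure.
def on_interrupt(target_core, powered_cores, send_control_signal):
    """Request power for target_core via any core that is already powered up."""
    if target_core in powered_cores:
        return                              # already powered; no request needed
    sender = next(iter(powered_cores))      # any powered-up core can relay it
    send_control_signal(sender, {"request": "power_up", "core": target_core})

sent = []
on_interrupt("core_118a", {"core_118b"}, lambda s, m: sent.append((s, m)))
```

Here `send_control_signal` stands in for sending a control signal (e.g., control signal 106a) to the power manager.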
Power manager 102 can also initiate a powering down of subsystem components to conserve power when subsystem components are not needed to perform tasks. For example, in an embodiment, if core 118a is finished performing a task, core 118a can send a message to power manager 102 (e.g., via control signal 106a) informing power manager 102 that core 118a has finished performing a task. In an embodiment, this message can include a request for core 118a to be shut down. It should be understood that, in an embodiment, power manager 102 can be informed that a subsystem component has finished performing a task from a source other than control signals 106.
After power manager 102 determines that a subsystem component has finished performing a task, power manager 102 can then determine, based on available information, whether the subsystem component should be shut down. For example, after receiving a shutdown request from core 118a, power manager 102 can determine whether core 118a will be needed to perform additional tasks in the near future or whether core 118a can be shut down to conserve power without negatively impacting system performance. For example, in an embodiment, power manager 102 can determine whether it is aware of any pending tasks that are scheduled to be processed using core 118a. If no such tasks exist, power manager 102 can initiate a shutdown of core 118a via control signal 106b. Subsystem 108a can receive control signal 106b and can initiate the shutdown of core 118a.
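The pending-task check described above can be sketched as follows. The class and attribute names are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of the shutdown decision: shut a component down only if
# no task is pending for it. Names are illustrative, not from the disclosure.
class PowerManagerSketch:
    def __init__(self):
        self.pending_tasks = {}   # component name -> list of scheduled tasks
        self.powered = set()      # names of components currently powered on

    def handle_shutdown_request(self, component):
        """Power the component down only if no task is pending for it."""
        if self.pending_tasks.get(component):
            return False                    # still needed; leave powered on
        self.powered.discard(component)     # e.g., via a control signal
        return True                         # shutdown initiated

pm = PowerManagerSketch()
pm.powered.add("core_118a")
pm.pending_tasks["core_118a"] = ["decode_frame"]
pm.handle_shutdown_request("core_118a")    # returns False: a task is pending
pm.pending_tasks["core_118a"].clear()
pm.handle_shutdown_request("core_118a")    # returns True: core is shut down
```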
In an embodiment, power manager 102 can determine that a subsystem component should be shut down even if a task is pending for the subsystem component. For example, in an embodiment, power manager 102 can determine that a core (e.g., core 118a) can be powered down and powered back up before the task is scheduled to be processed to conserve power. Alternatively, in an embodiment, power manager 102 can reassign the task to a different subsystem component (e.g., to another powered-up core, such as core 118b).
As discussed above, caches 115 can be used to temporarily store data for subsystems 108a and 108c. Subsystems 108 can access data stored in caches 115 faster than data stored in an external memory (not shown). In an embodiment, one subsystem can request to access data stored in a cache of another subsystem. Such requests can be referred to as "cache snooping." For example, a component of subsystem 108c may request to snoop into cache 115a of subsystem 108a because accessing data from cache 115a is faster than accessing data from an external memory. Additionally, in an embodiment, accessing data from caches 115 incurs less latency than accessing data from an external memory. For example, in an embodiment, core 118e can send a request (e.g., via control signal 106e) to access data stored in cache 115a. Power manager 102 can then determine whether to power on cache 115a.
In an embodiment, power manager 102 can initiate a power on of cache 115a without powering up additional components of subsystem 108a (e.g., without powering up one of cores 118a, 118b, 118c, or 118d) to enable subsystem 108c to snoop into cache 115a. By using this limited powering up technique, the system of
By powering up and powering down individual components of a subsystem instead of powering up and powering down an entire subsystem, embodiments of the present disclosure advantageously enable caches to remain powered even when other subsystem components have been shut down. For example, if cores 118a, 118b, 118c, and 118d have been powered down, power manager 102 can still supply cache 115a with power, enabling subsystem 108c to snoop into cache 115a to access data while core 118e or core 118f is being used to perform a task.
Systems and methods according to embodiments of the present disclosure can be configured to ensure cache coherency among subsystems. For example, if copies of the same data are stored in both caches 115a and 115b, systems and methods according to embodiments of the present disclosure can ensure that changes to data are uniformly made to all copies of the data stored in caches.
CCM 114 arbitrates requests to access data stored in caches 115. In an embodiment, CCM 114 includes a dedicated processor (not shown) or hardware logic to process instructions for arbitrating requests to access data stored in caches 115. In an embodiment, CCM 114 is notified when data is written to or read from caches 115, and CCM 114 records (or has access to) information regarding what data is stored in caches coupled to CCM subsystem 108b (e.g., caches 115). Thus, in an embodiment, subsystems 108 are not required to know which data is stored in which cache before requesting access to stored data. Instead, subsystems 108 can send a request to access data to CCM 114, and CCM 114 can determine whether the data is stored in one of caches 115 or whether it should access the data from external memory. In an embodiment, if CCM 114 is not powered on, subsystems 108 can send a request to power manager 102 to power on CCM 114, and then subsystems 108 can send a request to access data to CCM 114.
For example, in an embodiment, a component of subsystem 108c (e.g., core 118e) sends a request to CCM 114 to access data. CCM 114 receives the request, and determines whether data is stored in a cache (e.g., in cache 115a or 115b). If the data is not stored in a cache, CCM 114 initiates a retrieval of the data from external memory. If the data is stored in a cache, CCM initiates a retrieval of the information from the cache (e.g., from cache 115a). If the cache storing the data is not supplied with power, CCM 114 can send a request to power manager 102 to power on the cache so that the data can be read from the cache.
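The arbitration flow above can be sketched as follows. The `Cache` class, the `power_on` callback, and the dict-based external memory are illustrative stand-ins, not the disclosed interfaces.

```python
# Hypothetical sketch of CCM arbitration: prefer a cache hit, powering the
# cache on (and only the cache) when needed; fall back to external memory.
class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}        # address -> data
        self.powered = False

def ccm_read(address, caches, power_on, external_memory):
    """Return the data for address, preferring a cache hit over external memory."""
    for cache in caches:
        if address in cache.lines:        # the CCM tracks what each cache holds
            if not cache.powered:
                power_on(cache)           # request power for just this cache
            return cache.lines[address]
    return external_memory[address]       # miss in every cache

cache_115a = Cache("115a")
cache_115a.lines[0x40] = "payload"
data = ccm_read(0x40, [cache_115a],
                lambda c: setattr(c, "powered", True), {})
```

Note that the power-up request touches only the cache holding the data, mirroring the limited powering-up technique described earlier.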
In an embodiment, CCM 114 is notified when data is written to a cache (e.g., to cache 115a or 115b). For example, if core 118e wants to write data to cache 115b, core 118e first notifies CCM 114 that it is planning to write data to cache 115b. In an embodiment, CCM 114 notifies other subsystems accessing the data that the data is going to be updated, and CCM 114 can also update copies of the data stored in other caches. Additionally, in an embodiment, CCM 114 can be required to approve the request to write data to a cache before the data is written. For example, in an embodiment, CCM 114 may determine that a task using the data should be allowed to finish before the data is updated. Alternatively, CCM 114 can be configured to notify a process in progress that it is using stale data that is being updated. The process may then complete using the updated data (or the process may restart using the updated data).
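The write-notification flow above can be sketched as follows; each cache is modeled as a plain address-to-data dict, and `sharers` and `notify` are hypothetical names for illustration only.

```python
# Hypothetical coherency sketch: notify subsystems sharing the data, then
# update every cached copy so all copies remain uniform.
def ccm_write(address, data, caches, sharers, notify):
    """Warn sharers that the data is being updated, then write all copies."""
    for subsystem in sharers.get(address, []):
        notify(subsystem, address)        # e.g., flag a process using stale data
    for lines in caches:                  # each cache modeled as a dict
        if address in lines:
            lines[address] = data         # keep cached copies identical

caches = [{0x10: "old"}, {0x10: "old"}, {0x20: "other"}]
notified = []
ccm_write(0x10, "new", caches, {0x10: ["108a"]},
          lambda s, a: notified.append((s, a)))
```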
In
When sub-power manager 104a determines that a core (e.g., core 118a) should be powered down, sub-power manager 104a can send a control signal (e.g., control signal 106b) to the subsystem (e.g., subsystem 108a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110a) to toggle a switch coupled to the core (e.g., ASR 110a can toggle switch 116a coupled to core 118a) to cut off power from the core. If the sub-power manager determines that an entire subsystem should be powered down, the sub-power manager can stop supplying power to the switching regulator of the subsystem. For example, sub-power manager 104a can stop supplying power to ASR 110a to cut off power from subsystem 108a.
When sub-power manager 104a determines that a core (e.g., core 118a) should be powered on, sub-power manager 104a can send a control signal (e.g., control signal 106b) to the subsystem (e.g., subsystem 108a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110a) to toggle a switch coupled to the core (e.g., ASR 110a can toggle switch 116a coupled to core 118a so that switch 116a connects ASR 110a to core 118a) to supply power to the core. If sub-power manager 104a determines that an entire subsystem should be powered on, sub-power manager 104a can supply power to the switching regulator of the subsystem. For example, sub-power manager 104a can supply power to ASR 110a to supply power to subsystem 108a.
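The regulator-and-switch model in the two paragraphs above can be sketched as follows. The class and switch names are illustrative assumptions; a core receives power only when its regulator is supplied and its dedicated switch is closed.

```python
# Hypothetical sketch of a switching regulator with per-core switches.
class SwitchingRegulatorSketch:
    def __init__(self, switches):
        self.enabled = True                              # regulator supplied?
        self.switches = dict.fromkeys(switches, False)   # switch name -> closed?

    def toggle(self, switch, closed):
        """Close or open one core's switch (per-core power up/down)."""
        self.switches[switch] = closed

    def core_powered(self, switch):
        # Power reaches a core only if the regulator is on AND its switch closed.
        return self.enabled and self.switches[switch]

asr_110a = SwitchingRegulatorSketch(["116a", "116b", "116c", "116d"])
asr_110a.toggle("116a", True)     # power on core 118a only
asr_110a.toggle("116a", False)    # power down core 118a only
asr_110a.enabled = False          # cut power to the entire subsystem
```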
In an embodiment, a cache of a subsystem is powered down when a subsystem is powered down, and a cache of a subsystem is powered on when the subsystem powers on. For example, in an embodiment, cache 115a is powered down when subsystem 108a is powered down, and cache 115a is powered on when subsystem 108a is powered on. However, it should be understood that in an embodiment, caches can be powered down and powered on without requiring a power down or power on of the entire subsystem. For example, in an embodiment, cache 115a can be coupled to a dedicated switch (not shown), and ASR 110a can toggle this dedicated switch to cut off power from cache 115a or supply power to cache 115a without requiring entire subsystem 108a to be powered down or powered on.
As shown in
In an embodiment, subsystem components and/or subsystems can send a message to power manager 102 and/or respective sub-power managers 104 when the subsystem components and/or subsystems have finished performing tasks. These messages can optionally include requests to power down the subsystem components and/or subsystems. For example, in an embodiment, cores 118a, 118b, 118c, and 118d can send a message to sub-power manager 104a when cores 118a, 118b, 118c, and 118d have finished performing tasks. If, after receiving this message, sub-power manager 104a determines that any of cores 118a, 118b, 118c, and/or 118d should be powered down, sub-power manager 104a can initiate a powering down of cores 118a, 118b, 118c, and/or 118d by sending a control signal (e.g., control signal 106b) to ASR 110a to instruct ASR 110a to toggle switches 116a, 116b, 116c, and/or 116d to cut off power to cores 118a, 118b, 118c, and/or 118d. In an embodiment, sub-power manager 104a can determine whether any other system components need to access any of cores 118a, 118b, 118c, and/or 118d before powering down any of cores 118a, 118b, 118c, and/or 118d.
Additionally, for example, subsystem 108a can send a message to power manager 102 when subsystem 108a has finished performing tasks. For example, if cache 115a is no longer being used, subsystem 108a can send a message to sub-power manager 104a requesting that subsystem 108a be powered down. If, after receiving this message, sub-power manager 104a determines that subsystem 108a should be powered down, sub-power manager 104a can initiate a powering down of subsystem 108a by sending a control signal (e.g., control signal 106b) to ASR 110a to cut off power from ASR 110a to power down subsystem 108a. In an embodiment, sub-power manager 104a can determine whether any other system components need to access subsystem 108a before powering down subsystem 108a.
In an embodiment, subsystems can also send a message to power manager 102 informing power manager 102 that they have finished performing tasks using components of other subsystems. For example, if subsystem 108a has finished accessing cache 115b of subsystem 108c, subsystem 108a can send a message to power manager 102 informing power manager 102 that it is no longer accessing cache 115b. In an embodiment, subsystem 108a can send this message to sub-power manager 104a, and sub-power manager 104a can forward the message to sub-power manager 104c. However, it should be understood that sub-power manager 104a or power manager 102 can process this message in accordance with embodiments of the present disclosure. If, after receiving this message, power manager 102 determines that subsystem 108c should be powered down (e.g., to cut off power from cache 115b), power manager 102 can initiate a powering down of subsystem 108c by sending a control signal (e.g., control signal 106f) to ASR 110c to cut off power from ASR 110c to power down subsystem 108c (and thus power down cache 115b). In an embodiment, power manager 102 can determine whether any other system components need to access cache 115b and/or other components of subsystem 108c before powering down subsystem 108c.
In an embodiment, CCM subsystem 108b can also send a message to power manager 102 when CCM subsystem 108b has finished performing tasks. For example, CCM subsystem 108b can send a message to sub-power manager 104b when CCM subsystem is no longer being used to arbitrate access to caches 115. If, after receiving this message, sub-power manager 104b determines that CCM subsystem 108b should be powered down, sub-power manager 104b can initiate a powering down of CCM subsystem 108b by sending a control signal (e.g., control signal 106d) to CSR 110b to cut off power from CSR 110b to power down CCM subsystem 108b (and thus power down CCM 114). In an embodiment, sub-power manager 104b can determine whether any other system components need to access CCM 114 and/or other components of CCM subsystem 108b before powering down CCM subsystem 108b.
Systems and methods according to embodiments of the present disclosure enable subsystems and/or subsystem components to be powered on in layers so that unused system components are not supplied with power. This layering concept provides an efficient, flexible approach to supplying power to various subsystem components. For example, in an embodiment, power manager 102 will not attempt to power down an entire subsystem while a subsystem component is still being used to perform a task. Instead, power manager 102 adopts a layered approach by first attempting to power down unused subsystem components. Then, once all subsystem components have finished performing tasks, power manager 102 determines whether to power down the subsystem. Finally, if all subsystems have finished performing tasks, power manager 102 determines whether to power down CCM subsystem 108b (and thus power down CCM 114).
For example, in an embodiment, power manager 102 does not power down ASR 110a (which, in an embodiment, supplies power to entire subsystem 108a including cache 115a) until all of cores 118a, 118b, 118c, and 118d have been powered down (e.g., via switches 116a, 116b, 116c, and 116d, respectively). Additionally, in an embodiment, power manager 102 does not power down CSR 110b (which, in an embodiment, supplies power to entire subsystem 108b including CCM 114) until both subsystems 108a and 108c have been powered down (e.g., via ASR 110a and ASR 110c, respectively).
In an embodiment, this layering concept can also extend to powering up subsystems and subsystem components. For example, in an embodiment, power manager 102 does not power on subsystem 108a or subsystem 108c until CCM subsystem 108b has been powered on (e.g., by supplying power to CSR 110b). Additionally, in an embodiment, power manager 102 does not power on any of cores 118a, 118b, 118c, or 118d until subsystem 108a has been powered on (e.g., by supplying power to ASR 110a).
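The layered ordering described in the preceding paragraphs can be sketched as follows. The layer names are illustrative: outer layers (the CCM subsystem, then a subsystem's regulator) power up before inner layers (individual cores), and power down in the reverse order.

```python
# Hypothetical sketch of layered power sequencing; layer names are illustrative.
POWER_UP_ORDER = ["ccm_subsystem", "subsystem_regulator", "core"]

def power_up_sequence(target_layer):
    """Every outer layer must be on before an inner layer may be powered."""
    idx = POWER_UP_ORDER.index(target_layer)
    return POWER_UP_ORDER[: idx + 1]

def power_down_sequence(target_layer):
    """Inner layers must be off before an outer layer may be powered down."""
    idx = POWER_UP_ORDER.index(target_layer)
    return list(reversed(POWER_UP_ORDER[idx:]))

# Powering on a core first requires the CCM subsystem, then the regulator:
power_up_sequence("core")
# Powering down the CCM subsystem happens last, after cores and regulators:
power_down_sequence("ccm_subsystem")
```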
In an embodiment, caches in accordance with embodiments of the present disclosure (e.g., caches 115a and/or 115b) can be partitioned into multiple portions, and each portion of a cache can be powered down when not used to conserve power and powered up when needed. For example, in an embodiment, power manager 102 can send a message instructing a portion of cache 115a to be powered down when this portion of cache 115a is not needed. While a portion of cache 115a is powered down, other portions of cache 115a can still be powered on and accessed. When power manager 102 determines that a powered down portion of cache 115a needs to be used to perform a task, power manager 102 can send a message instructing the powered down portion of cache 115a to be powered on again.
For example, in an embodiment, cache 115a can be split into a first portion and a second portion. If, for example, core 118e has finished accessing the first portion of cache 115a, core 118e can send a message to power manager 102 informing power manager 102 that it has finished using the first portion of cache 115a and that the first portion of cache 115a can be powered down. If power manager 102 determines that no other subsystems need to access the first portion of cache 115a, power manager 102 can send a message to ASR 110a instructing ASR 110a to cut off power to the first portion of cache 115a. While the first portion of cache 115a is powered down, the second portion of cache 115a can still receive power from ASR 110a and can still be accessed by other subsystem components. If, for example, core 118f needs to access the first portion of cache 115a, core 118f can send a message to power manager 102 requesting that the first portion of cache 115a be powered on. Power manager 102 can then send a message to ASR 110a instructing ASR 110a to supply power to the first portion of cache 115a.
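The per-portion gating above can be sketched as follows. The portion names and the acquire/release interface are hypothetical illustrations; a portion is gated off only when no user still needs it, and other portions remain powered.

```python
# Hypothetical sketch of per-portion cache power gating.
class PartitionedCacheSketch:
    def __init__(self, portions):
        self.powered = dict.fromkeys(portions, True)   # portion -> powered?
        self.users = {p: set() for p in portions}      # portion -> active users

    def release(self, portion, user):
        """A core reports it is done; gate the portion off if nobody needs it."""
        self.users[portion].discard(user)
        if not self.users[portion]:
            self.powered[portion] = False    # other portions stay powered

    def acquire(self, portion, user):
        """Power the portion back on (if needed) for a new user."""
        self.powered[portion] = True
        self.users[portion].add(user)

cache_115a = PartitionedCacheSketch(["first", "second"])
cache_115a.acquire("first", "core_118e")
cache_115a.release("first", "core_118e")   # first portion gated off
cache_115a.acquire("first", "core_118f")   # powered back on for a new user
```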
In an embodiment, the components of the system of
In an embodiment, if sub-power manager 104a receives a request to power down cache 115a and/or subsystem 108a in step 400, sub-power manager 104a determines whether other subsystem components need to access cache 115a and/or subsystem 108a in step 402. If sub-power manager 104a determines that other system components need to access cache 115a and/or subsystem 108a, the method proceeds to step 404, and cache 115a and/or subsystem 108a is left on. If sub-power manager 104a determines that other system components do not need to access cache 115a and/or subsystem 108a, the method proceeds to step 406, and cache 115a and/or subsystem 108a are powered down (e.g., by powering down ASR 110a).
In an embodiment, the CCM can send a request to power manager 102 to determine whether the cache is powered on. For example, in an embodiment, CCM 114 sends a request to power manager 102 via control signal 106c to determine whether cache 115b is powered on. In an embodiment, power manager 102 can respond to the CCM via control signal 106d. If the CCM (e.g., CCM 114) determines that the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache. For example, CCM 114 can retrieve the data from cache 115b. If the CCM (e.g., CCM 114) determines that the cache is not powered on, the method proceeds to step 508, and a request to power on the cache is sent. For example, CCM 114 can send a request to power on cache 115b to power manager 102 via control signal 106c. In an embodiment, sub-power manager 104c can then power on ASR 110c to supply power to cache 115b. Once the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache (e.g., from cache 115b).
It will be apparent to persons skilled in the relevant art(s) that various elements and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.
The following description of a general purpose computer system is provided for the sake of completeness. Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system. An example of such a computer system 600 is shown in
Computer system 600 includes one or more processors, such as processor 604. Processor 604 can be a special purpose or a general purpose digital signal processor. Processor 604 is connected to a communication infrastructure 602 (for example, a bus or network). Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the disclosure using other computer systems and/or computer architectures.
Computer system 600 also includes a main memory 606, preferably random access memory (RAM), and may also include a secondary memory 608. Secondary memory 608 may include, for example, a hard disk drive 610 and/or a removable storage drive 612, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like. Removable storage drive 612 reads from and/or writes to a removable storage unit 616 in a well-known manner. Removable storage unit 616 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 612. As will be appreciated by persons skilled in the relevant art(s), removable storage unit 616 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative implementations, secondary memory 608 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 600. Such means may include, for example, a removable storage unit 618 and an interface 614. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 618 and interfaces 614 which allow software and data to be transferred from removable storage unit 618 to computer system 600.
Computer system 600 may also include a communications interface 620. Communications interface 620 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 620 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 620 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 620. These signals are provided to communications interface 620 via a communications path 622. Communications path 622 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels.
As used herein, the terms “computer program medium” and “computer readable medium” are used to generally refer to tangible storage media such as removable storage units 616 and 618 or a hard disk installed in hard disk drive 610. These computer program products are means for providing software to computer system 600.
Computer programs (also called computer control logic) are stored in main memory 606 and/or secondary memory 608. Computer programs may also be received via communications interface 620. Such computer programs, when executed, enable the computer system 600 to implement the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor 604 to implement the processes of the present disclosure, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 600. Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 612, interface 614, or communications interface 620.
In another embodiment, features of the disclosure are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).
It is to be appreciated that the Detailed Description, and not the Abstract, is intended to be used to interpret the claims. The Abstract may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, is not intended to limit the present disclosure and the appended claims in any way.
The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
Any representative signal processing functions described herein can be implemented in hardware, software, or some combination thereof. For instance, signal processing functions can be implemented using computer processors, computer logic, application specific circuits (ASIC), digital signal processors, etc., as will be understood by those skilled in the art based on the discussion given herein. Accordingly, any processor that performs the signal processing functions described herein is within the scope and spirit of the present disclosure.
The above systems and methods may be implemented as a computer program executing on a machine, as a computer program product, or as a tangible and/or non-transitory computer-readable medium having stored instructions. For example, the functions described herein could be embodied by computer program instructions that are executed by a computer processor or any one of the hardware devices listed above. The computer program instructions cause the processor to perform the signal processing functions described herein. The computer program instructions (e.g., software) can be stored in a tangible non-transitory computer usable medium, computer program medium, or any storage medium that can be accessed by a computer or processor. Such media include a memory device such as a RAM or ROM, or other type of computer storage medium such as a computer disk or CD ROM. Accordingly, any tangible non-transitory computer storage medium having computer program code that causes a processor to perform the signal processing functions described herein is within the scope and spirit of the present disclosure.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should instead be defined only in accordance with the following claims and their equivalents.
This application claims the benefit of U.S. Provisional Patent Application No. 61/757,947, filed on Jan. 29, 2013.