Portable computing devices (“PCDs”) are becoming necessities for people on personal and professional levels. These devices may include cellular telephones, portable digital assistants (“PDAs”), portable game consoles, palmtop computers, and other portable electronic devices.
One aspect that PCDs have in common with most computing devices is the use of electronic memory components for storing instructions and/or data. Various types of memory components may exist in a PCD, each designated for a different purpose. Commonly, random access memory (“RAM”) such as double data rate (“DDR”) memory is used to store instructions and data for multimedia (“MM”) client applications. As such, when a PCD is processing workloads associated with multimedia applications, there may be significant amounts of read/write traffic to and from the DDR memory component.
In a use case that includes multiple MM clients trying to simultaneously read and write from dispersed regions of the DDR, the read/write transactions may be intermingled to such an extent that the DDR is constantly being “struck” at non-contiguous addresses associated with different pages within the DDR. Consequently, because the DDR may only be capable of keeping a few memory pages actively open and ready for quick access, the intermingled strikes on the DDR may dictate constant closing of some pages and opening of others. This constant opening and closing of pages as the memory controller “ping pongs” all over the DDR, writing data to some addresses and reading data from others, may significantly impact MM application latency, the availability of memory bandwidth, and other quality of service (“QoS”) metrics.
Accordingly, what is needed in the art is a system and method for deep coalescing memory management in a portable computing device. More specifically, what is needed in the art is a system and method that instantiates buffers in a low-latency cache memory, associates those buffers with particular MM clients, and sequentially orders transactions within the buffers so that page opening and closing in the DDR is optimized.
Various embodiments of methods and systems for deep coalescing memory management (“DCMM”) in a portable computing device (“PCD”) are disclosed. Because multiple active multimedia (“MM”) clients running on the PCD may generate a random stream of mixed read and write requests associated with data stored at non-contiguous addresses in a double data rate (“DDR”) memory component, DCMM solutions triage the requests into dedicated deep coalescing (“DC”) cache buffers to optimize read and write transactions from and to the DDR memory component.
One exemplary DCMM method includes instantiating in a cache memory, in association with a particular active MM client, a first DC buffer that is expressly for data transaction requests that are read requests and a second DC buffer that is expressly for data transaction requests that are write requests. When a write request is received from the MM client, the request is sequentially queued in the second DC buffer relative to other write requests already queued in the second DC buffer. The queued write requests are sequentially ordered in the DC buffer based on associated addresses in the DDR memory component. When a read request is received from the MM client, the requested data is returned from the first DC buffer. The data in the first DC buffer may have been previously retrieved from the DDR.
The exemplary DCMM method may monitor the capacities of the first and second DC buffers, seeking to refresh the amount of data held in the first DC buffer (the “read” buffer) and minimize the amount of data queued in the second DC buffer (the “write” buffer). When the first DC buffer becomes sufficiently depleted, the exemplary DCMM method may retrieve a block of data from the DDR, such as a memory page of data. In this way, the MM client may benefit from having its read requests serviced from the relatively fast cache memory. Similarly, when the second DC buffer becomes sufficiently full, the exemplary DCMM method may flush a block of data to the DDR, such as a memory page of contiguous data. In this way, updating data in the DDR from write requests generated by the MM client may be efficiently done as the various requests were aggregated in the second DC buffer and sequentially ordered, thus mitigating the need for the DDR to engage in page opening and closing activities to save the flushed data.
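By way of a non-limiting illustration, the data structures implied by such a method might be sketched in C++ as follows. The structure, field names, and sizes shown are assumptions made solely for the example and do not correspond to any particular embodiment; an ordered map is used so that queued write requests remain sorted by their DDR addresses.

#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical per-client deep coalescing state: one "read" buffer and one
// "write" buffer instantiated in low-latency cache memory on behalf of a
// single multimedia client.
struct DcWriteEntry {
    uint64_t ddrAddress;            // target address in the DDR component
    std::vector<uint8_t> data;      // payload to be written
};

struct DcClientBuffers {
    // Write requests kept ordered by DDR address so that a flush touches as
    // few DDR pages as possible (ideally a single page per flush).
    std::map<uint64_t, DcWriteEntry> writeBuffer;

    // Data previously retrieved from the DDR, held ready so that read
    // requests are serviced from the fast cache rather than from the DDR.
    std::map<uint64_t, std::vector<uint8_t>> readBuffer;

    std::size_t flushThresholdBytes  = 4096;  // assumed page-sized write flush
    std::size_t refillWatermarkBytes = 512;   // refill the read buffer below this
};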
In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as “102A” or “102B”, the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.
The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect described herein as “exemplary” is not necessarily to be construed as exclusive, preferred or advantageous over other aspects.
In this description, the term “application” may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an “application” referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
In this description, reference to “DDR” memory components will be understood to envision any of a broader class of volatile random access memory (“RAM”) and will not limit the scope of the solutions disclosed herein to a specific type or generation of RAM. That is, it will be understood that various embodiments of the systems and methods provide a solution for deep coalescing memory management of read and write transaction requests to a memory component defined by pages/rows of memory banks and are not necessarily limited in application to double data rate memory. Moreover, it is envisioned that certain embodiments of the solutions disclosed herein may be applicable to DDR, DDR-2, DDR-3, low power DDR (“LPDDR”) or any subsequent generation of RAM. As would be understood by one of ordinary skill in the art, DDR RAM is organized in rows or memory pages and, as such, the terms “row” and “memory page” are used interchangeably in the present description. The memory pages of DDR may be divided into four sections, called banks in the present description. Each bank may have a register associated with it and, as such, one of ordinary skill in the art will recognize that in order to address a row of DDR (i.e., a memory page), an address of both a memory bank and a row may be required. A memory bank may be active, in which case there may be one or more open pages associated with the register of the memory bank.
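As a non-limiting illustration of the bank and row addressing just described, the following C++ sketch decomposes a physical address into a bank number and a row (memory page) number. Actual DDR address maps vary by device and memory controller; the page size and bank interleave assumed below are chosen only for the example.

#include <cstdint>

// Illustrative decomposition of a physical address into bank and row fields.
struct DdrPageAddress {
    uint32_t bank;  // which of the (e.g., four) banks
    uint32_t row;   // which row (memory page) within that bank
};

inline DdrPageAddress decodeDdrAddress(uint64_t physicalAddress) {
    constexpr uint64_t kPageBytes = 4096;  // assumed row/page size
    constexpr uint64_t kNumBanks  = 4;     // "four sections, called banks"
    const uint64_t pageIndex = physicalAddress / kPageBytes;
    return DdrPageAddress{
        static_cast<uint32_t>(pageIndex % kNumBanks),  // assumed bank interleave
        static_cast<uint32_t>(pageIndex / kNumBanks)   // row within the bank
    };
}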
In this description, the term “contiguous” is used to refer to data blocks stored in a common memory page of a DDR memory and, as such, is not meant to limit the application of solutions to reading and/or writing data blocks that are stored in an uninterrupted series of addresses on a memory page. For example, although an embodiment of the solution may read or write data blocks from/to addresses in a memory page numbered sequentially 2, 3 and 4, an embodiment may also read or write data blocks from/to addresses in a memory page numbered 2, 5, 12 without departing from the scope of the solution.
As used in this description, the terms “component,” “database,” “module,” “system,” and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).
In this description, the terms “central processing unit (“CPU”),” “digital signal processor (“DSP”),” “graphical processing unit (“GPU”),” and “chip” are used interchangeably. Moreover, a CPU, DSP, GPU or a chip may be comprised of one or more distinct processing components generally referred to herein as “core(s).”
In this description, the term “portable computing device” (“PCD”) is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation (“3G”) and fourth generation (“4G”) wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, among others.
In current systems and methods, multiple multimedia (“MM”) clients running simultaneously in a PCD create an intermingled flow of read and write transaction requests that necessitate access to dispersed regions of a DDR memory component. When all the transaction requests are intermingled with each other, the DDR is constantly being struck at different addresses. When a transaction strikes the DDR at a given address, the DDR recognizes that the address is located in a certain region of its memory and remembers that location for a period of time under the assumption that a subsequent strike in the same region may be imminent. By remembering the location of a recent strike, a DDR seeks to minimize the volume of page opening and closing required to accommodate the flow of transaction requests. Unfortunately, a DDR is limited in its ability to keep up with the pages it has open, especially under the pressure of accommodating a high-capacity flow of read and write transaction requests coming from multiple active MM clients.
Embodiments of systems and methods for deep coalescing memory management (“DCMM”) take advantage of the predictable read/write patterns of certain MM clients to optimize traffic to and from the DDR. To do this, DCMM embodiments may instantiate buffers in a cache memory and associate each buffer with either read requests or write requests from a particular active MM client. As transaction requests are generated by the MM clients, each is deposited into the appropriate cache buffer associated with the MM client from which it originated. Advantageously, the transaction requests may be ordered sequentially in the respective cache buffers so that when the buffers are flushed (whether “write flushed” to the DDR or “read flushed” to a MM client) the time required for accessing the DDR and completing the requests is optimized. Moreover, DCMM embodiments may sequentially flush cache buffers based on the DDR addresses of transaction requests in the buffers so that the time required for accessing the DDR and completing the requests is optimized by avoiding unnecessary random striking of the DDR. Meanwhile, the MM clients benefit from the efficient and speedy interaction with the cache (as opposed to reading and writing directly to the DDR), thereby optimizing QoS enjoyed by the PCD user.
In general, the deep coalescing traffic (“DCT”) manager 101 may be formed from hardware and/or firmware and may be responsible for managing read and/or write requests for instructions and/or data stored in a DDR memory component 115 (depicted in
As illustrated in
As further illustrated in
The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157B. The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature (“PTAT”) temperature sensors that are based on a vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor (“CMOS”) very large-scale integration (“VLSI”) circuits. The off-chip thermal sensors 157B may comprise one or more thermistors. The thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter (“ADC”) controller (not shown). However, other types of thermal sensors 157 may be employed.
The touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157B, the PMIC 180 and the power supply 188 are external to the on-chip system 102. It will be understood, however, that one or more of these devices depicted as external to the on-chip system 102 in the exemplary embodiment of a PCD 100 in
In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 or forming the DCT manager 101. Further, the DCT manager 101, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.
To avoid the random flow of read and write transaction requests 205 striking the DDR 115, the DCT manager 101 may filter the requests into DC buffers 116 uniquely associated with the active MM clients 201. Write requests emanating from MM client 201A, for example, may be queued in MMC 201A DC Buffer 116A while read requests from MM client 201A are queued in a different DC buffer 116. Read and write requests from other MM clients 201 may be queued in other DC buffers 116 that were instantiated in association with, and for the benefit of, those other MM clients 201.
Recognizing the data access patterns associated with certain MM clients 201, the DCT manager 101 may sequentially order the write requests in the appropriate DC buffers 116 until a full page, or other optimal data block size, of requests has accumulated in a given DC “write” buffer, at which time the DCT manager 101 may trigger a flush 207 of the given “write” buffer to the DDR 115. Similarly, the DCT manager 101 may monitor the capacity of a given “read” buffer and, recognizing that data levels in the DC “read” buffer are low, trigger a read transaction from the DDR 115 in a full page block, or other optimal block size, into the “read” buffer. In these ways, DCMM embodiments seek to optimize transactions to and from the DDR 115 as blocks of data transactions associated with addresses in a common memory page are conducted sequentially.
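The monitoring behavior just described may be sketched, under the same illustrative assumptions and reusing the DcClientBuffers structure shown earlier, roughly as follows. The two DDR-facing helpers are hypothetical placeholders for whatever controller interface a given embodiment provides.

#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical DDR-facing helpers, declared for illustration only.
void flushWritesToDdr(const std::map<uint64_t, DcWriteEntry>& sortedWrites);
void prefetchPageFromDdr(std::map<uint64_t, std::vector<uint8_t>>& readBuffer);

void monitorClientBuffers(DcClientBuffers& buf) {
    // Write side: once roughly a full page of sequentially ordered requests
    // has accumulated, flush the whole block so the DDR opens the page once.
    std::size_t queuedBytes = 0;
    for (const auto& [addr, entry] : buf.writeBuffer) {
        queuedBytes += entry.data.size();
    }
    if (queuedBytes >= buf.flushThresholdBytes) {
        flushWritesToDdr(buf.writeBuffer);
        buf.writeBuffer.clear();
    }

    // Read side: when the cached data runs low, pull another contiguous
    // block (e.g., a full page) from the DDR before the client asks for it.
    std::size_t cachedBytes = 0;
    for (const auto& [addr, data] : buf.readBuffer) {
        cachedBytes += data.size();
    }
    if (cachedBytes <= buf.refillWatermarkBytes) {
        prefetchPageFromDdr(buf.readBuffer);
    }
}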
For example, referring to MM client 201A, it can be seen that two exemplary data transactions, A2 and A1, have originated from MM client 201A. Similarly, data transactions B2, B1 and B3 are shown to have originated from MM client 201B while data transactions n1, n3 and n2 have originated from MM client 201n. Notably, although the illustrations depict three active MM clients 201, one of ordinary skill in the art will recognize that DCMM embodiments may be applicable for data transaction management in systems having any number of active MM clients.
Returning to the
Because the data transaction requests from one MM client 201 may originate independently from data transaction requests originated from a different MM client 201, the requests may arrive on bus 211A randomly, as depicted in the
Moreover, a given MM client 201 may be executing a single computing thread, for example, such that the associated DC buffer 116 is filled sequentially without the DCT manager 101 having to reorder the transaction requests. In other applications, however, a DC buffer 116 may be filled in reverse, from the finish to the start of a computing thread, or from a center point in the thread to an end point, such as may be the case when building or modifying list structures. Further, it is envisioned that an MM client 201, although singularly “functional,” may include multi-threaded parallel hardware executing multiple threads in parallel, each of which sends transaction requests to the associated DC buffer 116. In such a case, the associated DC buffer 116 may be large, with different threads assigned to different spatial regions, or, as another example, the associated DC buffer 116 may require multiple iterations of work to be performed on the cached data before being flushed to the DDR 115.
Returning to the
Notably, regarding the size of deep coalescing cache buffers in a given DCMM system 102, it is envisioned that the size may be determined based on the known pattern of data updates and retrieval for a given MM client. As such, it is envisioned that DC buffer size may be determined based on a likelihood that the size lends itself to being consistently filled with sequential data transactions (if a “read” buffer) or consistently emptied with a block of sequential data transactions (if a “write” buffer).
In the
The requests are managed by the deep coalescing traffic manager 101. As the requests arrive, the DCT manager 101 triages the requests into DC cache buffers instantiated in cache memory 116 for coalescing of write requests uniquely associated with each of the MM clients 201. As previously described, the DCT manager 101 may order the requests in the respective DC write buffers such that the requests are sequentially ordered according to DDR addresses associated with the data in the write requests.
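Such sequential ordering may be illustrated, again under assumed names and types, by a sorted insertion keyed on DDR address, so that queued write requests remain address-ordered regardless of the order in which they arrive.

#include <algorithm>
#include <cstdint>
#include <vector>

struct WriteRequest {
    uint64_t ddrAddress;
    std::vector<uint8_t> payload;
};

// Insert a newly arrived write request into a client's DC "write" buffer so
// that the buffer stays sorted by DDR address; a later flush then walks the
// corresponding DDR page sequentially instead of striking it at random.
void coalesceWrite(std::vector<WriteRequest>& dcWriteBuffer, WriteRequest req) {
    auto pos = std::lower_bound(
        dcWriteBuffer.begin(), dcWriteBuffer.end(), req.ddrAddress,
        [](const WriteRequest& lhs, uint64_t addr) { return lhs.ddrAddress < addr; });
    dcWriteBuffer.insert(pos, std::move(req));
}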
The DCT manager 101 may have determined a given DC cache buffer size based on a recognized pattern of write request generation from a given MM client 201. For example, assuming that the data transaction requests depicted in the
Returning to the
As can further be seen from the
In the
The requests are managed by the deep coalescing traffic manager 101. As DC cache buffers associated with read requests for each of the MM clients 201 become depleted, the DCT manager 101 triggers the DDR memory 115 to copy contiguous blocks of data into the DC cache “read” buffers. As previously described, the DCT manager 101 may order the requests in the respective DC read buffers such that the requests are sequentially ordered according to DDR addresses associated with the data in the read requests.
The DCT manager 101 may have determined a given DC cache read buffer size based on a recognized pattern of read request generation from a given MM client 201. For example, assuming that the data transaction requests depicted in the
Returning to the
As can further be seen from the
As illustrated in
The CPU 110 may receive commands from the DCT manager module(s) 101, which may comprise software and/or hardware. If embodied as software, the module(s) 101 comprise instructions that are executed by the CPU 110, which in turn issues commands to other application programs being executed by the CPU 110 and other processors.
The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies.
Bus 211 may include multiple communication paths via one or more wired or wireless connections, as is known in the art. The bus 211 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the bus 211 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
When the logic used by the PCD 100 is implemented in software, as is shown in
In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
In an alternative embodiment, where one or more of the startup logic 250, management logic 260 and perhaps the DCTM interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor 110 (or additional processor cores).
The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for managing or controlling the performance of one or more of the available cores such as the first core 222, the second core 224 through to the Nth core 230. The startup logic 250 may identify, load and execute a select DCMM program. An exemplary select program may be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a deep coalescing algorithm 297 and a set of parameters 298 for a given MM client that may include timing parameters, data amounts, patterns of data transaction requests, etc. The exemplary select program, when executed by one or more of the core processors in the CPU 110, may operate in accordance with one or more signals provided by the DCT manager module 101 to triage write and read requests in dedicated deep coalescing cache buffers.
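Purely by way of illustration, a set of parameters 298 for a given MM client could be represented along the following lines; the field names and default values are assumptions for the example only and are not elements of any particular embodiment.

#include <cstddef>
#include <cstdint>

// Hypothetical per-client tuning parameters such as a set of parameters 298
// might capture for use with a deep coalescing algorithm 297.
struct DcmmClientParameters {
    std::size_t readBufferBytes      = 8192;  // size of the DC "read" buffer
    std::size_t writeBufferBytes     = 8192;  // size of the DC "write" buffer
    std::size_t flushThresholdBytes  = 4096;  // write-flush trigger (e.g., one page)
    std::size_t refillWatermarkBytes = 512;   // read-refill trigger
    uint32_t    maxQueueLatencyUs    = 500;   // flush regardless after this long
};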
The management logic 260 includes one or more executable instructions for terminating a memory management program on one or more of the respective processor cores, as well as selectively identifying, loading, and executing a more suitable replacement program for managing memory in response to read and write data transaction requests originating from various active multimedia clients. The management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device. A replacement program may be found in the program store 296 of the embedded file system 290 and, in some embodiments, may be defined by a specific combination of an algorithm 297 and a set of parameters 298.
The interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to the optimum cache buffer size for a given multimedia client.
The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100. When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280 or information in the embedded file system 290 may be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280 and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.
The embedded file system 290 includes a hierarchically arranged memory management store 292. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various parameters 298 and deep coalescing memory management algorithms 297 used by the PCD 100. As shown in
If the data transaction request of block 705 is a “write” request, then the method proceeds to decision block 715. Because the DCT manager module 101 may be seeking to keep as empty as possible a deep coalescing buffer instantiated in the cache 116 for the express queuing of write requests originating from the active MM client 201, at decision block 715 the DCT manager module 101 may take note of the available capacity in the particular DC cache buffer that is uniquely associated with the MM client 201 from which the write request originated. If there is no room in the DC buffer, the method 700 may proceed to block 720 and stall the MM client 201 until capacity in the DC buffer becomes available for queuing the write request. If there is room in the DC buffer, the “yes” branch is followed from decision block 715 to block 725 and the write request transaction data is deposited in the dedicated DC buffer and sequentially ordered relative to other data already queued in the DC buffer (i.e., other write transaction requests previously generated by the MM client 201).
Proceeding from blocks 720 or 725, the method 700 moves to decision block 730. At decision block 730, the DCT manager module 101 may determine whether the DC buffer contains an optimum amount of transaction requests for writing to the DDR 115. If “yes,” then the method 700 proceeds to block 740 and a block of data, sequentially ordered according to its associated addresses in the DDR 115, is written to the DDR 115. If the DC buffer does not contain an optimum amount of data for writing to the DDR 115, the method 700 may proceed to decision block 735 to determine whether the DC buffer should be flushed to the DDR 115 regardless. If “yes,” then the method 700 moves to block 740 and the data is written to the DDR 115. The “yes” branch of decision block 735 may be triggered by any number of conditions including, but not limited to, a duration of time that the DC buffer has existed, the state of a workload being processed by the associated MM client 201, the amount of data in the DC buffer, etc. Essentially, it is envisioned that any trigger may be used to dictate whether a DC buffer is ripe for flushing and, as such, embodiments of the solutions are not limited to particular amounts of data aggregating in a DC buffer. Moreover, it is envisioned that DCMM embodiments may be opportunistic in setting thresholds for triggering of data flushes. If the “no” branch is followed from decision block 735, then the method returns.
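The write branch just described (decision block 715 through block 740) may be sketched as follows, reusing the illustrative types from the earlier examples; the helper functions are hypothetical placeholders rather than elements of any embodiment.

#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical helpers standing in for implementation details.
bool hasCapacity(const DcClientBuffers& buf, std::size_t bytesNeeded);
void stallClient();                                   // block 720
std::size_t queuedBytes(const DcClientBuffers& buf);
bool shouldFlushAnyway(const DcClientBuffers& buf);   // e.g., elapsed time, workload state
void writeBlockToDdr(const std::map<uint64_t, DcWriteEntry>& sortedWrites);

void handleWriteRequest(DcClientBuffers& buf, WriteRequest req) {
    // Decision block 715: is there room in the client's DC "write" buffer?
    while (!hasCapacity(buf, req.payload.size())) {
        stallClient();                                // block 720
    }
    // Block 725: deposit the request, kept ordered by its DDR address.
    buf.writeBuffer.emplace(req.ddrAddress,
                            DcWriteEntry{req.ddrAddress, std::move(req.payload)});

    // Decision blocks 730/735: flush when an optimal block has accumulated,
    // or when some other trigger dictates that the buffer is ripe for flushing.
    if (queuedBytes(buf) >= buf.flushThresholdBytes || shouldFlushAnyway(buf)) {
        writeBlockToDdr(buf.writeBuffer);             // block 740
        buf.writeBuffer.clear();
    }
}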
Returning to decision block 710 in the method 700, if the data transaction request received at block 705 is a “read” request, then the method 700 proceeds to decision block 745. At decision block 745, the DCT manager module 101 may determine whether the requested data is already queued in a DC buffer instantiated expressly for “read” requests associated with the particular MM client 201 from which the data transaction request originated. If “no,” then the method 700 moves to block 750 and the MM client 201 is stalled until the requested data can be retrieved from the DDR 115 and stored in the associated DC buffer as described below. It is envisioned, however, that some embodiments may not stall the MM client 201 when the requested data is not already queued in the DC buffer but, rather, opt to directly query the data from the DDR 115. If the “yes” branch is followed from decision block 745, then at block 755 the requested data is returned from the DC buffer to the MM client 201 and the capacity level of the DC buffer is updated.
Moving from blocks 750 or 755, the method 700 may proceed to decision block 760. At decision block 760, the DCT manager module 101 may determine whether the DC buffer associated with read requests for the MM client 201 is in need of replenishing. If “yes,” then the method 700 proceeds to block 770 and an optimum amount of data is retrieved from the DDR 115 and stored in the dedicated DC buffer pending a read request from the associated MM client 201 for data included in the block. If the “no” branch is followed from decision block 760, then the method 700 may determine at decision block 765 whether the DC buffer associated with read requests should be “read flushed” up to the MM client or otherwise cleared out for fresh data. Essentially, it is envisioned that any trigger may be used to dictate whether a DC buffer is ripe for flushing and, as such, embodiments of the solutions are not limited to particular amounts of data aggregating in a DC buffer. Moreover, it is envisioned that DCMM embodiments may be opportunistic in setting thresholds for triggering of data flushes. If “yes,” then the method 700 may clear out the DC buffer or return to block 755. If “no,” then the method may return.
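The read branch (decision block 745 through block 770) may be sketched in the same hedged fashion; for simplicity of the example it is assumed that a prefetch brings the requested address into the DC “read” buffer.

#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical helpers standing in for implementation details.
std::size_t cachedBytes(const DcClientBuffers& buf);
void stallClient();
void prefetchPageFromDdr(std::map<uint64_t, std::vector<uint8_t>>& readBuffer);

std::vector<uint8_t> handleReadRequest(DcClientBuffers& buf, uint64_t ddrAddress) {
    // Decision block 745: is the requested data already queued in the DC buffer?
    auto hit = buf.readBuffer.find(ddrAddress);
    if (hit == buf.readBuffer.end()) {
        stallClient();                         // block 750: stall the MM client
        prefetchPageFromDdr(buf.readBuffer);   // assumed to bring in the requested data
        hit = buf.readBuffer.find(ddrAddress);
    }
    // Block 755: return the data from the fast cache buffer and update capacity.
    std::vector<uint8_t> data = hit->second;
    buf.readBuffer.erase(hit);

    // Decision blocks 760/765/770: replenish the "read" buffer from the DDR in
    // an optimally sized contiguous block when it is in need of replenishing.
    if (cachedBytes(buf) <= buf.refillWatermarkBytes) {
        prefetchPageFromDdr(buf.readBuffer);
    }
    return data;
}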
Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (i.e., substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as “thereafter”, “then”, “next”, etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.
Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows.
In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.
Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.