Embodiments of the present invention relate to the field of processor design. More specifically, embodiments of the present invention relate to systems and methods for fast unaligned memory access.
The term unaligned memory access generally refers to memory requests that require a memory, e.g., a cache memory, to return data that is not aligned to its read boundaries. For example, if a cache memory is aligned to word boundaries, e.g., 64-bit words, or the data path from a cache to the Load Store Queue (LSQ) is aligned along word boundaries of a cache line, a request for data that crosses this alignment is considered to be unaligned.
A request made to address 0x000006 for 32 bits of data will generally return the lower 16 bits of data from the entry addressed 0x000000 and the upper 16 bits of data from the entry addressed 0x000008. Such an unaligned access generally requires two memory accesses to fulfill one load request. It is to be appreciated that unaligned memory accesses generally decrease processor performance.
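As an illustration of why such a request requires two accesses, the following C sketch (illustrative only; it assumes byte addressing and 64-bit cache entries, consistent with the example above) computes which aligned entries a 32-bit load touches.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: determine whether a 32-bit (4-byte) load at a given
 * byte address crosses a 64-bit (8-byte) entry boundary, and which aligned
 * entries it touches. Assumes byte addressing and 8-byte entries. */
int main(void) {
    uint32_t addr = 0x000006u;                        /* load address */
    uint32_t size = 4u;                               /* 32-bit load  */
    uint32_t first_entry = addr & ~7u;                /* 0x000000     */
    uint32_t last_entry  = (addr + size - 1u) & ~7u;  /* 0x000008     */

    if (first_entry != last_entry)
        printf("unaligned: two accesses, entries 0x%06X and 0x%06X\n",
               first_entry, last_entry);
    else
        printf("aligned: one access, entry 0x%06X\n", first_entry);
    return 0;
}
```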
An additional problem with unaligned memory accesses occurs when a data bypass is required in the Load Store Queue (LSQ). When a load instruction (LD) is encountered, the cache is accessed and space is allocated in the Load Store Queue (LSQ) to install the data returned by the cache. The load instruction resides in the Load Store Queue (LSQ) until the data that was requested is consumed.
This data may come from the cache, or it may be allowed to bypass from a store instruction (SD) that writes to the same address. Stores follow a similar path to the cache: they are first logged into the Load Store Queue (LSQ) and then moved to the cache at instruction retirement. A store instruction that is older than a load instruction may bypass data to that load instruction, provided that the addresses match.
If one of these memory access instructions is unaligned, it is generally necessary to compare not only the aligned component of the address but also the next sequential aligned address in order to determine a match. If only one instruction is unaligned, three addresses need to be compared, e.g., one address for the aligned instruction and two addresses for the unaligned instruction. If both instructions are unaligned, as many as four addresses may need to be compared, e.g., the two addresses of the load instruction compared with each of the two addresses of the store instruction.
Conventional approaches to mitigating such problems have included letting unaligned stores retire to the cache before forwarding, generating exceptions so that software deals with the misalignment, and storing all possible addresses for each instruction. Unfortunately, such conventional approaches are prohibitively expensive and undesirable, in consideration of both degraded performance and deleteriously increased integrated circuit area. In addition, storing all the addresses for unaligned instructions generally requires two entries for each load/store (LD/SD) instruction in the Load Store Queue (LSQ). The need to store such addresses limits how many loads or stores can be in flight at the same time.
Therefore, what is needed are systems and methods for fast unaligned memory access. What is additionally needed are systems and methods for fast unaligned memory access that result in a minimal increase in integrated circuit die area. A further need exists for systems and methods for fast unaligned memory access that are compatible and complementary with existing systems and methods for processor design, programming and operation. Embodiments of the present invention provide these advantages.
In accordance with a first embodiment of the present invention, a computing device includes a load queue memory structure configured to queue load operations and a store queue memory structure configured to queue store operations. The computing device also includes at least one bit configured to indicate the presence of an unaligned address component for an entry of said load queue memory structure, and at least one bit configured to indicate the presence of an unaligned address component for an entry of said store queue memory structure. The load queue memory structure may also include memory configured to indicate data forwarding of an unaligned address component from said store queue memory structure to said load queue memory structure.
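A minimal software sketch of such queue entries is given below; the structure and field names are assumptions chosen for illustration, not the claimed hardware layout.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical load queue entry: field names and widths are illustrative
 * assumptions, not the claimed implementation. */
typedef struct {
    uint32_t address;       /* aligned component of the load address       */
    bool     unaligned;     /* indicates an unaligned address component
                               for this entry                              */
    bool     fwd_unaligned; /* indicates data forwarding of the unaligned
                               component from the store queue              */
} lsq_load_entry_t;

/* Hypothetical store queue entry. */
typedef struct {
    uint32_t address;       /* aligned component of the store address      */
    bool     unaligned;     /* indicates an unaligned address component    */
} lsq_store_entry_t;
```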
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. Unless otherwise noted, the drawings are not drawn to scale.
Reference will now be made in detail to various embodiments of the invention, fast unaligned memory access, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the invention, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be recognized by one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the invention.
Notation and Nomenclature
Some portions of the detailed descriptions which follow are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits that may be performed on computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “accessing” or “performing” or “generating” or “adjusting” or “creating” or “executing” or “continuing” or “indexing” or “processing” or “computing” or “translating” or “calculating” or “determining” or “measuring” or “gathering” or “running” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Fast Unaligned Memory Access
Embodiments in accordance with the present invention are well-suited to addressing various types and levels of memory in a computer system memory hierarchy. Many of the exemplary embodiments presented herein describe or refer to a cache memory, as cache memories may benefit from the performance advantages of embodiments in accordance with the present invention. It is to be appreciated that such examples are not intended to be limiting, and those of ordinary skill in the art will be able to envision how to extend the disclosures presented herein to other memory types and structures, and that all such embodiments are considered within the scope of the present invention.
In accordance with embodiments of the present invention, unaligned access processing starts as soon as the address for a load or store operation is resolved in the execution unit. Each unaligned access is treated as two memory accesses, while being considered a single entity for all other purposes. This approach requires having, at all times, the components that completely describe the unaligned (address+1) component of the address, so that it may be recreated when required.
The generation of the unaligned component of an address requires incrementing that address to the next sequential aligned address. The increment operation involves adding a one (1) along the alignment boundary. For example, if each address points to a 64-bit data segment, then determining the next sequential address is equivalent to adding a 1 starting at bit 3 of the address, ignoring bits 0-2, if the machine is byte addressable.
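As a software analogue of the increment just described (a sketch assuming a byte-addressable machine with 64-bit data segments), the next sequential aligned address is formed by adding a 1 at bit position 3 and ignoring bits 0-2.

```c
#include <stdint.h>

/* Next sequential 64-bit-aligned address: add a 1 at bit position 3,
 * ignoring bits 0-2 (byte-addressable machine, 8-byte data segments). */
static inline uint32_t next_aligned(uint32_t addr) {
    return (addr & ~7u) + 8u;
}
```

For example, next_aligned(0x000006) is 0x000008.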
When this addition is carried out, the carry propagation stops at the first occurrence of a ‘0’ bit, after which point the address bits of this new unaligned address match the bits of the original aligned address. Adding the circuitry of a 32-bit increment function, or a full adder, within the memory access circuitry is disadvantageous in terms of the performance degradation associated with such functions, as well as the large integrated circuit die area required to implement them. The latency involved in performing the increment is also deleterious due to the 31-bit carry propagation chain.
In order to create the unaligned address quickly, e.g., within a memory access cycle, and without using a 32-bit increment function, or a full adder, it should be determined where the carry propagation stops. It is to be appreciated that the bits to the right of this point, towards the least significant bit (LSB), will be all zero and the bits to the left of this point, towards the most significant bit (MSB), will all match the original address.
Embodiments in accordance with the present invention consider the address as a group of four bytes and store information that identifies the byte in which the carry propagation stopped. Accordingly, a “Group Enable” may be described as a four-bit value, with each bit representing a group of eight bits. The bit that is set to ‘1’ points to the group of eight bits in which the carry propagation stopped, as illustrated by Group Enable 1 (201) in the accompanying drawings.
Consider the exemplary address space shown in the accompanying drawings.
To generate Group Enable 0 (202), the circuit must determine where the carry propagation stopped for this address. This point is identified by the first occurrence of a ‘1’ bit. All zeros in a group signify that the carry from the previous addition propagated all the way through the group; if a group contains any set bit, the carry could not have propagated any further.
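The following C sketch models one reading of the mechanism described above, treating the 32-bit address as four 8-bit groups and ignoring bits 0-2. It is a software illustration, not the claimed circuit, and the function names are illustrative.

```c
#include <stdint.h>

/* Sketch: the 32-bit address is viewed as four 8-bit groups (group 0 =
 * bits 7:0, ..., group 3 = bits 31:24). Bits 0-2 are ignored because the
 * increment is applied at bit 3. */

static uint8_t group_of_bit(int bit) {
    return (uint8_t)(1u << (bit / 8));      /* one-hot group enable bit */
}

/* Group Enable 1: the group in which the carry of (address + 8) stops,
 * i.e. the group holding the first '0' bit at or above bit 3. */
static uint8_t group_enable1(uint32_t addr) {
    for (int bit = 3; bit < 32; ++bit)
        if (((addr >> bit) & 1u) == 0u)
            return group_of_bit(bit);
    return 0;                               /* carry ripples off the top */
}

/* Group Enable 0: the group in which a carry would have stopped had this
 * address itself been produced by such an increment, i.e. the group
 * holding the first '1' bit at or above bit 3. */
static uint8_t group_enable0(uint32_t addr) {
    for (int bit = 3; bit < 32; ++bit)
        if (((addr >> bit) & 1u) != 0u)
            return group_of_bit(bit);
    return 0;                               /* no set bit above bit 2 */
}
```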
In the exemplary embodiment illustrated in the accompanying drawings, the address is divided into four one-byte groups. Once the address is incremented, there are three distinct regions that can be processed separately:
Group(s) through which the carry completely propagated,
Group at which the carry propagation stopped, and
Group(s) not affected by the carry propagation.
All Group(s) through which the carry completely propagated will be all zeros, since the carry propagation reset all of their bits. Group(s) not affected by the carry propagation will completely match the original address from which the unaligned address is generated, since the carry did not propagate that far into the address, e.g., into these Group(s).
The remaining group, at which the carry propagation stopped, will differ from the original address; to compare or generate this part of the address, it is stored as a Partial Sum Group (PSG) data pattern.
Partial Sum Group 0 (215) is a data pattern representing the contents of the group in which the carry propagation would have stopped had the present address been generated as the unaligned component of the address of the previous 64 bits, as described for Group Enable generation above. Partial Sum Group 0 (215) is therefore the bits of the group indicated by Group Enable 0 (202) as being the propagation stop point.
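Continuing the Group Enable helpers sketched earlier (and under the same caveats), the Partial Sum Group values can be modeled as the byte of the carry-stop group, taken from the incremented address for PSG1 and from the address as-is for PSG0. Masking of bits 0-2, the byte offset within a 64-bit entry, is an assumption of this sketch.

```c
#include <stdint.h>

/* Continues the group_enable0/group_enable1 helpers sketched earlier.
 * Bits 0-2 are masked out here; that masking is an assumption. */

static uint8_t byte_of_group(uint32_t value, uint8_t group_onehot) {
    for (int g = 0; g < 4; ++g)
        if (group_onehot & (1u << g))
            return (uint8_t)(value >> (8 * g));
    return 0;                                /* no group selected */
}

/* PSG1: bits of the group where the carry of (addr + 8) stops, taken from
 * the incremented address. */
static uint8_t psg1(uint32_t addr) {
    return byte_of_group((addr & ~7u) + 8u, group_enable1(addr));
}

/* PSG0: bits of the group indicated by Group Enable 0, taken from the
 * address as-is (the address already is the "result" of the hypothetical
 * increment of the previous entry's address). */
static uint8_t psg0(uint32_t addr) {
    return byte_of_group(addr & ~7u, group_enable0(addr));
}
```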
The load instruction residing in the Load Store Queue (LSQ) 305 is an unaligned load instruction. It has available to it the aligned component of its address and the PSG0 and PSG1 bits, and it can quickly generate its Group Enable 0 and Group Enable 1 through the mechanism already described.
In the example illustrated in the accompanying drawings, the lower 12 bits of the aligned load instruction address and the aligned store instruction address create a mismatch, so the hardware, through an unaligned bit, is aware that the load instruction has an unaligned component that needs to be compared with the store instruction address. For this purpose the implementation considers the three parts of the address explained above: the group of bits unmodified after the increment, the incremented group, and the zero group.
To recognize the point of division among the three components, the hardware first compares the appropriate Group Enable values, as shown in the accompanying drawings.
The Group Enable forms only part of the compare; if it matches, it confirms that the carry propagation stopped in the same group for both addresses. Next, the group in which this propagation stopped is compared in its entirety. For this purpose the implementation compares PSG1 from the load instruction against PSG0 from the store instruction, for the same reasons that the corresponding Group Enables were chosen. Once these produce a match, the result of the compare of the upper 16 bits of the address, which already produced a match when the original addresses were compared, is AND-ed with these results to produce an unaligned match result.
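Putting the pieces together, the compare described above can be modeled in software as follows. This is a sketch built on the group_enable0/1 and psg0/1 helpers sketched earlier; the function name is illustrative, and the general upper-group compare generalizes the reuse of the upper-16-bit compare result in the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch, built on the group_enable0/1 and psg0/1 helpers above: does the
 * unaligned (+8) component of one access match the aligned address of
 * another? The structure mirrors the compare described in the text. */
static bool unaligned_component_matches(uint32_t unal_addr, uint32_t al_addr) {
    uint8_t ge_unal = group_enable1(unal_addr); /* where the +8 carry stops */
    uint8_t ge_al   = group_enable0(al_addr);   /* where a prior carry would
                                                   have stopped             */
    if (ge_unal == 0u || ge_unal != ge_al)
        return false;              /* carry must stop in the same group     */

    if (psg1(unal_addr) != psg0(al_addr))
        return false;              /* the carry-stop group must match       */

    /* Groups above the carry-stop group are unaffected by the increment, so
     * the original addresses must already agree there; groups below it are
     * all zero (above bit 2) on both sides by construction. */
    int g = 0;
    while (((ge_unal >> g) & 1u) == 0u)
        ++g;
    uint32_t upper_mask = (g == 3) ? 0u : (0xFFFFFFFFu << (8 * (g + 1)));
    return ((unal_addr ^ al_addr) & upper_mask) == 0u;
}
```

In the example of the text, the carry stops within the lower 16 bits, so the upper-group compare reduces to reusing the upper-16-bit compare result.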
This implementation achieves this result without the need to generate and save a second 32-bit address. The granularity of the groups into which the address is divided (bytes in this example) can be modified if an architecture needs to store smaller PSG components. The compare hardware is also simplified by avoiding a second 32-bit comparator.
Three cases need to be handled in the Load Store Queue (LSQ) if it allows data bypassing between loads and stores. The three cases are: (1) both the load instruction and the store instruction are aligned, (2) one of the two instructions is unaligned, and (3) both instructions are unaligned.
Case 1 is the conventional case of aligned addresses; it is appreciated that memory circuitry and accesses should be able to handle aligned addresses. Case 2 has been discussed in detail for this implementation. Case 3 is a derivation of case 2; with the components described in the previous sections, case 3 is also handled without the need for extra component generation or extra storage. An example of this case is an unaligned load instruction comparing against an unaligned store instruction. Each instruction then represents two addresses: the load address aligned portion (LAAL) and load address unaligned portion (LAUL), and the store address aligned portion (SAAL) and store address unaligned portion (SAUL). The matches that need to be conducted, and the components that are utilized for those compares, are outlined in the sketch below.
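As an illustration of case 3, the following sketch (built on the helpers above; the function name any_portion_matches is hypothetical) pairs the stored aligned addresses and the Group Enable/PSG descriptions for the four compares.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the four compares for case 3 (both the load and the store are
 * unaligned), expressed with the helpers above. Only the aligned addresses
 * (LAAL, SAAL) are stored; the unaligned portions (LAUL, SAUL) are described
 * by their Group Enable and PSG components. Illustration only. */
static bool any_portion_matches(uint32_t laal, uint32_t saal) {
    bool laal_vs_saal = ((laal ^ saal) & ~7u) == 0u;              /* aligned vs aligned */
    bool laul_vs_saal = unaligned_component_matches(laal, saal);  /* load +8 vs store   */
    bool saul_vs_laal = unaligned_component_matches(saal, laal);  /* store +8 vs load   */
    bool laul_vs_saul = laal_vs_saal;  /* if the aligned portions match at entry
                                          granularity, so do the +8 portions */
    return laal_vs_saal || laul_vs_saal || saul_vs_laal || laul_vs_saul;
}
```

Note that, at entry granularity, the LAUL-against-SAUL compare reduces to the LAAL-against-SAAL compare, since adding the same increment to matching aligned portions yields matching unaligned portions.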
This covers all the cases that would be required for the Load Store Queue (LSQ), e.g., Load Store Queue (LSQ) 305 of the accompanying drawings.
Embodiments in accordance with the present invention provide systems and methods for fast unaligned memory access. Embodiments in accordance with the present invention also provide for systems and methods for fast unaligned memory access that result in a minimal increase in integrated circuit die area. Further, embodiments in accordance with the present invention provide for systems and methods for fast unaligned memory access that are compatible and complementary with existing systems and methods for processor design, programming and operation.
Various embodiments of the invention are thus described. While the present invention has been described in particular embodiments, it should be appreciated that the invention should not be construed as limited by such embodiments, but rather construed according to the below claims.
This application is a continuation of U.S. application Ser. No. 14/376,825, entered May 19, 2015, which is the national stage of International Application No. PCT/US2011/057380, filed Oct. 21, 2011, which is hereby incorporated by reference.