Disclosed embodiments are directed to floating point operations. More particularly, exemplary embodiments are directed to instructions for generating commonly used floating point constants.
Floating point representations of numbers are useful in computing systems for supporting a wider range of values than can be supported in integer or fixed point formats. In floating point numbers, the radix point (decimal or binary) is allowed to “float,” which makes it possible to encode a wide range of values using a small number of bits. Because of this wide range, many floating point numbers cannot be represented in fixed point formats without a serious loss of precision or, in some cases, errors/exceptions.
Some modern processors support floating point instructions which may operate on numbers represented in floating point format. Integrating floating point instructions in integer/fixed point processor pipelines presents challenges. For example, the potential loss of precision poses challenges in encoding floating point constants or immediate values in floating point instructions.
Traditional instruction set architectures (ISAs) for computer processors commonly include instructions which specify an immediate value. Usually, such instructions contain the immediate value within the instruction itself, in a designated field of the instruction. The number of bits available for immediate value fields is quite small, usually much smaller than the bit-width of the instruction. Accordingly, floating point instructions may not be able to accurately specify floating point immediate values in immediate value fields of such small bit-widths. Therefore, floating point constants/immediate values are conventionally loaded directly from memory or formed in their entirety by customized instructions for generating the desired floating point constants. Some approaches may also include hard-coding specific floating point values such as 0.0, 1.0, and 2.0 in registers. As can be recognized, such conventional techniques for generating floating point constants result in increased latency, increased code size, and/or increased hardware.
Accordingly, there is a need in the art for overcoming the aforementioned limitations associated with floating point constants/immediate values for floating point instructions.
Exemplary embodiments of the invention are directed to systems and methods for generating a floating point constant value from an instruction.
For example, an exemplary embodiment is directed to a method of generating a floating point constant value from an instruction comprising: decoding a first field of the instruction as a sign bit of the floating point constant value; decoding a second field of the instruction to correspond to an exponent value of the floating point constant value; decoding a third field of the instruction to correspond to the significand of the floating point constant value; and combining the first field, the second field, and the third field to form the floating point constant value. Optionally, the second field and the third field may be shifted by first and second shift values respectively before the fields are combined to form the floating point constant value.
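For illustration only, the combining operation might resemble the following sketch in C; the field widths, positions, and shift amounts are placeholder assumptions and do not correspond to any particular claimed encoding.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: merge decoded sign, exponent, and significand
 * fields into one bit pattern. The shift amounts play the role of the
 * optional first and second shift values mentioned above; the widths and
 * positions are placeholders, not a specific claimed encoding. */
static uint32_t combine_fields(uint32_t sign, uint32_t exponent,
                               uint32_t significand,
                               unsigned exp_shift, unsigned sig_shift)
{
    return (sign << 31)                  /* first field: sign bit            */
         | (exponent << exp_shift)       /* second field, shifted into place */
         | (significand << sig_shift);   /* third field, shifted into place  */
}

int main(void)
{
    /* Example: an IEEE-754-like single precision placement (exponent at
     * bit 23, significand at bit 0) for +1.0: exponent field 127,
     * significand field 0, giving the bit pattern 0x3F800000.          */
    printf("0x%08X\n", combine_fields(0, 127, 0, 23, 0));
    return 0;
}
```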
Another exemplary embodiment is directed to an instruction for generating a floating point constant value, wherein the instruction comprises: a first field corresponding to a sign bit of the floating point constant value; a second field corresponding to an exponent value of the floating point constant value; and a third field corresponding to a significand of the floating point constant value.
Yet another exemplary embodiment is directed to a system for generating a floating point constant value from an instruction comprising: means for decoding a first field of the instruction as a sign bit of the floating point constant value; means for decoding a second field of the instruction to correspond to an exponent value of the floating point constant value; means for decoding a third field of the instruction to correspond to a significand of the floating point constant value; and means for combining the first field, the second field, and the third field to form the floating point constant value.
Another exemplary embodiment is directed to a non-transitory computer-readable storage medium comprising code, which, when executed by a processor, causes the processor to perform operations for generating a floating point constant value from an instruction, the non-transitory computer-readable storage medium comprising: code for decoding a first field of the instruction as a sign bit of the floating point constant value; code for decoding a second field of the instruction to correspond to an exponent value of the floating point constant value; code for decoding a third field of the instruction to correspond to a significand of the floating point constant value; and code for combining the first field, the second field, and the third field to form the floating point constant value.
The accompanying drawings are presented to aid in the description of embodiments of the invention and are provided solely for illustration of the embodiments and not limitation thereof.
Aspects of the invention are disclosed in the following description and related drawings directed to specific embodiments of the invention. Alternate embodiments may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term “embodiments of the invention” does not require that all embodiments of the invention include the discussed feature, advantage or mode of operation.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Further, many embodiments are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the embodiments described herein, the corresponding form of any such embodiments may be described herein as, for example, “logic configured to” perform the described action.
Exemplary embodiments are directed to generating commonly encountered floating point values using preexisting architecture for integer pipelines in processing systems. Embodiments include one or more instructions to specify floating point constants, for example, in an immediate value field of the instruction. One or more subfields within the immediate value field may comprise information such as sign, significand, bias, and exponent values corresponding to the specified floating point constant. The subfields may be extracted and assembled appropriately to generate the specified floating point constant.
One of ordinary skill in the art will recognize conventional formats for representing floating point numbers. In general, a floating point number may include a sign bit to indicate the sign (positive/negative) of the floating point number. The floating point number also includes a number of bits corresponding to a significand (also known as “mantissa”), which comprises the significant digits (i.e., digits not including leading zeros) of the floating point number. In general, the number of significand bits determines the precision that the floating point number can represent. The significand is scaled by an assumed base raised to an exponent value specified in the floating point number to provide the magnitude of the floating point number. For example, the assumed base is 2 for binary numbers and 10 for decimal numbers. In mathematical notation, the value of the floating point number is given by the formula significand*base^exponent, with the appropriate sign.
Sometimes the exponent value may be offset by a specified or assumed bias value in order to shift the range of the exponent. In conventional implementations, the bias value may be applied to (for example, subtracted from) the exponent value extracted from the floating point number in order to obtain the actual exponent value. Further, a radix point within the significand may be explicitly specified in a predetermined format. In many conventional implementations, however, the radix point is assumed to be placed at a fixed position in the significand, and the exponent value is adjusted appropriately to achieve the floating nature of the radix point. For example, a decimal radix point may be uniformly specified to be placed after the most significant digit of the significand, such that the decimal number 12.3×10^10 is represented as 1.23×10^11 by shifting the radix point to follow the most significant digit and increasing the exponent value appropriately.
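As an illustration of the relationship described above, the following sketch evaluates (-1)^sign * significand * base^(exponent - bias); the function name fp_value and the subtractive bias convention are assumptions made for this example.

```c
#include <stdio.h>
#include <math.h>

/* Sketch: value of a floating point number as described above, i.e.
 * (-1)^sign * significand * base^(exponent - bias). A subtractive bias
 * convention is assumed; passing bias = 0 gives the unbiased formula. */
static double fp_value(int sign, double significand,
                       int stored_exponent, int bias, double base)
{
    double magnitude = significand * pow(base, stored_exponent - bias);
    return sign ? -magnitude : magnitude;
}

int main(void)
{
    /* The decimal example from the text: 1.23 * 10^11 (bias 0) */
    printf("%g\n", fp_value(0, 1.23, 11, 0, 10.0));
    /* A binary example: -1.5 * 2^(130 - 127) = -12.0 */
    printf("%g\n", fp_value(1, 1.5, 130, 127, 2.0));
    return 0;
}
```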
Standard formats, such as the IEEE-754 standard, for representing floating point numbers with the above-described fields are well known in the art. The IEEE-754 standard includes a single precision and double precision standard for floating point numbers used in modern processing systems. Basically, the single precision format comprises 32-bit binary floating point numbers, including a sign bit, a 23-bit significand field and an 8-bit exponent field with a bias value of 127. The double precision format comprises 64-bit binary floating point numbers including a sign bit, a 52-bit significand field, and an 11-bit exponent field with a bias value of 1023. While various other provisions of the IEEE-754 standard will not be described in detail herein, it will be understood that exemplary embodiments may be compatible with the IEEE-754 standard for both single precision and double precision formats.
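By way of illustration, the following sketch assembles a double precision (binary64) value from the fields summarized above; the function name pack_ieee754_double is an example for this discussion, not an existing API.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Illustrative sketch: assembling a double precision (binary64) value from
 * the fields summarized above (1 sign bit, 11-bit exponent with bias 1023,
 * 52-bit fraction). */
static double pack_ieee754_double(uint64_t sign, uint64_t biased_exp,
                                  uint64_t fraction)
{
    uint64_t bits = (sign & 0x1u) << 63
                  | (biased_exp & 0x7FFu) << 52
                  | (fraction & 0xFFFFFFFFFFFFFull);
    double result;
    memcpy(&result, &bits, sizeof result);  /* reinterpret the bit pattern */
    return result;
}

int main(void)
{
    /* 1.5 * 2^1 = 3.0: exponent 1 + 1023 = 1024, fraction 0.5 -> 1 << 51 */
    printf("%f\n", pack_ieee754_double(0, 1024, 1ull << 51));
    return 0;
}
```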
For example, exemplary embodiments may include instructions, “SFMAKE” and “DFMAKE” to generate single precision and double precision floating point values for use as floating point constants. These instructions may be used to generate a large set of floating point values that may be commonly encountered, for example, in applications related to digital signal processors, microprocessors, or other general purpose processors.
In an illustrative embodiment, an exemplary instruction specifying an 11-bit floating point immediate field may be recognized as comprising a sign bit, a 6-bit significand field, and a 4-bit exponent field. A predetermined bias value may be applied based on whether the floating point constant value is represented in single precision or double precision. In this illustration, a bias value of 6 may be assumed to be applicable. This exemplary instruction may be used to generate floating point numbers belonging to a wide range of constants notated by: [+, −] [1.0, 1+63/64]*2^[−6, +9]. For example, all positive and negative integers of magnitudes in the range [1, 128] can be generated within this range of constants, as can all positive and negative even integers of magnitudes in the range [2, 256]. Positive and negative integers of magnitude 1000 can also be generated, as can many commonly used fractional values (both positive and negative) of magnitudes such as 0.25, 0.5, 1/32 (and numerous multiples thereof), 1.5, 1/256, etc. Skilled persons will recognize numerous other floating point constant values that are covered by the range of constants in the exemplary instruction.
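The following sketch shows one way such an 11-bit immediate could be expanded into an IEEE-754 single precision constant. The field ordering within the immediate and the name sfmake_decode are assumptions made for illustration and are not taken from the actual SFMAKE encoding.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical sketch: expand an 11-bit immediate (1 sign bit, 6-bit
 * significand fraction, 4-bit exponent, assumed bias of 6) into an
 * IEEE-754 single precision value. */
static float sfmake_decode(uint32_t imm11)
{
    uint32_t sign  = (imm11 >> 10) & 0x1;  /* bit 10: sign                */
    uint32_t frac6 = (imm11 >> 4) & 0x3F;  /* bits 9..4: significand bits */
    uint32_t exp4  =  imm11       & 0xF;   /* bits 3..0: exponent field   */

    /* Re-bias: actual exponent = exp4 - 6; single precision stores
     * actual + 127, i.e. exp4 + 121.                                  */
    uint32_t exp8   = exp4 + (127 - 6);
    /* Left-justify the 6 fraction bits in the 23-bit fraction field. */
    uint32_t frac23 = frac6 << 17;

    uint32_t bits = (sign << 31) | (exp8 << 23) | frac23;
    float result;
    memcpy(&result, &bits, sizeof result);  /* reinterpret the bit pattern */
    return result;
}

int main(void)
{
    /* exp4 = 6 (actual exponent 0), frac6 = 0 -> +1.0 */
    printf("%f\n", sfmake_decode((0u << 10) | (0u << 4) | 6u));
    /* exp4 = 7, frac6 = 32 -> 1.5 * 2^1 = 3.0 */
    printf("%f\n", sfmake_decode((0u << 10) | (32u << 4) | 7u));
    return 0;
}
```

Left-justifying the six significand bits within the 23-bit fraction field preserves the encoded value exactly, since the remaining low-order fraction bits are zero; a DFMAKE-style variant would similarly re-bias the exponent to 1023 and left-justify the significand bits within the 52-bit fraction field.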
With reference now to
With continuing reference to
With reference now to
Accordingly, exemplary embodiments may include one or more instructions to generate commonly used floating point constant values, without requiring the floating point constant values to be loaded from memory. Disclosed embodiments thereby avoid polluting memory and caches with floating point constant values, and also lead to low power implementations for generating floating point constant values. The embodiments may be used in operations such as division, computation of square roots, etc. A wide range of commonly used constants may be supported. The embodiments may fully support single or double precision formats and may be compatible with conventional standards for representing floating point numbers.
Further, it will be appreciated that embodiments include various methods for performing the processes, functions and/or algorithms disclosed herein. For example, as illustrated in
Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The methods, sequences and/or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
Accordingly, an embodiment of the invention can include a computer readable medium embodying a method for generating a floating point constant value from an instruction. Thus, the invention is not limited to the illustrated examples, and any means for performing the functionality described herein are included in embodiments of the invention.
Referring to
In a particular embodiment, input device 330 and power supply 344 are coupled to the system-on-chip device 322. Moreover, in a particular embodiment, as illustrated in
It should be noted that although
The foregoing disclosed devices and methods are typically designed and configured into GDSII and GERBER computer files, stored on computer readable media. These files are in turn provided to fabrication handlers who fabricate devices based on these files. The resulting products are semiconductor wafers that are then cut into semiconductor dies and packaged into semiconductor chips. The chips are then employed in the devices described above.
While the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the embodiments of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Other Publications:
IEEE Std 754-2008, IEEE Standard for Floating-Point Arithmetic, pp. 6-13.
“ARM Architecture Reference Manual”, Jul. 2005, Issue I, pp. A5-1 to A5-17.
Florian Kainz, Rod Bogart, “OpenEXR Documentation: Technical Introduction”, Industrial Light & Magic, last updated Feb. 18, 2009, http://www.openexr.com/documentation.html.
International Search Report and Written Opinion, PCT/US2013/025401, ISA/EPO, May 13, 2013.