IEEE 754-1985 was an industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087.
IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are:
level | width | range | precision* |
---|---|---|---|
single precision | 32 bits | ±1.18×10⁻³⁸ to ±3.4×10³⁸ | approx. 7 decimal digits |
double precision | 64 bits | ±2.23×10⁻³⁰⁸ to ±1.80×10³⁰⁸ | approx. 15 decimal digits |
- Precision: The number of decimal digits of precision is calculated as significand_bits × log₁₀(2), where the significand includes the implicit leading bit (24 bits for single precision, 53 for double). This gives ~7.2 and ~15.9 decimal digits for single and double precision respectively.
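The precision figures in the table can be checked directly. A short sketch, assuming the standard significand widths of 24 bits (single) and 53 bits (double), including the implicit leading bit:

```python
import math

# Significand widths in IEEE 754-1985, counting the implicit leading 1 bit:
# 24 bits for single precision, 53 bits for double precision.
for name, significand_bits in [("single", 24), ("double", 53)]:
    # Decimal digits of precision = significand_bits * log10(2)
    digits = significand_bits * math.log10(2)
    print(f"{name} precision: {digits:.2f} decimal digits")
```

Running this prints roughly 7.22 digits for single precision and 15.95 for double, matching the approximate values in the table.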
The standard also defines representations for positive and negative infinity, a "negative zero", five exceptions to handle exceptional conditions such as division by zero, special values called NaNs for representing the results of invalid operations, denormal numbers to represent magnitudes smaller than those shown above, and four rounding modes.