Hex to Float & Double Converter Online

Convert hex to IEEE 754 floating-point numbers and back. Visualize the sign, exponent, and mantissa bits. An advanced tool for scientific computing, graphics programming, and binary data analysis.

Hex to Float/Double Converter

Convert hexadecimal values to floating-point numbers

Examples

  • 40490FDB → Float32: 3.1415927 (π)
  • 400921FB54442D18 → Float64: 3.141592653589793 (π)
  • 00000001 → Int32: 1, Float32: ≈1.4 × 10⁻⁴⁵ (the smallest positive subnormal)
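These examples can be checked with Python's standard struct module (a quick sketch, separate from the tool itself):

```python
import struct

# Interpret hex as big-endian IEEE 754 float32 / float64
pi_f32 = struct.unpack('>f', bytes.fromhex('40490FDB'))[0]
pi_f64 = struct.unpack('>d', bytes.fromhex('400921FB54442D18'))[0]

print(pi_f32)  # ~3.1415927 (float32 pi)
print(pi_f64)  # 3.141592653589793
```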

How It Works

This hex to float converter interprets hexadecimal values as IEEE 754 floating-point numbers, converting between raw binary representation and decimal values.

The conversion process:

  1. Hex parsing: Convert the hexadecimal input to its binary representation.
  2. Bit field extraction: Separate the sign bit (1 bit), exponent (8 or 11 bits), and mantissa (23 or 52 bits).
  3. IEEE 754 decoding: Apply the IEEE 754 formula for normal numbers: value = (-1)^sign × 2^(exponent − bias) × 1.mantissa (subnormals, infinities, and NaN follow special rules).
  4. Result display: Show the decimal float/double value along with a breakdown of each component.
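The steps above can be sketched in Python with bit masks. The field widths shown are for float32; the function name is illustrative:

```python
import struct

def decode_float32(hex_str: str):
    """Split a 32-bit hex string into IEEE 754 fields and its decimal value."""
    bits = int(hex_str, 16)
    sign = bits >> 31                # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF       # 23 mantissa bits
    value = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]
    return sign, exponent, mantissa, value

sign, exp, man, val = decode_float32('40490FDB')
# sign=0, exp=128 (actual exponent 1), mantissa=0x490FDB, val ~ pi
```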

This is crucial for understanding how computers store decimal numbers, debugging floating-point issues, and working with binary data formats.

When You'd Actually Use This

Binary File Analysis

Read floating-point values from binary files, save games, or data dumps that store floats as raw hex.

Debugging Precision Issues

Understand why 0.1 + 0.2 ≠ 0.3 by examining the actual bit representation of floating-point numbers.
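For instance, dumping the double-precision bit patterns with Python's struct module makes the discrepancy visible:

```python
import struct

def hex64(x: float) -> str:
    """Big-endian hex of a double's IEEE 754 bits."""
    return struct.pack('>d', x).hex()

print(hex64(0.1))        # 3fb999999999999a
print(hex64(0.1 + 0.2))  # 3fd3333333333334  <- last bit differs
print(hex64(0.3))        # 3fd3333333333333
```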

Network Protocol Analysis

Decode floating-point values from network packets that transmit sensor data or scientific measurements.

Game Memory Editing

Find and modify health values, coordinates, or other float variables in game memory for modding or analysis.

Scientific Computing

Verify floating-point calculations at the bit level for numerical analysis and algorithm development.

Embedded Systems

Work with sensor data, ADC readings, or control values stored as IEEE 754 floats in memory or registers.

What to Know Before Using

Float vs Double have different sizes

Float (32-bit, 8 hex chars) has about 7 significant decimal digits of precision; double (64-bit, 16 hex chars) has about 15–16.
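A quick round-trip through float32 shows the precision loss (a sketch using Python's struct):

```python
import struct

x = 3.141592653589793                               # double-precision pi
f32 = struct.unpack('>f', struct.pack('>f', x))[0]  # round-trip via float32
print(f32)  # only ~7 significant digits still match pi
```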

Endianness affects byte order

Little-endian systems store the least-significant byte first, so the same byte sequence decodes to different values depending on the assumed byte order.
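Python's struct format characters make the difference explicit ('<' = little-endian, '>' = big-endian); a sketch:

```python
import struct

raw = bytes.fromhex('DB0F4940')      # float32 pi stored little-endian
le = struct.unpack('<f', raw)[0]     # ~3.1415927
be = struct.unpack('>f', raw)[0]     # same bytes read big-endian: nonsense value
print(le, be)
```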

Special values exist in IEEE 754

Infinity, negative zero, and NaN (Not a Number) have specific bit patterns, and the converter identifies them when they appear.

Not all decimals can be represented exactly

0.1 in binary is a repeating fraction. This causes the famous 0.1 + 0.2 = 0.30000000000000004 issue.

Denormal numbers handle very small values

Numbers very close to zero use a special subnormal (denormalized) format that trades precision for extra range near zero.
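Decoding the smallest positive float32 illustrates this (a sketch; 2⁻¹⁴⁹ is the smallest subnormal):

```python
import struct

smallest = struct.unpack('>f', bytes.fromhex('00000001'))[0]
print(smallest)              # 1.401298464324817e-45
print(smallest == 2**-149)   # True: exponent field 0, mantissa 1
```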

Common Questions

What's the hex representation of 1.0?

Float: 0x3F800000. Double: 0x3FF0000000000000. The sign is 0 (positive), the biased exponent is 127 (0x7F) for float or 1023 (0x3FF) for double, and the mantissa is 0.
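This can be confirmed with a struct one-liner (sketch):

```python
import struct

print(struct.pack('>f', 1.0).hex())  # 3f800000
print(struct.pack('>d', 1.0).hex())  # 3ff0000000000000
```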

How do I represent negative numbers?

Set the sign bit (the first bit) to 1. For -1.0 as a float: 0xBF800000. Only the sign bit differs from the positive value 0x3F800000.
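Flipping only the sign bit negates the value; a Python sketch:

```python
import struct

bits = int.from_bytes(struct.pack('>f', 1.0), 'big')  # 0x3F800000
neg_bits = bits ^ 0x80000000                          # flip sign bit -> 0xBF800000
neg = struct.unpack('>f', neg_bits.to_bytes(4, 'big'))[0]
print(hex(neg_bits), neg)  # 0xbf800000 -1.0
```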

What's special about 0x7F800000?

That's positive infinity in float format. Exponent all 1s, mantissa all 0s. 0xFF800000 is negative infinity.
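Both infinities decode as expected (sketch):

```python
import math
import struct

pos_inf = struct.unpack('>f', bytes.fromhex('7F800000'))[0]
neg_inf = struct.unpack('>f', bytes.fromhex('FF800000'))[0]
print(pos_inf == math.inf, neg_inf == -math.inf)  # True True
```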

What does NaN look like in hex?

Exponent all 1s, mantissa non-zero. The most significant mantissa bit distinguishes quiet NaN (set, e.g. 0x7FC00000) from signaling NaN (clear, e.g. 0x7FB00000).
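Both patterns register as NaN in a quick Python check (note: unpacking may quiet a signaling NaN on some platforms, but isnan still holds):

```python
import math
import struct

for h in ('7FC00000', '7FB00000'):
    value = struct.unpack('>f', bytes.fromhex(h))[0]
    print(h, math.isnan(value))  # both print True
```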

Why can't 0.1 be represented exactly?

In binary, 0.1 is 0.0001100110011... (repeating). Like 1/3 in decimal, it can't be expressed finitely in binary.

What's the exponent bias?

Float: 127. Double: 1023. The stored exponent = actual exponent + bias. This allows representing both large and small numbers.
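For example, 6.0 = 1.5 × 2², so a float32 stores exponent 2 + 127 = 129 (sketch):

```python
import struct

bits = int.from_bytes(struct.pack('>f', 6.0), 'big')  # 0x40C00000
stored = (bits >> 23) & 0xFF
print(stored, stored - 127)  # 129 2
```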

Can I convert doubles too?

Yes, doubles use 64 bits (16 hex chars). They have 11 exponent bits and 52 mantissa bits for more precision.
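Packing Python's double-precision pi back to hex recovers the 16-character pattern from the examples above:

```python
import struct

print(struct.pack('>d', 3.141592653589793).hex().upper())
# 400921FB54442D18
```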