Number Base Converter

Convert numbers between binary, octal, decimal, hexadecimal, and any base from 2 to 36. View bitwise operations, signed/unsigned integer interpretations, and IEEE 754 floating-point representation -- all client-side. Your data never leaves your browser.

Base Conversions

Signed / Unsigned Integer Interpretation

IEEE 754 Floating Point

Bitwise Operations (32-bit)

How It Works

Real-Time Conversion

Type a number and see it instantly converted to binary, octal, decimal, hex, and any custom base from 2 to 36. All processing happens locally in your browser using BigInt for arbitrary precision.
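The core of such a conversion can be sketched in a few lines. This is an illustrative sketch, not the tool's actual source; `toBase`, `fromBase`, and `DIGITS` are hypothetical names.

```javascript
// Digits for bases up to 36: 0-9, then a-z.
const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";

// Convert a non-negative BigInt to its string form in `base` (2..36)
// by repeated division, collecting remainders from last to first.
function toBase(value, base) {
  if (value === 0n) return "0";
  const b = BigInt(base);
  let out = "";
  while (value > 0n) {
    out = DIGITS[Number(value % b)] + out;
    value /= b;
  }
  return out;
}

// Parse a digit string in `base` back into a BigInt.
function fromBase(text, base) {
  const b = BigInt(base);
  let value = 0n;
  for (const ch of text.toLowerCase()) {
    value = value * b + BigInt(DIGITS.indexOf(ch));
  }
  return value;
}

console.log(toBase(255n, 16)); // "ff"
console.log(fromBase("ff", 16)); // 255n
```

Because BigInt has no fixed width, the same loop handles numbers far beyond 64 bits.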


Bitwise Operations

Enter two numbers to compute AND, OR, and XOR results. NOT is always shown for the first input. All operations work on 32-bit values with results displayed in binary, hex, and decimal.


Integer Interpretation

See how your number is interpreted as signed and unsigned integers in 8-bit, 16-bit, 32-bit, and 64-bit widths. Values that overflow a given width are highlighted in red.


IEEE 754 Display

Visualize the IEEE 754 floating-point representation with color-coded sign, exponent, and mantissa bits for both 32-bit single and 64-bit double precision formats.

Understanding Number Base Conversion

Number base conversion is the process of rewriting a number from one positional numeral system (base, or radix) into another. The most common bases in computing are binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). Each base uses a fixed set of digits: binary uses 0-1, octal uses 0-7, decimal uses 0-9, and hexadecimal uses 0-9 plus A-F.

Why Number Bases Matter in Computing

Computers operate in binary (base 2), storing and processing data as sequences of 0s and 1s. However, reading long binary strings is error-prone for humans. Hexadecimal provides a compact representation where each hex digit maps to exactly 4 binary bits, and octal maps each digit to 3 bits. Understanding these bases is essential for low-level programming, debugging, networking, and working with hardware.
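The digit-to-bits mapping is easy to see in JavaScript's built-in radix conversions:

```javascript
// Each hex digit corresponds to exactly one 4-bit group (a nibble).
const n = 0b1101111010101101; // same bits as 0xDEAD
console.log(n.toString(16)); // "dead"
// Grouping the binary form into nibbles makes the mapping visible:
//   1101 1110 1010 1101
//    d    e    a    d
// Octal groups bits in threes instead:
console.log((0b111101).toString(8)); // "75"
```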

Common Use Cases

Two's Complement and Signed Integers

Two's complement is the most common method for representing signed integers in binary. The most significant bit acts as the sign bit (0 for positive, 1 for negative). To negate a value, invert all bits and add 1. This system allows addition and subtraction to work the same way for both positive and negative numbers, simplifying hardware design.

IEEE 754 Floating Point

The IEEE 754 standard defines how real numbers are represented in binary. A floating-point number consists of a sign bit, an exponent field, and a mantissa (significand). Single precision (32-bit) provides about 7 decimal digits of precision, while double precision (64-bit) provides about 15-17 digits. Understanding this representation helps explain why floating-point arithmetic can produce unexpected results like 0.1 + 0.2 not equaling 0.3 exactly.
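The classic surprise is easy to reproduce in any JavaScript console:

```javascript
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
// Neither 0.1 nor 0.2 has a finite binary expansion, so each is stored
// as the nearest representable double, and the rounding errors add up.
console.log((0.1).toPrecision(25)); // reveals the stored value's extra digits
```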

Frequently Asked Questions

What is a number base (radix)?
A number base, or radix, is the number of unique digits used in a positional numeral system. Decimal (base 10) uses digits 0-9, binary (base 2) uses 0 and 1, octal (base 8) uses 0-7, and hexadecimal (base 16) uses 0-9 and A-F. This tool supports any base from 2 to 36, where bases above 10 use letters A-Z as additional digits.
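JavaScript's built-ins already accept any radix in this range, which is a quick way to cross-check the tool's output:

```javascript
console.log((255).toString(2));  // "11111111"
console.log((255).toString(36)); // "73"   (7 * 36 + 3 = 255)
console.log(parseInt("zz", 36)); // 1295   (35 * 36 + 35)
```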
How do I convert decimal to binary?
To convert decimal to binary, repeatedly divide by 2 and record the remainders. Read the remainders bottom-to-top for the binary representation. For example, 13 becomes 1101: 13/2=6 R1, 6/2=3 R0, 3/2=1 R1, 1/2=0 R1. This tool does this instantly for any number, including very large numbers via BigInt.
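The repeated-division steps above translate directly into a loop. A minimal sketch for non-negative integers; `decToBin` is an illustrative name:

```javascript
// Decimal to binary by repeated division by 2.
function decToBin(n) {
  if (n === 0) return "0";
  let bits = "";
  while (n > 0) {
    bits = (n % 2) + bits; // prepend each remainder (read bottom-to-top)
    n = Math.floor(n / 2);
  }
  return bits;
}

console.log(decToBin(13)); // "1101"
```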
What is hexadecimal and why is it used in programming?
Hexadecimal (base 16) uses 0-9 and A-F. Each hex digit represents exactly 4 binary bits, making it a compact way to write binary data. The byte 11111111 (binary) is simply FF in hex. Hex is used for memory addresses, color codes (#FF5733), MAC addresses, hash values, and debugging binary data.
What are bitwise operations?
Bitwise operations manipulate individual bits. AND (&) returns 1 when both bits are 1. OR (|) returns 1 when either bit is 1. XOR (^) returns 1 when exactly one bit is 1. NOT (~) inverts all bits. They are used in networking (subnet masks), graphics, cryptography, permission systems, and performance-critical code.
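All four operators are built into JavaScript, so the definitions are easy to verify:

```javascript
const a = 0b1100; // 12
const b = 0b1010; // 10
console.log((a & b).toString(2)); // "1000" - bits set in both
console.log((a | b).toString(2)); // "1110" - bits set in either
console.log((a ^ b).toString(2)); // "110"  - bits set in exactly one
// NOT flips all 32 bits; >>> 0 reinterprets the result as unsigned.
console.log((~a >>> 0).toString(2));
```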
What is two's complement?
Two's complement is the standard representation for signed integers. The most significant bit is the sign bit (0 = positive, 1 = negative). To negate, invert all bits and add 1. In 8-bit two's complement, values range from -128 to 127, and 255 unsigned equals -1 signed. This tool shows both signed and unsigned values for 8, 16, 32, and 64-bit widths.
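The "255 unsigned equals -1 signed" claim can be demonstrated with typed arrays, which reinterpret the same bytes under different views:

```javascript
// One byte, two interpretations of the same bit pattern.
const unsignedView = new Uint8Array([255]);
const signedView = new Int8Array(unsignedView.buffer);
console.log(unsignedView[0]); // 255 (unsigned)
console.log(signedView[0]);   // -1  (two's complement)
```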
What is IEEE 754 floating point?
IEEE 754 is the standard for binary floating-point numbers. A 32-bit float has 1 sign bit, 8 exponent bits, and 23 mantissa bits. A 64-bit double has 1 sign bit, 11 exponent bits, and 52 mantissa bits. This tool color-codes these fields so you can see exactly how a number is stored. Numbers beyond Number.MAX_SAFE_INTEGER (2^53 - 1) may lose precision in the float display.
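The three fields of a double can be pulled apart with a `DataView`. An illustrative sketch; `doubleFields` is a hypothetical helper:

```javascript
// Split a 64-bit double into its sign, exponent, and mantissa fields.
function doubleFields(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  return {
    sign: Number(bits >> 63n),                // 1 bit
    exponent: Number((bits >> 52n) & 0x7ffn), // 11 bits, biased by 1023
    mantissa: bits & 0xfffffffffffffn,        // 52 bits
  };
}

console.log(doubleFields(1.0)); // { sign: 0, exponent: 1023, mantissa: 0n }
console.log(doubleFields(-2.5));
```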
Is my data safe when using this tool?
Yes. All conversions happen entirely in your browser using JavaScript. No data is sent to any server, stored, or logged. You can verify this by disconnecting from the internet and confirming the tool still works. This makes it suitable even for sensitive values.

Explore More Developer Tools

Check out our other free developer tools. Encode Base64, decode JWTs, format JSON, generate hashes, and more -- all from your browser with no sign-up required.

Hash Generator →