Convert numbers between binary, octal, decimal, hexadecimal, and any base from 2 to 36. View bitwise operations, signed/unsigned integer interpretations, and IEEE 754 floating-point representation -- all client-side. Your data never leaves your browser.
Type a number and see it instantly converted to binary, octal, decimal, hex, and any custom base from 2 to 36. All processing happens locally in your browser using BigInt for arbitrary precision.
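A minimal sketch of such a conversion core, assuming a hypothetical `convertBase` helper (not the tool's actual source): digits are parsed in the source base with BigInt arithmetic, and `BigInt.prototype.toString(radix)` handles the output, since it accepts any radix from 2 to 36.

```javascript
const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";

// Convert `input` from `fromBase` to `toBase` (both 2..36) using
// BigInt, so very large values don't lose precision like Number would.
function convertBase(input, fromBase, toBase) {
  // Parse manually: BigInt() only accepts base-10, 0x, 0o, 0b literals.
  let value = 0n;
  for (const ch of input.toLowerCase()) {
    const d = DIGITS.indexOf(ch);
    if (d < 0 || d >= fromBase) throw new Error(`invalid digit "${ch}"`);
    value = value * BigInt(fromBase) + BigInt(d);
  }
  return value.toString(toBase);
}

console.log(convertBase("ff", 16, 2)); // "11111111"
```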
Enter two numbers to compute AND, OR, and XOR results. NOT is always shown for the first input. All operations work on 32-bit values with results displayed in binary, hex, and decimal.
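The 32-bit behavior comes for free in JavaScript, whose bitwise operators truncate operands to 32-bit integers; a sketch (the `bitwise` helper name is illustrative) uses `>>> 0` to reinterpret each signed result as unsigned for display:

```javascript
// Compute AND, OR, XOR of two inputs, plus NOT of the first input,
// all on 32-bit values. ">>> 0" converts the signed 32-bit result
// back to an unsigned number for display.
function bitwise(a, b) {
  return {
    and:  (a & b) >>> 0,
    or:   (a | b) >>> 0,
    xor:  (a ^ b) >>> 0,
    notA: (~a)    >>> 0, // NOT is shown for the first input only
  };
}

const r = bitwise(0b1100, 0b1010);
console.log(r.and.toString(2));   // "1000"
console.log(r.notA.toString(16)); // "fffffff3"
```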
See how your number is interpreted as signed and unsigned integers in 8-bit, 16-bit, 32-bit, and 64-bit widths. Values that overflow a given width are highlighted in red.
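One way to sketch this reinterpretation (the `interpret` helper is hypothetical, not the tool's code): mask the value to the low N bits for the unsigned reading, then subtract 2^N when the sign bit is set for the two's-complement signed reading. BigInt keeps the 64-bit width exact.

```javascript
// Interpret the low `bits` bits of `value` as both an unsigned and a
// two's-complement signed integer.
function interpret(value, bits) {
  const mask = (1n << BigInt(bits)) - 1n;
  const unsigned = BigInt(value) & mask;
  const signBit = 1n << BigInt(bits - 1);
  // If the sign bit is set, the signed reading is unsigned - 2^bits.
  const signed = unsigned >= signBit ? unsigned - (mask + 1n) : unsigned;
  return { unsigned, signed };
}

console.log(interpret(255n, 8));  // unsigned 255n, signed -1n
console.log(interpret(255n, 16)); // unsigned 255n, signed 255n
```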
Visualize the IEEE 754 floating-point representation with color-coded sign, exponent, and mantissa bits for both 32-bit single and 64-bit double precision formats.
Number base conversion is the process of rewriting a number from one positional numeral system (base, or radix) into another. The most common bases in computing are binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). Each base uses its own digit set: binary uses 0-1, octal uses 0-7, decimal uses 0-9, and hexadecimal uses 0-9 plus A-F.
Computers operate in binary (base 2), storing and processing data as sequences of 0s and 1s. However, reading long binary strings is error-prone for humans. Hexadecimal provides a compact representation where each hex digit maps to exactly 4 binary bits, and octal maps each digit to 3 bits. Understanding these bases is essential for low-level programming, debugging, networking, and working with hardware.
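The 4-bits-per-hex-digit mapping means binary-to-hex conversion is just a regrouping of nibbles, which a one-liner can demonstrate:

```javascript
// Each hex digit corresponds to exactly one 4-bit group (nibble):
// 1101 1110 1010 1101 1011 1110 1110 1111
//  d    e    a    d    b    e    e    f
const bin = "11011110101011011011111011101111"; // 32 bits
const hex = BigInt("0b" + bin).toString(16);
console.log(hex); // "deadbeef"
```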
Two's complement is the most common method for representing signed integers in binary. The most significant bit acts as the sign bit (0 for positive, 1 for negative). To negate a value, invert all bits and add 1. This system allows addition and subtraction to work the same way for both positive and negative numbers, simplifying hardware design.
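The invert-and-add-one rule is two operations in code; a minimal 8-bit sketch (the `negate8` name is illustrative):

```javascript
// Two's-complement negation in 8 bits: invert all bits, add 1,
// and mask to keep the result within the 8-bit width.
function negate8(x) {
  return ((~x) + 1) & 0xff;
}

console.log(negate8(5));             // 251
console.log(negate8(5).toString(2)); // "11111011", i.e. -5 in 8-bit two's complement
```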
The IEEE 754 standard defines how real numbers are represented in binary. A floating-point number consists of a sign bit, an exponent field, and a mantissa (significand). Single precision (32-bit) provides about 7 decimal digits of precision, while double precision (64-bit) provides about 15-17 digits. Understanding this representation helps explain why floating-point arithmetic can produce unexpected results like 0.1 + 0.2 not equaling 0.3 exactly.
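The raw bit fields can be inspected in the browser itself by writing a number into a `DataView` and reading the bits back; this sketch (the `doubleBits` helper is an assumption, not the tool's code) splits a 64-bit double into its 1-bit sign, 11-bit exponent, and 52-bit mantissa:

```javascript
// Extract the sign, exponent, and mantissa fields of a 64-bit double.
function doubleBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  return {
    sign:     bits >> 63n,              // 1 bit
    exponent: (bits >> 52n) & 0x7ffn,   // 11 bits
    mantissa: bits & 0xfffffffffffffn,  // 52 bits
  };
}

// 0.1 and 0.2 are not exactly representable in binary, so their sum's
// bit pattern differs from 0.3's by one unit in the last place.
console.log(0.1 + 0.2 === 0.3); // false
console.log(doubleBits(0.1 + 0.2).mantissa - doubleBits(0.3).mantissa); // 1n
```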
Check out our other free developer tools. Encode Base64, decode JWTs, format JSON, generate hashes, and more -- all from your browser with no sign-up required.
Hash Generator →