How to Convert a Decimal to Binary Notation: Understand How a Computer Converts Input to Ones and Zeros


A computer uses digital signals to manipulate information. For any computer science student, one of the first semester courses is logic design, which includes converting a decimal number to binary bits. Binary numbers are collections of ones and zeros.

These binary numbers are commonly displayed as hexadecimal blocks when inspecting computer memory. When a programmer writes code in a high level language such as Visual Basic or a low level language such as C or assembly, each statement is ultimately translated into high (the ones) and low (the zeros) signals. It’s easy to learn a high level computer language, but some computer gaming degrees and advanced applications require an understanding of low level machine language, which includes binary and hexadecimal numbers.

The Decimal Number System

Take the number 465. It’s in base 10, which uses the digits 0 to 9. Base ten is what most people recognize, and adding, subtracting, multiplying and dividing in base 10 are second nature to most people. Broken into its place values, the number can also be represented as:

  • 4 * 100 = 400
  • 6 * 10 = 60
  • 5 * 1 = 5

Add these numbers together and they total 465. People sometimes forget this simple elementary math when thinking about decimal numbers. Each place in the number represents the 1s, 10s, 100s, 1000s, 10000s, and so on for as many columns as the number needs. Simply multiply the digit in each place by its place value and add the results to find the total value of the number. In binary, the calculation works the same way.
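The place-value arithmetic above can be sketched in a few lines of Python (the variable names are illustrative):

```python
# Place-value breakdown of 465 in base 10: each digit is multiplied
# by its column value, then the products are summed.
digits = [4, 6, 5]             # hundreds, tens, ones
place_values = [100, 10, 1]

total = sum(d * p for d, p in zip(digits, place_values))
print(total)  # 465
```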

Understanding Binary Numbers

The easiest way to understand binary numbers is to create a column for each number placement, similar to decimal numbers. For instance, take the binary number 1110 (a “^” represents a “number raised to a power” in programming languages):

  2^3 | 2^2 | 2^1 | 2^0
   1  |  1  |  1  |  0

Translated into the same notation used with the decimal number above, the binary number 1110 represents:

  • 1 * 2^3 = 1 * 8 = 8
  • 1 * 2^2 = 1 * 4 = 4
  • 1 * 2^1 = 1 * 2 = 2
  • 0 * 2^0 = 0 * 1 = 0

Adding these calculations together, the binary number 1110 is equal to 14 in decimal notation.
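The same column method can be checked in Python, which also has a built-in conversion for comparison:

```python
# Evaluate binary 1110 by summing digit * 2**position, exactly as the
# column calculation above does.
bits = "1110"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)  # 14

# Python's int() with base 2 performs the same conversion.
assert value == int("1110", 2)
```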

Convert Decimal to Binary

Take the decimal number illustrated earlier: 465. This number can be converted to binary by first creating columns that represent each place for the ones and zeros. Each place in a binary number is a power of two:

  2^8 | 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0

Calculating these powers, the columns are represented as the following decimal values:

  256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1
Since the number is 465, find the highest binary column that fits into the decimal number. 256 is the highest column value that fits into 465, so add a 1 to that column and subtract that amount from the decimal number. Therefore, the first digit in the binary number is 1 and the difference is 465-256. So far, we have:

Binary Number: 1

465-256 remainder: 209

Find the next binary column that fits into 209. That column is 128, so add another 1 to the binary number:

Binary Number: 11

209-128 remainder: 81

The next column that fits into the decimal remainder is 64. The next calculation is:

Binary number: 111

81-64 remainder: 17

32 is the next column. It does not fit into 17, so a 0 is added. 16 does fit into 17, so a 1 is added to that column and 16 is subtracted from the remainder:

Binary number: 11101

17-16 remainder: 1

Finalize the number by placing a 0 in each remaining column that does not fit. Since only 1 is left, the 8, 4 and 2 columns are all zeros and the final 1 column gets a 1. The binary number is now fully converted and the decimal remainder is 0:

Binary number: 111010001

Therefore, the binary representation of 465 is 111010001.
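The repeated-subtraction procedure walked through above can be sketched as a short Python function (the function name is illustrative):

```python
def decimal_to_binary(n):
    """Convert a non-negative integer to a binary string by checking,
    column by column, whether each power of two fits into what remains."""
    if n == 0:
        return "0"
    # Find the highest power-of-two column that fits into n (256 for 465).
    power = 1
    while power * 2 <= n:
        power *= 2
    bits = ""
    while power >= 1:
        if power <= n:       # the column fits: record a 1 and subtract
            bits += "1"
            n -= power
        else:                # the column does not fit: record a 0
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(465))  # 111010001
```

Running it on 465 reproduces each step of the walkthrough: 256, 128 and 64 fit, 32 does not, 16 fits, and only the final 1 column is used after that.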

This type of calculation helps computer programmers understand how the words and functions entered into a compiler are translated into digital high and low output. This type of logic design is required coursework for computer science majors, so it’s important to get comfortable with these conversions if you are interested in a computer science degree.