Computers store information using electronic components that recognize two conditions, such as "off" and "on," "false" and "true," or "no" and "yes." To a computer, these two states are zero and one, also known as the binary system. A single one or zero is called a bit, and a group of eight bits, such as 11010101, is called a byte. Every letter has a numeric equivalent, called a character encoding, that a computer uses internally to represent the letter. To convert a character to binary, obtain a character encoding table and look up the binary value. Unicode Transformation Format-8, or UTF-8, is a popular character encoding scheme used by approximately 84 percent of websites as of May 2015, according to W3Techs.
Our numbering system is called the decimal system because it's based on the number 10. We have 10 digits, numbered zero through nine. When a number requires more than one digit, such as the number 9,876, the place that each digit occupies represents a power of 10. For example, 9 occupies the place that represents 10³, or 1,000; 8 occupies the place that represents 10², or 100; 7 occupies the place that represents 10¹, or 10; and 6 occupies the place that represents 10⁰, or 1. The sum of each digit multiplied by its power of 10 gives us the resulting value: (9 times 1,000) plus (8 times 100) plus (7 times 10) plus (6 times 1), or 9,876.
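The place-value sum described above can be checked with a few lines of Python (a quick illustration, not part of the original article):

```python
# Expand 9,876 into its decimal place values and sum them.
digits = [9, 8, 7, 6]
total = sum(d * 10 ** (len(digits) - 1 - i) for i, d in enumerate(digits))
# (9 * 1000) + (8 * 100) + (7 * 10) + (6 * 1)
print(total)  # 9876
```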
A computer can't store ten different states; it can only store two. So instead of using the decimal system based on the number 10, computers use the binary system, which is based on the number two. Rather than ten digits numbered zero through nine, the binary system has two digits, zero and one. When a number requires more than one digit, it follows the same logic as the decimal system, but uses powers of two instead of powers of ten. For example, consider the number 1011 in binary. The first digit on the left, 1, occupies the place that represents 2³, or 8; the next digit, 0, is in the position that represents 2², or 4; the next digit, 1, occupies the place for 2¹, or 2; and the last digit, 1, is in the position that represents 2⁰, or 1. To determine the decimal equivalent of the binary value, add (1 times 8), (0 times 4), (1 times 2) and (1 times 1) for a total of 11 in the decimal system.
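The same power-of-two arithmetic can be sketched in Python, with the language's built-in base conversion as a cross-check (an illustrative snippet, not from the original article):

```python
# Convert the binary string "1011" to decimal by summing powers of two.
bits = "1011"
value = sum(int(b) * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)         # 11
print(int(bits, 2))  # Python's built-in conversion agrees: 11
```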
Since a computer only stores zeroes and ones, every character in the alphabet is assigned a binary number that the computer uses to represent the character. While there are different character encoding tables that translate characters to a numeric code, most are based upon the American Standard Code for Information Interchange table, which was originally created for the teletype machine. For example, an uppercase A has a decimal value of 65, or a one-byte binary value of 01000001. A lowercase z has a decimal value of 122, or a single-byte binary value of 01111010.
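In Python, the `ord` built-in returns a character's numeric code, and `format` renders it as an eight-bit binary string, which matches the ASCII values given above (a short illustration, not part of the original article):

```python
# Look up character codes and their one-byte binary forms.
for ch in ("A", "z"):
    code = ord(ch)  # decimal character code, e.g. 65 for "A"
    print(ch, code, format(code, "08b"))
# A 65 01000001
# z 122 01111010
```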
Converting a Character to Binary
To convert a character to binary, determine the character encoding scheme that the computer uses and look up the character's value in a reference table for that scheme. For example, UTF-8 extends the ASCII character set and uses one, two, three or four bytes (eight, 16, 24 or 32 bits) to represent characters and symbols. The Greek capital letter Omega has a two-byte UTF-8 value of 1100111010101001, which is equivalent to 52,905 in decimal.
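The Omega example can be reproduced with Python's string `encode` method, which performs the UTF-8 table lookup for you (an illustrative sketch, not from the original article):

```python
# Encode the Greek capital letter Omega (U+03A9) as UTF-8.
omega = "\u03a9"
encoded = omega.encode("utf-8")  # two bytes: 0xCE 0xA9
bits = "".join(format(b, "08b") for b in encoded)
print(bits)          # 1100111010101001
print(int(bits, 2))  # 52905
```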