Overview
This lecture explains how binary code works, covering the basics of bits, bytes, binary counting, and how computers use zeros and ones to represent numbers.
Decimal and Binary Number Systems
- The decimal system is base 10, using digits 0 through 9.
- Each digit's position in decimal represents a power of 10 (ones, tens, hundreds, etc.).
- Binary is base 2 and uses only two digits: 0 and 1.
- Each binary digit (bit) represents a power of two (1, 2, 4, 8, 16, etc.).
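The positional idea above can be sketched in Python. The `from_digits` helper is illustrative (not from the lecture): it turns a list of digits into a value for any base, showing that decimal and binary work the same way, just with different bases.

```python
def from_digits(digits, base):
    """Interpret a digit list in the given base using positional weights."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one position, add the new digit
    return value

# 3*100 + 0*10 + 7*1 in base 10
print(from_digits([3, 0, 7], 10))    # 307
# 1*8 + 0*4 + 1*2 + 1*1 in base 2
print(from_digits([1, 0, 1, 1], 2))  # 11
```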
Bits, Bytes, and Binary Representation
- A bit is the smallest unit of data in binary, short for "binary digit".
- A byte is a group of 8 bits, so an 8-bit number is written with eight 0s and 1s.
- The value of each bit in an 8-bit number is: 128, 64, 32, 16, 8, 4, 2, 1.
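The bit values listed above are just descending powers of two, which a one-line sketch can confirm:

```python
# Weight of each position in an 8-bit number, most significant bit first.
weights = [2 ** i for i in range(7, -1, -1)]
print(weights)  # [128, 64, 32, 16, 8, 4, 2, 1]
```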
Converting Numbers to Binary
- To convert a decimal number to binary, repeatedly subtract the largest power of two that still fits, marking each used position as 1 and all others as 0.
- Example: 19 in binary is 00010011 (16 + 2 + 1).
- Example: 64 in binary is 01000000 (only the 64s place is 1).
- Zero in binary is all zeros: 00000000.
- The largest number with 8 bits is 255 (all bits set to 1).
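The subtraction method described above can be sketched as a short function (the name `to_binary8` and the 8-bit limit are assumptions for this example):

```python
def to_binary8(n):
    """Convert 0..255 to an 8-character binary string by greedy subtraction."""
    if not 0 <= n <= 255:
        raise ValueError("value must fit in 8 bits")
    bits = ""
    for weight in [128, 64, 32, 16, 8, 4, 2, 1]:
        if n >= weight:       # this power of two fits: mark the bit as 1
            bits += "1"
            n -= weight
        else:                 # it doesn't fit: mark the bit as 0
            bits += "0"
    return bits

print(to_binary8(19))   # 00010011  (16 + 2 + 1)
print(to_binary8(64))   # 01000000
print(to_binary8(0))    # 00000000
print(to_binary8(255))  # 11111111
```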
Applications and Systems
- The number of bits determines the range of numbers possible (e.g., 8 bits = 0 to 255).
- Higher bit systems (e.g., 32-bit, 64-bit) allow for larger numbers and more data.
- Terms like "32-bit processor" or "Nintendo 64" refer to the number of bits the system can process at once.
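The relationship between bit count and range follows directly from the powers of two; a quick sketch for unsigned values:

```python
# An n-bit unsigned number can represent 2**n values: 0 through 2**n - 1.
for bits in [8, 16, 32, 64]:
    print(f"{bits}-bit unsigned range: 0 to {2 ** bits - 1}")
```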
Key Terms & Definitions
- Binary Code — System using only 0s and 1s to represent data.
- Bit — Single binary digit (0 or 1).
- Byte — Group of 8 bits.
- Base 10 — Decimal system, uses digits 0–9.
- Base 2 — Binary system, uses digits 0 and 1.
Action Items / Next Steps
- Practice converting decimal numbers to binary and vice versa.
- Review how different bit systems affect the range of numbers computers can use.