
Exploring Harvard vs. Von Neumann Architecture

Oct 23, 2024

Lecture Notes on Harvard Architecture

Introduction to Harvard Architecture

  • Named after the Harvard Mark I, an electromechanical computer built by IBM for Harvard University
  • The Harvard Mark I ran calculations for the development of the atomic bomb during WWII
  • Instructions stored on punched paper tape; data stored in electromechanical counters
  • Principle: separate storage for instructions and data

Von Neumann Architecture

  • The traditional stored-program model used by single-core computers
  • Programs and data stored in main memory (RAM)
  • Fetching process:
    • Memory address sent to main memory via address bus
    • Instruction or data sent back to CPU via data bus
  • Saving data requires:
    • Memory address sent via address bus
    • Data conveyed from CPU to memory via data bus
  • Key points:
    • Memory addresses travel one way only: from CPU to memory
    • Data travels both ways: from memory to CPU (reads) and from CPU to memory (writes)
  • Von Neumann bottleneck:
    • Instructions and data share same memory/bus
    • Loading data into the CPU needs two trips over the same bus: one to fetch the instruction, one to fetch the data it refers to (see the sketch after this list)
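
The bottleneck is easiest to see in a sketch. Below is a minimal, hypothetical von Neumann machine in C: the instruction encoding, addresses, and values are invented for illustration, but the point stands: the instruction fetch and the data fetch are two separate transfers over one shared bus.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical von Neumann machine: one memory holds both instructions
 * and data, and every access goes over the same shared bus. */
#define MEM_WORDS 256
static uint16_t memory[MEM_WORDS];   /* instructions AND data live here */
static int bus_transfers = 0;        /* count trips over the shared bus */

/* One trip over the bus: send an address, receive a word. */
static uint16_t bus_read(uint8_t address) {
    bus_transfers++;
    return memory[address];
}

int main(void) {
    /* Toy program: a "LOAD from address 0x10" instruction at address 0x00,
     * and the data value 42 stored at address 0x10. */
    memory[0x00] = 0x1010;            /* invented encoding: LOAD [0x10] */
    memory[0x10] = 42;

    uint16_t instruction = bus_read(0x00);          /* trip 1: instruction */
    uint16_t value = bus_read(instruction & 0xFF);  /* trip 2: data        */

    printf("loaded %u using %d bus transfers\n", value, bus_transfers);
    return 0;
}
```

In real hardware those two transfers cannot overlap, which is exactly the bottleneck described above.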

Advantages of Harvard Architecture

  • Instructions and data stored in separate memories
  • Flexibility in memory design:
    • More instruction memory than data memory possible
    • Different word widths for instruction vs. data memory
  • Separate buses for instruction and data:
    • The instruction bus can be wider than the data bus
  • Read-only memory can be used for instructions and read-write memory for data (a rough sketch of this layout follows below)
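
A rough sketch of what that separation might look like, with invented sizes and an invented instruction encoding: the instruction store is a wider, read-only array, and the data store is a narrower, writable one, each with its own access path.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical Harvard-style layout: instructions and data live in
 * separate memories that differ in size, word width, and writability. */
static const uint32_t program_rom[512] = {   /* wider, read-only instruction words */
    0x00A01010,                              /* invented encoding: LOAD r1, [0x10] */
};
static uint8_t data_ram[128];                /* narrower, read-write data words    */

int main(void) {
    data_ram[0x10] = 42;

    /* With separate buses, an instruction fetch and a data access can
     * happen in the same cycle in hardware; here they are simply
     * independent arrays with independent access paths. */
    uint32_t instruction = program_rom[0];          /* instruction bus */
    uint8_t  value = data_ram[instruction & 0x7F];  /* data bus        */

    printf("instruction 0x%08X loaded data %u\n", instruction, value);
    return 0;
}
```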

Applications of Harvard Architecture

  • Commonly found in Digital Signal Processors (DSPs)
  • DSP applications:
    • Audio and video processing
    • Medical imaging (X-ray, MRI, and CT scans)
    • Fitness trackers and smartwatches
    • Digital assistants (e.g., Amazon Alexa, Google Home)
  • Functionality:
    • Capture and digitization of analogue information
    • Processing of digital signals, e.g. speech processing applications (a minimal filter loop is sketched below)
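
To see why a DSP benefits, here is a minimal finite impulse response (FIR) filter loop, the kind of multiply-accumulate workload a DSP runs constantly; the coefficients and samples are illustrative values only. On a Harvard-style DSP, each step of the loop can fetch a coefficient and a sample in the same cycle, because they sit in separate memories on separate buses.

```c
#include <stdio.h>

#define TAPS 4

/* Illustrative 4-tap moving-average filter. On a Harvard-style DSP the
 * coefficients typically sit in one memory and the samples in another,
 * so each multiply-accumulate can fetch both operands in a single cycle. */
static const float coeffs[TAPS]  = { 0.25f, 0.25f, 0.25f, 0.25f };
static float       samples[TAPS] = { 1.0f, 2.0f, 3.0f, 4.0f };

int main(void) {
    float acc = 0.0f;
    for (int i = 0; i < TAPS; i++) {
        acc += coeffs[i] * samples[i];   /* one multiply-accumulate per tap */
    }
    printf("filtered output: %.2f\n", acc);   /* 2.50 for these values */
    return 0;
}
```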

Modern CPU Design Principles

  • Contemporary CPUs borrow principles from Harvard architecture:
    • Multiple cores, each with its own arithmetic and logic unit
    • Managed by a single control unit
    • Multiple levels of cache memory:
      • Level 1 cache: closest to core, fastest access
      • Level 2 cache: larger but slower than Level 1
      • Level 3 cache: shared among cores, largest but slowest
  • Cache memory benefits:
    • Faster than main memory
    • Cached instructions and data avoid a trip over the bus to main memory
  • Modified Harvard architecture:
    • Level 1 cache split for instructions and data
    • Allows the same core to fetch an instruction and access data at the same time (a toy model of this lookup order is sketched after this list)
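
A toy model of that lookup order, with Level 1 split into an instruction cache and a data cache as described above; the cycle counts are placeholder numbers, not measurements of any real CPU.

```c
#include <stdbool.h>
#include <stdio.h>

enum access_kind { FETCH_INSTRUCTION, ACCESS_DATA };

/* Walk the hierarchy in the order described above: split L1, then the
 * unified L2 and L3, then main memory. Latencies are invented. */
static int access_cost(enum access_kind kind, bool in_l1, bool in_l2, bool in_l3) {
    const char *l1 = (kind == FETCH_INSTRUCTION) ? "L1 instruction cache"
                                                 : "L1 data cache";
    if (in_l1) { printf("hit in %s\n", l1); return 4; }     /* fastest        */
    if (in_l2) { printf("hit in L2\n");     return 12; }    /* larger, slower */
    if (in_l3) { printf("hit in L3\n");     return 40; }    /* shared, slowest*/
    printf("miss in every cache\n");        return 200;     /* main memory    */
}

int main(void) {
    /* Because L1 is split, an instruction fetch and a data access can be
     * served at the same time by different caches. */
    int instr = access_cost(FETCH_INSTRUCTION, true,  false, false);
    int data  = access_cost(ACCESS_DATA,       true,  false, false);
    int miss  = access_cost(ACCESS_DATA,       false, false, true);

    printf("instruction fetch: %d cycles, data access: %d cycles, L3 hit: %d cycles\n",
           instr, data, miss);
    return 0;
}
```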

Conclusion

  • Original Harvard Mark I principle of separate memories for instructions and data remains relevant in modern computing.