💻

Software Evolution and LLMs

Jun 29, 2025

Overview

The lecture discusses the rapid evolution of software in the era of AI, introducing the concept of "software 3.0" with large language models (LLMs), their impact on programming paradigms, and how developers and students can work effectively with these new systems.

Evolution of Software Paradigms

  • Software has evolved from hand-written code (1.0), to neural network weights (2.0), to prompt-based programming with LLMs (3.0).
  • Software 1.0 = classical code; 2.0 = neural networks trained via data; 3.0 = LLMs programmed by prompts in English.
  • Modern applications increasingly blend code with English instructions, expanding what "programming" means.

LLMs: A New Kind of Computer

  • LLMs act as programmable computers where prompts are the programs.
  • LLMs are accessed like utilities, with cloud-based APIs and time-sharing models, similar to how early mainframes were used.
  • The LLM ecosystem resembles an operating system landscape, with both closed- and open-source "platforms."
  • LLMs are centralizing deep tech and research, but remain fundamentally software, making them less defensible than hardware.

Unique Properties and Limitations of LLMs

  • LLMs have large-scale encyclopedic knowledge but also cognitive deficits like hallucinations and inconsistent reasoning.
  • Context window = working memory; LLMs lack persistent learning or "self-knowledge" across sessions.
  • Security concerns include prompt injection and data leakage risks.

Building and Using LLM-Enabled Apps

  • Partially autonomous apps (e.g., coding assistants like Cursor, search tools like Perplexity) blend user control with LLM-driven automation.
  • Effective LLM apps feature: context management, GUI for verification, orchestration of multiple models, and an "autonomy slider" to adjust AI involvement.
  • Fast human-AI verification loops and keeping the "AI on a leash" are key best practices.
  • Large diffs or highly autonomous agents often create verification bottlenecks for users.

Human-AI Collaboration and Coding Practices

  • "Vibe coding" describes natural language-based, improvisational development enabled by LLMs.
  • LLMs lower the barrier to software creation, making programming accessible to more people.
  • Integrating new features (e.g., authentication or deployment) often remains manual and time-consuming, suggesting a need for more agent-driven automation.

Building for Agents

  • Agents (LLM-driven systems) need software infrastructure tailored for them, such as markdown-based docs and special APIs.
  • Documentation and interfaces are being adapted to be more legible and actionable for LLMs (e.g., replacing "Click" with command-line alternatives).
  • Tools that convert human-facing interfaces into LLM-friendly formats (like Deep Wiki or Get Ingest) improve agent usability.

Key Terms & Definitions

  • Software 1.0 — Programming by writing code manually.
  • Software 2.0 — Using neural networks trained on data; weights as code.
  • Software 3.0 — Programming via prompts in natural language for LLMs.
  • LLM (Large Language Model) — AI model trained on large text corpora and capable of understanding/generating human language.
  • Autonomy Slider — UI control for adjusting the amount of AI automation in an app.
  • Prompt Injection — Security risk where malicious input manipulates LLM behavior.

Action Items / Next Steps

  • Reflect on how to build or adapt apps for partial autonomy using LLMs.
  • Experiment with prompt engineering best practices for more effective AI collaboration.
  • Explore tools that transform documentation/interfaces for agent and LLM accessibility.
  • Consider the evolving balance between human oversight and AI autonomy in future projects.