🤖

LQR Control Overview

Sep 17, 2025

Overview

This lecture introduces the Linear Quadratic Regulator (LQR), an optimal control method for state-space models, covering its structure, the concept of optimality, its cost function, and tuning, illustrated with MATLAB examples.

LQR vs. Pole Placement Controllers

  • Both LQR and pole placement use full state feedback and share the same implementation structure: the control input is computed as u = reference - K * state (see the sketch after this list).
  • The key difference is how the gain matrix K is chosen: pole placement picks desired pole locations, while LQR optimizes performance and control effort.
  • Both approaches can achieve zero steady-state error, but only by augmenting the basic state-feedback structure (for example with reference scaling or integral action).
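
A minimal MATLAB sketch of that shared structure, using a small hypothetical plant (the matrices and pole locations below are placeholders, not values from the lecture):

    % Hypothetical 2-state plant (illustrative values only)
    A = [0 1; -2 -3];
    B = [0; 1];

    K_pp  = place(A, B, [-3 -4]);    % pole placement: pick closed-loop poles
    K_lqr = lqr(A, B, eye(2), 1);    % LQR: pick weights Q = eye(2), R = 1

    % Either gain is applied through the same law: u = r - K*x

Both calls return a gain matrix of the same shape; only the design criterion behind it differs.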

The Concept of Optimality in LQR

  • LQR defines "optimal" by balancing system performance with actuator effort via a cost function.
  • Preferences (like speed vs. cost) are weighted using matrices Q (performance) and R (effort) in the cost function.
  • There's no universal optimal solution; the best one depends on user-defined weights.

The LQR Cost Function

  • The cost function measures total system cost by accumulating, over time, weighted squared state errors and actuator efforts (the standard form is written out after this list).
  • States are penalized via the Q matrix, which can emphasize specific state variables by assigning higher values.
  • Control effort is penalized via the R matrix, typically a diagonal matrix relating to each actuator.
  • The cost function is quadratic (with positive semidefinite Q and positive definite R), so it has a single minimum.
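
In its standard continuous-time form, the quantity being minimized is

    J = ∫₀^∞ ( x(t)ᵀ Q x(t) + u(t)ᵀ R u(t) ) dt

so larger entries of Q make state deviations expensive and larger entries of R make actuator commands expensive; the discrete-time version replaces the integral with a sum over time steps.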

Tuning LQR with Q and R

  • Increasing entries of Q raises the penalty on state errors; increasing entries of R raises the penalty on actuator usage (see the tuning sketch after this list).
  • Designers often start with identity matrices for Q and R and then tune by trial and error or intuition.
  • Higher Q leads to faster system response (but more effort); higher R leads to less actuator usage (but slower response).
  • LQR tuning is often more intuitive than pole placement, especially for complex systems.
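
A minimal tuning sketch in MATLAB, assuming a placeholder 2-state plant and arbitrary scale factors chosen only to show the workflow (not the lecture's example):

    % Placeholder plant for illustrating the tuning loop
    A = [0 1; -2 -3];  B = [0; 1];  C = eye(2);  D = 0;

    Q = eye(2);                  % start from identity weights
    R = 1;

    K1 = lqr(A, B, Q,    R);     % baseline design
    K2 = lqr(A, B, 10*Q, R);     % heavier state penalty: faster, more effort
    K3 = lqr(A, B, Q, 10*R);     % heavier effort penalty: gentler, slower

    % Compare the closed-loop dynamics x_dot = (A - B*K)*x
    step(ss(A - B*K1, B, C, D), ss(A - B*K2, B, C, D), ss(A - B*K3, B, C, D))
    legend('baseline', '10*Q', '10*R')

Iterating on those scale factors while watching the responses is exactly the trial-and-error loop described above.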

MATLAB LQR Examples

  • Example with a rotating mass: Q and R were selected to balance rotation speed against fuel use (a simplified sketch follows this list).
  • Increasing R lowered fuel use but lengthened the maneuver; increasing specific Q elements reduced overshoot.
  • LQR easily accommodates actuator limitations by adjusting R without needing to manually reposition system poles.
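
A stripped-down sketch of that kind of example, with made-up inertia and weights rather than the lecture's numbers:

    % Single-axis rotating mass: x = [angle; angular rate], input = torque
    Jm = 5;                        % moment of inertia (assumed value)
    A  = [0 1; 0 0];
    B  = [0; 1/Jm];

    Q = diag([10 1]);              % penalize pointing error more than rate
    R = 0.5;                       % raise R to spend less "fuel" per maneuver

    K = lqr(A, B, Q, R);

    % Regulate a 1 rad initial pointing error back to zero
    initial(ss(A - B*K, [0; 0], eye(2), 0), [1; 0])

Raising R in this setup trades a longer maneuver for smaller torque commands, mirroring the fuel-vs-time trade-off from the lecture.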

Key Terms & Definitions

  • LQR (Linear Quadratic Regulator) – Optimal control method using a quadratic cost function on a linear state-space model.
  • State-space representation – Mathematical model describing system dynamics with state variables.
  • Gain matrix (K) – Matrix multiplying the state vector in feedback to shape system behavior.
  • Cost function – Mathematical expression penalizing deviations in performance and excessive control effort.
  • Q matrix – Weights state errors in the cost function.
  • R matrix – Weights actuator effort in the cost function.
  • Pole placement – Control design method where desired closed-loop pole locations are chosen directly.

Action Items / Next Steps

  • Practice designing LQR controllers by adjusting Q and R matrices for different system goals in MATLAB.
  • Read up on the mathematical derivation of LQR for deeper understanding.
  • Explore the augmented feedback structures mentioned above (e.g., reference scaling or integral action) for achieving zero steady-state error.