Overview
This lecture introduces foundational AI concepts, focusing on search algorithms for problem-solving, including classical and adversarial search, and discusses their strengths, weaknesses, and optimizations.
Introduction to Artificial Intelligence (AI)
- AI enables computers to perform intelligent tasks like image recognition, game playing, and natural language understanding.
- AI involves techniques and algorithms for tasks such as search, knowledge representation, uncertainty, optimization, and learning.
Search Problems in AI
- A search problem involves finding a sequence of actions to transition from an initial state to a goal state.
- Examples include solving puzzles, navigating mazes, and finding driving directions.
- Key terms: agent (entity acting in environment), state (configuration), initial state, actions, transition model, goal test, and path cost.
- The state space represents all possible states reachable from the initial state via actions.
- Nodes encapsulate state, parent, action taken, and path cost.
General Search Algorithm Approach
- Start with the frontier containing the initial state.
- While the frontier is not empty: remove a node, check if it's a goal, expand it by adding valid neighbors, and avoid revisiting explored states.
- The frontier can be managed as a stack (last-in-first-out) or queue (first-in-first-out), impacting search behavior.
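The loop above can be sketched in Python. This is a minimal illustration, not official course code: the adjacency dict, state names, and `use_queue` flag are all hypothetical, and the same function behaves as BFS or DFS depending only on which end of the frontier is popped.

```python
from collections import deque

def graph_search(start, goal, neighbors, use_queue=True):
    """Generic search: pop the frontier from the front (queue -> BFS)
    or the back (stack -> DFS), tracking explored states to avoid revisits."""
    frontier = deque([(start, [start])])  # (state, path taken so far)
    explored = set()
    while frontier:
        state, path = frontier.popleft() if use_queue else frontier.pop()
        if state == goal:                 # goal test on removal
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors(state):      # expand: add valid neighbors
            if nxt not in explored:
                frontier.append((nxt, path + [nxt]))
    return None  # frontier empty: no solution

# Hypothetical toy graph standing in for a maze's adjacency structure.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(graph_search("A", "D", lambda s: graph[s]))         # BFS: ['A', 'B', 'D']
print(graph_search("A", "D", lambda s: graph[s], False))  # DFS: ['A', 'C', 'D']
```

Note that the two frontier policies return different (here equally short) paths; on deeper graphs DFS may return a much longer one.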
Classical Search Algorithms
- Depth-First Search (DFS): Uses a stack frontier, explores as deep as possible before backtracking; not guaranteed to find the optimal solution, but often uses less memory.
- Breadth-First Search (BFS): Uses a queue frontier, explores the shallowest nodes first; guarantees an optimal solution when all step costs are equal, but may require more memory.
Informed Search Algorithms
- Greedy Best-First Search (GBFS): Selects nodes estimated to be closest to the goal using a heuristic ( h(n) ); not always optimal.
- A* Search: Selects nodes minimizing ( g(n) + h(n) ), where ( g(n) ) is the path cost so far and ( h(n) ) is the heuristic; optimal if the heuristic is admissible (never overestimates) and consistent.
Adversarial (Game) Search
- In adversarial search, multiple agents (e.g., players in a game) have competing objectives.
- Minimax Algorithm: Models games as trees, where the max player seeks to maximize and the min player seeks to minimize utility.
- Terminal states have utilities (e.g., win = 1, loss = -1, draw = 0).
- The algorithm recursively evaluates possible moves and outcomes, choosing optimal actions for each player.
Minimax Optimizations
- Alpha-Beta Pruning: Skips branches that cannot affect the final decision, reducing computation.
- Depth-Limited Minimax: Limits look-ahead depth, using an evaluation function to estimate non-terminal state values (important for complex games like chess).
Key Terms & Definitions
- Agent — Entity that perceives and acts in an environment.
- State — Specific configuration of the environment.
- Actions — Choices available to the agent in a state.
- Transition Model — Function determining the outcome state of an action.
- Path Cost — Numerical cost assigned to a sequence of actions.
- Frontier — Collection of states/nodes to be explored.
- Heuristic ( h(n) ) — Estimate of cost from node ( n ) to the goal.
- Admissible Heuristic — Never overestimates the true cost to reach the goal.
- Consistent Heuristic — Satisfies ( h(n) \leq h(n') + \text{cost}(n, n') ) for all successors.
- Utility — Numerical value representing outcome desirability in games.
- Max/Min Player — Maximizes or minimizes utility in adversarial search.
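The admissibility and consistency definitions above can be spot-checked numerically. Manhattan distance on a unit-cost grid is a standard example of a heuristic satisfying both; the grid, goal, and sample states below are assumptions made for illustration.

```python
# Sketch: Manhattan distance as a grid heuristic, checking the consistency
# inequality h(n) <= cost(n, n') + h(n') on a few sample moves.
def manhattan(n, goal):
    """Estimated cost from cell n to the goal: sum of axis distances."""
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

goal = (3, 3)
# Each grid move costs 1, so consistency requires h(n) <= 1 + h(n').
for n in [(0, 0), (1, 2), (3, 1)]:
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        n2 = (n[0] + dx, n[1] + dy)
        assert manhattan(n, goal) <= 1 + manhattan(n2, goal)
print(manhattan((0, 0), goal))  # 6: never more than the true 6-step cost
```

Since every move changes one coordinate by 1, each step reduces the Manhattan distance by at most 1, which is exactly why the inequality holds (and why the heuristic never overestimates).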
Action Items / Next Steps
- Review the pseudocode for general and adversarial search algorithms.
- Experiment with DFS, BFS, and A* in maze-solving code.
- Prepare for next lecture on knowledge representation in AI.