Jul 23, 2024

- Explore ideas, techniques, and algorithms foundational to AI.
- AI examples: face recognition, game playing, natural language understanding.

- **Search:** Finding solutions to problems (e.g., driving directions, game moves).
- **Knowledge:** Representing and drawing inferences from information.
- **Uncertainty:** Handling probabilistic information.
- **Optimization:** Finding optimal solutions from multiple possibilities.
- **Machine Learning:** Learning from data and improving over time.
- **Neural Networks:** Structures analogous to the human brain that perform tasks efficiently.
- **Natural Language Processing:** Understanding and processing human languages.

- **Agent:** Entity that perceives and acts in an environment.
- **State:** Configuration of the agent and environment (e.g., tile arrangement in a puzzle).
- **Initial State:** State from which the search algorithm starts.
- **Actions:** Choices available to the agent in a given state, defined as a function `actions(S) -> set of actions`.
- **Transition Model:** Describes the result of actions, defined as a function `result(S, A) -> new state`.
- **Goal Test:** Determines whether a state is a goal state.
- **Path Cost:** Numerical cost associated with a particular path.
- **Solution:** Sequence of actions from the initial state to a goal state.
- **Optimal Solution:** Solution with the lowest path cost.
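The components above can be sketched in code. A minimal toy example (hypothetical, not from the lecture): states are integers on a number line, actions are `"+1"`/`"-1"`, and the goal is to reach 5 from 0.

```python
# Hypothetical number-line search problem illustrating the formulation above.

def actions(state):
    """actions(S) -> set of actions available in state S."""
    return {"+1", "-1"}

def result(state, action):
    """result(S, A) -> new state: the transition model."""
    return state + 1 if action == "+1" else state - 1

def goal_test(state):
    """True if the state is the goal state."""
    return state == 5

# A solution is a sequence of actions from the initial state to a goal state.
solution = ["+1"] * 5
state = 0
for a in solution:
    state = result(state, a)
assert goal_test(state)  # the 5-step path reaches the goal
```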

- **15 Puzzle:** Sliding tiles to achieve a specific arrangement.
- **Maze Solving:** Finding the correct path from start to end.
- **Driving Directions:** Finding the best route from point A to point B using search algorithms.

- **Node:** Represents a state, its parent node, the action taken, and the path cost.
- **Frontier:** Data structure containing the nodes to explore next.
- **Explored Set:** Tracks already-explored states to prevent repeated exploration.
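A node with these fields might be sketched as follows (field names are assumptions, matching the description above); walking the parent links back to the root recovers the solution path.

```python
# Minimal sketch of a search node: state, parent, action, and path cost.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the state this node represents
        self.parent = parent        # node this one was expanded from
        self.action = action        # action that produced this state
        self.path_cost = path_cost  # cumulative cost from the initial state

    def path(self):
        """Follow parent links back to the initial state to recover the path."""
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("start")
child = Node("goal", parent=root, action="move", path_cost=1)
assert child.path() == ["start", "goal"]
```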

- **Depth-First Search (DFS):** Uses a stack (last-in, first-out) to explore the deepest nodes first.
- **Breadth-First Search (BFS):** Uses a queue (first-in, first-out) to explore the shallowest nodes first.
- **Greedy Best-First Search (GBFS):** Uses a heuristic to expand the node estimated to be closest to the goal.
- **A\* Search:** Considers both the cost to reach a node (`g(n)`) and the heuristic estimate to the goal (`h(n)`), expanding the node with the lowest sum `g(n) + h(n)`.
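DFS and BFS differ only in which node they pop from the frontier. A minimal sketch on a small hypothetical graph (not from the lecture), with the frontier and explored set described earlier:

```python
# DFS vs. BFS: same loop, different pop rule on the frontier.
from collections import deque

GRAPH = {  # adjacency list for a small example graph
    "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": [],
}

def search(start, goal, mode="bfs"):
    frontier = deque([[start]])   # frontier holds candidate paths
    explored = set()              # explored set prevents revisiting states
    while frontier:
        # BFS pops the oldest path (queue); DFS pops the newest (stack).
        path = frontier.popleft() if mode == "bfs" else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in GRAPH[state]:
            frontier.append(path + [nxt])
    return None

assert search("A", "E", "bfs") == ["A", "B", "D", "E"]
```

BFS finds a shallowest path; DFS may return a different (here equally long, in general possibly longer) one.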

- **Uninformed Search:** Uses no problem-specific knowledge (e.g., DFS, BFS).
- **Informed Search:** Uses problem-specific knowledge and heuristics (e.g., GBFS, A*).

- **Manhattan Distance:** Heuristic used in grid-based problems to estimate the distance to the goal.
- **Admissibility:** The heuristic never overestimates the actual cost to reach the goal.
- **Consistency:** For every node `n` and successor `n'`, `h(n) <= cost(n, n') + h(n')`; a consistent heuristic is also admissible.
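A minimal A* sketch on a hypothetical grid (not from the lecture), using the Manhattan-distance heuristic. With unit-cost moves of one cell up/down/left/right, Manhattan distance is admissible: it never overestimates the true cost.

```python
# A* on a grid, ordered by f(n) = g(n) + h(n) with a Manhattan-distance h.
import heapq

def manhattan(cell, goal):
    (x1, y1), (x2, y2) = cell, goal
    return abs(x1 - x2) + abs(y1 - y2)

def astar(start, goal, walls, width, height):
    # Priority queue of (f, g, cell); heapq pops the lowest f first.
    frontier = [(manhattan(start, goal), 0, start)]
    best_g = {start: 0}  # cheapest known cost to each cell
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g  # cost of an optimal path
        x, y = cell
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            nx, ny = nxt
            if 0 <= nx < width and 0 <= ny < height and nxt not in walls:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + manhattan(nxt, goal), ng, nxt))
    return None  # goal unreachable

# With no walls, the optimal cost equals the Manhattan distance.
assert astar((0, 0), (2, 3), set(), 4, 4) == 5
```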

- **Minimax Algorithm:** Used in two-player games where the players have opposite objectives.
- **Max Player:** Aims to maximize the score.
- **Min Player:** Aims to minimize the score.
- **Utility Function:** Assigns numerical values to terminal states (e.g., win, lose, draw).
- **Terminal Test:** Checks whether a state is a terminal state (game over).

- **Alpha-Beta Pruning:** Optimizes minimax by pruning branches that cannot affect the final decision.
- **Depth-Limited Minimax:** Limits the depth of the search tree to improve efficiency, using an evaluation function to estimate the value of intermediate states.
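Minimax with alpha-beta pruning can be sketched on a hypothetical game tree (not from the lecture) where lists are internal nodes and numbers are terminal utilities:

```python
# Minimax with alpha-beta pruning over nested lists; numbers are utilities.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # terminal test: a leaf is its utility
        return node
    if maximizing:                       # MAX player's turn
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune: MIN would never allow this branch
                break
        return value
    value = float("inf")                 # MIN player's turn
    for child in node:
        value = min(value, minimax(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                # prune: MAX would never allow this branch
            break
    return value

# MAX at the root, MIN one level down; the optimal value is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
assert minimax(tree, True) == 3
```

Pruning never changes the result; it only skips branches that cannot influence it, which is what makes deeper searches in games like chess feasible.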

- Larger games like chess are computationally intractable to search exhaustively, which is why techniques like alpha-beta pruning and depth-limited evaluation functions are required.

- Search algorithms are foundational to many AI applications.
- Next lecture: Understanding knowledge representation and reasoning in AI.