Introduction to Course, Multitask Learning, and Meta Learning
Course Introduction
- Instructor: Chelsea Finn
- Teaching Assistants: 7 TAs
Student Check-in
- Encouraging students to share how they are doing.
- Example shared: A student mentioned having initially attended the wrong class.
Acknowledgment
- Noting the return to near-normalcy but acknowledging ongoing global uncertainties.
- Policies have been set to provide flexibility recognizing students' diverse circumstances.
Course Information and Resources
- Course Website: Primary source of information.
- Communication: Use Ed (linked from Canvas) for questions. Public posts on Ed are preferred so everyone benefits, but private posts and emails are allowed for confidential matters.
- Office Hours: Information available on course website; Zoom links on Canvas. Start on Wednesday.
Course Learning Objectives
- Foundations: Modern deep learning methods for multitask learning and learning across tasks.
- Practical Experience: Implementing methods in code (PyTorch) and understanding real-world applications beyond theory.
- Scientific Process: Insights into developing algorithms, encouraging critical thinking and understanding development processes.
Course Content Overview
- Topics:
- Basics of multitask learning and transfer learning.
- Meta learning algorithms: black-box approaches, optimization-based approaches, and metric learning.
- Advanced topics: Overfitting, unsupervised and Bayesian meta learning, few-shot learning, adaptation, unsupervised pre-training, domain adaptation, and domain generalization.
- Case Studies: Real applications, e.g., recommendation systems (YouTube), land cover classification, education, and few-shot learning in large language models.
Changes in Course Content
- Removal of reinforcement learning topics.
- New content: Few-shot learning with unsupervised pre-training, domain adaptation, etc.
- Introduction of a new deep reinforcement learning course in the Spring.
Course Logistics
- Lectures: In-person, live-streamed, and recorded (available on Canvas).
- Guest Lectures: Scheduled, details to be announced.
- Participation Encouraged: Questions during lectures help gauge understanding.
- Office Hours: Mix of in-person and remote; specifics on course website.
- Prerequisites: Background in machine learning and familiarity with PyTorch.
- PyTorch Review: A review session is scheduled.
- Assignments: Range from a PyTorch warm-up to advanced homework on meta learning and fine-tuning pre-trained models.
- Grading: 50% homework, 50% project. For flexibility, an optional homework can replace a lower homework score or part of the project grade.
- Late Days: 6 late days allowed across assignments, with up to 2 late days per assignment.
- Collaboration Policy: Collaboration is allowed, but students must acknowledge collaborators, write up solutions independently, and avoid consulting external solutions.
Final Project
- Type: Research-level project in groups of 1-3, ideally related to students' ongoing research.
- Flexibility: Projects may overlap with projects in other courses, but expectations are higher in that case.
- Poster Session: Final presentation of projects; late days cannot be used for it.
Initial Steps
- Homework Zero: A lightweight warm-up assignment due in a week.
- Forming Groups: Encouraged to start forming groups for the final project.
Motivation for Studying Multitask Learning and Meta Learning
- Chelsea's Research Perspective
- Goal: Enabling real-world agents (e.g., robots) to learn diverse skills using few examples.
- Examples: Robots using tools, mimicking human tasks, understanding objects and environments.
- Challenges: Current systems learn narrow tasks and require extensive human effort and supervision for each one; the aim is to build more generalizable systems.
- General vs. Specialist Systems
- Current systems are specialists, trained on single tasks with significant data and effort required for each new task.
- Humans and generalizable systems learn broadly, applying learned skills across domains.
Why Multitask Learning and Meta Learning Matter
- Beyond Robotics: Applicable in general-purpose machine learning and various fields such as personalized education, rare language translation, medical imaging, etc.
- Addressing Long-tail Distributions: Handling rare cases and edge cases more effectively by leveraging shared structures and prior data.
- Quick Learning: Few-shot learning enables rapid adaptation to new tasks or environments using minimal data.
- Applications: Systems that can handle a broad array of inputs, like vision and language tasks, providing diverse functionalities.
Challenges and Open Questions
- Determining what structure is shared between tasks, and understanding task dependencies and generalization capabilities in multitask systems.
Definitions and Problem Statements
- Task Definition: A machine learning task is defined by a dataset and a loss function, which are used to produce a model (formal definitions in future lectures; see the notation sketch after this list).
- Multitask Learning: Learning multiple tasks simultaneously, with training and testing on the same set of tasks.
- Meta Learning/Transfer Learning: Using data from previous tasks to learn new tasks more effectively.
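A minimal notation sketch of these setups, assuming the standard formulation; the course defers formal definitions to later lectures, and symbols such as $\mathcal{D}_i$ and $\mathcal{L}_i$ here are illustrative, not the lecture's official notation:

```latex
% Single task: a dataset D and a loss L are used to produce a model
% with parameters theta.
\min_{\theta} \; \mathcal{L}(\theta, \mathcal{D})

% Multitask learning: train and test on the same fixed set of T tasks.
\min_{\theta} \; \sum_{i=1}^{T} \mathcal{L}_i(\theta, \mathcal{D}_i)

% Meta/transfer learning: use data from tasks 1, ..., T-1 so that a new
% task T can be learned more quickly or with less data.
```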
Comparison to Single-task Learning
- Multitask learning can sometimes be reduced to single-task learning by combining the datasets and loss functions, but the multitask setting has unique challenges and benefits (see the sketch after this list).
- Studying multitask and meta learning can lead to significant advances in the performance and applicability of machine learning systems.
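As a rough illustration of that reduction, here is a hypothetical PyTorch sketch; the shared trunk, per-task heads, and names like `SharedModel` and `train_step` are invented for illustration and are not from the lecture:

```python
import torch
import torch.nn as nn

class SharedModel(nn.Module):
    """Hypothetical multitask model: one shared trunk, one head per task."""

    def __init__(self, in_dim, hidden_dim, out_dims):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One output head per task; out_dims lists each task's output size.
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, d) for d in out_dims)

    def forward(self, x, task_idx):
        return self.heads[task_idx](self.trunk(x))

model = SharedModel(in_dim=32, hidden_dim=64, out_dims=[10, 10, 5])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # same loss type for each toy task here

def train_step(task_batches):
    """One update on the combined objective: the sum of per-task losses.

    task_batches: list of (inputs, labels) pairs, one batch per task.
    """
    opt.zero_grad()
    losses = [loss_fn(model(x, i), y) for i, (x, y) in enumerate(task_batches)]
    total = torch.stack(losses).sum()  # combine tasks into a single objective
    total.backward()
    opt.step()
    return total.item()

# Toy usage: three classification tasks with random data.
batches = [
    (torch.randn(8, 32), torch.randint(0, 10, (8,))),
    (torch.randn(8, 32), torch.randint(0, 10, (8,))),
    (torch.randn(8, 32), torch.randint(0, 5, (8,))),
]
print(train_step(batches))
```

Summing the per-task losses yields one combined objective, which is the single-task reduction the lecture mentions; questions such as how to weight or balance the tasks are among the challenges that make the multitask setting distinct.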
Next Steps
- Encourages forming project groups and starting with Homework Zero.
- Further exploration of multitask and meta learning principles in the next lecture.
Questions and Interaction
- Engaged students with Q&A throughout the lecture, addressing concepts, applications, and theoretical underpinnings.
Note: Attend office hours for more personalized guidance on projects and course material.