Lecture: Advances and Challenges in Artificial Intelligence
Historical Context
AI has existed as a discipline since shortly after WWII, alongside the advent of digital computers.
Slow progress until the early 21st century.
Significant improvement in AI since around 2005, driven by machine learning, and accelerating sharply since 2012.
Machine Learning Fundamentals
Definition
Misleading term; it's not about computers learning independently like humans.
Supervised Learning
Uses training data consisting of input-output pairs (a minimal code sketch follows at the end of this subsection).
Example: Facial recognition using labeled images of Alan Turing.
Importance of training data: photos uploaded to social media become training material for big technology companies.
Classification tasks: Identifying/classifying objects like faces, tumors, and traffic signs.
Important for applications like Tesla's self-driving cars.
Deep learning, large data availability, and cheap computational power have enabled advancements.
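To make the idea of learning from input-output pairs concrete, here is a minimal, hypothetical sketch in Python (NumPy assumed; the feature values and the two classes are invented for illustration, and real systems use far richer models than this nearest-centroid rule):

```python
# Minimal sketch of supervised classification (illustrative only, not the
# lecturer's example): each training item is an (input, label) pair, and the
# "model" here is just the mean feature vector (centroid) of each class.
import numpy as np

# Toy training data: 2-D feature vectors with labels 0 and 1.
X_train = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],   # class 0
                    [3.0, 3.1], [3.2, 2.9], [2.8, 3.0]])  # class 1
y_train = np.array([0, 0, 0, 1, 1, 1])

# "Training": compute one centroid per class from the labeled pairs.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def predict(x):
    """Classify a new input by the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([1.0, 1.1])))  # -> 0
print(predict(np.array([3.1, 3.0])))  # -> 1
```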
Neural Networks
Concept
Inspired by animal brains and nervous systems.
Human brain contains ~86 billion neurons, each connected to many others.
Neurons perform simple pattern recognition tasks and send signals to neighbors.
Neural networks in AI mimic this pattern recognition capability in software.
Implementation
Early ideas date back to the 1940s, with revivals in the 1960s and 1980s, but the approach only became feasible with modern computational power and data.
Training involves adjusting the network's connection weights so that given inputs produce the desired outputs.
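As a rough illustration of that training loop, the sketch below (Python/NumPy assumed; the AND task, learning rate, and step count are arbitrary choices, not anything from the lecture) adjusts the weights of a single artificial neuron by gradient descent until its outputs move toward the desired ones:

```python
# Minimal sketch: one artificial "neuron" with two inputs, trained so that
# given inputs produce the desired outputs. Real networks stack millions of
# such units, but the weight-adjustment idea is the same.
import numpy as np

rng = np.random.default_rng(0)

# Input-output pairs for the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)   # connection weights
b = 0.0                  # bias
lr = 1.0                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    pred = sigmoid(X @ w + b)                  # forward pass: current outputs
    error = pred - y                           # distance from desired outputs
    grad_w = X.T @ (error * pred * (1 - pred)) / len(X)  # gradient of squared error
    grad_b = np.mean(error * pred * (1 - pred))
    w -= lr * grad_w                           # nudge the weights...
    b -= lr * grad_b                           # ...to reduce the error

print(np.round(sigmoid(X @ w + b), 2))         # moves toward the targets [0, 0, 0, 1]
```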
Large Scale Developments
2012 Onwards
Use of GPUs for neural network training; significant improvements in applications.
Heavy investment from Silicon Valley; scaling up data and computation yielded steadily better results.
Introduction of the attention mechanism and the Transformer architecture, which underpins models such as OpenAI's GPT-3.
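A minimal sketch of scaled dot-product attention, the core operation of the Transformer (Python/NumPy assumed; a single head with random vectors and no learned projections, so purely illustrative rather than any production implementation):

```python
# Each position in a sequence builds its output as a weighted mix of every
# position's "value" vector, with the weights computed from how well its
# "query" matches each "key".
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # how much each query attends to each key
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ V, weights                # weighted sum of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                        # 4 tokens, 8-dimensional vectors
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)                   # (4, 8) (4, 4)
```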
GPT-3 and Beyond
GPT-3: 175 billion parameters, trained on roughly 500 billion tokens of text, largely drawn from the web.
Capable of impressive autocomplete-style text completion, with surprising emergent abilities in common-sense reasoning (a toy next-word-prediction sketch follows below).
ChatGPT: An enhanced, user-friendly version of GPT-3.
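To illustrate what "autocomplete" means here, the toy sketch below predicts the next word by simple bigram counting over an invented ten-word corpus; GPT-3 performs the same kind of next-token prediction, but with a 175-billion-parameter neural network rather than counts:

```python
# Toy stand-in (invented example, nothing like GPT-3's scale or method):
# count which word follows which in a small corpus, then "complete" a
# prompt by repeatedly appending the most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count how often each word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt_word, length=4):
    """Greedily append the most likely next word, one word at a time."""
    words = [prompt_word]
    for _ in range(length):
        if words[-1] not in following:
            break
        words.append(following[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))   # e.g. "the cat sat on the"
```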
Issues and Concerns
Accuracy and Truthfulness
AI systems often produce plausible but incorrect information.
Users must fact-check outputs.
Bias and Toxicity
Models inherit biases and offensive content from web training data such as Reddit.
Implementation of imperfect 'guardrails' to prevent inappropriate outputs.
Challenges in eliminating biases related to culture, race, etc.
Intellectual Property and GDPR
AI training involves copyrighted materials; lawsuits ongoing.
Compliance with GDPR is difficult because personal data absorbed into a network's weights cannot easily be located or erased.
Limits and Future Directions
Understanding AI Capabilities
AI performs poorly on tasks outside its training data, illustrated by Tesla misidentifying stop signs.
Fundamental differences between AI and human intelligence.
Artificial General Intelligence (AGI)
Different tiers of AGI, ranging from performing a wide range of human cognitive tasks to exhibiting full human-like capabilities.
Current technology (like large language models) is a step toward AGI but not fully there yet.
Machine Consciousness
Debate fueled by claims of sentience in AI systems, such as those made by Blake Lemoine about Google's LaMDA.
No substantial evidence that AI has consciousness or subjective experience.
Conclusion
AI has made dramatic leaps in recent years, presenting both extraordinary opportunities and significant challenges.
The future of AI will involve addressing existing limitations and ethical issues while working toward more capable, and possibly general, AI systems.