Generative Artificial Intelligence: A Lecture Recap
Jun 9, 2024
Introduction
Generative AI: Combines the ideas of generating (creating new content) and artificial intelligence (getting computers to do tasks usually performed by humans).
Scope: Focus will be on text-based generative AI, primarily natural language processing (NLP).
Audience Participation: The lecture relies on audience interaction, since the speaker uses few technical aids.
Outline of the Talk
Three Parts: Past, Present, Future of AI.
Quote: “Yesterday is history, tomorrow is a mystery, today is a gift, which is why it is called the present” (attributed to Alice Morse Earle).
Generative AI - History
Not New: Generative AI has powered familiar tools for years, including Google Translate (since 2006) and Siri (since 2011).
Common Examples: Google Translate, Siri, and Amazon Alexa have relied on generative techniques for longer than most people realize.
Daily Applications: Generative AI is embedded in autocomplete features in emails, search predictions, and other common tools.
New Advancements: Tools like OpenAI's GPT-4 (the model behind ChatGPT) mark a significant advance in generative capabilities.
Recent Developments
GPT-4: Launched by OpenAI in 2023; known for passing complex exams and generating many forms of content (text, code, etc.).
Usage Stats: ChatGPT reached 100 million users in just two months, a record pace compared to earlier technologies like Google Translate.
Core Technology and Functionality
Language Modeling: The key underlying technology: given a sequence of words (the context), the model predicts the words that come next (see the sketch after this list).
Neural Networks: Modern language models use neural networks to predict the next word, rather than simply counting word occurrences.
Building Models: Requires vast amounts of data, typically sourced from the internet (Wikipedia, Reddit, social media, etc.).
Training Process: The model repeatedly predicts missing or next words in sentences and is adjusted whenever it predicts incorrectly; because the targets come from the text itself, this is called self-supervised learning.
Model Size: The larger the model (the more parameters), the more tasks it can accomplish; GPT-4 is reported to have around one trillion parameters, though OpenAI has not published the figure.
Past vs. Current Models: There's been an exponential increase in model sizes and the volume of words they've processed since 2018.
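To make the "predict the next word" idea concrete, here is a minimal, self-contained sketch of a toy next-word predictor trained by self-supervised learning. It is illustrative only: the corpus, hyperparameters, and NumPy implementation are my own choices, and real systems like GPT-4 use transformer networks with billions of parameters rather than this single weight matrix.

```python
# A toy neural bigram language model trained by self-supervised
# next-word prediction. All choices below are illustrative.
import numpy as np

corpus = "the cat sat on the mat the cat ate the fish".split()
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Self-supervised training pairs: each word predicts the word that follows it.
pairs = [(word_to_id[a], word_to_id[b]) for a, b in zip(corpus, corpus[1:])]

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # the model's parameters

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Training loop: predict the next word, measure the error, nudge the weights.
lr = 0.5
for epoch in range(200):
    for ctx, target in pairs:
        probs = softmax(W[ctx])   # predicted distribution over next words
        grad = probs.copy()
        grad[target] -= 1.0       # gradient of the cross-entropy loss
        W[ctx] -= lr * grad       # adjust the model where it was wrong

# Inference: given a context word, output the most likely next word.
ctx = word_to_id["the"]
print(vocab[int(np.argmax(softmax(W[ctx])))])  # "cat" (the most frequent follower)
```

The training loop mirrors the process described above: the model guesses the next word and its parameters are nudged whenever the guess is wrong, with the training targets coming from the text itself rather than from human labels.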
Scaling and Performance
Parameter Size: More parameters often mean better performance on a wider range of tasks.
Cost of Training: Training involves significant computational and financial cost; GPT-4 reportedly cost around $100 million to train (see the back-of-envelope sketch below).
Energy Use: Running these models requires substantial energy, raising concerns about environmental impact.
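To see where costs of this magnitude come from, here is a hedged back-of-envelope calculation using the common "6 × parameters × tokens" rule of thumb for training compute. Every number below is an assumption chosen for illustration, not a published OpenAI figure.

```python
# Back-of-envelope estimate of training compute and cost.
# All inputs are illustrative assumptions, not published figures.
params = 1e12    # ~one trillion parameters (as cited in the lecture)
tokens = 1e13    # assumed number of training tokens
flops = 6 * params * tokens              # ~6e25 floating-point operations

gpu_flops_per_sec = 1e15                 # assumed effective GPU throughput
gpu_hours = flops / gpu_flops_per_sec / 3600
cost_per_gpu_hour = 2.0                  # assumed dollars per GPU-hour
print(f"{gpu_hours:.2e} GPU-hours, ~${gpu_hours * cost_per_gpu_hour:.2e}")
```

With these assumptions the estimate lands in the tens of millions of dollars, the same order of magnitude as the roughly $100 million cited above.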
Fine-Tuning and Alignment
Adaptation: Fine-tuning adapts a generic pre-trained model to a specific task or domain (e.g., diagnosing medical conditions); a minimal sketch follows this list.
Alignment Problem: Ensuring AI behavior aligns with human intent involves training models to be helpful, honest, and harmless.
Human Preferences: Human trainers provide feedback that guides the model toward more accurate and less biased outputs.
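As a concrete illustration of fine-tuning, the following sketch adapts a small generic pre-trained model to domain-specific text, assuming the Hugging Face transformers and datasets libraries. The model choice (gpt2), the file name medical_notes.txt, and the hyperparameters are placeholders, not the setup used in the lecture.

```python
# Minimal fine-tuning sketch using Hugging Face transformers + datasets.
# "gpt2", the data file, and the hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")  # generic pretrained LM

# Domain-specific text, one example per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "medical_notes.txt"})
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset["train"],
    # For causal LMs this collator copies input_ids into labels, so the
    # model is again trained to predict the next token, now on domain text.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # updates the pretrained weights on the new domain
```

The key point is that training starts from the pretrained weights, so the model keeps its general language ability while adapting to the new domain.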
Demonstrations and Issues
Practical Use Cases: The lecture demonstrated various prompts and how ChatGPT responded, highlighting strengths and limitations.
Issues in Responses: The model sometimes gives overlong or outdated answers and may fail to follow specific instructions (e.g., a request for shorter text).
Bias and Errors: Generative AI can produce biased or inaccurate information, and there have been notable publicized errors with tools like Google's Bard.
Energy Usage: Generative AI queries are significantly more energy-intensive than traditional search queries (a rough comparison follows).
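To put that claim in rough numbers: widely cited public estimates put a traditional web search at about 0.3 Wh and a generative AI query at about 3 Wh. Both figures are contested and workload-dependent, so treat this as an order-of-magnitude sketch, not a measurement.

```python
# Order-of-magnitude comparison using widely cited (and contested) estimates.
search_wh = 0.3   # assumed energy per traditional web search, in Wh
genai_wh = 3.0    # assumed energy per generative-AI query, in Wh
print(f"A generative-AI query uses roughly {genai_wh / search_wh:.0f}x "
      f"the energy of a traditional search.")
```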
Future of AI and Regulation
Risks and Regulation: The proliferation of AI necessitates regulation to mitigate risks, especially considering potential job displacement and misuse (e.g., deepfakes, fake news).
Environmental Concerns: AI development carries an environmental cost because of its high energy consumption.
Superintelligence Concerns: Discussion of how AI's potential risks compare with other global challenges such as climate change.
Regulatory Guidelines: Parallels with how other high-risk technologies, such as nuclear energy, are regulated suggest similar approaches for AI.
Conclusion
Reflections from Tim Berners-Lee: Importance of understanding and controlling AI, focusing on its safe and beneficial deployment.
Audience Questions: Final segment included live Q&A showing practical AI use cases and addressing public concerns.
Summary
Generative AI's Evolution: Rapid advancements and increasing applications in daily life despite challenges and limitations.
Key Takeaway: Generative AI is a powerful tool that, with proper regulation and ethical oversight, can offer significant societal benefits.