Artificial Intelligence Lecture

Jul 4, 2024

Introduction

  • AI has been around since just after WWII, with the advent of digital computers.
  • Progress was slow until the 21st century, with significant improvement from around 2005.
  • AI is a broad field, but Machine Learning (ML) in particular became practical and useful after 2005, and especially from around 2012.

Machine Learning (ML)

  • The name is misleading: it does not mean the computer learns by itself the way a human does.
  • Focus of the lecture: Understanding ML's workings and applications.

Alan Turing and Facial Recognition

  • Example: AI used in facial recognition to identify faces such as Alan Turing's.
  • Supervised Learning
    • Requires training data: input-output pairs, i.e. pictures labeled with names (see the sketch after this list).
    • The AI is shown pictures of Alan Turing labeled 'Alan Turing' and learns to recognize him from them.
  • Importance of Training Data
    • Labels that users add on social media (e.g., tagging faces in photos) help train ML algorithms.
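
A minimal sketch of the input-output-pair idea, using a toy nearest-neighbor "classifier". The random arrays, the feature extractor, and the second name are illustrative stand-ins, not the system discussed in the lecture.

```python
# Toy illustration of supervised learning from labeled examples.
# The images are random arrays standing in for photos; a real face
# recognizer would learn its features rather than use mean brightness.
import numpy as np

rng = np.random.default_rng(42)

# Training data: input-output pairs (picture -> name).
training_images = [rng.random((64, 64)), rng.random((64, 64))]
training_labels = ["Alan Turing", "Ada Lovelace"]  # second name is illustrative

def extract_features(image):
    """Crude feature vector: average brightness of each row."""
    return image.mean(axis=1)

# "Training" here is simply storing feature vectors with their labels.
memory = [(extract_features(img), lbl)
          for img, lbl in zip(training_images, training_labels)]

def classify(image):
    """Return the label of the most similar training example."""
    f = extract_features(image)
    distances = [np.linalg.norm(f - stored) for stored, _ in memory]
    return memory[int(np.argmin(distances))][1]

print(classify(training_images[0]))  # -> "Alan Turing"
```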

Classification Tasks

  • A classification task assigns a label to an input, such as recognizing a face in a photo or categorizing a piece of text.
  • Applications include medical imaging (detecting tumors), autonomous driving (e.g., Tesla), and more.
  • Differs from generative AI: classification labels existing inputs, whereas generative models produce new outputs (a toy contrast follows this list).
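
To make the distinction concrete, here is a toy contrast between the two kinds of output; both functions are invented for illustration and are nothing like real models.

```python
# Classification vs. generation, in miniature (purely illustrative).

def classify_sentiment(text: str) -> str:
    """Classification: choose one label from a fixed set."""
    return "positive" if "good" in text.lower() else "negative"

def generate_continuation(prompt: str) -> str:
    """Generation: produce new text rather than a label
    (a real LLM would predict this continuation word by word)."""
    return prompt + " and the afternoon stayed warm."

print(classify_sentiment("The lecture was good"))       # -> "positive"
print(generate_continuation("It was a sunny morning"))  # -> new text, not a label
```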

Neural Networks

  • Concept: Artificial neurons mimicking biological neurons in the brain.
    • Human brain: ~86 billion neurons, each connected to up to 8,000 others.
    • Each neuron detects simple patterns and sends signals based on that detection.
  • Implementation in AI
    • Based on ideas from the 1940s; practical realization in the 21st century.
    • Requires: Big Data, Deep Learning methods, and significant computer power.
  • Training Neural Networks
    • The network's weights are adjusted, using large amounts of training data, until it produces the desired outputs (see the sketch after this list).
  • Takeoff in Technology
    • Advances around 2005; supercharged around 2012 with better computing power and Big Data.
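
A sketch of a single artificial neuron and a simplified weight-update loop. The numbers, the sigmoid activation, and the delta-rule-style update are illustrative; real networks stack millions or billions of such units and train them with backpropagation.

```python
# One artificial neuron: weighted sum of inputs passed through an activation.
import numpy as np

def neuron(x, w, b):
    """Sigmoid of the weighted input sum: 'fires' strongly when the pattern matches."""
    return 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))

# A single training example: an input pattern and the output we want.
x = np.array([0.5, 0.8, 0.1])
target = 1.0

w = np.zeros(3)
b = 0.0
learning_rate = 0.5

# Training: repeatedly nudge the weights so the output moves toward the target
# (a simplified delta-rule update; real training uses backpropagation).
for _ in range(200):
    out = neuron(x, w, b)
    error = target - out
    w += learning_rate * error * x
    b += learning_rate * error

print(round(float(neuron(x, w, b)), 3))  # close to 1.0 after training
```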

GPT and Large Language Models (LLMs)

  • Transformer Architecture
    • Introduced in the 2017 paper "Attention Is All You Need"; key innovation: the attention mechanism (a worked sketch follows this list).
  • GPT-3
    • Released June 2020 by OpenAI; ~175 billion parameters, trained on 500 billion words.
    • The first widely impactful LLM; a marked improvement over its predecessors.
    • Applications: prompt-completion tasks (autocomplete, summarization, etc.).
  • Training Data
    • Includes text downloaded from large parts of the web, scraped PDF documents, etc.
  • Emergent Capabilities
    • Showed capabilities not explicitly trained for, surprising AI researchers.
  • Limitations
    • Training requires so much computing power that it is not feasible for universities.
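
A worked sketch of the scaled dot-product attention that the Transformer paper introduced, run on made-up vectors. The shapes, the values, and the single-head setup without learned projections are simplifications of the real architecture.

```python
# Scaled dot-product attention on toy data (single head, no learned projections).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Every position looks at every other position, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # relevance of each token to each query
    weights = softmax(scores, axis=-1)   # rows sum to 1: an attention distribution
    return weights @ V                   # mix of value vectors, one per position

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))              # three tokens, four dimensions each
output = attention(X, X, X)              # self-attention: Q, K, V all come from X
print(output.shape)                      # (3, 4): a context-aware vector per token
```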

Issues with AI

  • Misinformation
    • LLMs often generate plausible but incorrect information.
  • Bias and Toxicity
    • Training data from places like Reddit introduces biases and toxic content.
    • Companies employ guardrails, but these are often superficial patches (gaffer tape rather than deep fixes).
  • Intellectual Property
    • Models absorb copyrighted material during training, raising legal issues.
    • Example: a model reproducing or recognizing text from copyrighted books.
  • GDPR Compliance
    • Neural networks cannot simply remove a specific individual's data after training; the information is not stored in a way that allows targeted deletion.
  • Outside Training Data
    • Neural networks struggle with scenarios not covered in training data (e.g., Tesla misidentifying stop signs).

General Artificial Intelligence (AGI)

  • Types of General Intelligence
    • Type 1: Machine as capable as a human in all facets, including physical tasks.
    • Type 2: Cognitive tasks only, no physical manipulation.
    • Type 3: Any language-based task.
    • Type 4: Augmented LLMs calling specialized subroutines.
  • State of the Art
    • Natural language processing is well advanced, but other aspects of human intelligence (e.g., manual dexterity) are not.

Machine Consciousness

  • Debates
    • Google's LaMDA case: engineer Blake Lemoine claimed the system was sentient, a claim that was widely debated.
    • ChatGPT and similar systems mimic conversation but do not possess self-awareness.
  • Consciousness Understanding
    • The 'hard problem' of consciousness: how electrochemical brain processes give rise to subjective experience.
    • AI does not have subjective experiences or personal perspectives.

Conclusion

  • Current AI advancements mark a new era in AI research and applications.
  • Understanding these systems and their limitations is critical for future developments.

Final Note: It's essential to use current AI technologies responsibly and understand their functional boundaries.