Deep Fake Detection with AI Technologies

Aug 27, 2024

IEEE Expert Lecture Notes

Introduction

  • Welcome to the IEEE expert session.
  • Presentation topic: Deep Fake Face Detection using Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM).
  • Emphasis on the importance of this project for society and future generations.

Background and Context

  • Recent viral incident involving a deep fake face video of actress Rashmika Mandanna.
  • AI technology is advancing rapidly, but poses risks such as deep fake videos.
  • Deep fakes can lead to severe societal issues, including mental health crises among affected individuals.
  • Need for proactive measures to combat the negative implications of AI.

Project Overview

  • Project Title: Deep Fake Face Detection.
  • Base Papers: Three recent IEEE papers from 2023 on deep fake detection.
  • Objective: Use AI to detect fake videos and prevent misuse.

Technical Details

Definitions

  • Deep Fake Videos: Videos generated using AI that manipulate faces, audio, or video to mislead viewers.
  • Existing System: A CNN-based detector, whose drawbacks are outlined below.

Limitations of CNN

  1. Limited Temporal Understanding: A CNN analyzes frames independently, so it misses the frame-to-frame inconsistencies that distinguish fake videos from original ones.
  2. Large Computational Requirements: Needs high-end hardware (e.g., graphics processors).
  3. Vulnerability to Adversarial Attacks: Carefully crafted perturbations to input frames can fool the network into misclassifying videos.
  4. Training Data Imbalance: An imbalance between real and fake samples in the training set can bias the model and reduce accuracy.
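One common mitigation for the training-data imbalance noted above is to weight the loss so that the rarer class counts more during training. A minimal sketch using only the standard library; the counts and the inverse-frequency scheme are illustrative, not from the lecture.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalized so they average ~1."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

# Toy imbalanced dataset: far more real clips than fake ones.
labels = ["real"] * 800 + ["fake"] * 200
weights = class_weights(labels)
# The minority "fake" class receives the larger weight.
```

A dictionary like this can be passed to most training loops (e.g. Keras's `class_weight` argument) so errors on the minority class are penalized more.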

Proposed Solution

  • New Approach: Add an LSTM on top of the CNN so the model also learns temporal patterns across frames.
  • Advantages of LSTM:
    • Robust detection of deep fake incidents.
    • Efficient handling of video data in real-time.
    • Achieves high accuracy (up to 98%) for real vs. fake detection.
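The CNN+LSTM combination can be sketched as below, assuming Keras/TensorFlow: a small CNN extracts per-frame features, and an LSTM reasons over the frame sequence. Layer sizes, the 20-frame sequence length, and the 112x112 crop size are illustrative assumptions, not the presenters' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 20, 112, 112, 3  # assumed: 20 face crops per video

def build_model():
    # Per-frame feature extractor (applied identically to every frame).
    cnn = models.Sequential([
        layers.Input(shape=(H, W, C)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, H, W, C)),
        layers.TimeDistributed(cnn),           # CNN features for each frame
        layers.LSTM(64),                       # temporal reasoning across frames
        layers.Dense(1, activation="sigmoid"), # P(fake)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

`TimeDistributed` is what gives the LSTM its temporal view: the same CNN weights are reused on every frame, and only the LSTM sees the frames in order.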

Project Methodology

  1. Data Collection: Use datasets from Kaggle, focusing on video formats.
  2. Pre-Processing Module:
    • Extract frames from videos.
    • Detect and crop facial features.
  3. Training Module:
    • Utilize LSTM for training on extracted frames.
  4. Testing Module:
    • Evaluate new video uploads against trained models to classify as real or fake.

Software and Hardware Requirements

  • Minimum requirements:
    • Processor: i3
    • RAM: 4GB
    • OS: Windows or Mac
  • Programming Language: Python
  • Web Development: HTML, CSS, JavaScript
  • Framework: Flask

Demonstration

  • Project demonstration using Anaconda Navigator and TensorFlow.
  • Users can upload videos to check for authenticity:
    • Results provided as confidence percentages for real vs. fake classifications.
  • Example results:
    • Real video: 91% confidence.
    • Fake video: 99.9% confidence.
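Since the project serves its demo through Flask, the upload-and-check flow could be sketched as below. The `/detect` route name is an assumption, and `classify()` is a placeholder standing in for the trained model; a real deployment would run pre-processing and the LSTM there.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify(video_bytes):
    # Placeholder for the trained model: should return P(fake) in [0, 1].
    # Hard-coded here so the sketch is self-contained.
    return 0.999

@app.route("/detect", methods=["POST"])
def detect():
    f = request.files.get("video")
    if f is None:
        return jsonify(error="no video uploaded"), 400
    p_fake = classify(f.read())
    label = "fake" if p_fake >= 0.5 else "real"
    confidence = p_fake if label == "fake" else 1 - p_fake
    # Report confidence as a percentage, as in the demo results above.
    return jsonify(label=label, confidence=round(confidence * 100, 1))
```

With the placeholder score of 0.999, an upload would be reported as fake with 99.9% confidence, mirroring the demo's output format.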

Conclusion

  • The use of LSTM enhances detection capabilities over traditional CNN methods.
  • Overall accuracy achieved: 98% in testing scenarios.
  • For more information or to access this project, visit iwxpert.com.