Implementing ANN with PyTorch Overview

Sep 15, 2024

PyTorch Lecture Notes

Introduction

  • Presenter: Krish Naik
  • Topic: Implementing Artificial Neural Networks (ANN) using PyTorch.

Overview of the Session

  • Focus on Object-Oriented Programming (OOP) concepts.
  • Creating a simple ANN model with dense layers.
  • Upcoming sessions will cover more complex examples, including feature engineering.

Resources

  • Recommended to follow tutorials on PyTorch.org for comprehensive information.

Problem Statement

  • Create an ANN to predict diabetes using the Pima diabetes dataset.
  • Will cover:
    • Creating, saving, and loading models.
    • Working with weights.

Data Preparation

  • Import Libraries:
    • Import pandas as pd for data manipulation.
    • Dataset obtained from Kaggle.
  • Load Dataset:
    • Use pd.read_csv() to load the dataset.
    • Check for null values in the dataset.
    • Visualize data distribution using Seaborn's pairplot (see the sketch below).
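
A minimal sketch of this step. The filename diabetes.csv and the target column name Outcome are assumptions based on the standard Kaggle release of the Pima dataset:

```python
import pandas as pd
import seaborn as sns

# Load the dataset (filename assumed)
df = pd.read_csv("diabetes.csv")

# Check for null values in each column
print(df.isnull().sum())

# Visualize pairwise feature distributions, colored by the Outcome column
sns.pairplot(df, hue="Outcome")
```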

Data Visualization and Analysis

  • Convert the outcome variable to categorical format (diabetic vs non-diabetic).
  • Use train_test_split from sklearn for dataset splitting (see the sketch below):
    • Define independent features (X) and dependent feature (y).
  • Independent Features: Pregnancies, Glucose, BloodPressure, etc.
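
A sketch of the split, again assuming the target column is named Outcome; the 80/20 ratio and random_state are assumptions:

```python
from sklearn.model_selection import train_test_split

# Independent features (X): every column except the target
X = df.drop("Outcome", axis=1).values
# Dependent feature (y): the diabetic / non-diabetic outcome
y = df["Outcome"].values

# Split into training and test sets (80/20 split assumed)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
```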

PyTorch Implementation

Creating Tensors

  • Convert independent features (X) into float tensors:
    • X_train and X_test using torch.FloatTensor().
  • Convert dependent features (y) into long tensors:
    • y_train and y_test using torch.LongTensor().
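
The conversion might look like this, where X_train, X_test, y_train, y_test are the NumPy arrays produced by the split above:

```python
import torch

# Independent features as float tensors
X_train = torch.FloatTensor(X_train)
X_test = torch.FloatTensor(X_test)

# Dependent features as long tensors (class indices, as expected by CrossEntropyLoss)
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)
```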

Defining the ANN Model

  • Create a class ANN_model inheriting from torch.nn.Module:
    • Implement __init__ method to define layers:
      • Input features: 8
      • Hidden Layers: 20 nodes each.
      • Output Layer: 2 (for binary classification).
    • Implement forward method for forward propagation using ReLU activation function.
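
A sketch of such a class. The layer names and the choice of exactly two hidden layers are illustrative; only the sizes (8 inputs, 20 hidden nodes, 2 outputs) come from the notes:

```python
import torch.nn as nn
import torch.nn.functional as F

class ANN_model(nn.Module):
    def __init__(self, input_features=8, hidden1=20, hidden2=20, out_features=2):
        super().__init__()
        # Two fully connected hidden layers of 20 nodes, output layer with 2 class scores
        self.f_connected1 = nn.Linear(input_features, hidden1)
        self.f_connected2 = nn.Linear(hidden1, hidden2)
        self.out = nn.Linear(hidden2, out_features)

    def forward(self, x):
        # Forward propagation with ReLU activations on the hidden layers
        x = F.relu(self.f_connected1(x))
        x = F.relu(self.f_connected2(x))
        return self.out(x)
```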

Training the Model

  • Define the loss function as CrossEntropyLoss, which fits here because the output layer produces two class scores.
  • Initialize the optimizer (Adam) with a learning rate of 0.01.
  • Loop through epochs (e.g., 500 iterations) to train the model:
    • Forward pass, compute loss, backpropagation, and optimizer step.
  • Print loss every 10 epochs to monitor training progress.
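
Putting these pieces together, a training loop along these lines matches the notes (it continues from the earlier sketches; variable names such as final_losses are assumptions):

```python
import torch
import torch.nn as nn

model = ANN_model()

loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

epochs = 500
final_losses = []
for epoch in range(epochs):
    y_pred = model(X_train)                 # forward pass
    loss = loss_function(y_pred, y_train)   # compute loss
    final_losses.append(loss.item())

    if epoch % 10 == 0:                     # monitor progress every 10 epochs
        print(f"Epoch {epoch}: loss {loss.item():.4f}")

    optimizer.zero_grad()                   # clear old gradients
    loss.backward()                         # backpropagation
    optimizer.step()                        # update weights
```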

Plotting Loss

  • Use Matplotlib to visualize the loss reduction over epochs.
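
Using the final_losses list collected during training:

```python
import matplotlib.pyplot as plt

# Plot the recorded training loss against the epoch number
plt.plot(range(epochs), final_losses)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.show()
```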

Prediction and Evaluation

  • Predict using the test data and display results:
    • Use confusion matrix to evaluate the model's performance.
    • Calculate accuracy using accuracy_score from sklearn.
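
One way to do this is to take the class with the highest score for each test row:

```python
import torch
from sklearn.metrics import confusion_matrix, accuracy_score

# Predict on the test set without tracking gradients
predictions = []
with torch.no_grad():
    for row in X_test:
        scores = model(row)
        predictions.append(scores.argmax().item())

print(confusion_matrix(y_test, predictions))
print("Accuracy:", accuracy_score(y_test, predictions))
```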

Model Saving and Loading

  • Save the model using torch.save().
  • Load the model using torch.load() for future predictions.
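
For example (the file name is an assumption):

```python
# Save the trained model to disk (file name assumed)
torch.save(model, "diabetes_model.pt")

# Later, load it back and switch to evaluation mode for predictions
model = torch.load("diabetes_model.pt")
model.eval()
```

Saving the full pickled model is the simplest option; saving model.state_dict() and loading it into a fresh ANN_model instance is the other common PyTorch pattern and is generally preferred for portability.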

New Data Prediction

  • Create a tensor for new data and predict using the trained model.
  • Display output to identify if the person is predicted to have diabetes.
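
A sketch with a hypothetical patient record; the eight feature values below are purely illustrative and must be in the same column order as the training data:

```python
# Hypothetical new patient: Pregnancies, Glucose, BloodPressure, SkinThickness,
# Insulin, BMI, DiabetesPedigreeFunction, Age
new_data = torch.FloatTensor([6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0])

with torch.no_grad():
    prediction = model(new_data).argmax().item()

print("Diabetic" if prediction == 1 else "Non-diabetic")
```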

Conclusion

  • Demonstrated how to implement an ANN in PyTorch from scratch.
  • Emphasized the importance of a solid theoretical understanding of ANNs.
  • Encouraged viewers to subscribe for future complex problem-solving sessions.
  • Additional resources and code provided in the GitHub link in the description.

This concludes the session on implementing ANN using PyTorch. Remember to explore PyTorch documentation for more detailed functionalities and examples.