Quantum Neural Network Overview and Example
Introduction
- Presenter: Kadri Singh
- Topic: Quantum Neural Network (QNN)
- Follow-up to the previous video on Quantum Machine Learning (QML) and the Support Vector Machine (SVM)
- Focus: QNN steps and example code
- Prerequisites: Basic quantum computing knowledge, classical neural network understanding, and Python coding basics.
Current Approach to Quantum Neural Network (QNN)
- Status: Classical machine learning algorithms are enhanced by offloading time-consuming calculations to quantum computers.
- Hybrid Approach: No 100% quantum approach exists today; classical and quantum methods are combined.
- Applications: Speech synthesis, classification, NLP, financial modeling, molecular modeling, etc.
- Challenges: Insufficient logical qubits in current quantum hardware to solve real-life problems fully.
- Motivation: Preparing QML algorithms for future hardware advancements.
Classical Neural Network Refresher
- Structure: Input layer, hidden layer(s), and output layer.
- Input Layer: Encodes the data in a form the network accepts (e.g., 28x28 pixel images flattened to 784 input neurons).
- Output Layer: One neuron per class (digits 0-9, hence 10 neurons).
- Hidden Layer(s): The number of layers and neurons per layer is chosen empirically.
- Process: The dataset is split into training and testing data.
- Dataset Example: Modified National Institute of Standards and Technology (MNIST) digit dataset.
- Data Representation: Pixel values (0 for black, 1 for white, and values in-between for grayscale).
- Activation & Cost Function: Each neuron combines weights and a bias, then applies a non-linearity such as ReLU or sigmoid; the cost function measures prediction error.
- Optimization: Gradient descent minimizes the cost function, with backpropagation computing the required gradients (see the sketch after this list).
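As a concrete reference for the refresher above, here is a minimal PyTorch sketch of such a network. The hidden-layer width (128) and learning rate are illustrative assumptions, not values from the video.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # input layer: flattened 28x28 image
        self.fc2 = nn.Linear(128, 10)       # output layer: one neuron per digit 0-9

    def forward(self, x):
        x = x.view(-1, 28 * 28)   # flatten each image to 784 values
        x = F.relu(self.fc1(x))   # non-linear activation in the hidden layer
        return self.fc2(x)        # raw scores; CrossEntropyLoss applies softmax

model = SimpleNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent
loss_fn = nn.CrossEntropyLoss()  # cost function; backprop computes its gradients
```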
Convolutional Neural Network (CNN)
- Drawbacks of Traditional Neural Networks: Fully connected layers discard the spatial structure of image data.
- Solution: Convolutional Neural Networks (CNNs) better handle spatial hierarchies in data.
- Feature Map & Kernel: Filters (kernels) slide over the input to capture local features, producing feature maps.
- Pooling: Reduces feature map size (max pooling, average pooling); a sketch follows this list.
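To make the kernel and pooling ideas concrete, here is a minimal CNN sketch for the same 28x28 images; the filter count and kernel size are illustrative assumptions.

```python
import torch.nn as nn

# Input shape: (batch, 1, 28, 28) grayscale images.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 8 kernels slide over the image,
                                                # producing 8 feature maps (28x28)
    nn.ReLU(),
    nn.MaxPool2d(2),                            # max pooling halves each map to 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # classify from the pooled features
)
```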
Transition to Quantum Neural Network (QNN)
- Framework: Mixing classical and quantum components.
- Input Data: Handled classically and encoded into quantum states where needed.
- Structure: Layers can be either classical, quantum, or hybrid.
- Weights/Biases: Replaced by quantum parameters such as rotation angles (theta); see the circuit sketch after this list.
- Optimization: Classical and quantum algorithms are combined for the cost function and gradient descent.
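The following Qiskit sketch shows what "weights become rotation angles" means in practice: a single-qubit circuit with a trainable parameter theta. The specific gates are an illustrative choice, not the video's exact circuit.

```python
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter("theta")   # trainable parameter, the analogue of a weight
qc = QuantumCircuit(1, 1)
qc.h(0)                      # put the qubit in superposition
qc.ry(theta, 0)              # parameterized rotation; training adjusts theta
qc.measure(0, 0)             # measurement yields the layer's output

bound = qc.assign_parameters({theta: 0.5})  # bind a concrete angle before running
```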
Examples of QNN Approaches
- IBM Example: Classical network layers feed a quantum circuit whose rotation angle (theta) is trained as part of the hybrid model.
- PennyLane and Google Approaches: Hybrid models leveraging both classical and quantum processing (e.g., integrations with PyTorch and TensorFlow); a sketch of the PennyLane style follows.
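As a hedged sketch of the PennyLane style mentioned above, the snippet below wraps a quantum node as a PyTorch layer. The device, qubit count, and circuit templates are illustrative assumptions, not the specific models from the referenced articles.

```python
import pennylane as qml
import torch.nn as nn

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))         # encode classical data
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable rotations
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}          # 3 entangling layers of rotations
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)

# Classical -> quantum -> classical stack, trainable end to end.
model = nn.Sequential(nn.Linear(4, n_qubits), qlayer, nn.Linear(n_qubits, 2))
```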
Example Code Walkthrough
- Environment Setup: Libraries from Qiskit and PyTorch.
- Classical Network Definition: Use the MNIST dataset (digits 0 and 1 subset).
- Quantum Circuit Class: Defined with an initialization, a run/measurement function, and a simulator backend.
- Training Process: Combines classical PyTorch layers with a single quantum computation layer.
- Optimization Step: Gradient descent, with gradients of the quantum layer estimated from circuit runs, adjusts the model parameters.
- Evaluation: Testing using new data and checking accuracy.
- Code References: Built on IBM's tutorial resources and standard PyTorch functions; a condensed sketch of the hybrid layer follows.
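The sketch below condenses the walkthrough's hybrid idea: a Qiskit circuit exposed to PyTorch through a custom autograd Function whose backward pass estimates gradients with the parameter-shift rule. It assumes qiskit and qiskit-aer are installed; class and helper names are mine, and IBM's original notebook differs in its details.

```python
import numpy as np
import torch
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

backend = AerSimulator()

def run_circuit(theta, shots=1024):
    """Run a one-qubit circuit and return the expectation value of Z."""
    qc = QuantumCircuit(1, 1)
    qc.h(0)
    qc.ry(float(theta), 0)
    qc.measure(0, 0)
    counts = backend.run(transpile(qc, backend), shots=shots).result().get_counts()
    probs = {bit: n / shots for bit, n in counts.items()}
    return probs.get("0", 0.0) - probs.get("1", 0.0)  # <Z> = p(0) - p(1)

class QuantumLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, theta):
        ctx.save_for_backward(theta)
        return torch.tensor([run_circuit(theta.item())])

    @staticmethod
    def backward(ctx, grad_output):
        (theta,) = ctx.saved_tensors
        shift = np.pi / 2  # parameter-shift rule for rotation gates
        grad = (run_circuit(theta.item() + shift)
                - run_circuit(theta.item() - shift)) / 2
        return grad_output * torch.tensor([grad])

# The quantum layer plugs into an ordinary PyTorch training step:
theta = torch.tensor([0.1], requires_grad=True)
out = QuantumLayer.apply(theta)
out.backward()       # gradient estimated from extra circuit runs
print(theta.grad)
```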
Conclusion and Further Exploration
- Further Learning: Encouragement to explore various hybrid models and review the Google/PennyLane articles.
- Future Videos: Ongoing learning and upcoming videos on quantum algorithms.
- Feedback & Engagement: Request for thumbs up and suggestions for further content.
Note: The transcript included code snippets and a step-by-step training walkthrough but omitted run outputs to save time.