This repository contains a performance comparison between a hybrid quantum convolutional neural network (QCNN) and a fully classical CNN on the MNIST handwritten digit dataset.
This project implements and evaluates:
- A hybrid quantum-classical neural network architecture using PyTorch and PennyLane
- A comparable fully classical CNN architecture
- Performance metrics and visualizations for both approaches
The hybrid model combines classical convolutional layers with a quantum circuit layer (a minimal sketch follows this list):
- Classical convolutional feature extraction
- Dimension reduction to quantum circuit input size
- Quantum circuit with parameterized rotations and entangling operations
- Classical post-processing and classification
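A minimal sketch of this layout, assuming 4 qubits, an `AngleEmbedding` encoding, and a `StronglyEntanglingLayers` variational block for the parameterized rotations and entangling gates; the notebook's exact circuit and layer sizes may differ:

```python
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4  # assumed circuit width
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_circuit(inputs, weights):
    # Encode the reduced classical features as single-qubit rotations
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Parameterized rotations with entangling gates (assumed template)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits, 3)}  # 2 variational layers (assumed)

class HybridQCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Classical convolutional feature extraction (channel counts are assumptions)
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Dimension reduction down to the quantum circuit input size
        self.reduce = nn.Linear(16 * 7 * 7, n_qubits)
        # Quantum circuit wrapped as a PyTorch layer
        self.quantum = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
        # Classical post-processing and classification
        self.classifier = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        x = torch.tanh(self.reduce(x))  # keep embedding angles bounded
        x = self.quantum(x)
        return self.classifier(x)
```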
For a fair comparison, a classical CNN with a comparable architecture was implemented (sketched after this list):
- Same convolutional layers as the hybrid model
- Classical MLP layers in place of the quantum circuit
- Matching input/output dimensions throughout
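Continuing from the sketch above (reusing `n_qubits` and the imports), the classical counterpart keeps the same convolutional stack and swaps the quantum layer for a small MLP with matching input/output width; the MLP depth here is an assumption:

```python
class ClassicalCNN(nn.Module):
    """Same convolutional stack; a small MLP stands in for the quantum layer."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.reduce = nn.Linear(16 * 7 * 7, n_qubits)
        # Classical MLP with the same n_qubits-in / n_qubits-out interface
        self.mlp = nn.Sequential(nn.Linear(n_qubits, n_qubits), nn.Tanh())
        self.classifier = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        x = torch.tanh(self.reduce(x))
        x = self.mlp(x)
        return self.classifier(x)
```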
Results (test accuracy):
- Hybrid QCNN: 90.54%
- Fully classical CNN: 81.97%
- Difference: 8.57 percentage points (the hybrid model outperforms the classical one)
The repository explores:
- Training convergence of both models
- Test accuracy comparison
- Class-wise performance differences (see the evaluation sketch after this list)
- Resource usage considerations
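As an illustration of the class-wise comparison, a helper along these lines could compute per-digit accuracy for either model; this is a hypothetical utility, assuming a standard PyTorch `test_loader` yielding `(images, labels)` batches:

```python
import torch

def per_class_accuracy(model, test_loader, n_classes=10, device="cpu"):
    """Return a tensor of per-class accuracies on the test set."""
    correct = torch.zeros(n_classes)
    total = torch.zeros(n_classes)
    model.eval()
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            for c in range(n_classes):
                mask = labels == c
                total[c] += mask.sum()
                correct[c] += (preds[mask] == c).sum()
    return correct / total.clamp(min=1)
```

Comparing the two resulting vectors digit by digit shows where the quantum layer helps or hurts.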
The project uses:
- PyTorch - Deep learning framework
- PennyLane - Quantum machine learning library
- TensorFlow/Keras - Data loading and preprocessing
- Matplotlib/Seaborn - Visualization
- Rich - Terminal output formatting
To run the comparison:
- Clone this repository
- Install dependencies:

  ```bash
  pip install torch pennylane tensorflow matplotlib seaborn rich
  ```
- Run the Jupyter notebook to train and compare the models
The notebook walks through:
- Data loading and preprocessing (via Keras; sketched after this list)
- Hybrid QCNN implementation
- Classical CNN implementation
- Training procedures
- Evaluation metrics
- Performance visualization
- Comparative analysis
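As an example of the first step, loading MNIST through Keras and handing it to PyTorch might look like the following; the batch size and normalization are assumptions rather than the notebook's exact settings:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from tensorflow.keras.datasets import mnist

# Load MNIST via Keras, then convert to PyTorch tensors
(x_train, y_train), (x_test, y_test) = mnist.load_data()

def to_loader(x, y, batch_size=64, shuffle=False):
    x = torch.tensor(x / 255.0, dtype=torch.float32).unsqueeze(1)  # (N, 1, 28, 28)
    y = torch.tensor(y, dtype=torch.long)
    return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=shuffle)

train_loader = to_loader(x_train, y_train, shuffle=True)
test_loader = to_loader(x_test, y_test)
```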
The notebook demonstrates how quantum circuit layers can be integrated into traditional neural network architectures and analyzes the performance trade-offs compared to fully classical approaches.