
Quantum Convolutional Neural Networks vs Fully Classical CNNs

This repository contains a comprehensive performance comparison between Hybrid Quantum Convolutional Neural Networks (QCNN) and Fully Classical CNNs on the MNIST handwritten digit dataset.

Overview

This project implements and evaluates:

  • A hybrid quantum-classical neural network architecture using PyTorch and PennyLane
  • A comparable fully classical CNN architecture
  • Performance metrics and visualizations for both approaches

Models

Hybrid QCNN

The hybrid model combines classical convolutional layers with a quantum circuit layer (a minimal sketch follows the list below):

  • Classical convolutional feature extraction
  • Dimension reduction to quantum circuit input size
  • Quantum circuit with parameterized rotations and entangling operations
  • Classical post-processing and classification
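
A minimal sketch of this layout, using PyTorch and PennyLane's TorchLayer, is shown below. The qubit count, layer sizes, and circuit templates (angle embedding plus basic entangler layers) are illustrative assumptions, not the repository's exact configuration:

    import torch
    import torch.nn as nn
    import pennylane as qml

    n_qubits = 4  # assumed quantum circuit width
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev, interface="torch")
    def quantum_circuit(inputs, weights):
        # Encode the reduced classical features as rotation angles
        qml.AngleEmbedding(inputs, wires=range(n_qubits))
        # Parameterized rotations with entangling operations
        qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
        return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

    weight_shapes = {"weights": (2, n_qubits)}  # two entangling layers (assumed)

    class HybridQCNN(nn.Module):
        def __init__(self):
            super().__init__()
            # Classical convolutional feature extraction
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # Dimension reduction to the quantum circuit input size
            self.reduce = nn.Linear(16 * 7 * 7, n_qubits)
            # Quantum circuit wrapped as a PyTorch layer
            self.quantum = qml.qnn.TorchLayer(quantum_circuit, weight_shapes)
            # Classical post-processing and classification
            self.classify = nn.Linear(n_qubits, 10)

        def forward(self, x):
            x = self.features(x).flatten(1)
            x = torch.tanh(self.reduce(x))  # bound the encoding angles
            x = self.quantum(x)
            return self.classify(x)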

Fully Classical CNN

For a fair comparison, a classical CNN with a comparable architecture was implemented (a minimal sketch follows the list below):

  • Same convolutional layers as the hybrid model
  • Classical MLP layers in place of the quantum circuit
  • Matching input/output dimensions throughout
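
A matching sketch of the classical baseline, reusing the assumed convolutional stack from the hybrid sketch above and swapping the quantum layer for a small MLP with the same input/output width:

    import torch.nn as nn

    class ClassicalCNN(nn.Module):
        def __init__(self, n_features=4):  # matches the assumed qubit count
            super().__init__()
            # Same convolutional layers as the hybrid model
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.reduce = nn.Linear(16 * 7 * 7, n_features)
            # Classical MLP in place of the quantum circuit
            self.mlp = nn.Sequential(nn.Linear(n_features, n_features), nn.Tanh())
            self.classify = nn.Linear(n_features, 10)

        def forward(self, x):
            x = self.features(x).flatten(1)
            x = self.reduce(x)
            x = self.mlp(x)
            return self.classify(x)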

Results

Performance Summary

  • Hybrid QCNN Accuracy: 90.54%
  • Fully Classical CNN Accuracy: 81.97%
  • Performance Difference: 8.57 percentage points (Hybrid outperforms Classical)

Training History

[Figure: training history curves for both models]

Model Evaluation

[Figures: confusion matrix and example predictions]

Comparison

[Figures: hybrid vs. classical comparison and class-wise F1 score comparison]

Key Findings

The repository explores:

  • Training convergence of both models
  • Test accuracy comparison
  • Class-wise performance differences
  • Resource usage considerations

Technologies Used

  • PyTorch - Deep learning framework
  • PennyLane - Quantum machine learning library
  • TensorFlow/Keras - Data loading and preprocessing
  • Matplotlib/Seaborn - Visualization
  • Rich - Terminal output formatting

Getting Started

  1. Clone this repository
  2. Install dependencies:
    pip install torch pennylane tensorflow matplotlib seaborn rich
    
  3. Run the Jupyter notebook to train and compare the models (a data-loading sketch is shown below)
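
Since the dependency list includes TensorFlow/Keras for data loading, one plausible way to prepare MNIST for the PyTorch models looks like the sketch below; the normalization and batch size are assumptions, not the notebook's exact settings:

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    def to_loader(x, y, batch_size=64, shuffle=True):
        # Scale pixel values to [0, 1] and add a channel dimension: (N, 1, 28, 28)
        x = torch.tensor(x, dtype=torch.float32).unsqueeze(1) / 255.0
        y = torch.tensor(y, dtype=torch.long)
        return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=shuffle)

    train_loader = to_loader(x_train, y_train)
    test_loader = to_loader(x_test, y_test, shuffle=False)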

Notebook Contents

  • Data loading and preprocessing
  • Hybrid QCNN implementation
  • Classical CNN implementation
  • Training procedures
  • Evaluation metrics
  • Performance visualization
  • Comparative analysis

The notebook demonstrates how quantum circuit layers can be integrated into traditional neural network architectures and analyzes the performance trade-offs compared to fully classical approaches.
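
A generic training-and-evaluation loop of the kind used for such a comparison is sketched below; the optimizer, learning rate, and epoch count are illustrative assumptions rather than the notebook's exact settings:

    import torch
    import torch.nn as nn

    def train_and_evaluate(model, train_loader, test_loader, epochs=3, lr=1e-3):
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            model.train()
            for images, labels in train_loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
        # Report test accuracy
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in test_loader:
                preds = model(images).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        return correct / total

    # Example: compare the two models on the same data
    # qcnn_accuracy = train_and_evaluate(HybridQCNN(), train_loader, test_loader)
    # cnn_accuracy = train_and_evaluate(ClassicalCNN(), train_loader, test_loader)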
