🏫 Far Eastern Federal University, 2025
This project focuses on designing and training a Convolutional Neural Network (CNN) to classify images from the CIFAR-10 dataset. The main emphasis is on exploring CNN architecture and the impact of batch size and number of epochs on model accuracy.
Local launch via JupyterLab is also supported.
CIFAR-10 contains:
- 60 000 color images, 32×32 pixels
- 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck
- Training set: 50 000 images
- Test set: 10 000 images
CIFAR-10 is widely used in research and competitions as a benchmark for testing computer vision models.
Convolutional Neural Networks (CNNs) are a type of architecture designed specifically for image processing:
- 🎯 Detect local patterns (edges, shapes)
- 🧱 Reduce parameter count compared to MLPs
- 🔁 Reuse filters across the image
- 🔄 Robust to translation and scaling
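The parameter savings are easy to quantify. For example, the first convolutional layer of the model below (64 filters of size 5×5 over an RGB image) uses only a few thousand parameters, while a fully connected layer producing a feature map of comparable size would need hundreds of millions:

```python
# Conv layer: 64 filters, each 5x5 over 3 input channels, plus one bias per filter
conv_params = 5 * 5 * 3 * 64 + 64
print(conv_params)  # 4864

# Hypothetical dense layer from the flattened 32*32*3 input
# to a 32*32*64 output of the same spatial size
fc_params = (32 * 32 * 3) * (32 * 32 * 64) + (32 * 32 * 64)
print(fc_params)  # 201392128
```

This is the core reason CNNs scale to images where MLPs do not: filters are small and shared across all spatial positions.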
| Block | Components |
|---|---|
| Input | 32×32×3 (RGB image) |
| Conv 1 | 64 filters (5×5) → BatchNorm → ReLU → MaxPool (2×2) |
| Conv 2 | 128 filters (3×3) → BatchNorm → ReLU |
| Conv 3 | 256 filters (3×3) → BatchNorm → ReLU → MaxPool (2×2) |
| Flatten | Transition to fully connected layers |
| FC 1 | 1024 neurons → ReLU → Dropout(0.5) |
| FC 2 | 1024 neurons → ReLU → Dropout(0.5) |
| FC 3 | 512 neurons → ReLU |
| Output | 10 neurons → Softmax (for classification) |
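The table above can be sketched in PyTorch roughly as follows. The padding values are assumptions ("same" padding, so that the spatial sizes work out to 8×8 before the flatten); the actual notebook may differ:

```python
import torch
import torch.nn as nn

class CifarCNN(nn.Module):
    """Sketch of the architecture table; padding choices are assumptions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2),    # Conv 1
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1),  # Conv 2
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, padding=1), # Conv 3
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                  # 256*8*8 = 16384
            nn.Linear(256 * 8 * 8, 1024), nn.ReLU(), nn.Dropout(0.5),  # FC 1
            nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),         # FC 2
            nn.Linear(1024, 512), nn.ReLU(),                           # FC 3
            nn.Linear(512, num_classes),   # logits; softmax is applied by the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CifarCNN()
out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```

Note that the final layer emits raw logits: `nn.CrossEntropyLoss` applies the softmax internally during training.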
- Clone the repository:

```bash
git clone https://github.com/Bit-Maximum/CNN-for-CIFAR.git
cd CNN-for-CIFAR
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run the project:

```bash
jupyter lab run.ipynb
```

- 📊 Test accuracy: 78.6%
- 📈 Training graphs (available in Colab/report):
- Smooth decrease in loss
- Steady increase in accuracy
- 📌 Conclusion: The model shows confident learning and can be further improved by increasing the number of epochs.
🔍 Some confusion observed between similar classes (e.g., cat vs. dog).
Example predictions from the trained model:

Covered topics include:
- 📦 CIFAR-10 as a benchmark image classification dataset
- 🧠 CNN layers and pooling operations
- 🧮 BatchNorm, Dropout, ReLU/Softmax activations
- 📈 Effect of training parameters like batch size and number of epochs
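A minimal training-loop sketch showing where the two studied hyperparameters enter: batch size is fixed when the `DataLoader` is built, while the epoch count bounds the outer loop. The optimizer and learning rate here are assumptions; the notebook's actual loop may differ:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Train `model` on batches from `loader` for `epochs` passes."""
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # expects raw logits
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running_loss / len(loader):.4f}")
```

Larger batch sizes give smoother gradient estimates per step but fewer updates per epoch; more epochs usually help until the validation accuracy plateaus.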




