Recycling Classification System
CNN-based computer vision system for recyclable material classification
Project Overview
Developed a convolutional neural network (CNN) to automatically classify recyclable materials into six categories: cardboard, glass, metal, paper, plastic, and trash. The system leverages the TrashNet dataset and custom preprocessing techniques to achieve high accuracy.
Technical Approach
Architecture
Implemented a custom CNN architecture optimized for image classification tasks. The network consists of multiple convolutional layers with ReLU activation, batch normalization, max pooling, and dropout regularization to prevent overfitting.
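A minimal sketch of such an architecture is shown below. The class name, channel widths, and dropout rate are illustrative placeholders, not the project's actual configuration; only the building blocks (Conv + BatchNorm + ReLU + MaxPool, dropout before the classifier) follow the description above.

```python
import torch
import torch.nn as nn

class RecycleCNN(nn.Module):
    """Illustrative CNN: conv blocks with BatchNorm/ReLU/MaxPool,
    then dropout regularization before the 6-way classifier head."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),         # -> (N, 128, 1, 1)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                 # regularization against overfitting
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RecycleCNN()
logits = model(torch.randn(2, 3, 224, 224))  # logits.shape == (2, 6)
```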
Dataset & Preprocessing
- Dataset: TrashNet dataset with 2,527 images across 6 categories
- Augmentation: Random rotation, horizontal flip, color jitter, and random crop
- Normalization: ImageNet mean and standard deviation values
- Split: 80% training, 10% validation, 10% testing
Training Process
- Optimizer: Adam with learning rate scheduling
- Loss Function: Cross-entropy loss
- Batch Size: 32
- Early stopping with patience to prevent overfitting
- Model checkpointing to save best performing weights
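A condensed training loop combining these pieces might look like this. The epoch count, patience, and scheduler settings are illustrative defaults, not the project's tuned values:

```python
import copy
import torch
import torch.nn as nn

def fit(model, train_loader, val_loader, epochs=50, patience=5, lr=1e-3):
    """Adam + LR scheduling, cross-entropy loss, early stopping,
    and best-weight checkpointing (hyperparameters are illustrative)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min", factor=0.5)
    loss_fn = nn.CrossEntropyLoss()
    best_loss, best_state, stale = float("inf"), None, 0

    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        # Validation pass drives the scheduler, checkpointing, and early stopping
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for x, y in val_loader:
                val_loss += loss_fn(model(x), y).item() * len(y)
                n += len(y)
        val_loss /= n
        sched.step(val_loss)

        if val_loss < best_loss:                      # checkpoint best weights
            best_loss, stale = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            stale += 1
            if stale >= patience:                     # early stopping
                break

    model.load_state_dict(best_state)
    return model
```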
Technologies Used
Python
PyTorch
OpenCV
NumPy/Pandas
scikit-learn
Results & Performance
Key Features
- Multi-class Classification: Distinguishes all six material types (cardboard, glass, metal, paper, plastic, trash)
- Data Augmentation: Robust to various lighting conditions and orientations
- Transfer Learning Ready: Architecture supports fine-tuning with pre-trained models
- Modular Design: Easy to extend to additional material categories
- Performance Monitoring: Comprehensive logging and visualization of training metrics
Challenges & Solutions
Class Imbalance
The dataset had uneven distribution across categories. Implemented weighted sampling and class-balanced loss functions to ensure fair representation during training.
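Both remedies can be sketched as below. The per-class counts are the commonly cited TrashNet figures (summing to 2,527); the `labels` tensor is a synthetic stand-in for the dataset's actual labels:

```python
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler

# Commonly cited TrashNet counts: cardboard, glass, metal, paper, plastic, trash
counts = torch.tensor([403., 501., 410., 594., 482., 137.])

# Class-balanced loss: rarer classes (e.g. trash) receive larger weights
class_weights = counts.sum() / (len(counts) * counts)
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

# Weighted sampling: draw each image with probability inverse to its class frequency
labels = torch.randint(0, 6, (100,))       # synthetic stand-in for dataset labels
sample_weights = 1.0 / counts[labels]
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
# Pass sampler=sampler to the training DataLoader instead of shuffle=True
```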
Overfitting Prevention
Added dropout layers, batch normalization, and extensive data augmentation. Implemented early stopping based on validation loss to prevent overfitting to the training set.
Real-World Variability
Materials appear differently under various conditions. Applied aggressive augmentation techniques including color jitter and random transformations to improve model robustness.