To learn PyTorch, I built a neural network that classifies handwritten digits from the MNIST dataset.


Vaishnavidixit6/Handwritten-Digits-Classification


MNIST Classifier with PyTorch

This project trains a simple feed-forward neural network on the MNIST handwritten digits dataset using PyTorch.

1. Importing Libraries

```python
import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch import optim
import matplotlib
import helper
```

- `torch`, `nn`, `optim` → core PyTorch modules
- `torchvision.datasets` → provides the MNIST dataset
- `transforms` → preprocessing (convert to tensor and normalize)
- `helper` → visualization helpers (from the Udacity course)

2. Data Preprocessing

```python
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])
```

- Converts images to tensors
- Normalizes grayscale values from [0, 1] to [-1, 1]
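The normalization is plain arithmetic: `Normalize` subtracts the mean and divides by the standard deviation per channel, so with mean = std = 0.5 the endpoints 0 and 1 map to -1 and 1. A minimal sketch of that math (the `normalize` helper is illustrative, not part of the project):

```python
def normalize(x, mean=0.5, std=0.5):
    # the same arithmetic transforms.Normalize applies per channel: (x - mean) / std
    return (x - mean) / std

print(normalize(0.0), normalize(1.0))  # black -> -1.0, white -> 1.0
```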

3. Load Datasets

```python
trainset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=True, transform=transform)
testset = datasets.MNIST('~/.pytorch/MNIST_data/', download=True, train=False, transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```

- Downloads MNIST (60k training images, 10k test images)
- `DataLoader` serves shuffled batches of 64 images
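What `DataLoader` does each epoch can be sketched in plain Python: shuffle the indices, then yield slices of 64. With 60,000 training images that gives 938 batches, the last one holding the 32 leftover images (the `batches` generator below is a simplified illustration, not the real implementation):

```python
import random

def batches(data, batch_size=64, shuffle=True):
    # simplified sketch of what DataLoader does each epoch
    idx = list(range(len(data)))
    if shuffle:
        random.shuffle(idx)
    for start in range(0, len(idx), batch_size):
        yield [data[i] for i in idx[start:start + batch_size]]

chunks = list(batches(range(60000)))
print(len(chunks), len(chunks[-1]))  # 938 batches, last one has 32 items
```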

4. Build the Neural Network

```python
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
    nn.LogSoftmax(dim=1)
)
```

- Input: 784 features (28×28 pixels, flattened)
- Hidden layers: 128 → 64 units with ReLU activations
- Output layer: 10 classes (digits 0–9)
- `LogSoftmax` outputs log-probabilities
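Each `Linear(n_in, n_out)` layer holds `n_in × n_out` weights plus `n_out` biases, so this architecture has 109,386 trainable parameters. A quick back-of-the-envelope check:

```python
layers = [(784, 128), (128, 64), (64, 10)]
# weights (n_in * n_out) plus biases (n_out) for each Linear layer
n_params = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(n_params)  # 109386
```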

5. Loss Function & Optimizer

```python
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
```

- Loss: negative log-likelihood (`NLLLoss`), which pairs with the `LogSoftmax` output
- Optimizer: stochastic gradient descent with learning rate 0.003
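`LogSoftmax` plus `NLLLoss` together compute cross-entropy: the loss is just the negated log-probability of the true class. A pure-Python sketch of that pairing for a single example (both helpers are illustrative):

```python
import math

def log_softmax(logits):
    # numerically stable log-softmax: z_i - log(sum_j exp(z_j))
    m = max(logits)
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    return [z - lse for z in logits]

def nll_loss(log_probs, target):
    # negative log-likelihood of the true class
    return -log_probs[target]

log_probs = log_softmax([2.0, 1.0, 0.1])
print(nll_loss(log_probs, 0))  # small loss when class 0 has the highest logit
```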

6. Training Loop

```python
epochs = 8
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        images = images.view(images.shape[0], -1)  # flatten to [batch, 784]

        optimizer.zero_grad()             # reset gradients
        logits = model(images)            # forward pass
        loss = criterion(logits, labels)  # compute loss
        loss.backward()                   # backpropagation
        optimizer.step()                  # update weights

        running_loss += loss.item()
    print(f"Epoch {e + 1} - training loss: {running_loss / len(trainloader):.4f}")
```

- Trains the model for 8 epochs
- Flattens images from [64, 1, 28, 28] → [64, 784]
- Tracks and prints the average training loss per epoch
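The `optimizer.zero_grad()` call matters because PyTorch accumulates gradients across `backward()` calls; without it, each batch's gradients would pile on top of the previous batch's. A minimal demonstration on a single scalar weight:

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

# two backward passes WITHOUT zeroing: gradients add up (2 + 2 = 4)
(2 * w).sum().backward()
(2 * w).sum().backward()
print(w.grad)  # tensor([4.])

w.grad.zero_()  # what optimizer.zero_grad() does for every parameter
(2 * w).sum().backward()
print(w.grad)  # tensor([2.])
```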

7. Inference Example

```python
images, labels = next(iter(trainloader))
img = images[0].view(1, 784)

with torch.no_grad():
    logits = model(img)

ps = torch.exp(logits)  # convert log-probabilities to probabilities
helper.view_classify(img[0], ps)
print(ps)
```

- Takes a single image from a batch
- Gets the predicted probability distribution over the 10 classes
- Visualizes the prediction with the helper function

8. Testing the Model

```python
accuracy = 0
total = 0

for images, labels in testloader:
    for i in range(len(images)):
        img = images[i].view(1, 784)
        with torch.no_grad():
            output = model(img)
        ps = torch.exp(output)
        pred = torch.argmax(ps)  # predicted digit
        if pred.item() == labels[i].item():
            accuracy += 1
        total += 1

print(f"Accuracy of the model: {accuracy/total:.4f}")
```

- Evaluates on the test set (10k images)
- Predicts the class with the highest probability (argmax)
- Compares predictions to the true labels
- Prints overall accuracy
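The per-image loop works but is slow; the same accuracy can be computed one batch at a time with `argmax` along the class dimension (the `exp` is unnecessary there, since argmax of log-probabilities equals argmax of probabilities). A sketch of the batched version, wrapped in a hypothetical `batched_accuracy` helper and smoke-tested on random data with an untrained linear model:

```python
import torch

def batched_accuracy(model, loader):
    # vectorized alternative to the per-image test loop
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            logits = model(images.view(images.shape[0], -1))
            preds = torch.argmax(logits, dim=1)  # argmax over the 10 classes
            correct += (preds == labels).sum().item()
            total += labels.shape[0]
    return correct / total

# tiny smoke test: one batch of random "images" and an untrained model
model = torch.nn.Linear(784, 10)
loader = [(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))]
acc = batched_accuracy(model, loader)
print(acc)
```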
