Tensorboard summary writer 101

TensorBoard is a convenient way to record a neural network’s metrics, weights, and other parameters during training. Below are some examples of how to use it with PyTorch.

What is Tensorboard?

TensorBoard is a visualization toolkit that lets you track training progress, analyze model architectures, and even watch activations and gradients evolve in real time.

And the best part? It’s super easy to set up. Just a few lines of code, and you’re in the analytics business.

Setting up Tensorboard Summary Writer

First, initialize the SummaryWriter:

from torch.utils.tensorboard import SummaryWriter

# Initialize the TensorBoard summary writer.
writer = SummaryWriter()

By default, the writer logs to a runs/ subdirectory; launch the dashboard with tensorboard --logdir=runs to view the results in your browser.

Logging Scalars

Want to keep a tab on how loss and accuracy evolve over epochs? Just log ‘em as scalars.

# Assuming 'epoch' is your loop counter and 'loss' and 'accuracy' are floats.
writer.add_scalar('training/loss', loss, epoch)
writer.add_scalar('training/accuracy', accuracy, epoch)
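Putting it together, here is a minimal, self-contained sketch of a scalar-logging loop. The tiny linear model, the random data, and the 'runs/scalar_demo' log directory are placeholder choices for illustration, not anything prescribed above:

```python
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

# Placeholder model and data: a linear classifier on random inputs.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))

with SummaryWriter(log_dir='runs/scalar_demo') as writer:
    for epoch in range(5):
        optimizer.zero_grad()
        logits = model(x)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()

        # Log one scalar per metric per epoch.
        accuracy = (logits.argmax(dim=1) == y).float().mean().item()
        writer.add_scalar('training/loss', loss.item(), epoch)
        writer.add_scalar('training/accuracy', accuracy, epoch)
```

Each call appends a point to the corresponding curve in the SCALARS tab, keyed by the step value (here, the epoch).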

Visualizing Models, Parameters and Feature Maps

TensorBoard’s got some nifty tricks up its sleeve. You can visualize your network architecture, plot histograms of weights and biases, and even project higher-dimensional feature maps into 2D or 3D space using PCA or t-SNE.

# 'input_size' is the shape of a single input, e.g. (1, 28, 28) for MNIST;
# add_graph expects a batch, so prepend a batch dimension.
writer.add_graph(model, torch.rand(1, *input_size))

# Plot histograms of model parameters (weights and biases).
for name, param in model.named_parameters():
    writer.add_histogram(name, param.detach().cpu().numpy(), epoch)
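The PCA/t-SNE projections mentioned above come from TensorBoard’s embedding projector; the projection itself happens in the UI, you just log the vectors. A sketch with made-up random features and labels:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

# 100 hypothetical 64-dimensional feature vectors, each with a class label.
features = torch.randn(100, 64)
labels = [str(i % 10) for i in range(100)]

with SummaryWriter(log_dir='runs/embedding_demo') as writer:
    # The PROJECTOR tab in TensorBoard applies PCA or t-SNE to these vectors.
    writer.add_embedding(features, metadata=labels, tag='feature_maps')
```

In practice you would replace the random tensor with activations captured from an intermediate layer, and the labels with your dataset’s classes.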

These tools are especially cool when you’re tweaking your model’s guts. They let you see if everything’s shaping up as expected or if there’s some untamed chaos that needs your attention.

Tracking Experiments

When you train multiple variants of a model, TensorBoard’s tagging and grouping features make it easy to compare the runs side by side.

# Give each run a unique tag, like 'run_0', 'run_1', etc.
for run_id in range(num_runs):
    with SummaryWriter(comment=f'run_{run_id}') as writer:
        for epoch in range(n_epochs):
            # Train your model here, then log metrics with 'writer', e.g.:
            writer.add_scalar('training/loss', loss, epoch)
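If you also want hyperparameters in the comparison, add_hparams logs a hyperparameter dictionary alongside final metrics, so runs show up in TensorBoard’s HPARAMS table. The learning rates and the final_loss formula below are purely illustrative stand-ins:

```python
from torch.utils.tensorboard import SummaryWriter

for lr in (0.1, 0.01):
    with SummaryWriter(log_dir=f'runs/hparam_demo_lr_{lr}') as writer:
        final_loss = 1.0 / (1.0 + lr)  # stand-in for a real training result
        # Pair the hyperparameters of this run with its resulting metrics.
        writer.add_hparams({'lr': lr}, {'hparam/final_loss': final_loss})
```

The HPARAMS tab then lets you sort and filter runs by any logged hyperparameter to see which settings produced the best metrics.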

This way, you can go full detective mode and sift through the happenings of each experiment.

For the full API, check TensorBoard’s documentation.