The W&B Command Line Interface (CLI) provides powerful tools for managing experiments, hyperparameter sweeps, artifacts, and more directly from your terminal. The CLI is essential for automating workflows, running sweeps, and managing W&B resources programmatically.
Installation and Setup
Install W&B
Install the W&B Python package, which provides both the Python SDK and the CLI tools:
```bash
pip install wandb
```
This single installation gives you:
- The `wandb` command-line tool for terminal use
- The `wandb` Python library for `import wandb` in scripts
- The `wb` command as a shorthand alias for `wandb`
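To verify the installation, print the CLI version (the exact output varies by release):

```bash
wandb --version
```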
Authenticate
Log in to your W&B account:

```bash
wandb login
```
You’ll be prompted to paste your API key from https://wandb.ai/authorize.
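For non-interactive environments such as CI, you can skip the prompt by passing the key directly or exporting it as the `WANDB_API_KEY` environment variable (a minimal sketch; substitute your own key):

```bash
# Read the key from the environment instead of the interactive prompt
export WANDB_API_KEY=<your-api-key>
wandb login

# Or pass the key as an argument; --relogin forces re-authentication
wandb login --relogin <your-api-key>
```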
Basic Usage
```bash
wandb [OPTIONS] COMMAND [ARGS]...
```
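Every command supports `--help`, which is the quickest way to see its flags and arguments, for example:

```bash
# Show usage and options for the sweep command
wandb sweep --help
```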
Options
| Option | Description |
|---|---|
| `--version` | Show the version and exit |
Commands
| Command | Description |
|---|---|
| agent | Run the W&B agent |
| artifact | Commands for interacting with artifacts |
| beta | Beta versions of wandb CLI commands |
| controller | Run the W&B local sweep controller |
| disabled | Disable W&B |
| docker | Run your code in a Docker container |
| docker-run | Wrap `docker run`, adding the WANDB_API_KEY and WANDB_DOCKER environment variables |
| enabled | Enable W&B |
| init | Configure a directory with Weights & Biases |
| job | Commands for managing and viewing W&B jobs |
| local | Start a local W&B container (deprecated; see `wandb server --help`) |
| login | Log in to Weights & Biases |
| off | Disable W&B sync (see `offline`) |
| offline | Disable W&B sync |
| on | Enable W&B sync (see `online`) |
| online | Enable W&B sync |
| projects | List projects |
| pull | Pull files from Weights & Biases |
| restore | Restore code, config, and Docker state for a run |
| server | Commands for operating a local W&B server |
| status | Show configuration settings |
| sweep | Initialize a hyperparameter sweep |
| sync | Upload an offline training directory to W&B |
| verify | Check and verify a local instance of W&B |
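Several of these commands combine naturally. For example, `offline`, `online`, and `sync` support an air-gapped workflow: train without network access, then upload later. A sketch, assuming the default `./wandb` directory (the generated run folder name will differ):

```bash
# Disable syncing; runs are written to local offline-run-* directories
wandb offline

# Train as usual; metrics are stored locally under ./wandb/
python train.py

# Re-enable syncing and upload the offline run(s)
wandb online
wandb sync wandb/offline-run-*
```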
W&B CLI Example: Running a Hyperparameter Sweep
One of the most popular uses of the W&B CLI is managing hyperparameter sweeps. Here’s a complete example:
1. Create a training script (`train.py`):
```python
import wandb
import random
import time

# Initialize W&B run
wandb.init()

# Access sweep parameters injected by the agent (defined in sweep.yaml)
config = wandb.config
print(f"Training with: lr={config.learning_rate}, batch_size={config.batch_size}, epochs={config.epochs}")

# Training loop
for epoch in range(config.epochs):
    # Simulate training metrics
    train_loss = random.uniform(0.1, 2.0) * (0.95 ** epoch)
    val_loss = train_loss + random.uniform(-0.05, 0.15)
    accuracy = min(0.99, 0.5 + (epoch * 0.05) + random.uniform(-0.02, 0.02))

    # Log metrics to W&B
    wandb.log({
        "epoch": epoch,
        "train_loss": train_loss,
        "validation_loss": val_loss,  # Sweep optimizes this
        "accuracy": accuracy
    })

    # Small delay to simulate training time
    time.sleep(0.5)
    print(f"Epoch {epoch}: train_loss={train_loss:.3f}, val_loss={val_loss:.3f}, acc={accuracy:.3f}")

# Log final results
wandb.log({
    "final_validation_loss": val_loss,
    "final_accuracy": accuracy
})
```
2. Create a sweep configuration file (`sweep.yaml`):

The sweep configuration defines which script to run and what parameters to try. The W&B agent will automatically inject these parameter values into your script via `wandb.config`:
```yaml
program: train.py
method: bayes
metric:
  name: validation_loss
  goal: minimize
parameters:
  learning_rate:
    distribution: log_uniform_values
    min: 0.0001
    max: 0.1
  batch_size:
    values: [16, 32, 64]
  epochs:
    value: 10
```
3. Initialize the sweep:
```bash
wandb sweep sweep.yaml
# Output: Created sweep with ID: abc123
# Run sweep agent with: wandb agent entity/project/abc123
```
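By default the sweep is created in the project associated with the current directory. The `--project` and `--entity` flags let you target it explicitly (the names below are placeholders):

```bash
# Create the sweep under a specific entity and project
wandb sweep --entity my-team --project my-project sweep.yaml
```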
4. Start sweep agents to run experiments:
```bash
# Run a single agent
wandb agent entity/project/abc123

# Limit an agent to a fixed number of runs
wandb agent --count 10 entity/project/abc123
```

To parallelize the search, start additional agents in separate terminals (or on separate machines) against the same sweep ID; each agent pulls its own configurations from the sweep server, while `--count` caps how many runs a single agent executes.
The CLI will automatically manage the hyperparameter search, running your training script with different configurations to find the optimal parameters.
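Once a sweep is underway, the CLI can also control it. A hedged sketch; flag availability may vary with your wandb version:

```bash
# Pause the sweep so agents stop receiving new runs
wandb sweep --pause entity/project/abc123

# Resume a paused sweep
wandb sweep --resume entity/project/abc123

# Stop the sweep, letting in-flight runs finish
wandb sweep --stop entity/project/abc123

# Cancel the sweep, killing any running runs
wandb sweep --cancel entity/project/abc123
```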