# Submission Scripts

C3 uses the same `#SBATCH` syntax as Slurm, plus a few `#C3` directives for cloud-specific features. If you've written Slurm scripts before, you already know 90% of what you need.
## Basic structure

```bash
#!/bin/bash
#SBATCH --job-name=my-job
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#C3 OUTPUT ./results

python train.py
```
## Supported SBATCH directives

| Directive | Description |
|---|---|
| `--job-name=NAME` | Job name (required) |
| `--gres=gpu:N` | Number of GPUs (default: 1) |
| `--time=HH:MM:SS` | Maximum runtime; you're only charged for actual usage |
## C3 directives

These extend the Slurm syntax for cloud workflows:

| Directive | Description |
|---|---|
| `#C3 OUTPUT ./path` | Upload this directory as artifacts when the job completes |
| `#C3 DATASET /remote ./local` | Mount a pre-uploaded dataset at the given path |
| `#C3 PYTHON use --uv-lock ./uv.lock` | Install Python dependencies from a lockfile |
## Example: Training with a dataset

```bash
#!/bin/bash
#SBATCH --job-name=train-model
#SBATCH --gres=gpu:4
#SBATCH --time=04:00:00
#C3 DATASET /datasets/imagenet ./data
#C3 OUTPUT ./checkpoints

python train.py --data ./data --output ./checkpoints
```
The dataset already lives in the cloud (uploaded via `c3 data cp`), so it mounts instantly with no download wait.
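If you haven't staged the dataset yet, upload it ahead of time with `c3 data cp`. A minimal sketch, assuming a local-source-then-remote-destination argument order (check `c3 data cp --help` for the exact syntax):

```bash
# Stage the dataset in cloud storage before submitting the job.
# The argument order (local source, then remote destination) is assumed here.
c3 data cp ./imagenet /datasets/imagenet
```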
## Example: Reproducible Python environment

```bash
#!/bin/bash
#SBATCH --job-name=experiment
#SBATCH --gres=gpu:1
#SBATCH --time=00:30:00
#C3 PYTHON use --uv-lock ./uv.lock
#C3 OUTPUT ./results

python experiment.py
```
The `uv.lock` file pins exact dependency versions. C3 uses `uv` for fast, reproducible installs.
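If you don't have a lockfile yet, you can generate one locally with standard `uv` commands (these are uv's own commands, not C3-specific; the package names are just examples):

```bash
uv init               # create a pyproject.toml in the current directory
uv add numpy torch    # record your dependencies (example packages)
uv lock               # resolve and write uv.lock with pinned versions
```

Commit the resulting `uv.lock` alongside your submission script so every run installs the same package versions.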