# Marketplace
C3 aggregates GPU capacity from multiple data centers. When you submit a job, we find available compute at competitive rates—no need to manage cloud accounts or hunt for capacity yourself.
## Available GPUs
| GPU | VRAM | Best for |
|---|---|---|
| A100 80GB | 80GB | Large models, multi-GPU training |
| A100 40GB | 40GB | Standard deep learning workloads |
| RTX 4090 | 24GB | Development, inference, smaller models |
More GPU types are added as we onboard new providers.
## Pricing

You're billed per second of actual compute time, not for time spent queuing or provisioning. Check your balance and current rates:

```bash
c3 account balance
c3 pricing
```
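To illustrate what per-second billing means in practice, here is a small sketch. The hourly rate used is hypothetical, not an actual C3 price; run `c3 pricing` for real rates.

```bash
# Hypothetical example: an A100 80GB at an assumed $2.50/hour.
# Only actual compute time is billed; queue and provisioning time are free.
rate_per_hour=2.50
runtime_seconds=847   # seconds your script actually ran

cost=$(awk -v r="$rate_per_hour" -v s="$runtime_seconds" \
  'BEGIN { printf "%.4f", r / 3600 * s }')
echo "Cost: \$$cost"   # prints: Cost: $0.5882
```

A 14-minute run at this assumed rate costs well under a dollar, which is the point of per-second granularity: short experiments are not rounded up to a full hour.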
## How jobs run
- QUEUED — Job submitted, waiting for a GPU
- PROVISIONING — Spinning up a VM (2-5 min cold start, ~30s from warm pool)
- PREPARING — Downloading your code and mounting datasets
- RUNNING — Your script is executing
- UPLOADING — Saving results to cloud storage
- COMPLETED — Done. Download results with `c3 pull`.
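The lifecycle above can be scripted as a simple wait-then-download loop. This is a sketch under an assumption: it presumes a `c3 status <job-id>` subcommand that prints the current state (QUEUED, PROVISIONING, ..., COMPLETED), which is not documented here; the actual command name may differ.

```bash
# Sketch: block until a job completes, then fetch its results.
# Assumes `c3 status <job-id>` prints the job's current state --
# check the CLI's help output for the real interface.
wait_and_pull() {
  job_id="$1"
  until [ "$(c3 status "$job_id")" = "COMPLETED" ]; do
    sleep 15   # still queued, provisioning, preparing, running, or uploading
  done
  c3 pull "$job_id"
}
```

A 15-second poll interval keeps the loop cheap relative to the 2-5 minute cold-start window described below.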
## Warm pool
Popular GPU types are kept warm, so repeat jobs start in ~30 seconds instead of minutes. The first job of the day may take longer while a fresh VM provisions.
## Providers
Jobs currently run on Hyperstack. We're onboarding additional providers to increase capacity and reduce prices.