Breakthroughs on demand
Train, fine-tune, and serve models on instances with 1 to 8 NVIDIA GPUs
Designed for builders
Launch NVIDIA HGX B200, H100, A100, or GH200 instances in minutes with self-serve, first-come, first-served access.
01/
Launch in minutes
Spin up an instance and get straight to training or inference. No lengthy setup, no driver installs, just NVIDIA GPUs on CDSNA Stack.
02/
Multi-GPU instances
Choose 8x, 4x, 2x, or 1x GPU instances to fit a wide range of AI workloads, from POC to production.
03/
Use UI, API, or CLI
Automate with the CDSNA Cloud API to create, stop, and restart instances from your CLI, CI/CD, or orchestration scripts.
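As a minimal sketch of scripting the instance lifecycle: the helper below builds authenticated JSON requests against a REST-style API. The base URL, endpoint path, payload fields, and instance-type name are all assumptions for illustration; check the CDSNA Cloud API reference for the real interface.

```python
# Hypothetical sketch of instance-lifecycle automation over a REST API.
# The base URL, "/instances" path, and payload field names are assumptions.
import json
import urllib.request

API_BASE = "https://cloud.example.com/api/v1"  # placeholder base URL


def build_request(path, api_key, payload=None):
    """Build an authenticated JSON request: POST when a payload is given, else GET."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )


# Example: a launch request for a 4x H100 instance (field names assumed).
launch = build_request(
    "/instances",
    api_key="YOUR_KEY",
    payload={"instance_type": "gpu_4x_h100_sxm", "region": "us-east-1"},
)
print(launch.get_method(), launch.full_url)
```

The same helper covers stop and restart calls by swapping the path and payload, which keeps CI/CD scripts down to one small function plus an API key.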
Pay by the minute
Transparent pricing with no egress fees.
| GPU | VRAM/GPU | vCPUs | RAM | Storage | Price/GPU/hr* |
|---|---|---|---|---|---|
| NVIDIA B200 SXM6 | 180 GB | 208 | 2,900 GiB | 22 TiB SSD | $4.74 |
| NVIDIA H100 SXM | 80 GB | 208 | 1,800 GiB | 22 TiB SSD | $2.44 |
| NVIDIA A100 SXM | 80 GB | 240 | 1,800 GiB | 19.5 TiB SSD | $1.06 |
| NVIDIA A100 SXM | 40 GB | 124 | 1,800 GiB | 5.8 TiB SSD | $0.68 |
| NVIDIA Tesla V100 | 16 GB | 88 | 448 GiB | 5.8 TiB SSD | $0.48 |
* plus applicable sales tax. Pricing effective Mar. 2, 2026. Log in to your account to see current pricing.
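Per-minute billing makes run costs easy to estimate from the table's hourly rates. A minimal sketch (rates copied from the table above; rounding partial minutes up is an assumption, not a statement of the provider's billing policy):

```python
# Estimate the cost of a per-minute-billed run from an hourly GPU rate.
# Rates are taken from the pricing table; rounding partial minutes up
# is an assumption made for this sketch.
import math

HOURLY_RATE_PER_GPU = {
    "b200": 4.74,
    "h100": 2.44,
    "a100_80gb": 1.06,
    "a100_40gb": 0.68,
    "v100": 0.48,
}


def estimate_cost(gpu, num_gpus, minutes):
    """Dollars for a run: (hourly rate / 60) per GPU-minute, partial minutes rounded up."""
    per_minute = HOURLY_RATE_PER_GPU[gpu] / 60
    return round(per_minute * num_gpus * math.ceil(minutes), 2)


# 8x H100 for a 90-minute fine-tuning run:
print(estimate_cost("h100", 8, 90))  # → 29.28
```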
NVIDIA H100 systems
Commitment
As low as (per GPU-hour)
* plus applicable sales tax
GPU instances purpose-built for AI
- Turnkey performance: Full GPU access, zero throttling, and an optimized ML stack with essential tools like PyTorch and CUDA pre-installed via CDSNA Stack.
- Real-time visibility: Monitor GPU, memory, and network performance directly from the dashboard or API to catch bottlenecks before they slow training or inference.
- Easy storage: Keep datasets, checkpoints, and outputs attached between sessions and scale up or down without re-uploading data or incurring egress fees.
Built-in observability
- Catch performance issues before they start: get instant insight into what’s happening inside your workloads with live, minute-by-minute updates.
Fresh NVIDIA HGX B200s
- Get more than 2× the VRAM and FLOPS of H100 GPUs for up to 3× faster training and up to 15× faster inference, starting at $4.74/GPU/hr.
Ready to get started?
Create your CDSNA Cloud account and launch NVIDIA GPU instances in minutes. Looking for long-term capacity? Talk to our team.