Compute
Krutrim Cloud provides a range of compute options designed to meet the demands of modern workloads—from general-purpose virtual machines to high-performance GPU clusters for large-scale AI training. Whether you are building web applications, running simulations, or deploying complex machine learning models, our compute infrastructure is engineered for performance, scalability, and transparency.
Available Compute Services
Krutrim Cloud offers four categories of compute services:
- CPU Virtual Machines: General-purpose VMs for development environments, web applications, and backend services.
- GPU Virtual Machines: Ideal for model training, batch inference, and high-performance computing workflows.
- GPU Baremetals: Dedicated, non-virtualized GPU machines for full control over drivers, kernels, and performance tuning.
- AI Pods: Kubernetes-native GPU compute for orchestrated AI workloads, including training, fine-tuning, and inference.
1. CPU Virtual Machines
Krutrim’s CPU VMs are powered by AMD EPYC 9554 processors and come with scalable vCPU-RAM configurations, suitable for a wide range of workloads.
Each SKU lists four hourly rates; the rates step down from Tier 1 to Tier 4, consistent with deeper commitment discounts.

| SKU | vCPU / RAM | Unit | Tier 1 | Tier 2 | Tier 3 | Tier 4 |
|---|---|---|---|---|---|---|
| CPU-1x-4GB | 1 vCPU / 4 GB | Hour | ₹3.00 | ₹2.85 | ₹2.70 | ₹2.10 |
| CPU-2x-8GB | 2 vCPU / 8 GB | Hour | ₹6.00 | ₹5.70 | ₹5.40 | ₹4.20 |
| CPU-4x-16GB | 4 vCPU / 16 GB | Hour | ₹13.00 | ₹12.35 | ₹11.70 | ₹9.10 |
| CPU-8x-32GB | 8 vCPU / 32 GB | Hour | ₹25.00 | ₹23.75 | ₹22.50 | ₹17.50 |
| CPU-16x-64GB | 16 vCPU / 64 GB | Hour | ₹49.00 | ₹46.55 | ₹44.10 | ₹34.30 |
| CPU-32x-128GB | 32 vCPU / 128 GB | Hour | ₹97.00 | ₹92.15 | ₹87.30 | ₹67.90 |
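To translate an hourly rate into a monthly budget, a common convention is a 730-hour month (365 days × 24 h ÷ 12). This is an assumption for estimation only; the table above prices strictly per hour. A minimal sketch:

```python
# Estimate the monthly cost of a CPU VM from its hourly rate.
# Assumptions (not stated in the pricing table): a 730-hour billing
# month and continuous 24x7 usage.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate_inr: float, hours: int = HOURS_PER_MONTH) -> float:
    """Return the estimated cost in INR for running one VM for `hours` hours."""
    return round(hourly_rate_inr * hours, 2)

# CPU-4x-16GB at its Tier 1 rate of Rs 13.00/hour:
print(monthly_cost(13.00))  # 9490.0 INR for a full month
```

The same helper works for any SKU in this document; for partial-month usage, pass the actual number of hours.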
2. GPU Virtual Machines
GPU VMs offer virtualized access to NVIDIA A100 and H100 GPUs. These instances are suited for intensive compute use cases such as model training, inference serving, and simulation workloads.
| SKU | GPUs | RAM (GB) | GPU Memory (GB) | vCPUs | Unit | Tier 1 | Tier 2 | Tier 3 | Tier 4 |
|---|---|---|---|---|---|---|---|---|---|
| A100-80GB-NVLINK-1x | A100 80GB ×1 | 96 | 80 | 24 | Hour | ₹189 | ₹148 | ₹132 | ₹98 |
| H100-NVLINK-1x | H100 ×1 | 200 | 80 | 24 | Hour | ₹213 | ₹198 | ₹186 | ₹173 |
| H100-NVLINK-2x | H100 ×2 | 400 | 160 | 48 | Hour | ₹426 | ₹396 | ₹372 | ₹346 |
| H100-NVLINK-4x | H100 ×4 | 800 | 320 | 96 | Hour | ₹852 | ₹792 | ₹744 | ₹692 |
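The multi-GPU H100 SKUs price linearly per GPU: RAM, GPU memory, vCPUs, and hourly rate all scale with the GPU count, so there is no per-hour volume discount for larger VMs (the discount comes from the pricing tiers instead). A quick check against the rates in the table above:

```python
# Sanity-check that multi-GPU H100 VM pricing scales linearly per GPU.
# Rates are copied from the table: (Tier 1, Tier 4) in INR/hour per SKU size.
h100_rates = {
    1: (213, 173),
    2: (426, 346),
    4: (852, 692),
}

for gpus, (tier1, tier4) in h100_rates.items():
    # Per-GPU rate is Rs 213 (Tier 1) and Rs 173 (Tier 4) at every size.
    print(gpus, tier1 / gpus, tier4 / gpus)
```

This means the choice between one 4×H100 VM and four 1×H100 VMs can be made purely on workload topology (for example, NVLink bandwidth for multi-GPU training), not on price.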
3. GPU Baremetals
Baremetal GPU instances provide direct access to powerful NVIDIA GPUs without the abstraction of virtualization. These are ideal for users who require advanced GPU configurations, low-level control, and high throughput.
4. AI Pods
AI Pods offer a Kubernetes-native GPU compute environment, allowing you to run distributed AI workloads such as model training, evaluation, inference, and MLOps workflows. These are managed clusters provisioned with pre-configured GPU compute SKUs.
| SKU | GPU Configuration | RAM (GB) | GPU Memory (GB) | vCPUs | Unit | Price |
|---|---|---|---|---|---|---|
| A100-NVLINK-Tiny | A100 (Tiny) | 30 | 5 | 16 | Hour | ₹24.00 |
| A100-NVLINK-Nano | A100 (Nano) | 60 | 10 | 16 | Hour | ₹49.00 |
| A100-NVLINK-Mini | A100 (Mini) | 60 | 20 | 16 | Hour | ₹73.00 |
| A100-NVLINK-Standard-1x | A100 ×1 | 60 | 40 | 16 | Hour | ₹170.00 |
| A100-NVLINK-Standard-2x | A100 ×2 | 125 | 80 | 16 | Hour | ₹340.00 |
| A100-NVLINK-Standard-4x | A100 ×4 | 250 | 160 | 128 | Hour | ₹510.00 |
| A100-NVLINK-Standard-8x | A100 ×8 | 1000 | 320 | 128 | Hour | ₹1,360.00 |
| H100-NVLINK-Tiny | H100 (Tiny) | 60 | 10 | 16 | Hour | ₹30.00 |
| H100-NVLINK-Nano | H100 (Nano) | 60 | 20 | 16 | Hour | ₹61.00 |
| H100-NVLINK-Mini | H100 (Mini) | 60 | 40 | 16 | Hour | ₹91.00 |
| H100-NVLINK-Standard-1x | H100 ×1 | 125 | 80 | 16 | Hour | ₹213.00 |
| H100-NVLINK-Standard-2x | H100 ×2 | 250 | 160 | 52 | Hour | ₹425.00 |
| H100-NVLINK-Standard-4x | H100 ×4 | 1004 | 320 | 104 | Hour | ₹850.00 |
| H100-NVLINK-Standard-8x | H100 ×8 | 2008 | 640 | 208 | Hour | ₹1,700.00 |
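With fractional (Tiny/Nano/Mini) and full-GPU SKUs in the same catalog, a practical sizing step is to pick the cheapest SKU that still meets a workload's GPU-memory requirement. A minimal sketch with SKU data copied from the table above; the selection helper itself is illustrative, not a Krutrim API:

```python
# Pick the cheapest AI Pod SKU that offers at least the requested GPU memory.
# Mapping: SKU name -> (GPU memory in GB, hourly price in INR), from the table.
AI_POD_SKUS = {
    "A100-NVLINK-Tiny": (5, 24.00),
    "A100-NVLINK-Nano": (10, 49.00),
    "A100-NVLINK-Mini": (20, 73.00),
    "A100-NVLINK-Standard-1x": (40, 170.00),
    "H100-NVLINK-Standard-1x": (80, 213.00),
    "A100-NVLINK-Standard-2x": (80, 340.00),
}

def cheapest_pod(min_gpu_mem_gb: int) -> str:
    """Return the lowest-priced SKU whose GPU memory meets the requirement."""
    candidates = [(price, name) for name, (mem, price) in AI_POD_SKUS.items()
                  if mem >= min_gpu_mem_gb]
    if not candidates:
        raise ValueError("no SKU satisfies the requirement")
    return min(candidates)[1]

print(cheapest_pod(16))  # A100-NVLINK-Mini (20 GB at Rs 73.00/hour)
```

Note that at 80 GB of GPU memory the single H100 SKU undercuts the 2×A100 SKU, so a memory-only selection favors the H100; workloads that need two physical GPUs would filter on GPU count as well.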
Storage
Block storage attached to compute instances is billed per GB, with both an hourly and a monthly rate.

| SKU | Unit | Price (₹/GB/hour) | Price (₹/GB/month) |
|---|---|---|---|
| Ephemeral-SSD | Hour | ₹0.006 | ₹4.38 |
| Persistent-SSD | Hour | ₹0.006 | ₹4.38 |