Compute

Krutrim Cloud provides a range of compute options designed to meet the demands of modern workloads—from general-purpose virtual machines to high-performance GPU clusters for large-scale AI training. Whether you are building web applications, running simulations, or deploying complex machine learning models, our compute infrastructure is engineered for performance, scalability, and transparency.


Available Compute Services

Krutrim Cloud offers four categories of compute services:

| Service | Description |
| --- | --- |
| CPU Virtual Machines | General-purpose VMs for development environments, web applications, and backend services |
| GPU Virtual Machines | Ideal for model training, batch inference, and high-performance computing workflows |
| GPU Baremetals | Dedicated, non-virtualized GPU machines for full control over drivers, kernels, and performance tuning |
| AI Pods | Kubernetes-native GPU compute for orchestrated AI workloads, including training, fine-tuning, and inference |


1. CPU Virtual Machines

Krutrim’s CPU VMs are powered by AMD EPYC 9554 processors and are available in scalable vCPU/RAM configurations suitable for a wide range of workloads.

| Instance Flavour | vCPUs | RAM | Price (₹/hour) |
| --- | --- | --- | --- |
| CPU-1x-4GB | 1 | 4 GB | ₹3 |
| CPU-2x-8GB | 2 | 8 GB | ₹6 |
| CPU-4x-16GB | 4 | 16 GB | ₹13 |
| CPU-8x-32GB | 8 | 32 GB | ₹25 |
| CPU-16x-64GB | 16 | 64 GB | ₹49 |
| CPU-32x-128GB | 32 | 128 GB | ₹97 |
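Since pricing is per hour, a quick way to budget is to multiply the hourly rate by expected uptime. The sketch below uses the prices from the table above; the 730 hours/month figure is an approximation (365 × 24 / 12), not a Krutrim billing rule.

```python
# Hourly prices (₹) for CPU VM flavours, copied from the table above.
CPU_PRICES_PER_HOUR = {
    "CPU-1x-4GB": 3,
    "CPU-2x-8GB": 6,
    "CPU-4x-16GB": 13,
    "CPU-8x-32GB": 25,
    "CPU-16x-64GB": 49,
    "CPU-32x-128GB": 97,
}

def monthly_cost(flavour: str, hours_per_month: int = 730) -> int:
    """Estimate the monthly cost in ₹ for a flavour running continuously.

    730 hours/month is an approximation of average month length.
    """
    return CPU_PRICES_PER_HOUR[flavour] * hours_per_month

print(monthly_cost("CPU-4x-16GB"))  # 13 * 730 = 9490
```

For example, a CPU-4x-16GB instance running around the clock works out to roughly ₹9,490 per month.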


2. GPU Virtual Machines

GPU VMs offer virtualized access to NVIDIA A100 and H100 GPUs. These instances are suited for intensive compute use cases such as model training, inference serving, and simulation workloads.

| SKU | GPUs | vCPUs | RAM | GPU Memory | Price (₹/hour) |
| --- | --- | --- | --- | --- | --- |
| A100-40GB-NVLINK-1x | 1 | 24 | 96 GB | 40 GB | ₹170 |
| A100-80GB-NVLINK-1x | 1 | 24 | 96 GB | 80 GB | ₹189 |
| A100-40GB-NVLINK-2x | 2 | 48 | 192 GB | 80 GB | ₹340 |
| A100-40GB-NVLINK-4x | 4 | 96 | 384 GB | 160 GB | ₹680 |
| A100-40GB-NVLINK-8x | 8 | 192 | 768 GB | 320 GB | ₹840 |
| H100-NVLINK-1x | 1 | 24 | 200 GB | 80 GB | ₹213 |
| H100-NVLINK-2x | 2 | 48 | 400 GB | 160 GB | ₹425 |
| H100-NVLINK-4x | 4 | 96 | 800 GB | 320 GB | ₹850 |
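When comparing multi-GPU SKUs, the effective per-GPU rate is often more informative than the headline price; for instance, the A100-40GB-NVLINK-8x works out cheaper per GPU than the 1x SKU. The following sketch computes this from a subset of the table above:

```python
# (GPU count, ₹/hour) for selected GPU VM SKUs, copied from the table above.
GPU_VM_SKUS = {
    "A100-40GB-NVLINK-1x": (1, 170),
    "A100-40GB-NVLINK-8x": (8, 840),
    "H100-NVLINK-1x": (1, 213),
    "H100-NVLINK-4x": (4, 850),
}

def price_per_gpu(sku: str) -> float:
    """Effective hourly cost (₹) per GPU for a SKU."""
    gpus, price = GPU_VM_SKUS[sku]
    return price / gpus

for sku in GPU_VM_SKUS:
    print(f"{sku}: ₹{price_per_gpu(sku):.2f} per GPU per hour")
```

By this measure the 8x A100 SKU comes to ₹105 per GPU per hour versus ₹170 for the single-GPU SKU.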


3. GPU Baremetals

Baremetal GPU instances provide direct access to powerful NVIDIA GPUs without the abstraction of virtualization. These are ideal for users who require advanced GPU configurations, low-level control, and high throughput.


4. AI Pods

AI Pods offer a Kubernetes-native GPU compute environment, allowing you to run distributed AI workloads such as model training, evaluation, inference, and MLOps workflows. These are managed clusters provisioned with pre-configured GPU compute SKUs.
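Because AI Pods are Kubernetes-native, workloads are described with standard Kubernetes manifests. The fragment below is an illustrative sketch only, assuming the cluster exposes GPUs through the standard NVIDIA device plugin resource (`nvidia.com/gpu`); the pod name, container image, and command are placeholders, not Krutrim-specific values.

```yaml
# Illustrative sketch: a minimal Pod requesting one GPU.
# Assumes the cluster uses the standard NVIDIA device plugin resource name.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job      # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example image, not a Krutrim default
      command: ["python", "train.py"]            # placeholder entrypoint
      resources:
        limits:
          nvidia.com/gpu: 1   # request a single GPU
```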

| SKU | GPUs | vCPUs | RAM | GPU Memory | Price (₹/hour) |
| --- | --- | --- | --- | --- | --- |
| A100-NVLINK-Tiny | 1 | 16 | 30 GB | 5 GB | ₹24 |
| H100-NVLINK-Tiny | 1 | 16 | 30 GB | 10 GB | ₹30 |
| A100-NVLINK-Nano | 1 | 16 | 60 GB | 10 GB | ₹48 |
| H100-NVLINK-Nano | 1 | 16 | 60 GB | 20 GB | ₹61 |
| A100-NVLINK-Mini | 1 | 16 | 60 GB | 20 GB | ₹73 |
| H100-NVLINK-Mini | 1 | 16 | 60 GB | 40 GB | ₹91 |
| A100-NVLINK-Standard-1x | 1 | 24 | 60 GB | 40 GB | ₹170 |
| H100-NVLINK-Standard-1x | 1 | 24 | 128 GB | 80 GB | ₹213 |
| A100-NVLINK-Standard-2x | 2 | 48 | 128 GB | 80 GB | ₹340 |
| H100-NVLINK-Standard-2x | 2 | 52 | 250 GB | 160 GB | ₹425 |
| A100-NVLINK-Standard-4x | 4 | 96 | 250 GB | 160 GB | ₹610 |
| H100-NVLINK-Standard-4x | 4 | 104 | 500 GB | 320 GB | ₹850 |
| A100-NVLINK-Standard-8x | 8 | 192 | 1000 GB | 320 GB | ₹1300 |
| H100-NVLINK-Standard-8x | 8 | 208 | 2000 GB | 640 GB | ₹1700 |
