Documentation

Learn Cnalylabs

Everything you need to deploy, run, and scale your AI models with Cnalylabs.

Quick Start

1. Install the SDK

pip install cnalylabs

2. Set your API Key

export CNALYLABS_API_KEY="your-api-key"
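If you are scripting against the SDK, you can fail fast when the key is missing. This is a minimal sketch assuming the SDK reads `CNALYLABS_API_KEY` from the environment, as the export above suggests; `require_api_key` is a hypothetical helper, not part of the SDK:

```python
import os

def require_api_key():
    """Read CNALYLABS_API_KEY from the environment, failing fast if unset."""
    key = os.environ.get("CNALYLABS_API_KEY")
    if not key:
        raise RuntimeError("Set CNALYLABS_API_KEY before using the SDK")
    return key
```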

3. Deploy your first model

from cnalylabs import deploy, run

# Deploy a model
model = deploy("./my-model")
print(f"Model deployed: {model.id}")

# Run inference
result = run(model.id, {"prompt": "Hello, world!"})
print(result)

API Reference

POST /v1/deploy

Deploy a model to the Cnalylabs platform

Parameters:
  • model_path - Path to your model files
  • name - Optional display name for the model
  • gpu_type - GPU type preference (optional)
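If you prefer calling the REST endpoint directly rather than using the SDK, the parameters above map to a JSON body. This sketch only assembles the request; the base URL `https://api.cnalylabs.com`, Bearer authentication, and the `build_deploy_request` helper are assumptions for illustration, not confirmed API details:

```python
import json

# Hypothetical base URL; confirm against your account settings.
API_BASE = "https://api.cnalylabs.com"

def build_deploy_request(api_key, model_path, name=None, gpu_type=None):
    """Assemble a POST /v1/deploy request: URL, headers, and JSON body."""
    body = {"model_path": model_path}
    if name is not None:
        body["name"] = name            # optional display name
    if gpu_type is not None:
        body["gpu_type"] = gpu_type    # optional GPU preference
    return {
        "url": f"{API_BASE}/v1/deploy",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(body),
    }

req = build_deploy_request("your-api-key", "./my-model", gpu_type="A100")
```

The resulting dict can be passed to any HTTP client (e.g. `requests.post(req["url"], headers=req["headers"], data=req["body"])`).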

POST /v1/run/{model_id}

Run inference on a deployed model

Parameters:
  • model_id - ID of the deployed model
  • input - Input data for inference
  • timeout - Request timeout in seconds (optional)
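The inference endpoint can be sketched the same way. As above, the base URL, Bearer authentication, and the `build_run_request` helper are illustrative assumptions; note that `model_id` goes in the path while `input` and `timeout` go in the body:

```python
import json

API_BASE = "https://api.cnalylabs.com"  # hypothetical base URL

def build_run_request(api_key, model_id, input_data, timeout=None):
    """Assemble a POST /v1/run/{model_id} request: URL, headers, JSON body."""
    body = {"input": input_data}
    if timeout is not None:
        body["timeout"] = timeout      # request timeout in seconds
    return {
        "url": f"{API_BASE}/v1/run/{model_id}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(body),
    }

req = build_run_request("your-api-key", "model_123", {"prompt": "Hello, world!"})
```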

Full API documentation coming soon. Request early access to our complete developer docs.
Frequently Asked Questions

What GPU types do you support?

We support a wide range of NVIDIA GPUs including A100, H100, RTX 4090, and more. Our platform automatically selects the optimal GPU based on your model requirements.

How does pricing work?

You pay only for actual GPU compute time used. When your model is idle, you pay nothing. Check our pricing page for detailed rates per GPU type.

Can I deploy custom models?

Yes! You can deploy any PyTorch, TensorFlow, or ONNX model. We also support popular frameworks like Hugging Face Transformers out of the box.

What about data privacy?

Your data is encrypted in transit and at rest. We never train on customer data and offer options for private deployments and dedicated infrastructure.