Learn Cnalylabs
Everything you need to deploy, run, and scale your AI models with Cnalylabs.
Quick Start
Get up and running with Cnalylabs in 5 minutes
API Reference
Complete API documentation with examples
Model Deployment
Learn how to deploy and manage models
Authentication
Secure your API calls with proper authentication
Guides
In-depth tutorials for common use cases
FAQ
Answers to frequently asked questions
Quick Start
1. Install the SDK
pip install cnalylabs

2. Set your API Key
export CNALYLABS_API_KEY="your-api-key"

3. Deploy your first model
from cnalylabs import deploy, run
# Deploy a model
model = deploy("./my-model")
print(f"Model deployed: {model.id}")
# Run inference
result = run(model.id, {"prompt": "Hello, world!"})
print(result)

API Reference
/v1/deploy
Deploy a model to the Cnalylabs platform
model_path - Path to your model files
name - Optional display name for the model
gpu_type - GPU type preference (optional)
/v1/run/{model_id}
Run inference on a deployed model
model_id - ID of the deployed model
input - Input data for inference
timeout - Request timeout in seconds (optional)
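The two endpoints above read as a plain HTTPS/JSON interface. A minimal sketch for calling them directly, assuming a bearer-token header and the base URL https://api.cnalylabs.com (both are assumptions, not confirmed by these docs):

```python
import os

# Assumed base URL -- verify against the full API docs before use.
BASE_URL = "https://api.cnalylabs.com"


def auth_headers():
    # Assumes bearer-token auth using the key set in Quick Start step 2.
    return {"Authorization": f"Bearer {os.environ['CNALYLABS_API_KEY']}"}


def build_deploy_request(model_path, name=None, gpu_type=None):
    """URL and JSON body for POST /v1/deploy, using the documented fields."""
    body = {"model_path": model_path}
    if name is not None:
        body["name"] = name
    if gpu_type is not None:
        body["gpu_type"] = gpu_type
    return f"{BASE_URL}/v1/deploy", body


def build_run_request(model_id, input_data, timeout=None):
    """URL and JSON body for POST /v1/run/{model_id}."""
    body = {"input": input_data}
    if timeout is not None:
        body["timeout"] = timeout
    return f"{BASE_URL}/v1/run/{model_id}", body
```

Sending is then one call with any HTTP client, e.g. `requests.post(url, headers=auth_headers(), json=body)`.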
Full API documentation coming soon. Request early access to our complete developer docs.
Frequently Asked Questions
What GPU types do you support?
We support a wide range of NVIDIA GPUs including A100, H100, RTX 4090, and more. Our platform automatically selects the optimal GPU based on your model requirements.
How does pricing work?
You pay only for actual GPU compute time used. When your model is idle, you pay nothing. Check our pricing page for detailed rates per GPU type.
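To make the pay-per-use model concrete, a back-of-envelope cost helper; the hourly rate below is a hypothetical placeholder, not a real Cnalylabs rate (see the pricing page for actual per-GPU rates):

```python
def estimate_cost(active_gpu_seconds, rate_per_hour):
    """Bill only active compute time; idle time costs nothing.

    rate_per_hour is a hypothetical placeholder rate, not a published one.
    """
    return (active_gpu_seconds / 3600) * rate_per_hour


# e.g. 15 minutes of active inference at a hypothetical $2.00/hour GPU:
# estimate_cost(900, 2.00) -> 0.50
```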
Can I deploy custom models?
Yes! You can deploy any PyTorch, TensorFlow, or ONNX model. We also support popular frameworks like Hugging Face Transformers out of the box.
What about data privacy?
Your data is encrypted in transit and at rest. We never train on customer data and offer options for private deployments and dedicated infrastructure.