Configuration
The installer uses sensible defaults. This page covers everything you can change, from cluster naming and port mapping to GPU configuration, manual Helm deployment, and day-to-day cluster management.

Installer Options
Override defaults by setting environment variables before the install command. Useful when you need a custom cluster name, multiple worker nodes, or non-standard ports.

| Variable | Default | Description |
|---|---|---|
| `CLUSTER_NAME` | `tracebloc` | Name of the k3d cluster |
| `SERVERS` | `1` | Number of control-plane nodes |
| `AGENTS` | `1` | Number of worker nodes |
| `K8S_VERSION` | `v1.29.4-k3s1` | k3s image tag |
| `HTTP_PORT` | `80` | Host port mapped to cluster HTTP ingress |
| `HTTPS_PORT` | `443` | Host port mapped to cluster HTTPS ingress |
| `HOST_DATA_DIR` | `~/.tracebloc` | Persistent data directory on host |
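For example, to run a two-worker cluster on non-standard ports. The installer script name below is an assumption; substitute the actual install command from the quick-start:

```shell
# Hypothetical invocation: replace install.sh with your actual install command
CLUSTER_NAME=tracebloc-lab \
AGENTS=2 \
HTTP_PORT=8080 \
HTTPS_PORT=8443 \
bash install.sh
```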
Cluster Management
The installer creates a k3d cluster that runs inside Docker. You can stop it to free resources, start it again later, or delete it entirely. Your data persists in `HOST_DATA_DIR` between stop/start cycles.
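Assuming the default cluster name `tracebloc`, the lifecycle maps to standard k3d commands:

```shell
# Stop the cluster to free CPU/RAM (data in HOST_DATA_DIR is kept)
k3d cluster stop tracebloc

# Start it again later
k3d cluster start tracebloc

# Delete the cluster entirely; the host data directory itself stays on disk
k3d cluster delete tracebloc
```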
View logs
The jobs manager is the main tracebloc process. Check its logs when debugging connectivity or job execution issues.

Useful commands
Common kubectl commands for inspecting cluster state. Full installer output is saved to `~/.tracebloc/install-*.log`.
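A few hedged examples; the namespace and deployment names (`tracebloc`, `jobs-manager`) are assumptions, so check `kubectl get deployments -A` for the actual names:

```shell
# List all pods across namespaces
kubectl get pods -A

# Follow the jobs manager logs (deployment and namespace names are assumptions)
kubectl logs -f deployment/jobs-manager -n tracebloc

# Recent cluster events, most recent last
kubectl get events -A --sort-by=.metadata.creationTimestamp

# Node capacity and health
kubectl describe nodes
```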
GPU Support
The installer auto-detects GPU hardware and configures the cluster accordingly. No manual setup required on Linux: the installer handles drivers, container toolkit, and Kubernetes device plugin.

NVIDIA (Linux)
Fully automatic. The installer:
- Detects NVIDIA GPUs via `nvidia-smi` or `lspci`
- Installs drivers if missing (Ubuntu, RHEL/CentOS, Arch)
- Installs the NVIDIA Container Toolkit and configures Docker
- Deploys the NVIDIA k8s device plugin into the cluster
- Passes `--gpus=all` to k3d
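Once the device plugin is running, the GPU shows up as the `nvidia.com/gpu` resource on the node. To confirm it registered:

```shell
# Should print a non-zero GPU count once the device plugin is running
kubectl get nodes -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'

# Host-side check that the driver sees the GPU
nvidia-smi
```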
AMD (Linux)
Auto-detected. ROCm is installed automatically on Ubuntu and RHEL/CentOS. A logout/login may be needed for full GPU access.

macOS
CPU only. Docker Desktop on macOS does not support GPU passthrough. For GPU workloads, deploy on a Linux machine with NVIDIA GPUs or use AWS (EKS).

Windows
The installer does not install GPU drivers on Windows. Pre-install NVIDIA drivers before running the installer. The installer detects them via `nvidia-smi` and configures the cluster to use them.
Manual Deployment
Skip the installer entirely. Use this if you already have a Kubernetes cluster, need custom resource limits, or want full control over the Helm deployment.

Add the Helm repository
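A sketch of this step. The repository name and URL below are assumptions; substitute the values from your tracebloc account or onboarding docs:

```shell
# Repository name and URL are placeholders, not the confirmed tracebloc endpoint
helm repo add tracebloc https://helm.tracebloc.io
helm repo update
```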
Get default values
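One way to export the defaults, assuming the chart is named `tracebloc/client` (an assumption; list charts with `helm search repo tracebloc` to confirm):

```shell
# Write the chart's default configuration to a local file for editing
helm show values tracebloc/client > values.yaml
```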
Export the chart's default configuration to customize it.

Configure values.yaml
Authentication
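A hedged sketch of what the authentication block might look like; the key names and structure are assumptions, so match them against the exported defaults:

```yaml
# Key names are assumptions; compare with the chart's default values.yaml
auth:
  clientId: "your-client-id"
  clientSecret: "your-client-secret"
```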
Connect the client to your tracebloc account.

Resource Limits
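An illustrative sketch of per-job limits; key names are assumptions and the values should be sized to your hardware:

```yaml
# Illustrative only; key names are assumptions
resources:
  limits:
    cpu: "4"
    memory: 16Gi
    nvidia.com/gpu: 1
```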
Control how much CPU, memory, and GPU each training job can consume. Size these according to your workloads and available hardware.

Storage
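A sketch of the storage sizing; key names are assumptions and the sizes are examples, not recommendations:

```yaml
# Sizes are examples; key names are assumptions
storage:
  database: 10Gi
  logs: 5Gi
  data: 100Gi
```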
Persistent volumes for the database, logs, and training data. Adjust sizes based on your dataset.

Proxy (optional)
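A sketch of a proxy block, assuming conventional key names (an assumption; check the exported defaults):

```yaml
# Key names are assumptions; host and port are placeholders
proxy:
  httpProxy: "http://proxy.example.com:3128"
  httpsProxy: "http://proxy.example.com:3128"
  noProxy: "localhost,127.0.0.1,.svc,.cluster.local"
```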
Only needed if your machine accesses the internet through a corporate proxy.

Deploy
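A hedged install command; the release name, chart name, and namespace below are assumptions:

```shell
# Release, chart, and namespace names are assumptions
helm install tracebloc-client tracebloc/client \
  --namespace tracebloc \
  --create-namespace \
  -f values.yaml
```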
Install the chart into a new namespace.

Update
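Assuming the same release and chart names as in the install step (both assumptions), an update looks like:

```shell
# Refresh the local chart index, then upgrade in place with your values
helm repo update
helm upgrade tracebloc-client tracebloc/client \
  --namespace tracebloc \
  -f values.yaml
```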
Pull the latest chart version and apply your configuration.

Uninstall
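With the same assumed release name and namespace:

```shell
# Remove the Helm release
helm uninstall tracebloc-client --namespace tracebloc

# Optionally remove the namespace, including any persistent volume claims
kubectl delete namespace tracebloc
```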
Remove the client and all associated resources.

Security
Tracebloc is designed so your data never has to leave your network. Here's how:
- Data stays local. Training data never leaves your infrastructure. Only metadata and metrics are shared with the platform.
- Encrypted. All communication between client and platform is TLS-encrypted.
- Isolated. Training runs in containers with restricted system access. Kubernetes namespaces separate workloads from each other.
- Scanned. Submitted models are analyzed for vulnerabilities before execution on your infrastructure.
- Minimal footprint. The installer only modifies `~/.tracebloc/` and Docker. No system-wide changes.