Everything the installer does, explained. This page walks through the full setup process — what each step does, what to expect, and how to verify it worked.
Requirements
The installer runs on any modern machine. These are the minimum and recommended specs for running a workspace comfortably:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 16+ GB |
| Disk | 20 GB free | 50+ GB free |
Supported platforms: macOS (Intel & Apple Silicon) · Linux (x86_64 & arm64) · Windows (x86_64 & arm64)
Outbound access needed: The installer downloads container images and connects to the tracebloc platform. Make sure your network allows traffic to *.docker.io, *.tracebloc.io, github.com, and pypi.org.
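You can spot-check reachability before installing. A minimal sketch, assuming `curl` is available; the concrete hostnames (e.g. `registry-1.docker.io`, `docs.tracebloc.io`) are representative stand-ins for the wildcard domains above:

```shell
#!/bin/sh
# Probe one host over HTTPS; prints ok/FAIL and always returns 0 so the
# loop continues past unreachable hosts.
check_host() {
  if curl -fsSL --max-time 10 -o /dev/null "https://$1" 2>/dev/null; then
    echo "ok   $1"
  else
    echo "FAIL $1"
  fi
}

# Representative hosts for each required domain (assumed stand-ins).
for host in registry-1.docker.io docs.tracebloc.io github.com pypi.org; do
  check_host "$host"
done
```

Any FAIL line points at a domain your firewall or proxy is blocking.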
1. Create an Account
Sign up at ai.tracebloc.io. Free to get started — no credit card required.
2. Register a Client
A client is your workspace’s identity on the platform. It ties a specific machine to your account and controls what data and use cases are accessible from it.
Open the client page and click "+".
| Field | What to enter |
|---|---|
| Name | A name for your workspace, e.g. my-team |
| Location | Where this machine is deployed |
| Password | A secure password (not your account password) |
The client starts as Pending while the backend provisions resources. Note the Client ID and password — you need both in the next step.
Provisioning usually takes a few seconds. Refresh the page if the status doesn’t update.
Client Status Reference
Your client moves through these states as it goes from registration to running:
| Status | Meaning |
|---|---|
| Pending | Registration received, being provisioned |
| Online | Deployed and connected to the platform |
| Offline | Disconnected or not running |
3. Deploy
One command sets up your entire workspace on any machine — macOS, Linux, or Windows. The installer is idempotent: it detects what’s already installed and skips it, so it’s safe to re-run at any time.
macOS / Linux:

```shell
bash <(curl -fsSL https://tracebloc.io/i.sh)
```

Windows (open PowerShell as Administrator):

```powershell
irm https://tracebloc.io/i.ps1 | iex
```
Nothing on your machine is modified outside of:
- ~/.tracebloc/ — data and config
- Docker — container runtime
What the Installer Does
The installer runs four clearly labelled steps:
Step 1/4 — Check system requirements
Verifies Docker is installed and running, detects GPU hardware (falls back to CPU mode if none), and installs missing system tools (e.g. conntrack).
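You can approximate these checks by hand before running the installer. The probes below are assumptions about reasonable equivalents, not the installer's exact logic:

```shell
#!/bin/sh
# True if a command exists on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

if have docker && docker info >/dev/null 2>&1; then
  echo "Docker: running"
else
  echo "Docker: missing or not running"
fi

if have nvidia-smi; then
  echo "GPU: NVIDIA detected"
else
  echo "GPU: none found, CPU mode"
fi

if have conntrack; then
  echo "conntrack: present"
else
  echo "conntrack: missing (the installer will add it)"
fi
```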
Step 2/4 — Set up secure compute environment
Provisions a lightweight local Kubernetes cluster inside Docker. First run takes 1–2 minutes to download components.
Step 3/4 — Install tracebloc client
Prompts for a workspace name (e.g. berlin-team, vision-lab, ml-mardan). This identifies the client on your machine and becomes the Kubernetes namespace.
Step 4/4 — Connect to tracebloc network
Prompts for your Client ID and password from step 2 (Register a Client). This links your secure local environment to the tracebloc platform so vendors can submit models for evaluation.
When it finishes you’ll see a summary like:
```
tracebloc client installed successfully
Workspace : <workspace>
Mode      : CPU   # or GPU
Logs      : ~/.tracebloc/
Data      : /tracebloc/<workspace>
```
Install logs are kept in ~/.tracebloc/ if you need to debug anything.
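To find the most recent log after a run, a small sketch (the directory's internal layout is not documented here, so this just lists whatever is newest):

```shell
#!/bin/sh
# Show the five newest entries in a directory, newest first.
newest_entries() { ls -lt "$1" | head -n 5; }

if [ -d "$HOME/.tracebloc" ]; then
  newest_entries "$HOME/.tracebloc"
else
  echo "no ~/.tracebloc yet; run the installer first"
fi
```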
GPU Support
The installer auto-detects GPU hardware and configures the cluster accordingly:
- Linux (NVIDIA/AMD) — drivers, container toolkit, and Kubernetes device plugin are installed automatically. A reboot may be required after driver installation.
- macOS — CPU-only. For GPU workloads, deploy on a Linux machine or use AWS (EKS).
- Windows — pre-install NVIDIA drivers before running the installer. The installer detects them via nvidia-smi.
See Configuration > GPU for detailed platform-specific behavior.
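On Linux, you can confirm after installation that the cluster actually advertises GPU resources. A sketch assuming kubectl access; `nvidia.com/gpu` is the resource name registered by the NVIDIA device plugin:

```shell
#!/bin/sh
# Report the largest nvidia.com/gpu count found in `kubectl describe nodes`
# output (the Capacity and Allocatable sections repeat the same number).
gpu_capacity() {
  grep -o 'nvidia\.com/gpu:[[:space:]]*[0-9][0-9]*' \
    | awk -F: '{ if ($2 + 0 > m) m = $2 + 0 } END { print m + 0 }'
}

CAP=$(kubectl describe nodes 2>/dev/null | gpu_capacity)
echo "GPUs advertised by the cluster: $CAP"
```

A count of 0 on a GPU machine usually means the device plugin is not running yet (or a driver reboot is still pending).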
4. Verify
After the installer finishes, confirm that your workspace is running:
```shell
kubectl get pods -n <workspace>
```
You should see two pods in Running state:
| Pod | Role |
|---|---|
| mysql-... | Local metadata store — tracks jobs, metrics, and configuration |
| tracebloc-jobs-manager-... | The client — executes training jobs and communicates with the platform |
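The same check can be scripted. A sketch using a hypothetical workspace name, my-team; `count_running` parses the standard `kubectl get pods` columns:

```shell
#!/bin/sh
# Count pods whose STATUS column reads Running (skips the header row).
count_running() { awk 'NR > 1 && $3 == "Running" { n++ } END { print n + 0 }'; }

NS=my-team   # hypothetical workspace name; substitute your own
RUNNING=$(kubectl get pods -n "$NS" 2>/dev/null | count_running)
echo "$RUNNING pod(s) Running in $NS (expect 2)"
```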
Then open ai.tracebloc.io and check that your client status shows Online. This confirms the client has established a secure connection to the tracebloc backend.
Stuck on Offline? Check the Troubleshooting page — most issues resolve with a pod restart or credential check.
What’s Next
Your workspace is running. Here’s where to go from here:
- Data owners: Create a use case — select datasets, define evaluation metrics, and invite contributors to submit models.
- Data scientists: Join a use case — connect to a workspace, train models on real data, and submit results.
- Advanced configuration: See the Configuration page to customize installer options, manage the cluster, configure GPU settings, or deploy manually with Helm.