Quickstart


In this guide, you'll install HELM, run a governed AI tool call, and export a cryptographic proof bundle — all in under 5 minutes.

What you'll have at the end: a working HELM proxy with a signed receipt chain you can verify offline.

Prerequisites

  • Docker + Docker Compose or Go 1.22+
  • jq (for JSON output)
  • An OpenAI API key (or any OpenAI-compatible provider)

1. Install HELM

Pick one:

# Script install (macOS / Linux) — ~10 seconds
curl -fsSL https://raw.githubusercontent.com/Mindburn-Labs/helm-oss/main/install.sh | bash

# Or: Go install
go install github.com/Mindburn-Labs/helm-oss/core/cmd/helm@latest

# Or: Docker
docker run --rm ghcr.io/mindburn-labs/helm-oss:latest --help

2. Onboard

helm onboard --yes

Creates a local SQLite database, generates Ed25519 signing keys, and writes the default policy. Zero external dependencies — everything stays on your machine.

3. Start the governed proxy

helm proxy --upstream https://api.openai.com/v1

helm proxy starts a standalone OpenAI-compatible proxy on localhost:9090 by default. Every tool call is now intercepted, policy-checked, and written to a signed receipt.

4. Point your app

Change one line — your existing code is now governed:

Python:

import openai

# The SDK reads OPENAI_API_KEY from the environment; only base_url changes.
client = openai.OpenAI(base_url="http://localhost:9090/v1")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "List files in /tmp"}]
)
print(response.choices[0].message.content)

TypeScript / fetch:

const response = await fetch("http://localhost:9090/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4",
    messages: [{ role: "user", content: "What time is it?" }],
  }),
});

Every response includes governance headers:

Header                 Value
X-Helm-Receipt-ID      rec_a1b2c3...
X-Helm-Output-Hash     sha256:...
X-Helm-Lamport-Clock   42
X-Helm-Decision-ID     dec_a1b2c3...
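These headers can also be captured programmatically. A minimal sketch (the helper name is hypothetical; it works over any header mapping, such as a `requests` response's `.headers`) that pulls the governance metadata out of a response:

```python
# Hypothetical helper: collect HELM governance headers from an HTTP response.
def governance_metadata(headers: dict) -> dict:
    wanted = (
        "X-Helm-Receipt-ID",
        "X-Helm-Output-Hash",
        "X-Helm-Lamport-Clock",
        "X-Helm-Decision-ID",
    )
    # HTTP header names are case-insensitive, so normalize keys first.
    lowered = {k.lower(): v for k, v in headers.items()}
    return {name: lowered.get(name.lower()) for name in wanted}

# Example with captured headers (values are illustrative):
meta = governance_metadata({
    "content-type": "application/json",
    "x-helm-receipt-id": "rec_a1b2c3",
    "x-helm-lamport-clock": "42",
})
print(meta["X-Helm-Receipt-ID"])  # the receipt ID to match against helm export
```

Storing the receipt ID alongside your own logs makes it easy to cross-reference a specific call against the exported evidence later.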

5. Export and verify

# Export a deterministic EvidencePack
helm export --evidence ./data/evidence --out evidence.tar

# Verify offline — zero network required
helm verify --bundle evidence.tar

Expected: verification: PASS


What just happened

Your app → HELM Proxy → Policy check → Tool executes → Receipt signed → ProofGraph → EvidencePack (verifiable offline)

  1. Every tool call passed through the Policy Enforcement Point (PEP)
  2. Each call was JCS-canonicalized (RFC 8785) and SHA-256 hashed
  3. A cryptographic receipt was generated — Ed25519 signed, Lamport-ordered
  4. Receipts form a ProofGraph — an append-only, hash-linked chain
  5. The EvidencePack is verifiable without a server — air-gapped safe
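The hash-linking idea behind steps 2–4 can be sketched in a few lines. This is an illustration, not HELM's actual verifier: sorted-key JSON stands in for full RFC 8785 JCS, signatures are omitted, and the field names (`prev_hash`, `lamport`) are assumptions. Each receipt commits to its predecessor's hash, so tampering with any receipt breaks every later link.

```python
import hashlib
import json

def canonical(receipt: dict) -> bytes:
    # Stand-in for RFC 8785 JCS: sorted keys, no insignificant whitespace.
    return json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()

def link_hash(receipt: dict) -> str:
    return "sha256:" + hashlib.sha256(canonical(receipt)).hexdigest()

def verify_chain(receipts: list) -> bool:
    """Check hash links and Lamport ordering; Ed25519 checks omitted for brevity."""
    prev = None
    last_clock = -1
    for r in receipts:
        if r["prev_hash"] != prev:      # each receipt commits to its predecessor
            return False
        if r["lamport"] <= last_clock:  # Lamport clocks must strictly increase
            return False
        prev = link_hash(r)
        last_clock = r["lamport"]
    return True

# Build a tiny two-receipt chain, then tamper with the first receipt.
r1 = {"prev_hash": None, "lamport": 1, "tool": "list_files"}
r2 = {"prev_hash": link_hash(r1), "lamport": 2, "tool": "read_file"}
assert verify_chain([r1, r2])

r1["tool"] = "delete_files"        # any edit changes r1's hash...
assert not verify_chain([r1, r2])  # ...so r2's back-link no longer matches
```

Because verification only needs the receipts themselves and the public key, the chain can be checked on an air-gapped machine, which is what `helm verify --bundle` relies on.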

Next steps

Goal                                        Guide
Understand how it works                     How HELM Works
Add HELM to Claude, Cursor, VS Code         MCP Integration
Write custom policy rules                   Policy Files
Run the full demo (15 receipts, 7 phases)   Run the Demo