Autonomous software is already here: placing trades, deploying code, and negotiating contracts. Mindburn Labs exists because we believe those systems should be trusted through computation, not reputation. Every action should produce a deterministic receipt. Every receipt should be verifiable offline. Every policy should be enforced before execution.
Models propose, execution is governed, proof is exported. That is the architectural invariant we enforce across everything we build, from the HELM execution kernel to the conformance profiles that define what "trustworthy" means in machine-readable terms.
Operating Principles
Proof First
Every action produces a cryptographic receipt. No receipt, no execution.
Fail-Closed by Default
Every action must be explicitly authorized. Silence is denial. No implicit trust.
Offline Verifiable
Every receipt can be verified without network access. Proofs are self-contained.
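The three principles compose into one loop: authorize before executing, emit a receipt for what ran, and let anyone recompute the proof later. A minimal sketch of that loop, with an invented allowlist and receipt shape (this is illustrative only, not HELM's actual format or policy engine):

```python
import hashlib
import json

# Hypothetical allowlist: anything not named here is denied. Silence is denial.
ALLOWED_TOOLS = {"search.web", "mail.draft"}

def canonical(action: dict) -> bytes:
    # Deterministic byte representation: sorted keys, fixed separators.
    return json.dumps(action, sort_keys=True, separators=(",", ":")).encode()

def execute(action: dict) -> dict:
    if action.get("tool") not in ALLOWED_TOOLS:  # fail-closed: default deny
        raise PermissionError(f"denied: {action.get('tool')}")
    # Proof first: the receipt commits to exactly what was executed.
    return {
        "action": action,
        "action_hash": hashlib.sha256(canonical(action)).hexdigest(),
    }

def verify(receipt: dict) -> bool:
    # Offline verification: recompute the hash from the receipt alone,
    # with no network access and no trusted third party.
    expected = hashlib.sha256(canonical(receipt["action"])).hexdigest()
    return receipt["action_hash"] == expected

receipt = execute({"tool": "search.web", "query": "HELM conformance"})
assert verify(receipt)
```

Tampering with the action inside a receipt breaks verification, and a tool outside the allowlist never runs at all.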
Leadership
Ivan Peychev
Founder & CEO. Previously built distributed systems at scale. Obsessed with making AI actions provably trustworthy.
Kirill Melnikov
Co-founder & CTO. Systems engineer specializing in cryptographic protocols and deterministic execution.
Antigravity
AI systems architect. Responsible for autonomous engineering workflows and code quality infrastructure.
Why this becomes the default
Mindburn Labs builds proof-first execution infrastructure for autonomous software. We believe the trust deficit in AI-driven systems is the defining infrastructure challenge of the decade.
Investment Thesis
Every AI agent will need governance
As AI agents move from demos to production, deterministic execution boundaries become mandatory, not optional.
Standards win, not features
HTTPS won because it was a standard, not a product. HELM's conformance levels create the same dynamic for agent governance.
OSS adoption compounds into enterprise revenue
Every HELM OSS adoption creates a future HELM Enterprise customer. Conformance-as-distribution is the growth mechanism.
Proof-first is defensible
Competitors sell dashboards. We ship cryptographic receipts that work offline. The proof loop cannot be replicated by policy-first approaches.
Trust at Machine Speed
Every AI action produces a cryptographic receipt. Receipts form a ProofGraph. The graph is the audit trail.
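One way to picture receipts forming a graph: each receipt commits to the hashes of its parent receipts, so the whole session becomes a tamper-evident DAG. The receipt shape and function names below are invented for illustration; HELM's actual ProofGraph format is not shown here:

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    # Hash over a canonical serialization so the commitment is deterministic.
    return hashlib.sha256(
        json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

def make_receipt(action: str, parents: list) -> dict:
    # A receipt names its causal parents by hash: edges of the DAG.
    return {"action": action, "parents": sorted(receipt_hash(p) for p in parents)}

def verify_graph(receipts: list) -> bool:
    # The audit trail is closed when every parent hash resolves to a
    # receipt we actually hold. A tampered ancestor breaks its children's edges.
    known = {receipt_hash(r) for r in receipts}
    return all(p in known for r in receipts for p in r["parents"])

root = make_receipt("session.start", [])
a = make_receipt("tool.call:search", [root])
b = make_receipt("tool.call:summarize", [root, a])
assert verify_graph([root, a, b])
```

Because children embed their parents' hashes, rewriting any earlier receipt invalidates every receipt downstream of it.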
Let's Talk
If you're interested in the execution infrastructure layer for autonomous software, we'd love to hear from you.
[email protected]
Work on verifiable autonomy
We're building the execution infrastructure for autonomous software. Every action governed. Every receipt cryptographic. Every proof replayable.
Why Mindburn Labs
Remote First
Work from anywhere. Async-first communication. We optimize for deep focus.
Hard Problems
Formal verification, cryptographic proofs, deterministic execution: problems with permanent solutions.
Research Time
20% dedicated research time. Publish papers, build prototypes, explore ideas.
Real Impact
Your code runs in production. Every tool call governed, every receipt verifiable.
Early Equity
Meaningful ownership in infrastructure that will power the next generation of autonomous systems.
No BS Culture
Small team, flat structure, high trust. Ship code that matters.
Open Roles
We hire for capability and curiosity. If you see a role that fits, reach out with what you've built.
Founding Engineer – Execution Kernel
Design and implement the deterministic execution kernel: WASI sandboxing, receipt generation, conformance verification. You'll shape the core that every HELM deployment runs on.
We look for engineers who think in systems, write proofs like code, and ship like operators.
- Strong systems programming (Go, Rust, or C++)
- Experience with sandboxing, WASM, or deterministic execution
- Comfort with cryptographic primitives (signing, hashing, merkle trees)
- Track record of shipping production infrastructure
Technical Writer / DevRel
Create documentation, quickstarts, and tutorials that make HELM adoptable in 5 minutes. Build the developer community around deterministic execution.
- Strong technical writing with developer audience focus
- Ability to read and understand Go/TypeScript codebases
- Experience building developer documentation or API references
- Bonus: experience with security/compliance tooling
Applied Researcher – Formal Methods
Formalize the guarantees we claim in our Deterministic Execution Standard. Work on verification of sandbox escape properties, receipt chain integrity, and conformance proofs.
- PhD or equivalent experience in formal methods, PL theory, or verification
- Familiarity with model checking, theorem proving, or static analysis
- Ability to bridge formal work with practical engineering
- Interest in trust, governance, and autonomous systems
Problems We're Solving
These are the hard problems at the frontier of deterministic autonomy infrastructure. If you have ideas, we want to hear them.
Kernel Engineering
Build the deterministic execution engine: proposal pipelines, fail-closed enforcement, gas metering.
Cryptography & Proofs
Design and implement hash-linked receipt chains, ProofGraph DAGs, and EvidencePack formats.
Conformance & Verification
Build L1/L2/L3 test vectors, conformance runners, and formal verification tooling.
Applied AI Systems
Multi-vendor agent orchestration, trust federation, and competitive intelligence pipelines.
Infrastructure & DevOps
Multi-cluster deployment, CI/CD pipelines, observability, and fleet operations tooling.
Ready to build the future?
We're always looking for exceptional engineers and researchers. Send us what you've built; code speaks louder than resumes.
Vision 2030
A world where every autonomous action produces a cryptographic receipt. Where trust is computed, not assumed. Where proof is the product.
The machine that cannot prove it acted correctly has no right to act at all.
The Sunday That Broke the Spell
That year had a mood. You could feel it in every demo.
A model would answer a question brilliantly and then, two prompts later, confidently invent a bank account. A procurement bot would negotiate like a genius and then accept a deepfake invoice because it "looked right." A medical assistant would summarize a patient record flawlessly and then casually email it somewhere it should never go.
And every time it happened, the fix looked the same: longer prompts, stricter prompts, more prompts; a second model asked to judge the first, and a third asked to judge the second.
The dream is not that AI becomes trustworthy. The dream is that trust becomes computable.
That Sunday, the spell broke. Not with a manifesto, but with two very boring ideas: models propose while the kernel disposes, and trust is a format you can serialize and verify.
By 2030, the majority of economic transactions will be initiated by autonomous software. The question is not whether this happens; it's whether we'll have the infrastructure to make it trustworthy.
That's it. That's the whole bet.
The Execution Authority Problem
The kernel enforces a single invariant: models propose, the kernel disposes.
Who gets to execute? Under what constraints? With what receipts? These are the questions that define the trust boundary.
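Those three questions can be answered as data rather than prose: a policy names who may execute and under what ceilings, and the authorization decision becomes the input to the receipt. The policy schema and field names below are invented for this sketch:

```python
# Hypothetical policy table: each tool gets an explicit rule with a ceiling.
# A tool with no rule has no authority at all (fail-closed).
POLICY = {
    "pay.invoice": {"max_amount": 500},
    "mail.send":   {"max_amount": 0},
}

def authorize(proposal: dict) -> dict:
    rule = POLICY.get(proposal["tool"])
    if rule is None:
        # Who gets to execute? Only tools with an explicit rule.
        return {"decision": "deny", "reason": "no policy for tool"}
    if proposal.get("amount", 0) > rule["max_amount"]:
        # Under what constraints? The ceiling is checked before execution.
        return {"decision": "deny", "reason": "over ceiling"}
    # With what receipts? The decision and its constraints are recorded.
    return {"decision": "allow", "constraints": rule}

assert authorize({"tool": "pay.invoice", "amount": 120})["decision"] == "allow"
assert authorize({"tool": "pay.invoice", "amount": 9000})["decision"] == "deny"
assert authorize({"tool": "shell.exec"})["decision"] == "deny"
```

The point of the sketch: the trust boundary is a lookup plus a comparison, evaluated before anything runs, not a judgment call made afterward.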
Trust as a Format
Trust is not a feeling. It is a format. It has a schema, a hash, and a timestamp.
When you can serialize trust into bytes, you can verify it at machine speed. That is the end state.
Proof Export
The Shift
The transition from narrative trust to computational trust follows a clear progression.
The Age of the Cryptographic Governor
Humans won't "stop working." They will stop doing execution. The role shifts from operator to governor.
A governor doesn't push buttons all day. A governor sets ceilings, approves exceptions, and demands evidence. A governor doesn't micromanage the process. A governor controls the blast radius.
Every control plane adds friction. The question is whether that friction is productive or parasitic.
Appendix
Actions are serialized to a canonical byte representation before hashing. No ambiguity.
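One common way to get a canonical byte representation (an assumption for illustration; HELM's actual canonicalization scheme may differ) is JSON with sorted keys, fixed separators, and UTF-8 encoding, so that logically identical actions always hash to the same value:

```python
import hashlib
import json

def canonical_bytes(action: dict) -> bytes:
    # Sorted keys and fixed separators remove key-order and whitespace
    # ambiguity; UTF-8 pins down the encoding.
    return json.dumps(
        action, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")

# Same action, different key order: identical bytes, identical hash.
a = {"tool": "mail.send", "to": "ops"}
b = {"to": "ops", "tool": "mail.send"}
assert hashlib.sha256(canonical_bytes(a)).hexdigest() == \
       hashlib.sha256(canonical_bytes(b)).hexdigest()
```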
If a policy changes, all pending proposals are re-evaluated. No stale approvals.
Every action has a gas budget. Overruns are denied, not debated.
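A gas budget reduces to a counter that is checked before each charge; if the charge would overrun, the action is refused up front. The class, exception, and numbers below are made up for this sketch:

```python
class GasExceeded(Exception):
    """Raised when a charge would exceed the remaining budget."""

class Meter:
    def __init__(self, budget: int):
        self.remaining = budget

    def charge(self, cost: int) -> None:
        # Check before spending: overruns are denied, not debated.
        if cost > self.remaining:
            raise GasExceeded(f"need {cost}, have {self.remaining}")
        self.remaining -= cost

m = Meter(budget=100)
m.charge(40)  # fine: 60 remaining
m.charge(40)  # fine: 20 remaining
try:
    m.charge(40)  # would overrun: denied before anything executes
except GasExceeded:
    pass
```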
The gap between what a model proposes and what a human can verify, measured in seconds, not pages.
Every session produces a self-contained, verifiable proof bundle. Export. Verify. Replay.