Open source core.
Simple pricing.

Start free with the full engine. When you need high availability, hands-on support, or enterprise governance — that's when pricing kicks in.

  • Full platform at every tier — multi-repo, workflows, integrations, all included
  • Pay when you need HA deployment, production support, and governance
  • Your infra, your models, your rules — no lock-in, no phoning home
  • Most teams are live within a day

Plans

Start free. Scale when you're ready.

The open source release gives you the full engine. Paid plans give you a partner: high availability, hands-on support, and enterprise governance.

Billing: monthly or annual (save 20% with annual)

Open Source

Free

Full engine, one node, zero cost. Ideal for solo founders and small teams.

Includes
  • Full Probe + Visor stack — multi-repo, chat, workflows, integrations
  • Dynamic skills, automations, scheduled jobs
  • Single-node deployment — you run it, you own it
  • Community support via GitHub issues and discussions
You'll outgrow this when
  • You need high-availability multi-node deployment
  • You want hands-on onboarding and someone to tune workflows to your stack
  • You need production SLAs and guaranteed response times
  • Your org requires policy-as-code governance and access boundaries

Enterprise

Contact us

Unlimited scale, policy-as-code governance, and a named engineer on your account.

What you get
  • Everything in Business, plus:
  • Unlimited repos and users
  • Open Policy Agent (OPA) for policy-as-code
  • Per-workflow tool permissions and scoped context retrieval
  • A named engineer assigned to your account — direct access via Slack or email, not a ticket queue
  • Custom SLAs with guaranteed response times
  • Professional services for org-wide AI transformation at scale

"As a Product Manager, Probe helps me understand the true behaviour of the software so I can go beyond the documentation and validate edge case scenarios. This saves a lot of time and disruption to the development teams."

Andy Ost, Senior Product Manager at Tyk.io

"I'm using Probe Labs tools daily as a technical lead, and they've been adopted across marketing, sales, documentation, product, delivery, and engineering. The YAML-based automation makes it easy to wire in tools like JIRA, Zendesk, and GitHub for agentic flows that actually work from day one."

Laurentiu Ghiur, Technical Lead at Tyk.io

Professional Services

Every org is different. The tools people use, how teams talk to each other, where knowledge lives, what slows things down — it's never the same twice.

We sit down with your teams, understand the pain points, map how everything connects, and build the workflows and integrations that actually fit.

  • Deep-dive with each team to understand their workflows and bottlenecks
  • Custom integrations across your tools and data sources
  • Workflow design, implementation, and tuning
  • Ongoing partnership — quarterly reviews, workflow tuning, new integrations as your needs evolve

Common questions

How are seats counted?

A seat is any user who has actively used Probe in the last 30 days. Inactive users don't count toward your bill. If your team size fluctuates, you pay only for the people actually using it.

Is there a trial?

You can start with the open source right now — same core engine, no time limit. But if you want the fastest path to value, reach out. We run a free proof-of-concept with your team: we connect your repos, wire up your tools, and show you what it looks like running on your actual stack. No commitment, no contract until you're ready.

What LLMs do you support?

Any. OpenAI, Anthropic, Google, or your own self-hosted models. For best results, we recommend models at the Gemini 2.5 Pro level or above. You can also configure fallback models per workflow — if one provider goes down, your processes keep running. Switch anytime, no lock-in, no re-indexing.
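As a purely hypothetical illustration of per-workflow fallback (the field names and models below are invented for this sketch and are not Probe's actual configuration schema), a fallback chain might look like:

```yaml
# Hypothetical workflow config -- keys are illustrative only,
# not Probe's real schema.
workflows:
  triage-tickets:
    model: claude-sonnet         # primary model for this workflow
    fallback_models:             # tried in order if the primary provider is down
      - gemini-2.5-pro
      - gpt-4o
  summarize-prs:
    model: gemini-2.5-pro
    fallback_models:
      - claude-sonnet
```

The point is the shape, not the names: each workflow declares its own primary model and an ordered list of fallbacks, so an outage at one provider degrades a single workflow gracefully instead of halting everything.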

Does my code leave my infrastructure?

No. Code indexing and retrieval run locally or on your own infra. You control what context goes to the LLM. Full on-prem deployment available.

How is this different from ChatGPT or Copilot?

They work with what you paste in. Probe has a purpose-built context retrieval engine designed for enterprise-scale, multi-repo codebases. It connects to your real systems — code repos, Jira, Zendesk, Confluence, Slack — reasons across all of them, and takes actions: opens PRs, updates tickets, posts summaries.

What's included in support?

Business: onboarding, workflow help, and ongoing tuning via a shared channel. Enterprise: a named contact, guaranteed response times, and dedicated rollout support.

Can I start with Open Source and upgrade later?

Yes. The open source runs the same core engine. When you're ready, we help you migrate to Business or Enterprise — same infrastructure, same data, no re-deployment. The upgrade adds HA, support, and governance on top of what you already have.

What does onboarding look like?

We start with a kickoff call to understand your stack, repos, and team structure. Then we connect your tools, configure workflows, and tune the assistant for your architecture. Most teams are in production within a week. After launch, we stay in a shared channel for ongoing support and tuning.