Start free with the full engine. When you need high availability, hands-on support, or enterprise governance — that's when pricing kicks in.
The open source gives you the full engine. Paid plans give you a partner — high availability, hands-on support, and enterprise governance.
Full engine, one node, zero cost. Ideal for solo founders and small teams.
Billed annually. HA deployment, hands-on support. Most teams are live within a day.
Unlimited scale, policy-as-code governance, and a named engineer on your account.
"As a Product Manager, Probe helps me understand the true behaviour of the software so I can go beyond the documentation and validate edge case scenarios. This saves a lot of time and disruption to the development teams."
"I'm using Probe Labs tools daily as a technical lead, and they've been adopted across marketing, sales, documentation, product, delivery, and engineering. The YAML-based automation makes it easy to wire in tools like JIRA, Zendesk, and GitHub for agentic flows that actually work from day one."
Every org is different. The tools people use, how teams talk to each other, where knowledge lives, what slows things down — it's never the same twice.
We sit down with your teams, understand the pain points, map how everything connects, and build the workflows and integrations that actually fit.
A seat is any user who has actively used Probe in the last 30 days. Inactive users don't count toward your bill. If your team size fluctuates, you only pay for who's actually using it.
You can start with the open source right now — same core engine, no time limit. But if you want the fastest path to value, reach out. We run a free proof-of-concept with your team: we connect your repos, wire up your tools, and show you what it looks like running on your actual stack. No commitment, no contract until you're ready.
Any. OpenAI, Anthropic, Google, or your own self-hosted models. For best results, we recommend models at the Gemini 2.5 Pro level or above. You can also configure fallback models per workflow — if one provider goes down, your processes keep running. Switch anytime, no lock-in, no re-indexing.
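A per-workflow fallback might be expressed in the same YAML used for automation. A minimal sketch, with hypothetical key names (`workflow`, `model`, `fallbacks`) — the actual schema may differ:

```yaml
# Illustrative config — key names are hypothetical, not the documented schema.
workflow: ticket-triage
model:
  provider: anthropic      # primary provider for this workflow
  name: claude-sonnet
fallbacks:
  - provider: openai       # used if the primary provider is unavailable
    name: gpt-4o
  - provider: self-hosted  # final fallback running on your own infra
    name: llama-70b
```

Because fallbacks are scoped per workflow, a critical pipeline can fail over to a self-hosted model while less sensitive flows stay on a hosted provider.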
No. Code indexing and retrieval run locally or on your own infra. You control what context goes to the LLM. Full on-prem deployment available.
They work with what you paste in. Probe has a purpose-built context retrieval engine designed for enterprise-scale, multi-repo codebases. It connects to your real systems — code repos, Jira, Zendesk, Confluence, Slack — reasons across all of them, and takes actions: opens PRs, updates tickets, posts summaries.
Business: onboarding, workflow help, and ongoing tuning via a shared channel. Enterprise: a named contact, guaranteed response times, and dedicated rollout support.
Yes. The open source runs the same core engine. When you're ready, we help you migrate to Business or Enterprise — same infrastructure, same data, no re-deployment. The upgrade adds HA, support, and governance on top of what you already have.
We start with a kickoff call to understand your stack, repos, and team structure. Then we connect your tools, configure workflows, and tune the assistant for your architecture. Most teams are in production within a week. After launch, we stay in a shared channel for ongoing support and tuning.