For Software Services & Delivery Partners

Stop Asking Permission to Understand the Codebase

Probe gives agencies the leverage to deliver like an internal team — with the flexibility clients hired you for.

On-prem · Any LLM · OpenTelemetry · Open source

Free and open source. Business and Enterprise plans available.

What changes for your agency
Before

  • 2-3 weeks to first meaningful PR
  • Quality depends on who's assigned
  • Constant Slack messages to client team

After

  • First meaningful PR in 2-3 days
  • Same quality bar for every engineer
  • Self-serve answers from source of truth
  • Client-safe by design (on-prem, any LLM, OpenTelemetry)

Why agencies choose Probe

Understand Any Codebase

Your engineers get answers about architecture, patterns, and ownership in seconds, not days. No waiting on Slack responses.

Consistent Quality

Every engineer works against the same quality bar. AI-powered code review enforces the client's actual standards.

Ship Confidently

Know the dependencies, the patterns, and the gotchas before you touch a line. Fewer bugs. Faster reviews.

Scale Without the Tax

New team members get the same depth of understanding as your veterans, from day one. Say yes to expansion.

The Real Problem

You're not selling code. You're selling trust.

And trust erodes every time your team needs to ask "where is this implemented?" or "why was this built this way?"

01

The Onboarding Reality

New client signs. Engineers clone repos and hit walls. Architecture isn't documented. Questions pile up in Slack. Days go by billing hours instead of shipping code. The client starts wondering if they made the right choice.

02

The Quality Drift

Your senior engineer who "gets" the codebase goes on vacation. The replacement follows different patterns. Small inconsistencies compound. Six months later, the client blames "agency code quality."

03

The Scaling Trap

Client wants to double the team. Your current engineers are the only ones who understand the system. Onboarding new people means pulling productive engineers into endless knowledge transfer. You grow headcount but not output.

What agency teams actually ask Probe

Real prompts from delivery teams. Questions, investigations, and automations — across client codebases.

Question
Show me where API authentication lives in the codebase and walk me through the request lifecycle
Authentication lives in middleware/auth_check.go. The request lifecycle: 1. Listener accepts connection (server/http.go) 2. Router matches endpoint (mux/router.go) 3. Middleware chain executes: rate-limit → auth → transform → proxy 4. Response flows back through the chain...
Question
Upstream cache control is not working as described in the ticket — is this actually a bug?
Yes, this is a bug in the response caching logic. The issue lies in how the gateway handles the absence of upstream cache control headers. The problematic code is in gateway/res_cache.go — it incorrectly defaults to caching...
Investigation
Memory consumption increased between v5.8.2 and v5.8.5 — what could cause it?
Based on the code changes between v5.8.2 and v5.8.5, here are PRs that might be related: PR #2847 modified buffer handling in the request pipeline, PR #2891 changed connection pooling defaults, PR #2903 added eager-loading...
Question
If I adopt your fix, what tests should I write to prove it?
1. Test Case: Cache Control Enabled, Header ABSENT (Verifies the Bug Fix). 2. Test Case: Header PRESENT (Verifies Correct Behavior). 3. Test Case: Control Disabled (Verifies No Regression)
Investigation
These 4 tickets look similar — trace the code and tell me if they share a root cause. Can we fix them in one PR?
After tracing the code paths: 3 of the 4 tickets share the same root cause — a race condition in the session store when Redis connections reset. PROJ-8821 is unrelated (UI rendering issue). The 3 related bugs can be fixed in one effort by adding a mutex in storage/redis.go...
Digest
Build release notes for v5.10.1 from the merged PRs and Jira tickets
Release Notes for v5.10.1 — Changed: Upgraded Go version to enhance security. Fixed: Addressed high-impact CVEs. Fixed critical issue where Hybrid mode would hang. Patched "Zip Slip" vulnerability in bundle decompression...
Automation
@Probe every morning: check Datadog for any service with error rate above 1% or p99 above 500ms in the last 12 hours — post findings to #on-call
Morning health check scheduled for 8:00 AM daily. I'll query Datadog for services exceeding thresholds, correlate any anomalies with recent deploys from GitHub, and post a summary to #on-call. If everything is healthy, I'll post a green status. If issues are found, I'll include: affected service, metric values, likely cause, and link to the relevant deploy or config change.
Automation
@Probe every hour: check newly created Jira tickets, find related Zendesk cases, classify the issue, and add early analysis as a comment
Hourly ticket enrichment active. Every hour I'll scan Jira for tickets created in the last 60 minutes, search Zendesk for related customer cases by keyword and account, classify each issue (bug, feature request, regression, config error), and post a comment with: related Zendesk tickets, affected customers, preliminary root cause from the codebase, and suggested priority. Already processed 4 new tickets — PROJ-4210 linked to 3 Zendesk cases from the same customer, classified as regression in the billing module.
Automation
Read ticket PROJ-8410, then find the original task and spec it was built from. Think like QA — cover positive, negative, and edge cases. Update the ticket with detailed test cases, then check tests/ for gaps in automated coverage.
Done. Read PROJ-8410 (webhook retry logic). Traced back to PROJ-7201 (original spec) and RFC-031 (design doc). Generated 14 test cases: 5 positive (successful retry, backoff timing, max retries), 4 negative (malformed payload, timeout, auth failure), 5 edge (concurrent retries, partial failure, clock skew). Updated PROJ-8410 with all cases. Gap analysis: tests/webhook_test.go covers 6/14 — missing timeout, concurrent, and all edge cases.
Automation
Nightly CI failed on main — analyze the failure, identify root cause, and post a summary to #engineering with steps to fix
Build #4721 failed at 03:12 UTC. Root cause: integration test test_api_gateway_timeout depends on external sandbox endpoint that returned 503. Not a code regression — infrastructure issue. Posted to #engineering: "Nightly build failed — external sandbox was down. Tests are green when sandbox is healthy. Recommend: add retry logic to integration suite or mock the external dependency. Created INFRA-892 for the sandbox team."
Automation
Ticket PROJ-6100 changed the webhook retry logic — update the docs, open a PR, and link it back to the Jira ticket
Done. Read PROJ-6100 and the merged PR #672. Changes: max retries increased from 3 to 5, backoff changed from fixed to exponential, new X-Retry-Count header added. Updated docs/webhooks/retry-policy.md with new behavior table and code examples. Opened PR #701 "Update webhook retry docs for PROJ-6100". Linked PR back to PROJ-6100 in Jira. Also flagged: the API changelog needs an entry — created PROJ-6105 for that.
Investigation
We have 14 microservices — which ones are missing health checks, structured logging, or graceful shutdown? Audit all repos.
Scanned all 14 service repos. Missing health checks: payment-service, notification-worker, analytics-ingest (3/14). Missing structured logging: legacy-api, pdf-generator still use fmt.Println (2/14). Missing graceful shutdown: notification-worker, batch-processor, analytics-ingest — all exit immediately on SIGTERM (3/14). Priority: analytics-ingest fails all 3 checks. Created PLAT-320 through PLAT-327 with per-service remediation steps.

Three things that change everything

01

Instant Codebase Understanding

Ask any question about any codebase and get accurate, contextual answers in seconds.

  • "How does authentication work in this system?"
  • "What services does the payment flow touch?"
  • "Where was this feature implemented and why?"
  • "What's the safest way to modify this endpoint?"

The system reads code semantically — understanding functions, classes, dependencies, and call graphs — not just doing text search. It pulls context from linked Jira tickets, Confluence docs, and historical PRs to give you the full picture.

  • Multi-repo awareness — Query across the entire system, not just one project
  • Jira & Zendesk integration — Code answers include ticket context and customer impact
  • Historical understanding — Learn from past implementations and decisions
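
As a concrete sketch of what a self-serve answer can look like in a script, here is a minimal TypeScript example. The endpoint, port, and payload shape are illustrative assumptions, not Probe's documented API — check the docs for the real interface.

```typescript
// Minimal sketch: asking a question of a locally running Probe instance.
// The /ask endpoint, port, and response shape are hypothetical —
// consult Probe's documentation for the real interface.
async function askProbe(question: string): Promise<string> {
  const res = await fetch("http://localhost:8080/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Probe returned HTTP ${res.status}`);
  const data = (await res.json()) as { answer: string };
  return data.answer;
}

askProbe("How does authentication work in this system?")
  .then(console.log)
  .catch(console.error);
```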
02

AI-Powered Code Reviews

Every PR reviewed against the client's actual standards — automatically, consistently, thoroughly.

Generic AI code review tools don't know your client's conventions. They don't understand that this client cares deeply about error handling patterns, or that the team prefers composition over inheritance, or that touching the billing service requires extra scrutiny.

Probe learns each client's standards and applies them to every PR. It checks for security issues, performance problems, architectural violations, and cross-service dependencies. It explains why something is flagged, not just that it is.

  • Client-specific rules — Configure quality standards per client, per repo, per team
  • Cross-project dependencies — Catch breaking changes before they hit production
  • Security & performance — Automated checks for OWASP vulnerabilities and perf regressions
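
To make client-specific rules concrete, here is an illustrative sketch of what a per-client rules file could contain. The file name and schema are hypothetical rather than Probe's actual configuration format; the rules simply mirror the conventions described above.

```yaml
# .probe/review-rules.yml — hypothetical file name and schema,
# shown only to illustrate per-client review standards.
client: acme-corp
rules:
  - id: error-handling
    description: Handlers must wrap errors with context before returning
    severity: high
  - id: prefer-composition
    description: Prefer composition over inheritance in new code
    severity: medium
  - id: billing-scrutiny
    paths: ["services/billing/**"]
    description: Billing changes need an extra reviewer and migration notes
    severity: critical
```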
03

Multi-Project Intelligence

Work across an entire system of services, not just isolated repositories.

Real enterprise systems aren't single repos. They're dozens of services with complex dependencies, shared libraries, and cross-cutting concerns. A change in one service can break three others. Traditional tools can't see this.

Probe maps the entire system. It understands which services depend on what, how data flows between them, and what happens when you change something. When a client asks you to modify an API, you'll know every consumer that needs to be updated.

  • System-wide visibility — See dependencies and impacts across all repositories
  • Version comparison — Understand what changed between releases and why
  • Regression analysis — Find which change introduced a bug or performance issue

Workflow packs for every engagement phase

Pre-built automation workflows you can deploy immediately. Customize per client. Version like code. Improve over time.

Day 1-2

Client Onboarding

Connect to client repos and automatically generate system architecture documentation, ownership maps, and local development guides. Your engineers understand the system before writing a line of code.

  • Architecture overview document
  • Service dependency map
  • Ownership & contact directory
  • Local setup instructions
Every PR

Quality Assurance

Automated code review against client-specific standards. Catches security issues, architectural violations, and cross-service breaking changes before human review. Reduces review cycles from days to hours.

  • Standards compliance check
  • Security vulnerability scan
  • Dependency impact analysis
  • Performance regression flags
Team Changes

Knowledge Transfer

When engineers rotate between projects or new team members join, automatically generate personalized onboarding based on what they need to know. No more pulling senior engineers into endless briefings.

  • Role-specific documentation
  • Common task walkthroughs
  • Codebase orientation guides
  • Historical decision context
Scaling Up

Team Expansion

When clients want to grow the engagement, spin up new capacity without proportionally increasing chaos. New engineers get the same depth of understanding as your veterans, from day one.

  • Standardized quality gates
  • Self-serve knowledge base
  • Consistent review standards
  • Parallel onboarding capacity

Built for client trust requirements

On-Premises Deployment

Runs entirely inside client infrastructure. Code never leaves their environment. Full data sovereignty and compliance compatibility.

Any LLM Provider

Use the client's preferred model — Claude, GPT, open-source, or self-hosted. No vendor lock-in. Switch providers without changing workflows.

Full Audit Trail

OpenTelemetry instrumentation captures every query, every workflow run, every decision. Complete traceability for compliance and debugging.

Open Source Core

The core engine is open source and auditable. Clients can inspect exactly how their code is being processed. No black boxes.

Open Source vs Enterprise

Start with the open-source core to evaluate the technology, then scale to enterprise when you need multi-project capabilities.

Probe Open Source

Free forever

The core code intelligence engine. Perfect for evaluating the technology on a single project or for individual engineers exploring a codebase.

  • Single-project code understanding — Ask questions about one repository at a time
  • Semantic code search — Understands code as code (functions, classes, dependencies), not just text
  • No indexing required — Works instantly on any codebase, runs locally
  • MCP integration — Use with Claude Code, Cursor, or any MCP-compatible tool
  • Any LLM provider — Claude, GPT, open-source models — your choice
  • Privacy-first — Everything runs locally, no data sent to external servers

Probe Enterprise

Contact for pricing

Everything in Open Source, plus multi-project architecture support, workflow automation, and integrations with your existing tools.

  • Multi-repository architecture — Query across your entire system of services, not just one project
  • System-wide dependency awareness — Understand how services connect, which changes break what
  • AI-powered code reviews — Automated PR review with customizable, evolving rules per client
  • Jira integration — Pull ticket context, specs, and acceptance criteria into code understanding
  • Zendesk integration — Connect support tickets to code for faster issue resolution
  • Version comparison — Analyze changes between releases, find regression sources
  • Workflow automation — Pre-built workflows for onboarding, code review, knowledge transfer
  • Intelligent routing — System automatically determines which repos, tickets, docs needed for each query
  • Slack/Teams integration — Ask questions from where you already work
  • On-premises deployment — Runs entirely in client infrastructure for maximum security

How to evaluate Probe

We recommend a two-phase approach: first, validate the core technology with open source on a single project, then pilot the enterprise features on a real client engagement.

Phase 1

Technical Validation

~10 minutes

Pick any of these and have something running before your next meeting. No account required.

~2 min

Add Probe to AI Coding Tools

Get enterprise-grade code understanding in Claude Code, Cursor, or any MCP-compatible tool. Auto-detects auth, works with any LLM API. One command to install.

You get: A specialized AI agent for code search and analysis — finds the right context and reduces wrong answers with bounded, structured retrieval.
AI code editor setup →
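
For orientation, MCP servers are registered in your editor's configuration file (Claude Desktop or Cursor settings, for example). The snippet below shows the standard mcpServers shape; the package name shown is illustrative — copy the exact command from the setup guide linked above.

```json
{
  "mcpServers": {
    "probe": {
      "command": "npx",
      "args": ["-y", "@buger/probe-mcp"]
    }
  }
}
```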
~5 min

Automate PR Reviews

Add a GitHub Action for automated code review. Every PR gets a first-pass review for security, performance, and quality with inline comments. Fully customizable per client.

You get: Every issue triaged automatically. Every PR reviewed. Configure multiple specialized checks (security, performance, dependencies) per project.
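
The workflow itself lives in .github/workflows/. The sketch below shows the general shape; the uses: reference and rules path are placeholders, so substitute the published action name and paths from Probe's documentation.

```yaml
# Sketch of a PR-review workflow. The 'uses:' reference and the rules
# path are placeholders — take the real values from Probe's docs.
name: probe-review
on: pull_request

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: your-org/probe-review-action@v1   # placeholder action name
        with:
          rules: .probe/review-rules.yml        # hypothetical per-client rules file
```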
~10 min

Deploy a Codebase-Aware Slack Bot

Create a Slack bot that answers questions about your codebase. Your team can ask questions in Slack and get intelligent answers grounded in actual code — no context switching.

You get: A Slack bot your team can query about any client codebase. Run locally to test, then deploy anywhere.
Full setup guide →
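
For a sense of how little code this takes, here is a minimal sketch using Slack's Bolt framework. The local Probe endpoint it calls is a hypothetical stand-in, not the documented API — the full setup guide covers the real wiring.

```typescript
// Minimal Slack bot sketch built on Slack's Bolt framework.
// The http://localhost:8080/ask call below is a hypothetical
// stand-in for querying your Probe deployment.
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Answer any channel message that mentions the bot.
app.event("app_mention", async ({ event, say }) => {
  const question = event.text.replace(/<@[^>]+>/g, "").trim();
  const res = await fetch("http://localhost:8080/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  const { answer } = (await res.json()) as { answer: string };
  await say(answer);
});

(async () => {
  await app.start(3000);
  console.log("Codebase-aware bot is running");
})();
```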
Phase 2

Pilot Engagement

2-4 weeks

Once you've validated the core technology, run a pilot on a real client engagement to test the full enterprise workflow: multi-repo understanding, automated code reviews, and integrations.

1
Pick a pilot client

Choose an engagement where you have multiple repositories, existing Jira/ticket systems, and ideally where onboarding or quality has been challenging.

2
Architecture setup session

We'll work with your team to map the system architecture, configure cross-repo dependencies, and set up integrations with Jira/Zendesk.

3
Configure code review rules

Define client-specific quality standards, security requirements, and patterns. These rules live with the project and evolve over time.

4
Measure the difference

Track time-to-first-PR for new team members, questions asked to client team, and PR review cycles. Compare to previous engagements.

Success criteria: Measurable reduction in ramp-up time and client interruptions. New engineers productive faster. Consistent code review quality across the team.

Want to discuss how a pilot would work for your team?

Schedule a Technical Discussion

What agencies ask us

How is this different from GitHub Copilot or ChatGPT?

Copilot and ChatGPT work with whatever code is in front of them. They don't understand your client's architecture, conventions, or the relationships between services. They can't tell you that changing this function will break three other services, or that this pattern violates the team's coding standards.

Probe understands the entire system — across all repositories, integrated with Jira and Confluence, aware of historical decisions. It's the difference between asking someone who just read the file versus someone who built the system.

Does this work with legacy codebases?

Yes, and this is where it shines. Legacy systems are exactly where context is hardest to get — documentation is outdated, original authors are gone, patterns are inconsistent. Probe builds understanding from the code itself, not from documentation that may or may not exist.

How do we handle different standards for different clients?

Each client gets their own configuration. Code review rules, quality standards, workflow automations — all customized per client and versioned like code. When a client's preferences change, you update the config and the entire system adapts.

What's the implementation timeline?

Basic setup takes hours, not weeks. Connect your repos, configure initial rules, start using it. The system learns and improves over time, but you get value immediately. Most agencies start with one client engagement as a pilot, then roll out across their portfolio.

Ready to stop asking permission?

Let's talk about how Probe can transform your client engagements. We'll show you how it works on a real codebase — yours or one of your clients' — and discuss how to structure a pilot.