For Software Engineers & Developers

Understand Any Codebase in Minutes, Not Days

AI-powered code intelligence that gives you the full picture before you touch a single line.

On-prem · Any LLM · OpenTelemetry · Open source

Free and open source. Business and Enterprise plans available.

What changes for you
Before
Days scrolling through unfamiliar code
Waiting on Slack for "who knows this?"
Breaking things you didn't know were connected
After
Full architecture context in seconds
Self-serve answers from the actual source code
Know every dependency before you push
Privacy-first by design
On-prem · Any LLM · OpenTelemetry

Why engineers choose Probe

Instant Codebase Understanding

Ask questions about architecture, patterns, and dependencies and get accurate answers in seconds. No more grep-and-pray.

Faster Code Reviews

Automated first-pass review catches security issues, style violations, and breaking changes before human review even starts.

Ship Without Fear

Know the dependencies, the side effects, and the blast radius before you push. Fewer production incidents. Fewer rollbacks.

Zero Interruptions

Stop waiting on teammates who "know how this works." Get answers from the source code itself, any time, on any part of the system.

The Real Problem

Your IDE sees files. You need to see the system.

Every tool you have works at the file level. But the bugs you ship, the regressions you cause, and the time you waste all come from not understanding how the pieces fit together.

01

The Onboarding Wall

New repo. No documentation. The README hasn't been updated in two years. You clone it, open the project, and stare at 400 files. You spend the first week just figuring out where things live and who to ask. Your first real PR takes 10 days instead of 2.

02

The Hidden Dependency

You change a function signature. Tests pass. Linter is green. You push. Three hours later, a service you've never heard of breaks in production because it depends on the exact return format you just modified. Nobody told you. Nothing in the code warned you.

03

The Context Switch Tax

You're deep in a debugging session. You need to understand how the auth middleware works. You find the file, but it calls three other modules. Those modules import shared utilities. An hour later, you've read 15 files and lost the thread of what you were originally fixing.

What engineers actually ask Probe

Real prompts from developers: questions, debugging sessions, and investigations across unfamiliar codebases.

Question
How does the authentication flow work in this app? Walk me through from login request to session creation.
Login flow: 1. POST /auth/login hits controllers/auth.go 2. Validates credentials against users table via models/user.go 3. Generates JWT in pkg/auth/token.go (RS256, 24h expiry) 4. Creates session record in Redis via services/session.go 5. Returns token + refresh token. Middleware in middleware/auth.go validates on every request...
Investigation
I'm getting a nil pointer panic in the order processing pipeline. Trace the data flow and tell me where the nil can come from.
The panic is in services/order.go line 142 where it calls order.Customer.Address.ZipCode. The nil can come from 3 places: 1. Customer is nil when order is created via API without auth (guest checkout path in handlers/checkout.go) 2. Address is nil when customer exists but has no saved address (models/customer.go allows nullable address) 3. The batch importer in jobs/import_orders.go skips address validation entirely...
Question
What's the safest way to add a new field to the User model? What migrations, serializers, and tests need to change?
Adding a field to User touches 6 files: 1. models/user.go (add field + DB tag) 2. migrations/ (new migration file) 3. serializers/user.go (add to JSON response) 4. handlers/user.go (accept in create/update) 5. tests/models/user_test.go 6. tests/handlers/user_test.go. Also check: the GraphQL schema in schema/user.graphql auto-generates from the model, so run codegen after. The admin panel in admin/views/users.vue reads from the API directly...
Investigation
Response times on /api/v2/products doubled after last week's deploy. What changed that could cause this?
3 PRs merged last week that touch the products path: PR #445 added eager-loading for product images (models/product.go) -- this actually should improve perf. PR #451 changed the search query to use ILIKE instead of trigram index (repositories/product_repo.go line 89) -- this bypasses the GIN index on the name column. PR #458 added a new middleware that logs full request/response bodies. Likely culprits: PR #451 (query plan change) and PR #458 (I/O overhead)...
Question
I need to understand the event system. What events exist, who publishes them, and who subscribes?
Event bus defined in pkg/events/bus.go using NATS. 14 event types defined in pkg/events/types.go. Publishers: order-service (OrderCreated, OrderCancelled), payment-service (PaymentProcessed, PaymentFailed), user-service (UserRegistered, UserUpdated). Subscribers: notification-service listens to all events, analytics-service to OrderCreated and PaymentProcessed, inventory-service to OrderCreated and OrderCancelled. Dead letter queue configured in infrastructure/nats.conf...
Question
Show me every place where we make an external HTTP call. I need to add timeout and retry logic to all of them.
Found 11 external HTTP calls across the codebase: 1. services/payment.go -- Stripe API (no timeout set) 2. services/shipping.go -- FedEx API (30s timeout) 3. services/email.go -- SendGrid (10s timeout) 4. services/geocoding.go -- Google Maps (no timeout) 5. jobs/webhook_dispatcher.go -- customer webhooks (5s timeout, no retry) 6. integrations/slack.go -- Slack notifications (no timeout)... 5 of 11 have no timeout configured. Only 2 have retry logic.
Automation
@Probe on every PR: check for new dependencies, verify they have compatible licenses, and flag any with known vulnerabilities
PR #312 adds 2 new dependencies: lodash.merge@4.6.2 (MIT, no known CVEs) and xml2js@0.5.0 (MIT, CVE-2023-0842 -- prototype pollution, severity: high). Flagged xml2js. Recommended alternative: fast-xml-parser (MIT, no known CVEs, 3x faster parsing). Also noticed: this PR removes the existing input sanitization in utils/xml.go that was protecting against XXE. Posted inline comment with details.
Automation
@Probe before every commit: run the test suite, and if tests fail, analyze the failure, suggest a fix, and ask me to confirm before applying
3 tests failed. 1. TestOrderTotal -- expected 99.99, got 99.98 (floating point rounding after your discount calculation change). Fix: use decimal.NewFromFloat instead of float64 in services/pricing.go line 47. 2. TestUserCreate -- timeout after 5s (your new validation calls external API synchronously). Fix: mock the external call in test setup. 3. TestWebhookRetry -- flaky, passed on re-run. Shall I apply fixes for #1 and #2?
Automation
I just got assigned ticket PROJ-2847. Read it, find the relevant code, understand the problem, and give me a plan of attack.
PROJ-2847: "Webhook deliveries silently fail for customers with special characters in URLs." Root cause: url.Parse in services/webhook.go line 67 doesn't handle URL-encoded paths correctly. The issue is in buildWebhookURL which concatenates path segments without encoding. Affected code: services/webhook.go, services/webhook_test.go. Plan: 1. Fix URL encoding in buildWebhookURL 2. Add test cases for special chars 3. Backfill failed deliveries from the dead letter queue (check jobs/dlq_processor.go). Estimated: 2-3 hours.
Investigation
The codebase has 3 different ways to handle errors. Which is the "right" pattern and where are the inconsistencies?
3 error handling patterns found: 1. Custom error types with codes (pkg/errors/app_error.go) -- used in 60% of handlers, this is the intended pattern per the team's ADR-007. 2. Plain fmt.Errorf wrapping -- used in 30% of handlers, mostly in older code (services written before Q2 2024). 3. Panic/recover -- used in 2 places: jobs/batch_import.go and legacy/processor.go. The ADR says to use pattern #1 everywhere. 14 files still use pattern #2. Created list with file paths and line numbers.
Automation
Generate a complete test plan for the checkout flow. Cover happy path, edge cases, and failure modes. Check existing test coverage and identify gaps.
Checkout flow spans 5 files. Current coverage: 62%. Generated 18 test cases: 6 happy path (single item, multi-item, with coupon, guest checkout, saved payment, new payment), 5 edge cases (empty cart, expired coupon, out-of-stock during checkout, concurrent purchase of last item, maximum quantity limit), 7 failure modes (payment declined, timeout, partial failure, webhook delivery failure, inventory lock timeout, session expiry mid-checkout, database connection loss). Gaps: no tests for concurrent purchases, session expiry, or partial failures.
Question
I'm refactoring the notification service. What would break if I change the NotificationPayload struct?
NotificationPayload is used in 8 places: 1. services/notification.go (definition) 2. handlers/webhook.go (serialized to JSON for webhook delivery) 3. jobs/email_sender.go (read from queue) 4. jobs/push_sender.go (read from queue) 5. services/audit_log.go (logged for compliance) 6. tests/ (3 test files). Breaking change risk: the webhook delivery in handlers/webhook.go serializes this struct directly to external consumers. Changing field names or removing fields will break customer webhook integrations. The queue workers in jobs/ expect the exact struct shape -- changing it requires a migration strategy for in-flight messages.

Three things that change everything

01

Instant Codebase Understanding

Ask any question about any codebase and get accurate, contextual answers in seconds.

  • "How does the payment flow work end to end?"
  • "What services does this function call downstream?"
  • "Where is rate limiting implemented and how does it work?"
  • "What's the safest way to modify this database schema?"

Probe reads code semantically -- understanding functions, classes, dependencies, and call graphs -- not just doing text search. It pulls context from linked Jira tickets, PRs, and historical decisions to give you the full picture of why code exists, not just what it does.

Multi-repo awareness -- Query across microservices, shared libraries, and infrastructure code
AST-aware search -- Understands functions, types, and call graphs -- not just string matching
Historical context -- Learn why code was written this way from past PRs and tickets
02

AI-Powered Code Reviews

Every PR gets a thorough first-pass review -- automatically, consistently, before your teammate even looks at it.

Generic AI code review tools don't know your team's conventions. They don't know that error handling must use the custom AppError type, that database queries must go through the repository layer, or that touching the billing module requires extra scrutiny.

Probe learns your team's patterns and applies them to every PR. It catches security vulnerabilities, performance regressions, breaking API changes, and cross-service dependency issues. It explains why something is flagged, suggests a fix, and links to the relevant code.

Team-specific rules -- Configure review standards per repo, per team, per language
Breaking change detection -- Catch API changes that will break consumers before they hit production
Security & performance -- Automated checks for OWASP vulnerabilities, N+1 queries, and missing timeouts
03

Full System Intelligence

See how your change affects the entire system, not just the file you're editing.

Real systems aren't single repos. They're dozens of services with shared libraries, event buses, and cross-cutting dependencies. A renamed field in one service can break deserialization in three others. A changed API response format can silently corrupt data downstream. Traditional tools can't see this.

Probe maps the entire system. It understands service boundaries, data flows, event subscriptions, and API contracts. Before you push, you know exactly what your change touches -- across every repo, every service, every consumer.

Dependency mapping -- See every consumer of the code you're changing across all repositories
Impact analysis -- Know the blast radius of your change before you push
Regression tracing -- Find which change introduced a bug by comparing versions

Workflow packs for everyday engineering

Pre-built automation workflows you can deploy immediately. Customize per repo. Version like code. Improve over time.

Every PR

Automated Code Review

Every PR gets a first-pass review for security issues, performance problems, style violations, and breaking changes. Customizable rules per repo. Catches what linters miss because it understands the full codebase context.

  • Security vulnerability scan
  • Performance regression check
  • Breaking API change detection
  • Style and pattern enforcement
New Project

Codebase Onboarding

Point Probe at any repo and get an instant architecture overview, entry points, key abstractions, and how things connect. Cut onboarding time from weeks to hours. Works even when documentation doesn't exist.

  • Architecture overview document
  • Service dependency map
  • Key abstractions guide
  • Common task walkthroughs
Debugging

Root Cause Analysis

From error message to root cause in minutes. Probe traces the code path, identifies relevant recent changes, checks for related issues, and suggests the fix. No more spending hours reading code you've never seen.

  • Code path trace from symptom
  • Recent change correlation
  • Related issue history
  • Suggested fix with context
Before Push

Impact Analysis

Before you push, know exactly what your change affects. Probe traces dependencies across repos, identifies consumers of changed APIs, and flags potential breaking changes. Ship with confidence.

  • Cross-repo dependency trace
  • Consumer impact report
  • Test coverage gaps
  • Migration requirements

Built for how developers actually work

Runs Locally

Everything runs on your machine. Your code never leaves your environment. No cloud indexing, no data exfiltration, no compliance headaches.

Any LLM Provider

Use Claude, GPT, open-source models, or your company's self-hosted LLM. No vendor lock-in. Switch providers without changing your workflow.

Full Audit Trail

OpenTelemetry instrumentation captures every query, every workflow run, every decision. Export to Datadog, Grafana, or Splunk. Debug AI workflows like code.

Open Source Core

The core engine is open source and auditable. You can read exactly how your code is being processed. No black boxes between you and your tools.

Open Source vs Enterprise

Start with the open-source CLI to try it on your current project. Scale to enterprise when you need multi-repo intelligence and team workflows.

Probe Open Source

Free forever

The core code intelligence engine. Perfect for exploring a single codebase or integrating with your existing AI coding tools.

  • Single-repo code understanding -- Ask questions about one repository at a time
  • Semantic code search -- Understands code as code (functions, classes, dependencies), not just text
  • No indexing required -- Works instantly on any codebase, runs locally
  • MCP integration -- Use with Claude Code, Cursor, or any MCP-compatible tool
  • Any LLM provider -- Claude, GPT, open-source models -- your choice
  • Privacy-first -- Everything runs locally, no data sent to external servers

Probe Enterprise

Contact for pricing

Everything in Open Source, plus multi-repo intelligence, automated workflows, and integrations with your team's existing tools.

  • Multi-repository architecture -- Query across your entire system of services, not just one repo
  • Cross-service dependency mapping -- Understand how services connect, which changes break what
  • AI-powered code reviews -- Automated PR review with customizable, evolving rules per repo
  • Jira integration -- Pull ticket context, specs, and acceptance criteria into code understanding
  • Zendesk integration -- Connect support tickets to code for faster bug resolution
  • Version comparison -- Analyze changes between releases, find regression sources
  • Workflow automation -- Pre-built workflows for onboarding, review, debugging, and impact analysis
  • Intelligent routing -- System determines which repos, tickets, and docs are relevant per query
  • Slack/Teams integration -- Ask questions about code from where you already work
  • On-premises deployment -- Runs entirely in your infrastructure for maximum security

How to start using Probe

Two phases: validate the technology in 10 minutes with open source, then unlock the full platform for your team.

Phase 1

Try It Yourself

~10 minutes

Pick any of these and have something running before your next standup. No account required.

~2 min

Add Probe to Your AI Coding Tool

Get codebase-aware intelligence in Claude Code, Cursor, or any MCP-compatible tool. Probe becomes a specialized agent that finds the right context and reduces hallucinations with bounded, structured retrieval.

You get: An AI coding assistant that actually understands your codebase -- not just the file you have open.
AI code editor setup →
~5 min

Set Up Automated PR Review

Add a GitHub Action that reviews every PR for security, performance, and quality. Inline comments with explanations and suggested fixes. Fully customizable rules per repo.

You get: Every PR reviewed automatically before your teammates even look at it. Fewer review cycles. Faster merges.
~10 min

Query a Codebase from Slack

Deploy a Slack bot that answers questions about your codebase. Ask questions in Slack and get answers grounded in actual code -- no context switching to your IDE.

You get: A Slack bot your team can query about any codebase. Run locally to test, then deploy anywhere.
Full setup guide →
Phase 2

Scale to Your Team

1-2 weeks

Once you've validated the technology, roll it out to your team with multi-repo intelligence, automated workflows, and integrations with your existing tools.

1
Connect your repositories

Point Probe at your team's repos. It builds a system-wide understanding of how services connect, what depends on what, and how data flows.

2
Configure review rules

Define your team's coding standards, security requirements, and architectural patterns. Rules live in the repo, version like code, and evolve over time.

3
Integrate with your tools

Connect Jira, Slack, and your monitoring stack. Probe pulls ticket context into code answers and posts automated analysis where your team already works.

4
Measure the difference

Track onboarding time for new team members, PR review cycles, and time-to-resolution for bugs. Most teams see a 50-70% reduction in onboarding time within the first month.

Success criteria: Faster onboarding, fewer review cycles, less time spent reading unfamiliar code. Engineers shipping meaningful code from day one on new projects.

Want to see how Probe works on your actual codebase?

Schedule a Technical Discussion

What engineers ask us

How is this different from GitHub Copilot or Cursor?

Copilot and Cursor are great at completing code in the file you're editing. They don't understand your system architecture, your team's conventions, or the dependencies between your services. They can't tell you that renaming this field will break deserialization in three other services, or that this function has an undocumented side effect that triggers a webhook.

Probe understands the entire system -- across all repositories, with full dependency awareness and historical context. It's the difference between autocomplete and actual understanding.

Does my code leave my machine?

No. The open-source version runs entirely locally. Your code stays on your machine. You choose which LLM provider to use -- including fully self-hosted options. The enterprise version can be deployed on-premises inside your company's infrastructure.

What languages and frameworks does it support?

Probe supports all major programming languages through tree-sitter parsers: JavaScript/TypeScript, Python, Go, Rust, Java, C/C++, Ruby, PHP, and many more. It also understands Terraform, Kubernetes YAML, Dockerfiles, and CI/CD configs. AST-aware search works across all supported languages.

How accurate are the answers?

Probe uses bounded, structured retrieval -- it searches the actual codebase semantically and returns grounded answers with file paths and line numbers. It doesn't hallucinate code that doesn't exist. When it doesn't know something, it tells you what it searched and what it found. Every answer is traceable back to the source.

Ready to stop reading code line by line?

See how Probe works on a real codebase -- yours. We'll walk you through the setup and show you how engineers are using it to ship faster with fewer bugs.