Workflow Engine

Visor: Predictable agent automation

Define workflows in YAML and run them as graphs: branching, fan-out/fan-in, routing, bounded loops, and human-in-the-loop. Mix deterministic steps and AI steps, enforce schemas and quality gates, and trace every run with OpenTelemetry.

On-prem · Any Model · OpenTelemetry · MCP Tools · Open Source

Predictable refers to workflow behavior: the same steps, the same gates, and the same visibility across teams and repos.

workflow.yaml

Trigger: PR opened, Slack thread, webhook, schedule
Execute Steps: AI + MCP + GitHub + HTTP + shell, with schema validation and quality gates
Observable Output: traced, validated, reproducible
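
A minimal sketch of that shape, with hypothetical step names and prompts (the keys mirror the full pr-review.yaml example later on this page):

steps:
  fetch-diff:
    type: git-checkout                    # isolated repo checkout
    on: [pr_opened]
  review-security:
    type: ai                              # fan-out: runs alongside review-style
    depends_on: [fetch-diff]
    schema: code-review
    prompt: Review the diff for security issues.
  review-style:
    type: ai
    depends_on: [fetch-diff]
    schema: code-review
    prompt: Review the diff for style issues.
  post-summary:
    type: github                          # fan-in: waits for both reviews
    depends_on: [review-security, review-style]
    op: comment.create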

Workflows You Can Ship—Not Prompts You Hope Behave

PR Automation

Structured PR overview, architecture/security review, labels/comments—schema validated, repeatable.

Issue Triage

Auto-categorize issues, propose answers, ask follow-up questions, route to the right workflow.

Slack Assistant

Support/PM/QA can query source-of-truth context, generate escalation packets, and follow runbooks.

Webhooks + Control Plane

Run workflows from inbound endpoints (deploy hooks, incident hooks)—not only GitHub events.

Cross-Repo Analysis

Fan out to multiple repos/services, run per-repo analysis, aggregate results, enforce policy gates.

Scheduled Audits

Nightly security audits and periodic health checks with consistent reporting and traceability.

How Visor Stays Predictable

Visor is built around explicit control. You define what can happen, what can be called, what outputs must look like, and what happens when a step fails. No hidden glue code.

Config-first workflows

One YAML defines steps, routing, schemas, templates, and outputs.

Graph execution

Dependencies, fan-out, fan-in, routing, and bounded loops are first-class.

Deterministic artifacts

Schemas + templates produce stable outputs for humans and machines.

Multi-provider steps

AI, GitHub, HTTP, shell, MCP tools, memory, and reusable sub-workflows.

Stateful runs

Built-in memory namespaces and isolated run state without external DBs (see the sketch after this list).

Observability by default

OpenTelemetry traces + log correlation (trace_id/span_id).

Testable workflows

Validate behavior with fixtures/mocks before rolling workflows out.
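
For instance, a schema-validated AI step can hand its structured output to a memory step for later runs. A sketch, assuming the memory provider takes namespace/op/value keys; those key names and the triage-result schema are illustrative, since only the provider itself is documented on this page:

steps:
  triage:
    type: ai
    schema: triage-result                 # output must validate before downstream steps run
    prompt: Categorize this issue and propose next steps.
  remember:
    type: memory                          # built-in state, no external DB
    depends_on: [triage]
    namespace: issue-triage               # illustrative key names, not confirmed config
    op: set
    value: "{{ outputs['triage'] }}"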

AI is a step—not the system.

CI Is Great for Builds. Visor Is Built for Agent Workflows.

CI tools are optimized for building, testing, and deploying code. Visor is optimized for governed automation across teams and tools—where you need routing, human-in-the-loop review, validated outputs, and auditable runs.

CI Pipelines

  • Optimized for build/test/deploy
  • Linear or matrix execution
  • Exit codes as signals
  • Logs for debugging

Visor Workflows

  • Control flow is explicit: routing and loops are defined and bounded
  • Outputs are guaranteed: schemas enforce structure
  • Frontends are first-class: GitHub, Slack, webhooks, schedules
  • Tool usage is auditable: tools are declared and traceable
  • Runs are reproducible: state is isolated, testable like code

A Single YAML That Demonstrates the Core Benefits

This example shows composition, observability, state, frontends, safe loops, MCP tools, GitHub automation, and more—all in one file.

pr-review.yaml
extends: [default, ./environments/prod.yaml]
telemetry: { enabled: true }
routing: { max_loops: 6 }
slack: { threads: required }
steps:
  pr-overview:
    type: ai
    on: [pr_opened, pr_updated]
    schema: overview
    prompt: Summarize the PR in 5 bullets.
  lint-fix:
    type: command
    exec: "npm run lint -- --fix"
    on_fail: { run: [lint-remediate] }
  lint-remediate:
    type: claude-code
    depends_on: [lint-fix]
    allowedTools: [Read, Edit]
    maxTurns: 3
    prompt: |
      Fix the lint errors below:
      {{ outputs['lint-fix'].stderr }}
  security-scan:
    type: mcp
    command: npx
    args: ["-y", "@semgrep/mcp"]
    method: scan
  trusted-note:
    type: github
    if: "hasMinPermission('MEMBER')"
    op: comment.create
  nightly-audit:
    type: ai
    schedule: "0 2 * * *"
    schema: code-review
1. Composition + environments
extends layers org defaults, team overrides, and env-specific config (see the sketch after this list).

2. MCP tool integration
Native Semgrep, custom tools, and any MCP server—declared and auditable.

3. Permission gating
hasMinPermission('MEMBER') restricts steps to trusted contributors.

4. Scheduled + event-driven
PR events, Slack threads, webhooks, and cron schedules in one workflow.

5. Schema-validated AI
AI steps return structured JSON that validates against schema: overview.

6. Safe remediation loops
on_fail triggers Claude Code with linter output—bounded tools, max turns, traceable.
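
To make callout 1 concrete, here is a hedged sketch of extends layering; the file names, contents, and merge order shown are assumptions rather than canonical Visor behavior:

# org-defaults.yaml (hypothetical shared base)
telemetry: { enabled: true }
routing: { max_loops: 3 }

# environments/prod.yaml (hypothetical overlay)
routing: { max_loops: 6 }                 # assumption: later layers override earlier ones

# pr-review.yaml pulls in both layers via extends
extends: [./org-defaults.yaml, ./environments/prod.yaml]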

Teams Own Workflows. Leadership Owns Predictability.

Visor is designed for organizations where different teams own different services—and still need to execute as one system.

Org-wide standards via composition

Keep defaults in one place; override per team/env via extends.

Policy gates for critical workflows

Fail fast on violations (fail_if), route failures deterministically, and require human approval when needed (see the sketch after these cards).

Permission-aware automation

Gate actions (hasMinPermission('MEMBER')) so workflows stay safe.

Bounded control plane

Retries and loops are capped; "self-healing" is deterministic and observable.
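
A sketch of such a gate, assuming fail_if accepts an expression over step outputs; the expression shape, the schema name, and the approval step's prompt are illustrative:

steps:
  security-scan:
    type: mcp
    command: npx
    args: ["-y", "@semgrep/mcp"]
    method: scan
  policy-gate:
    type: ai
    depends_on: [security-scan]
    schema: code-review
    prompt: Summarize scan findings and overall risk.
    fail_if: "outputs['security-scan'].critical > 0"   # expression shape is an assumption
  release-approval:
    type: human-input                     # supervised approval via Slack/CLI
    depends_on: [policy-gate]
    prompt: "Risk summary attached. Approve deploy?"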

This is how you scale automation without centralizing every decision.

Every Run Is Traceable

Visor emits OpenTelemetry traces and correlates logs for every step. You can answer "what happened?" with evidence, not guesswork.

Run Trace (run_id: abc-123-def)

  pr-review.checkout            1.2s   ok
  pr-review.analyze             3.4s   ok
  pr-review.review              8.7s   ok
  pr-review.validate            0.3s   retry
  pr-review.validate (retry)    0.2s   ok
  pr-review.post                0.8s   ok
  • Workflow version + run_id
  • Step timings + retries + loop iterations
  • Tool/MCP invocations (what was allowed vs used)
  • Schema validation pass/fail
  • Produced artifacts (PR comments, labels, webhook payloads)
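
The exact span attributes are not documented on this page, but a correlated record could look roughly like this; every field name below is an illustrative assumption:

# illustrative only, not Visor's actual telemetry schema
trace_id: 4bf92f3577b34da6a3ce929d0e0e4736
span_id: 00f067aa0ba902b7
run_id: abc-123-def
step: pr-review.validate
attempt: 2                                # first attempt failed validation (see trace above)
status: ok
duration_ms: 200
schema_validation: pass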

Built-in Test Framework—Tests Live With Your Workflows

Visor includes an integration test framework. Write tests in the same YAML as your workflows, run them in CI, and catch regressions before they hit production.

pr-review.test.yaml
tests:
  pr-overview-returns-schema:
    trigger:
      event: pr_opened
      fixture: ./fixtures/small-pr.json
    mock:
      ai: ./mocks/overview-response.json
    assert:
      - "outputs['pr-overview'].bullets.length === 5"
      - "outputs['pr-overview'].risk != null"
  lint-fix-retries-on-fail:
    trigger:
      event: pr_opened
    mock:
      command: { exitCode: 1, then: 0 }
    assert:
      - "steps['lint-fix'].retries === 1"
      - "steps['lint-fix'].status === 'ok'"

Fixtures + mocks

Simulate PR payloads, AI responses, and command outputs without calling real services.

Assertions on outputs

Verify schema shapes, step statuses, retry counts, and routing decisions.

Run in CI

visor test ./workflows/ runs all tests and fails the build on regressions.

Replay production runs

Capture real inputs, replay them in tests, and assert the same outputs.

Mix Providers in One Workflow

In your YAML you're already combining multiple "providers" (step types) in a single run.

  • ai: Schema outputs, session reuse
  • claude-code: Tool-scoped reasoning
  • mcp: Semgrep, internal servers
  • github: Labels, comments, PR ops
  • http: Webhooks + polling
  • command: Deterministic shell steps
  • git-checkout: Isolated repo checkouts
  • human-input: Supervised flows via Slack/CLI
  • memory: State without external DB
  • workflow: Import and reuse workflows

You can keep AI steps narrow and predictable—most reliability comes from the workflow design.
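
For example, deterministic steps can sandwich a narrow AI step. A sketch, assuming url/method keys on the http provider and a hypothetical status endpoint:

steps:
  fetch-status:
    type: http                            # webhook/polling provider
    url: https://status.example.com/api/incidents   # hypothetical endpoint
    method: GET                           # url/method key names are assumptions
  summarize:
    type: ai
    depends_on: [fetch-status]
    schema: overview
    prompt: |
      Summarize open incidents:
      {{ outputs['fetch-status'] }}
  notify:
    type: command                         # deterministic shell step
    depends_on: [summarize]
    exec: "echo 'incident summary posted'"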

Try Visor Locally, Then Deploy the Same YAML

1. Run the example YAML
Clone the repo and run examples/pr-review.yaml locally.

2. Turn on GitHub automation
Enable issues + PR workflows with a GitHub App.

3. Run a Slack bot locally
Same pipeline, easier debugging.

Common Questions

Is Visor "deterministic AI"?

No. Visor makes the workflow behavior deterministic: explicit steps, constrained tools, validated outputs, bounded loops, and traceability. AI is a controlled step inside that system.

Does this replace CI?

Not necessarily. CI is still great for builds/tests/deploys. Visor complements CI when you need routing, human-in-the-loop steps, multi-provider automation, schema outputs, and observable agent workflows.

How do you prevent "one mega-agent with 100 tools"?

Workflows declare tools and MCP servers explicitly. AI steps can disable tools entirely or run with allowlists. Tool usage is auditable.
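
In YAML terms, each step carries its own allowlist. The allowedTools and maxTurns keys come from the example above; the empty tools: [] allowlist for disabling tools entirely is an assumed spelling:

steps:
  remediate:
    type: claude-code
    allowedTools: [Read, Edit]            # nothing outside this list is callable
    maxTurns: 3
    prompt: Fix only the reported lint errors.
  summarize:
    type: ai
    tools: []                             # assumption: empty allowlist disables tool use
    prompt: Summarize the changes without calling tools.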

How do teams customize without breaking standards?

Use extends and imports: org defaults live centrally, teams override per environment or repo. Policy gates enforce what must remain true.

Can we run this on-prem and use our preferred model?

Yes—Visor is designed to run on your infrastructure and to support provider/model choice (the workflow defines what you use).

How do we observe and debug runs?

OpenTelemetry tracing + log correlation are built-in (telemetry.enabled). You can inspect step timings, retries, loop iterations, and validation outcomes.

Build Workflows You Can Trust—and Prove

If you're moving toward agent-first development, Visor gives you the control plane: explicit steps, schemas, bounded loops, and full observability—so automation scales without chaos.