All use cases

Catch stuck PRs before they block your release

Zero monitors your GitHub merge queue, diagnoses why PRs are stuck, and alerts the right people — so your release pipeline never stalls silently.

Zero connects: GitHub, Slack

Why your release pipeline stalls and nobody notices

PR #9842 has been in the merge queue for two hours. CI failed on a flaky test that's unrelated to the PR's changes. Three more PRs are stacked behind it. Nobody noticed because everyone is heads-down on their own work, and GitHub doesn't send a notification when a PR gets stuck in the queue. The release is blocked. By the time someone discovers it at 4 PM, half the day's deploy window is gone. Zero catches this in minutes.

How to ask Zero to watch your merge queue

@Zero monitor the vm0-ai/vm0 merge queue. Check for PRs that have been in the queue for more than 30 minutes with failing CI. Diagnose the failure cause and alert the PR author in Slack.

How Zero detects and diagnoses stuck PRs

Zero checks the merge queue on your schedule
Zero queries GitHub's merge queue API and examines each queued PR — how long it's been waiting, whether CI is passing, and whether it's blocking other PRs behind it.
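GitHub exposes queue entries through its GraphQL API (the `mergeQueue` field on a repository). A minimal sketch of the wait-time filter described above might look like this; the entry shape (`enqueuedAt`, `ciState`, `prNumber`) is an illustrative assumption, not Zero's actual data model:

```python
from datetime import datetime, timedelta, timezone

def find_stuck_entries(entries, max_wait_minutes=30, now=None):
    """Return PR numbers of entries queued past the limit with failing CI.

    `entries` is assumed to be a list of dicts shaped like GraphQL
    mergeQueue entry nodes, e.g.:
      {"enqueuedAt": "2024-01-01T10:00:00Z", "ciState": "FAILURE", "prNumber": 9842}
    """
    now = now or datetime.now(timezone.utc)
    stuck = []
    for entry in entries:
        # GitHub timestamps end in "Z"; normalize so fromisoformat
        # parses them on Python versions before 3.11 as well.
        enqueued = datetime.fromisoformat(entry["enqueuedAt"].replace("Z", "+00:00"))
        waited_too_long = now - enqueued > timedelta(minutes=max_wait_minutes)
        if waited_too_long and entry["ciState"] == "FAILURE":
            stuck.append(entry["prNumber"])
    return stuck
```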
Zero diagnoses why stuck PRs are stuck
For any PR that's been queued too long or has failing checks, Zero reads the CI logs, identifies the specific failing test or check, and determines whether it's related to the PR's changes or a known infrastructure issue.
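One way to make the related-vs-unrelated call is a simple path heuristic: if the PR touched the failing test, or files near it, the failure is probably related to the change. This is a hypothetical sketch of that kind of triage, not Zero's actual diagnosis logic:

```python
def classify_failure(failing_test_path, changed_files, known_flaky):
    """Rough triage of a CI failure against a PR's changed files.

    All inputs are repo-relative paths; `known_flaky` is a set of test
    paths the team has flagged as flaky (an assumed convention).
    """
    if failing_test_path in known_flaky:
        return "known-flaky"
    # Treat the failure as related if the PR touched the failing test
    # itself or any file under the same top-level directory.
    top_level = failing_test_path.split("/")[0]
    touched_nearby = any(f.split("/")[0] == top_level for f in changed_files)
    if failing_test_path in changed_files or touched_nearby:
        return "likely-related"
    return "likely-unrelated"
```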
Zero alerts the right people with actionable context
Instead of a generic "PR is stuck" notification, Zero posts a detailed diagnosis to Slack: which check failed, why it failed, who authored the PR, and what action will unblock it. The right person sees it and acts — no detective work required.
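An alert of that shape could be assembled as a Slack Block Kit payload for `chat.postMessage`. The channel name and field contents below are illustrative placeholders:

```python
def build_alert(pr_number, author, check, reason, action, channel="#dev"):
    """Assemble a Slack chat.postMessage payload for a stuck-PR alert."""
    summary = f"PR #{pr_number} is stuck in the merge queue"
    details = (
        f"*PR #{pr_number}* by @{author} is stuck in the merge queue.\n"
        f"*Failed check:* `{check}`\n"
        f"*Why:* {reason}\n"
        f"*To unblock:* {action}"
    )
    return {
        "channel": channel,
        "text": summary,  # fallback text for notifications
        "blocks": [
            {"type": "section", "text": {"type": "mrkdwn", "text": details}},
        ],
    }
```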

Keep your pipeline flowing

Re-run a failed CI check
Ask Zero to re-run the specific failing check to clear a flaky test.
@Zero re-run the cli-e2e-03-runner check on PR #9842
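Behind a command like this, re-running a single check maps to GitHub's REST endpoint `POST /repos/{owner}/{repo}/check-runs/{check_run_id}/rerequest`. A stdlib-only sketch, with the token and check-run ID as placeholders:

```python
import urllib.request

API = "https://api.github.com"

def rerequest_check_url(owner, repo, check_run_id):
    """Build the REST URL for re-running one check run."""
    return f"{API}/repos/{owner}/{repo}/check-runs/{check_run_id}/rerequest"

def rerun_check(owner, repo, check_run_id, token):
    """POST the rerequest; requires write access to checks."""
    req = urllib.request.Request(
        rerequest_check_url(owner, repo, check_run_id),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 indicates the re-run was created
```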
Whitelist known flaky tests
Tell Zero which tests are known to be flaky so it doesn't over-alert.
@Zero note that cli-e2e-03-runner is a known flaky test — don't alert on it unless it fails 3 times in a row
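The fails-3-times-in-a-row rule can be sketched as a small tracker that suppresses alerts for allowlisted tests until a consecutive-failure threshold is hit. A hypothetical illustration, not Zero's implementation:

```python
from collections import defaultdict

class FlakyTracker:
    """Decide whether a failing check warrants an alert.

    Allowlisted flaky tests only alert after `threshold` consecutive
    failures; any other failing test alerts immediately.
    """

    def __init__(self, flaky_tests, threshold=3):
        self.flaky = set(flaky_tests)
        self.threshold = threshold
        self.streak = defaultdict(int)  # consecutive failures per test

    def should_alert(self, test, failed):
        if not failed:
            self.streak[test] = 0  # a pass resets the streak
            return False
        self.streak[test] += 1
        if test not in self.flaky:
            return True
        return self.streak[test] >= self.threshold
```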
Make it routine
Schedule merge queue checks to match your team's PR velocity.
@Zero every day at noon and 4pm, check the merge queue and alert on stuck PRs in #dev

Required integrations: GitHub and Slack

GitHub — read access to the merge queue, CI check status, and PR details. Optional write access to re-run failed checks.
Required
Slack — posts merge queue alerts with diagnosis details to your engineering channel.
Required

Best practices for merge queue monitoring

Set the check frequency to match your team's PR velocity — high-velocity teams need hourly checks, while most teams are fine with twice-daily checks.
Maintain a list of known flaky tests and tell Zero to exclude them from alerts. This prevents alert fatigue and keeps the signal strong.
Chain with auto-merge-releases for a full release pipeline: merge-queue-monitor catches stuck PRs, auto-merge-releases ships the release once the queue clears.