Launch HN: Relvy (YC F24) – On-call runbooks, automated
11 points by behat 2 hours ago | 10 comments
Hey HN! We are Bharath and Simranjit from Relvy AI (https://www.relvy.ai). Relvy automates on-call runbooks for software engineering teams. It is an AI agent equipped with tools that can analyze telemetry data and code at scale, helping teams debug and resolve production issues in minutes. Here’s a video: https://www.youtube.com/watch?v=BXr4_XlWXc0

A lot of teams are using AI in some form to reduce their on-call burden. You may be pasting logs into Cursor, or using Claude Code with Datadog’s MCP server to help debug. What we’ve seen is that autonomous root cause analysis is a hard problem for AI. This shows up in benchmarks: Claude Opus 4.6 is currently at 36% accuracy on the OpenRCA dataset, far below its performance on coding tasks.

There are three main reasons for this: (1) Telemetry data volume can drown the model in noise; (2) Data interpretation / reasoning is enterprise context dependent; (3) On-call is a time-constrained, high-stakes problem, with little room for AI to explore during investigation time. Errors that send the user down the wrong path are not easily forgiven.

At Relvy, we are tackling these problems by building specialized tools for telemetry data analysis. Our tools can detect anomalies and identify problem slices from dense time series data, do log pattern search, and reason about span trees, all without overwhelming the agent context.
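To make the anomaly-detection idea concrete, here is a minimal sketch (not Relvy's actual implementation; function names, thresholds, and data shapes are all illustrative) of one such tool: flagging a problem slice in dense time series data by comparing each slice's latest value against its own baseline, and returning only a compact summary rather than the raw series.

```python
from statistics import mean, stdev

def summarize_anomalous_slices(series_by_slice, baseline_window=10, threshold=3.0):
    """For each slice (e.g. shard, region), compare the latest value against
    that slice's own baseline and flag large deviations. Returns a compact
    summary dict instead of the raw time series, to keep agent context small."""
    flagged = {}
    for slice_name, values in series_by_slice.items():
        baseline = values[:baseline_window]
        mu, sigma = mean(baseline), stdev(baseline)
        latest = values[-1]
        # z-score of the latest point relative to the slice's own baseline
        z = (latest - mu) / sigma if sigma > 0 else 0.0
        if z >= threshold:
            flagged[slice_name] = {
                "latest": latest,
                "baseline_mean": round(mu, 2),
                "z_score": round(z, 1),
            }
    return flagged
```

The point of the sketch is the output shape: the agent sees "shard-a deviated 80+ standard deviations from its baseline", not thousands of raw data points.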

Anchoring the agent around runbooks means less open-ended agentic exploration and more deterministic steps that mirror what an experienced engineer would do. That results in faster analysis, and less cognitive load on engineers to review and understand what the AI did.

How it works: install Relvy on a local machine via docker-compose (or via Helm charts, or sign up on our cloud), connect your stack (observability and code), create your first runbook, and have Relvy investigate a recent alert.

Each investigation is presented as a notebook in our web UI, with data visualizations that help engineers verify the results and build trust in the AI. From there, Relvy can be configured to automatically respond to alerts from Slack.

Some example runbook steps that Relvy automates:

- Check so-and-so dashboard, see if the errors are isolated to a specific shard.

- Check if there’s a throughput surge on the APM page, and if so, is it from a few IPs?

- Check recent commits to see if anything changed for this endpoint.
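Runbook steps like these could be captured as a simple declarative structure that the agent walks in order. This is a hypothetical sketch, not Relvy's configuration format; every field name, tool name, and query is illustrative.

```python
# Hypothetical declarative runbook; field names, tool names, and queries
# are illustrative only, not Relvy's actual API.
RUNBOOK = [
    {
        "step": "check_error_dashboard",
        "question": "Are errors isolated to a specific shard?",
        "tool": "timeseries_slice_analysis",
        "query": {"metric": "http.errors", "group_by": "shard"},
    },
    {
        "step": "check_throughput_surge",
        "question": "Is there a throughput surge, and is it from a few IPs?",
        "tool": "timeseries_slice_analysis",
        "query": {"metric": "http.requests", "group_by": "client_ip"},
    },
    {
        "step": "check_recent_commits",
        "question": "Did anything change for this endpoint recently?",
        "tool": "code_search",
        "query": {"path": "api/", "since": "24h"},
    },
]

def next_step(completed):
    """Return the first step not yet completed, keeping execution deterministic."""
    for step in RUNBOOK:
        if step["step"] not in completed:
            return step
    return None
```

A fixed step order is one way to get the "less exploration, more determinism" property described above: the agent's next action is dictated by the runbook, not by open-ended planning.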

You can also configure AWS CLI commands that Relvy can run to automate mitigation actions, with human approval.
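A human-approval gate over an allowlist is one plausible shape for this. A minimal sketch, assuming an allowlist of command prefixes and injected approval/execution callbacks; none of this is Relvy's actual API, and the AWS commands shown are just examples of the kind of mitigation a team might permit.

```python
# Hypothetical approval gate for mitigation commands; the allowlist and
# callback signatures are illustrative, not Relvy's actual API.
ALLOWED_PREFIXES = (
    "aws ecs update-service",
    "aws autoscaling set-desired-capacity",
)

def run_mitigation(command, approve, execute):
    """Run `command` only if it matches the allowlist AND a human approves.
    `approve` and `execute` are injected callables so the gate is testable."""
    if not command.startswith(ALLOWED_PREFIXES):
        return ("rejected", "command not in allowlist")
    if not approve(command):
        return ("rejected", "human approval denied")
    return ("executed", execute(command))
```

The allowlist check runs before the human is ever asked, so an agent can never even propose a command outside the configured set.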

A little bit about us - we did YC back in fall 2024. We started our journey experimenting with continuous log monitoring using small language models, but that was too slow. We then invested deeply in solving root cause analysis effectively, and our product today is the result of about a year of work with our early customers.

Give us a try today. Happy to hear feedback, or about how you are tackling on-call burden at your company. Appreciate any comments or suggestions!


sanghyunp 4 minutes ago
Runbook automation is one of those things every team says they'll build internally and never does. After 6 years on backend teams, our "runbooks" were always a Notion page nobody updated. The hard part is always the boundary between what can be automated and what still needs human judgment.
reply
hrimfaxi 2 hours ago
How does this differ from cursor cloud agents where I can hook up MCPs, etc and even launch the agent in my own cloud to connect directly to internal hosts like dbs?
reply
behat 2 hours ago
Thanks. Yeah, Cursor / Claude Code + MCP is powerful. We differentiate on two fronts, mainly:

1) Greater accuracy with our specialized tools: Most MCP tools let agents query data directly, or run *QL queries - this overwhelms context windows given the scale of telemetry data. Raw data is also not great for reasoning - we’ve designed our tools to ensure that models get data in the right format, enriched with statistical summaries, baselines, and correlation data, so LLMs can focus on reasoning.
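As one illustration of the "summaries instead of raw data" idea (a sketch of the general technique, not Relvy's tooling; the masking rules are deliberately crude), raw log lines can be collapsed into templates with counts before they ever reach the model:

```python
import re
from collections import Counter

def log_patterns(lines, top_k=5):
    """Collapse raw log lines into templates by masking numbers and hex ids,
    then return the most frequent templates with counts -- a compact summary
    an LLM can reason over instead of thousands of near-duplicate lines."""
    templates = Counter()
    for line in lines:
        # Mask hex ids first, then plain integers, so both become "<*>"
        t = re.sub(r"0x[0-9a-fA-F]+|\d+", "<*>", line)
        templates[t] += 1
    return templates.most_common(top_k)
```

A million "timeout after 37ms on shard 12" lines become one template and a count, which fits in context where the raw lines never would.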

2) Product UX: You’ll also find that text-based outputs from general purpose agents are not sufficient for this task - our notebook UX offers a great way to visualize the underlying data so you can review the investigation and build trust in the AI.

reply
hrimfaxi 2 hours ago
To be clear, are the main differentiators basically better built-in MCPs and better UX? Not knocking just trying to understand the differences.

I have had incredible success debugging issues by just hooking up Datadog MCP and giving agents access to it. Claude/cursor don't seem to have any issues pulling in the raw data they need in amounts that don't overload their context.

Do you consider this a tool to be used in addition to something like cursor cloud agents or to replace it?

reply
behat 2 hours ago
For the debugging workflow you described, we would be a standalone replacement for cursor or other agents. We don't yet write code so can't replace your cursor agents entirely.

Re: differentiation - yes, faster, more accurate, and more consistent. Partially because of better tools and UX, and partially because we anchor on runbooks. On-call engineers can quickly see which steps the AI ran, what it found for each, and the time series graph that supports each finding.

Interesting that you have had great success with Datadog MCP. Do you mainly look at logs?

reply
esafak 50 minutes ago
They claim a 12% lead (from 36% to 48%) over Opus 4.6 in a RCA benchmark: https://www.relvy.ai/blog/relvy-improves-claude-accuracy-by-...
reply
behat 42 minutes ago
heh, I was just about to post the following on your previous comment re: reproducible benchmark results. Thanks for posting the blog.

With the docker images that we offer, in theory, people can re-run the benchmark themselves with our agent. But we should document that and make it easier.

In the end, you really have to evaluate on your own production alerts. Hopefully the easy install and setup helps.

reply
rishav 32 minutes ago
Woohoo!!! Congrats on the big launch y'all
reply
ramon156 2 hours ago
Congrats on the launch! I dig the concept, seems like a good tool :)
reply
behat 2 hours ago
Thank you :)
reply