At Catio, we build a copilot for cloud architecture. Early on, we realized something obvious but often overlooked:
A cloud architecture is a graph.
Services rely on databases, talk to one another through APIs, and move data through queues and pipelines. Configurations define how those pieces interact — what connects to what, and under what conditions. To truly reason about an architecture — whether you’re optimizing it, troubleshooting an outage, or planning a redesign — you have to understand those relationships as a graph.
That realization led us to a simple but critical need: an agent that could reason structurally about architecture — tracing dependencies, evaluating impact, and uncovering optimization opportunities.
So we built GraphQA — an agent that treats your systems as graphs and uses real algorithms to reason over them.
And while it started with cloud architecture, the same idea applies far beyond it. GraphQA can power any workflow where structure matters: mapping dependencies in microservice ecosystems, understanding data lineage, analyzing supply-chain risks, or modeling user journeys.
Wherever relationships define behavior, GraphQA helps agents see — and reason over — the structure behind the system.
LLMs are powerful at reasoning over text — they can summarize, explain, and even plan. But when it comes to structure, they struggle. A language model can describe relationships, not compute them.
Graphs fill that gap. They make relationships explicit and computable — paths, dependencies, hierarchies, communities — all the things that give systems their shape. This isn’t new; decades of graph theory have made structural reasoning a solved problem. What’s new is bringing that capability into an agent’s reasoning loop.
Instead of treating a graph as text to be parsed or summarized, GraphQA lets agents use real graph algorithms — exploring neighborhoods, tracing impact, detecting bottlenecks — alongside their language reasoning.
In practice, this means an agent can not only understand what your architecture is but also how it behaves when something changes.
That’s the philosophy behind GraphQA: bringing graph-native reasoning into an agent’s loop.
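To make the idea concrete, here is a minimal sketch of what "relationships as computable structure" means in practice. The graph, service names, and edges below are invented for illustration; NetworkX is the library GraphQA builds on:

```python
import networkx as nx

# Toy architecture graph: an edge points from a component
# to something it depends on. All names are illustrative.
g = nx.DiGraph()
g.add_edges_from([
    ("checkout-service", "orders-db"),
    ("checkout-service", "payments-db"),
    ("analytics-service", "checkout-service"),
])

# Relationships are explicit and computable, not just describable:
print(nx.has_path(g, "analytics-service", "orders-db"))       # True: a transitive dependency exists
print(nx.shortest_path(g, "analytics-service", "orders-db"))  # the dependency chain itself
```

A language model can only assert that such a chain probably exists; the graph algorithm proves it and returns the exact path.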
When we set out to answer questions about graphs, a few “obvious” paths came up — and we learned quickly why they fall short.
The approach that clicked was simpler than any of them: give the agent real graph algorithms as tools and let it call them directly. It was fast, interpretable, and agent-friendly, a foundation we could actually build on.
GraphQA isn’t a new query language — it’s an agentic interface for graph reasoning built on a modular, extensible stack. Instead of generating Cypher or Gremlin queries, it lets an agent reason directly through graph algorithms, chaining steps just like a human would.
At its core:

- An LLM that interprets the user's intent and turns it into structured graph queries.
- LangGraph, which orchestrates the reasoning loop.
- NetworkX (or another graph backend), which executes the actual algorithms.
- Langfuse, which logs every step for traceability.
Together, these pieces form a reasoning loop where language and structure meet — the LLM interprets intent, LangGraph manages the process, and NetworkX makes relationships computable.
Here’s what that looks like:
A user asks a question through the CLI or API. The GraphQA Agent, orchestrated by LangGraph, interprets the question and invokes the right graph tool (via NetworkX or another backend) to compute dependencies, paths, or patterns from the Graph Data Store. The results feed back into the reasoning loop and are logged in Langfuse for traceability.
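The dispatch step in that loop can be sketched in a few lines. This is an illustrative simplification, not GraphQA's actual internals: the tool registry, the `explore_neighborhood` helper, and the query shape are assumptions made for the example.

```python
import networkx as nx

# Illustrative tool: nodes within `depth` hops *against* dependency
# edges, i.e. everything that depends on the given node.
def explore_neighborhood(g, node_id, depth=2):
    reversed_g = g.reverse(copy=False)
    return nx.single_source_shortest_path_length(reversed_g, node_id, cutoff=depth)

# The agent emits a structured query; the dispatcher routes it
# to a real graph algorithm instead of generating query-language text.
TOOLS = {"explore_neighborhood": explore_neighborhood}

def run_query(g, query):
    tool = TOOLS[query["query_type"]]
    return tool(g, **query["parameters"])

g = nx.DiGraph([("checkout-service", "orders-db"),
                ("analytics-service", "checkout-service")])
result = run_query(g, {"query_type": "explore_neighborhood",
                       "parameters": {"node_id": "orders-db", "depth": 2}})
print(result)  # hop distance from orders-db to each dependent
```

The key design point is that the LLM never touches graph internals: it only chooses a tool and fills in parameters, and the algorithm does the structural work.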
The result: an agent that doesn’t just talk about your graph — it actually computes over it, combining the flexibility of natural language with the rigor of algorithmic reasoning.
Let’s look at how GraphQA compares to Cypher.
Question: “If this database fails, which services are impacted?”
Cypher:
MATCH (db:Database {name: "orders-db"})<-[:DEPENDS_ON*]-(s:Service)
RETURN DISTINCT s.name;
GraphQA:
{"query_type": "explore_neighborhood", "parameters": {"node_id": "orders-db", "depth": 3}}
Typical Output:
🧠 I found 12 services depending (directly or indirectly) on orders-db.
Top impacted: checkout-service, analytics-service, notifications-service.
Impact depth: up to 3 levels.
GraphQA automatically chose a neighborhood traversal algorithm (multi-hop dependency search) and computed the dependency graph.
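In NetworkX terms, that multi-hop dependency search reduces to a reverse traversal. A minimal sketch on a toy graph (the service names and edges are invented, not real output):

```python
import networkx as nx

# Edges point from a service to what it depends on.
g = nx.DiGraph([
    ("checkout-service", "orders-db"),
    ("notifications-service", "checkout-service"),
    ("analytics-service", "notifications-service"),
])

# Everything that transitively depends on orders-db,
# i.e. the Cypher pattern (db)<-[:DEPENDS_ON*]-(s).
impacted = nx.ancestors(g, "orders-db")

# Impact depth = longest dependency chain back to the database.
depth = max(nx.shortest_path_length(g.reverse(), "orders-db").values())

print(sorted(impacted), depth)
```

The agent's job is only to recognize that the question calls for this traversal; the algorithm supplies the guarantee that no dependent is missed.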
Question: “Which of my databases are over- or under-used?”
Cypher + Graph Data Science:
CALL gds.degree.stream({
  nodeProjection: 'Database',
  relationshipProjection: {
    DEPENDS_ON: {
      type: 'DEPENDS_ON',
      orientation: 'REVERSE'
    }
  }
})
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS database, score
ORDER BY score DESC;
GraphQA:
{"stat_type": "centrality", "top_k": 10}
Typical output:
📊 Found 5 databases with unusually high dependency counts:
- orders-db (21 connections)
- payments-db (17 connections)
2 under-used databases detected with low centrality scores:
- logs-db
- archive-db
GraphQA identified this as a centrality problem, ran degree and betweenness centrality algorithms, and summarized results — all automatically.
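The degree-centrality half of that analysis is one call in NetworkX. A toy sketch, with invented database names, equivalent to the reverse-oriented `gds.degree` projection above:

```python
import networkx as nx

# Edges point from a service to the database it depends on.
g = nx.DiGraph([
    ("svc-a", "orders-db"), ("svc-b", "orders-db"), ("svc-c", "orders-db"),
    ("svc-a", "payments-db"), ("svc-b", "payments-db"),
    ("svc-d", "logs-db"),
])

# In-degree = how many components depend on each database
# (the REVERSE orientation in the GDS query above).
usage = {n: d for n, d in g.in_degree() if n.endswith("-db")}
ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)

print(ranked)  # heavily used databases first, under-used last
```

Betweenness centrality (`nx.betweenness_centrality`) adds the complementary signal of which nodes sit on many dependency paths; GraphQA's job is choosing and combining these measures, not exposing them to the user.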
We built GraphQA because our agents needed to reason over structure — a gap LLMs and traditional RAG systems still can’t fill.
Graphs are everywhere: in cloud architectures, supply chains, social networks, biological systems — yet the ability to reason over them remains locked behind specialized query languages and vendor ecosystems.
GraphQA bridges that gap — giving agents a clean, universal interface for algorithmic reasoning over graphs, regardless of the backend.
We designed it for our own copilots first, because without graph-native reasoning, even the smartest LLM agents are effectively blind to structure.
GraphQA is our attempt at a middle ground: the rigor of real graph algorithms without the lock-in of specialized query languages, exposed through an interface agents can call natively.
GraphQA isn’t about replacing databases or query languages — it’s about giving agents access to the structural reasoning layer they’ve been missing.
GraphQA already powers our recommendation and chat modules at Catio — helping our agents trace dependencies, analyze impact, and make architectural reasoning explainable.
But we think it can be useful far beyond cloud architecture.
If you build with graphs, we’d love to hear from you.
We believe graphs are the reasoning substrate of modern systems — and with GraphQA, that reasoning is now accessible, agent-friendly, and open to everyone.