At Catio, we don’t talk about “AI-powered” workflows in vague terms. Our engineers and product leads are already using AI to improve the way we design, build, and ship software across architecture, integration, planning, and execution. This post captures how five of us use AI to move faster, improve quality, and preserve control.
Each example is real, in production, and evolved from our own development cycles. This is how we work with AI today, not in theory, but in practice. It's important to learn from each other: how are your teams using AI in their daily SDLC?
Working with AI every day still feels like having a superpower. This wasn’t one of those “AI built it for us” stories. There was no tradeoff between velocity and quality. It was human-led, AI-assisted, and engineered for both speed and reliability. - Iman Makaremi
This past month, we put that to the test one more time and the results were wild. In just 28 days, we built the core of Catio’s new recommendation engine and a bespoke multi-agent framework to power it.
Here's what came out of it:
- Documentation was developed in parallel with implementation.
- Every section went through a focused technical review cycle.
- Test suites were written for each functional unit and executed on every change.
We used a documentation-first approach, modular planning, and continuous testing to maintain speed without sacrificing structure.
By standard estimates of 600 to 1,000 tested Python lines per developer-month, this output represents roughly 45 to 75 person-months of work completed in four calendar weeks.
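The arithmetic behind that range can be checked directly. The post does not state the line count, so the figure below assumes roughly 45,000 tested lines, which is the output implied by the 45-to-75 person-month range at those rates:

```python
# Back-of-envelope check of the person-month estimate.
# Assumption (not stated in the post): the engine plus framework
# came to roughly 45,000 tested Python lines.
lines_of_code = 45_000

lo_rate, hi_rate = 600, 1_000  # tested lines per developer-month

# A slower rate implies more person-months, so the high end uses lo_rate.
person_months_low = lines_of_code / hi_rate   # 45.0
person_months_high = lines_of_code / lo_rate  # 75.0

print(f"{person_months_low:.0f} to {person_months_high:.0f} person-months")
```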
This outcome shows how disciplined engineering, paired with the right AI support, can produce extraordinary results.
This whole workflow has changed the way I think about development. I'm not just writing code, I'm collaborating with something that's good at repetition, recall, and generation. - Victor Kazakov, Lead Engineer
One great example of this accelerated workflow is our integrations with data sources. Integrations are high-leverage but high-friction work, and they are how our AI gets the quality data that feeds our recommendation engine. Our goal was to reduce manual overhead while improving consistency across systems, which made this a natural fit for AI-assisted tooling.
We use a retrieval-augmented generation (RAG) approach to accelerate and standardize how we build integrations. Engineers describe the integration in natural language. The system retrieves relevant documentation and schema metadata from a vector database. It then translates the request into a structured DSL query that feeds into our internal architecture inventory model.
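In miniature, the flow looks something like the sketch below. Every name here is illustrative: the real system uses a vector database and embeddings rather than keyword overlap, and Catio's actual DSL differs.

```python
# Sketch of the RAG-to-DSL flow: retrieve relevant source metadata,
# then translate the natural-language request into a structured query.
from dataclasses import dataclass

@dataclass
class Doc:
    source: str
    text: str

# Stand-in for a vector database; here, naive keyword-overlap scoring.
CORPUS = [
    Doc("aws_rds", "RDS instance metadata engine storage endpoints"),
    Doc("datadog", "Datadog monitors query thresholds notification"),
]

def retrieve(request: str, k: int = 1) -> list[Doc]:
    words = set(request.lower().split())
    scored = sorted(CORPUS, key=lambda d: -len(words & set(d.text.lower().split())))
    return scored[:k]

def to_dsl(request: str) -> str:
    """Translate a natural-language request into a structured DSL query."""
    docs = retrieve(request)
    source = docs[0].source if docs else "unknown"
    return f'FETCH components FROM "{source}" WHERE matches("{request}")'

print(to_dsl("pull RDS instance metadata"))
```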
To further streamline the process, we use Claude to generate scaffolds for new extractors, including connection logic, polling, and response mapping. This lets engineers focus on core logic and correctness instead of boilerplate.
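A generated scaffold of this kind might take roughly the following shape, with the three pieces named above split out: connection logic (injected), a polling loop, and response mapping. The class and field names are hypothetical, not Catio's actual interfaces.

```python
# Illustrative extractor scaffold: connection logic, polling,
# and response mapping, with the engineer filling in core logic.
import time
from typing import Any, Callable, Iterator

class Extractor:
    def __init__(self, fetch: Callable[[], list[dict[str, Any]]],
                 interval_s: float = 60.0):
        self.fetch = fetch          # connection logic supplied by the engineer
        self.interval_s = interval_s

    def map_response(self, raw: dict[str, Any]) -> dict[str, Any]:
        # Response mapping: raw payload -> fields the inventory model expects.
        return {"id": raw.get("id"), "kind": raw.get("type", "unknown")}

    def poll(self, cycles: int = 1) -> Iterator[dict[str, Any]]:
        for i in range(cycles):
            for raw in self.fetch():
                yield self.map_response(raw)
            if i < cycles - 1:
                time.sleep(self.interval_s)

records = list(Extractor(lambda: [{"id": "db-1", "type": "postgres"}]).poll())
print(records)  # [{'id': 'db-1', 'kind': 'postgres'}]
```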
Once the data is extracted, it's normalized into a unified model. This unlocks system-wide observability, dependency mapping, and architecture hygiene enforcement, all critical for our multi-agent framework.
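The normalization step can be sketched as follows. The field names and the shape of the unified model are assumptions for illustration; the point is that once every source maps into one schema, dependency mapping works across all of them.

```python
# Normalizing extractor output into one unified component model,
# so dependency mapping works regardless of the data source.
from dataclasses import dataclass, field

@dataclass
class Component:
    id: str
    kind: str
    source: str                      # which integration produced it
    depends_on: list[str] = field(default_factory=list)

def normalize(source: str, raw: dict) -> Component:
    return Component(
        id=raw["id"],
        kind=raw.get("type", "unknown"),
        source=source,
        depends_on=raw.get("deps", []),
    )

inventory = [
    normalize("aws", {"id": "api", "type": "service", "deps": ["db"]}),
    normalize("aws", {"id": "db", "type": "postgres"}),
]

# A system-wide dependency map built from the unified model.
edges = {c.id: c.depends_on for c in inventory}
print(edges)  # {'api': ['db'], 'db': []}
```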
The result is faster integration cycles, consistent implementation patterns, and system-wide visibility with minimal manual effort.
We’ve cut down on handoffs, reduced style drift, and maintained better fidelity from mockup to production code. Engineers still own the final implementation, but the starting point is faster, cleaner, and more consistent. - Devon Miller, Lead Full-Stack Engineer
We’ve been using AI to reduce the gap between design and implementation in our front-end workflow.
Using Figma's Dev Mode MCP, we connect Figma wireframes directly to Cursor. Instead of manually eyeballing pixels or hunting for the right styles, I can drop a selection from Figma into Cursor and get context-rich, design-aligned code scaffolds: components, tokens, structure, and all.
When we ask Cursor to scaffold UI components, it now pulls directly from our Design System. Typography, color, spacing, and structure come through automatically, aligned with our design standards.
The result is a tighter loop between design and working UI. We’ve cut down on handoffs, reduced style drift, and maintained better fidelity from mockup to production code.
Engineers still own the final implementation, but the starting point is faster, cleaner, and more consistent.
Claude as an AI co-pilot is absolutely helping us move faster without sacrificing quality. I believe we are close to a future where AI agents can own small, well-scoped microservices from start to finish. - Jack Cusick, Sr. Full-Stack Engineer
One of the biggest wins is combining Claude with Git hooks. It handles lint fixes and formatting automatically, which removes friction from cleanup. With checks in place like lint, unit tests, and style guides, everything lands in GitHub clean from the start.
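A minimal pre-commit hook along these lines might look like the sketch below. The tool choices (ruff, pytest) and check order are assumptions; any Git hook that exits nonzero blocks the commit, which is what keeps unclean code out of GitHub.

```python
#!/usr/bin/env python3
# Sketch of a .git/hooks/pre-commit script: run lint, formatting,
# and unit tests; a nonzero exit from any check blocks the commit.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "--fix", "."],   # lint with auto-fix
    ["ruff", "format", "."],           # formatting
    ["pytest", "-q"],                  # unit tests
]

def run_checks(runner=None) -> int:
    """Run each check in order; stop at the first failure."""
    run = runner or subprocess.call
    for cmd in CHECKS:
        if run(cmd) != 0:
            print(f"pre-commit: {' '.join(cmd)} failed", file=sys.stderr)
            return 1
    return 0

# Dry run with a stub runner; the real hook lets subprocess.call run the tools.
print("exit:", run_checks(runner=lambda cmd: 0))
```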
For more complex work, I treat Claude as a planning assistant first. I ask it to generate a step-by-step plan before touching code. I usually refine the plan over a few iterations, sometimes with a second opinion from ChatGPT, and save it alongside the implementation. Then I walk it through execution, step by step.
Prompting well is half the battle. I keep prompts small and scoped, usually working in a single thread until the task is complete. I also manage context by compacting threads manually at the right time. If it compacts too early, it can derail the session.
This workflow is not perfect. Claude is reliable when generating new files but can introduce unintended changes when editing existing code. In those cases, I stay hands-on or break the work into smaller units.
With proper linting, testing, and checkpoints, a future where AI agents own small, well-scoped services already feels achievable. The key is to keep the human in the loop. Not to micromanage, but to guide, correct, and maintain quality.
When I delegate a task, I do not say "just go." I say, "See this plan. Implement the section about X. Track your work separately without changing the original document." That structure keeps things modular and recoverable if context is lost.
This workflow has changed how I think about development. I am not just writing code. I am collaborating with something that is good at repetition, recall, and generation. That frees me up to focus on the architecture, trade-offs, and review layers.
As the product manager for a lean engineering team, I focus on efficiency and clear communication. To do this at scale, I use AI tools to automate routine tasks so I can focus more on strategic work. - Dipock Das, Sr. Product Manager
I built a custom MCP server to connect Notion, Jira, and our product's gRPC APIs. I also wired in CLI access to GitHub, which helps automate development tracking. My primary tools are Claude, Gemini, and Codex.
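The core idea of such a server is exposing internal systems as named tools an assistant can call. The toy dispatcher below illustrates that shape using only the standard library; the real server speaks the MCP protocol (typically via the official SDK), and the Jira and Notion responses here are fakes, not real API calls.

```python
# Toy sketch of the tool-dispatch idea behind a custom MCP server:
# internal systems registered as named tools, invoked via JSON requests.
import json
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}

def tool(name: str):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("jira_ticket_status")
def jira_ticket_status(key: str) -> dict:
    # A real implementation would call Jira's REST API here.
    return {"key": key, "status": "In Progress"}

@tool("notion_page_title")
def notion_page_title(page_id: str) -> dict:
    # A real implementation would call Notion's API here.
    return {"page_id": page_id, "title": "Q3 Roadmap"}

def handle(request: str) -> str:
    """Dispatch a JSON tool call, the way an MCP server routes requests."""
    req = json.loads(request)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps(result)

print(handle('{"tool": "jira_ticket_status", "args": {"key": "CAT-42"}}'))
```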
By integrating these tools, I reduce admin overhead, speed up research, and keep our roadmap tightly aligned with both execution and customer feedback. It saves hours every week and gives me more time to focus on high-leverage work.
Each of these workflows reflects a different layer of the architecture lifecycle, from code scaffolding and UI fidelity to integration generation and architecture verification. But they share a common thread. These are not AI experiments. They are production workflows that improve consistency, save time, and help us design better systems.
The goal is not just velocity. The goal is quality at velocity.
Catio builds software that helps architecture teams make better decisions, faster. Our own workflows reflect that same principle. We believe AI is most powerful when it is used by humans who know what good looks like and build systems that guide the rest.
We will be sharing more soon. Until then, feel free to reach out or follow along as we continue to refine how modern architecture is designed, built, and operated.
Want to see how Catio applies these principles to your tech stack?
Schedule a demo at catio.tech or get started here