The Rewrite Decision
Why we chose to rebuild our AI dashboard from scratch in November 2025 instead of iterating on v1.

November 21, 2025. Commit message: first commit. Two words that represented the hardest decision we’d made in four years of building.
Dashboard v1 had 2,892 commits. It served real users across four continents. It supported 30+ AI models, 100+ templates, knowledge bases with vector search, image generation, and organization management with role-based access. It was, by any reasonable measure, a working product.
We rewrote it from scratch.
The state of v1
By late 2025, Dashboard v1 was a capable but constrained system. The backend ran on Koa.js with 120+ REST endpoints. MongoDB held 31+ collections. The Mongoose ODM handled schema validation and relationships. The frontend was React with a custom state management layer.
Every piece worked. But the architecture had been designed for a chat product, not an agent platform. And by November 2025, we didn’t want to build a chat product anymore.
The problems were structural. Koa’s middleware model made it difficult to build modular service boundaries – everything was a middleware chain, and cross-cutting concerns like agent context, sandbox management, and tool execution didn’t fit neatly into that pattern. Mongoose’s schema enforcement, which had been valuable when we had straightforward CRUD operations, became a bottleneck when agent operations needed flexible, deeply nested document structures that changed with every iteration.
Most critically, v1 had no concept of sandboxed execution. Agents that could run code, manage files, and operate autonomously needed isolation – a sandbox where a misbehaving script couldn’t affect other users’ data. Retrofitting E2B sandbox integration into a codebase that assumed all operations were stateless API calls wasn’t impossible, but the architectural contortions would have made the code unmaintainable.
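To make "sandboxed execution" concrete, the shape of what v2 eventually needed looks roughly like the sketch below, here using the E2B code-interpreter SDK. Treat it as an illustration of the isolation model, not the actual v2 integration; the function name and error handling are hypothetical.

```typescript
// Illustrative sketch: every execution gets its own throwaway sandbox, so a
// misbehaving script stays contained. The SDK calls follow
// @e2b/code-interpreter's documented surface; treat details as assumptions.
import { Sandbox } from '@e2b/code-interpreter';

async function runUserCode(code: string): Promise<string> {
  const sandbox = await Sandbox.create(); // fresh, isolated environment per execution
  try {
    const execution = await sandbox.runCode(code);
    if (execution.error) {
      // Failures stay inside the sandbox; we only surface a description.
      return `error: ${execution.error.name}: ${execution.error.value}`;
    }
    return execution.text ?? '';
  } finally {
    await sandbox.kill(); // always tear the sandbox down
  }
}
```

Nothing like this existed in v1, and every operation it did support assumed a stateless request/response lifecycle.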
The decision framework
We didn’t make this decision lightly. There’s a famous essay by Joel Spolsky arguing that rewrites are the single worst strategic mistake a software company can make. He’s right, most of the time. Rewrites fail because teams underestimate the complexity hiding in the old codebase – the hundreds of edge cases that were fixed one bug report at a time, the implicit business logic embedded in conditional branches nobody remembers writing.
Our situation was different in one specific way: we weren’t rebuilding the same product. We were building a fundamentally different product that happened to serve similar users. Dashboard v1 was a multi-model chat interface with templates and knowledge bases. Dashboard v2 was an agent platform with sandboxed execution, a virtual file system, and autonomous task delegation.
The overlap was maybe 30%. Chat, user management, model selection – those carried over conceptually. Everything else was new: agent architecture, E2B integration, VFS, skills marketplace, background task chaining, evaluation framework.
We asked ourselves three questions:
Can we add agent capabilities to v1 without breaking existing features? The answer was technically yes, practically no. Every agent-related addition would fight the existing architecture.
How long would a rewrite take? We estimated 90 days to reach feature parity on the core capabilities, with agent features included. This assumed two senior developers working full-time with deep domain knowledge.
What do we lose by not iterating? Active users on v1 would need to be supported on the existing platform until migration. That was a maintenance cost, not a development cost.
The math worked. We started on November 21.
The first ten days
163 commits between November 21 and November 30. That number sounds aggressive because it was.
The technology choices were deliberate. NestJS 11 replaced Koa because NestJS’s module system maps cleanly to bounded contexts – auth module, chat module, agent module, sandbox module, each with its own controllers, services, and providers. React 19 replaced the older React version because Server Components and the new hooks simplified data fetching patterns. Tailwind 4 with shadcn/ui replaced the custom component library because we didn’t want to maintain UI primitives.
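As a rough illustration of that mapping, a bounded context in NestJS is a module that owns its own controller and providers. The names below are hypothetical, not the actual v2 modules:

```typescript
// Minimal sketch of one bounded-context module in NestJS.
import { Module, Controller, Get, Injectable } from '@nestjs/common';

@Injectable()
export class AgentService {
  // In a real service this would talk to MongoDB and the sandbox layer.
  listAgents() {
    return [{ id: 'writer', name: 'Writing agent' }];
  }
}

@Controller('agents')
export class AgentController {
  constructor(private readonly agents: AgentService) {}

  @Get()
  list() {
    return this.agents.listAgents();
  }
}

@Module({
  controllers: [AgentController],
  providers: [AgentService],
  exports: [AgentService],
})
export class AgentModule {}
```

Each context (auth, chat, agent, sandbox) gets the same shape, and the root module just composes them.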
The first decision that surprised us: we ripped out Mongoose on day two. Dashboard v1 used Mongoose for schema validation and relationship management. For v2, we went with the native MongoDB driver. No ORM. No schema enforcement at the database layer. TypeScript interfaces handled type safety at compile time. Runtime validation happened in NestJS DTOs with class-validator.
Why? Because agent operations produce unpredictable data structures. A code execution result might be a string, an object, a file reference, or an error with a stack trace. Mongoose schemas fight that flexibility. The native driver doesn’t care what shape your documents are.
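A minimal sketch of that split, with a hypothetical collection and DTO: the TypeScript interface carries compile-time shape, the class-validator DTO guards the API boundary, and the native driver stores whatever arrives.

```typescript
// Sketch of the "no ORM" approach; names and fields are illustrative.
import { MongoClient, ObjectId } from 'mongodb';
import { IsString, IsOptional } from 'class-validator';

// Compile-time shape only; the database never enforces it.
interface ExecutionResult {
  _id?: ObjectId;
  agentId: string;
  // Agent output can be any shape: string, object, file reference, error.
  output: unknown;
  createdAt: Date;
}

// Runtime validation happens at the API boundary, not the database layer.
export class CreateExecutionDto {
  @IsString()
  agentId!: string;

  @IsOptional()
  output?: unknown;
}

async function saveResult(client: MongoClient, dto: CreateExecutionDto) {
  const doc: ExecutionResult = {
    agentId: dto.agentId,
    output: dto.output,
    createdAt: new Date(),
  };
  // insertOne accepts whatever document shape we hand it.
  return client
    .db('dashboard')
    .collection<ExecutionResult>('executions')
    .insertOne(doc);
}
```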
By day nine, we had: chat with model selection, a virtual file system backed by MongoDB, workspaces for organizing conversations, internationalization in four languages (English, Russian, Chinese Simplified, Chinese Traditional), and file attachments with Google Cloud Storage. Not polished. Not production-ready. But structurally sound.
What carried over
The rewrite wasn’t starting from zero. It was starting from four years of product knowledge.
Protobuf contracts carried over unchanged. The service boundaries we’d defined for auth, billing, dashboard, and workers were still valid. Dashboard v2 spoke the same Protobuf language as v1, which meant it plugged into the existing shared infrastructure (auth-api, billing-api, RabbitMQ) without modification.
Domain knowledge carried over. We knew which features users actually used (chat, model selection, knowledge bases) and which they didn’t (half the templates, the document composer’s advanced formatting). v2 didn’t rebuild the features nobody used.
Infrastructure carried over entirely. Auth, billing, RabbitMQ messaging, Google Cloud Storage, Langfuse observability – all shared services remained untouched. Dashboard v2 was a new consumer of existing infrastructure, not a replacement for it.
Deployment patterns carried over. Same Docker multi-stage builds, same Kubernetes manifests via Pulumi, same GitHub Actions workflows (adapted for the new repository).
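For flavor, "same Kubernetes manifests via Pulumi" means v2 is simply another Deployment defined in TypeScript alongside the existing services. The image, namespace, and ports below are placeholders:

```typescript
// Illustrative Pulumi sketch: the new dashboard is just one more Deployment
// in the existing cluster, defined the same way as every other service.
import * as k8s from '@pulumi/kubernetes';

const appLabels = { app: 'dashboard-v2' };

export const deployment = new k8s.apps.v1.Deployment('dashboard-v2', {
  metadata: { namespace: 'dashboard' },
  spec: {
    replicas: 2,
    selector: { matchLabels: appLabels },
    template: {
      metadata: { labels: appLabels },
      spec: {
        containers: [
          {
            name: 'dashboard-v2',
            image: 'gcr.io/example/dashboard-v2:latest', // placeholder image
            ports: [{ containerPort: 3000 }],
          },
        ],
      },
    },
  },
});
```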
What we left behind
Koa.js and its middleware chain. NestJS's dependency injection and module system were a better fit for a complex service with dozens of providers.
Mongoose and its schema enforcement. The native MongoDB driver gave us the flexibility that agent operations demanded.
120+ REST endpoints that encoded two years of incremental feature additions. v2’s API was designed from scratch with agent operations as a first-class concern, not bolted on.
The entire frontend. v1’s React frontend had accumulated years of state management complexity. v2 started with TanStack Query for server state, Zustand for client state, and shadcn/ui for components. Clean separation from day one.
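Sketched with hypothetical names, that separation puts server data behind TanStack Query hooks and keeps UI state in a small Zustand store:

```typescript
// Sketch of the server-state / client-state split; names and the endpoint
// are illustrative, not the real v2 code.
import { useQuery } from '@tanstack/react-query';
import { create } from 'zustand';

// Client state: UI concerns only, owned by Zustand.
interface WorkspaceUiState {
  selectedWorkspaceId: string | null;
  selectWorkspace: (id: string) => void;
}

export const useWorkspaceUi = create<WorkspaceUiState>((set) => ({
  selectedWorkspaceId: null,
  selectWorkspace: (id) => set({ selectedWorkspaceId: id }),
}));

// Server state: fetched, cached, and revalidated by TanStack Query.
export function useConversations(workspaceId: string | null) {
  return useQuery({
    queryKey: ['conversations', workspaceId],
    queryFn: async () => {
      const res = await fetch(`/api/workspaces/${workspaceId}/conversations`);
      if (!res.ok) throw new Error('failed to load conversations');
      return res.json();
    },
    enabled: workspaceId !== null,
  });
}
```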
100+ templates that looked impressive in a feature list but represented a maintenance burden. v2 replaced templates with agents – instead of a “Blog Post Template” that fills in blanks, a writing agent that understands context and iterates.
The pace
The commit history tells the story better than any narrative:
- November: 163 commits. Foundation: chat, VFS, workspaces, i18n, file attachments.
- December: 201 commits. Agents, E2B sandbox integration, skills marketplace, background tasks.
- January: Evaluation framework, billing migration, agent-builder, scheduling system.
- February: LikeClaw variant, local auth, credits-only billing, production deployment.
706 commits in 88 days. 177 version tags, which means roughly two releases per day. Two people – Marina Gonokhina (355 commits) and Alex Su (163 commits) – produced roughly 73% of the total output. The rest came from the broader team.
That pace was only possible because we knew what we were building. The rewrite wasn’t an exploration. It was a reconstruction with better materials. Every architectural decision was informed by mistakes we’d already made in v1.
The CLAUDE.MD file
One detail that might seem trivial but mattered: Dashboard v2 got its own CLAUDE.MD file. 463 lines of context for AI coding assistants – project structure, naming conventions, API patterns, testing requirements, deployment procedures.
This file existed because we used Claude as a development tool throughout the rewrite. Having a comprehensive project context file meant AI-assisted coding was consistent with our patterns. Every generated controller followed the same NestJS conventions. Every generated test used the same setup patterns.
In a rewrite where speed matters, anything that reduces the cognitive load of maintaining consistency pays dividends. CLAUDE.MD was one of those things.
What the rewrite taught us
The conventional wisdom is that rewrites are dangerous because you lose institutional knowledge embedded in code. That’s true when you rewrite a product you don’t understand. It’s not true when the team that built v1 is the same team building v2.
We didn’t lose the knowledge in those 2,892 commits. We distilled it. The edge cases that v1 handled through 120+ endpoints are handled in v2 through 40+ agents with proper error boundaries. The billing complexity that accumulated over three years was rebuilt in a week because we’d already made every billing mistake possible.
The rewrite wasn’t the risky decision. Continuing to iterate on an architecture that couldn’t support our product vision – that was the risky decision.
88 days from first commit to production deployment. Not because we’re fast. Because we’d already spent four years learning what to build. The rewrite just let us build it right.
Alexey Suvorov
CTO, AIWAYZ
10+ years in software engineering. CTO at Bewize and Fulldive. Master's in IT Security from ITMO University. Builds AI systems that run 100+ microservices with small teams.
LinkedIn