
The LikeClaw Variant

How we turned a cloud-dependent AI platform into a self-contained product with local auth and billing in four days.

By Alexey Suvorov · Updated · 6 min read

February 14, 2026. Dashboard v2 was 85 days old, with 632 commits behind it and a working AI agent platform. But it couldn’t stand alone. Everything we’d built depended on five shared services spread across our Google Kubernetes Engine cluster. Auth lived in one container. Billing lived in another. The LLM proxy ran as its own deployment. RabbitMQ tied them together. None of it could be peeled apart without surgery.

We needed a self-contained product. Four days later, we had one.

The dependency problem

Dashboard v2 was born as a child of our existing infrastructure. The Fulldive variant – our internal name for the full-stack deployment – treated the dashboard as one node in a larger system. User authentication happened through an external auth-api that issued JWTs. Billing went through an external billing-api that communicated via RabbitMQ with five handler types: wizard, approval, users, organizations, and management. LLM requests routed through an external proxy that tracked tokens and costs.

This architecture made sense when we were running three products on one cluster. Shared services meant shared maintenance, shared monitoring, shared scaling. One auth service for everything. One billing pipeline for everything.

But shared services also mean shared deployment. You can’t ship one product without shipping all of them. And if you want to deploy somewhere new – say, Cloudflare Pages for a lightweight frontend – the whole dependency graph comes with you.

The LikeClaw variant needed to break that graph.

The DEPLOYMENT_VARIANT pattern

The solution was an environment variable: DEPLOYMENT_VARIANT. Two possible values. Two completely different runtime configurations.

| Variant  | Auth                    | Billing                         | LLM                    | Frontend         | Backend |
| -------- | ----------------------- | ------------------------------- | ---------------------- | ---------------- | ------- |
| Fulldive | External auth-api (JWT) | External billing-api (RabbitMQ) | External proxy         | GKE              | GKE     |
| LikeClaw | Local email/code auth   | Local Stripe credits            | Local OpenRouter proxy | Cloudflare Pages | GKE     |

The pattern itself isn’t clever. Feature flags have existed for decades. What made it work was the modularity we’d accidentally built into Dashboard v2 during its first 85 days. Auth, billing, and LLM routing were already behind interfaces. The external implementations made HTTP calls to other services. The local implementations did the same work inline.
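In TypeScript terms, the switch looks roughly like this. A minimal sketch only: the interface and module paths are illustrative stand-ins, not the actual Dashboard v2 code.

```typescript
// Illustrative sketch of the DEPLOYMENT_VARIANT switch. The AuthProvider
// interface and module paths are hypothetical stand-ins for the real
// auth, billing, and LLM interfaces.
import { externalJwtAuth } from "./auth/external"; // calls the shared auth-api
import { localEmailCodeAuth } from "./auth/local"; // email + verification code

export type DeploymentVariant = "fulldive" | "likeclaw";

export const VARIANT: DeploymentVariant =
  (process.env.DEPLOYMENT_VARIANT as DeploymentVariant) ?? "fulldive";

export interface AuthProvider {
  // Resolve a session token to a user id, or null if unauthenticated.
  authenticate(token: string): Promise<string | null>;
}

// Chosen once at startup; the rest of the app only sees the interface,
// which is what made the swap possible.
export const auth: AuthProvider =
  VARIANT === "likeclaw" ? localEmailCodeAuth : externalJwtAuth;
```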

Seven commits told the story:

  • feat: add local email/code auth for likeclaw variant
  • feat: add local Stripe billing for likeclaw variant (credits-only)
  • feat: add local LLM proxy module for likeclaw variant
  • feat: per-model credit pricing for LLM proxy
  • feat: configure likeclaw frontend for Cloudflare Pages deployment
  • feat: configurable UI branding for likeclaw variant
  • feat: adapt LLM model schema to match existing DB docs

Each commit replaced one external dependency with a local module. No shared code was broken. The Fulldive variant kept working exactly as before.

Local auth in a day

The Fulldive auth flow involved a standalone service, its own PostgreSQL database, JWT issuance, token refresh endpoints, and OAuth2 providers. Replicating all of that would have taken weeks.

We didn’t replicate it. We built the smallest possible auth system that could serve a standalone product: email and code. No passwords. No OAuth. A user enters their email, receives a verification code, and gets a session. That’s it.

The entire auth module shipped in a single commit. No separate service. No separate database. User records stored in the same MongoDB that Dashboard v2 already used. Session management through the same Redis instance that handled caching and streaming state.
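A minimal sketch of that flow on top of node-redis and the MongoDB driver. The collection name, key formats, TTLs, and the mailer are assumptions, not the shipped code.

```typescript
// Sketch of email + verification-code auth on the existing MongoDB and Redis.
// Assumes mongo.connect() and redis.connect() run at startup; sendEmail,
// collection name, key formats, and TTLs are illustrative assumptions.
import crypto from "node:crypto";
import { MongoClient } from "mongodb";
import { createClient } from "redis";
import { sendEmail } from "./mailer"; // hypothetical mailer helper

const mongo = new MongoClient(process.env.MONGO_URL!);
const redis = createClient({ url: process.env.REDIS_URL });
const users = mongo.db("dashboard").collection("users");

export async function requestCode(email: string): Promise<void> {
  const code = crypto.randomInt(100000, 1000000).toString(); // 6-digit code
  // Short-lived code; the Redis TTL doubles as the expiry window.
  await redis.set(`auth:code:${email}`, code, { EX: 600 });
  await sendEmail(email, `Your verification code is ${code}`);
}

export async function verifyCode(email: string, code: string): Promise<string | null> {
  const expected = await redis.get(`auth:code:${email}`);
  if (!expected || expected !== code) return null;
  await redis.del(`auth:code:${email}`);

  // User record lives in the same MongoDB the dashboard already uses.
  await users.updateOne(
    { email },
    { $setOnInsert: { email, credits: 0, createdAt: new Date() } },
    { upsert: true },
  );
  const user = await users.findOne({ email });

  // Session lives in the same Redis instance as caching and streaming state.
  const sessionId = crypto.randomUUID();
  await redis.set(`auth:session:${sessionId}`, String(user!._id), { EX: 60 * 60 * 24 * 30 });
  return sessionId;
}
```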

Was it as full-featured as the Fulldive auth service? Not even close. Did it need to be? No.

Stripe credits without the billing service

The Fulldive billing system was the most complex piece we had to replace. It handled subscription tiers, plan management, usage tracking, and organization-level billing through a dedicated API that communicated with the main platform via RabbitMQ message queues. Five different message handler types coordinated state between services.

The LikeClaw variant needed something simpler: pay-as-you-go credits. No subscriptions. No plans. No message queues.

We used Stripe’s inline price_data instead of pre-created Stripe products. This meant we didn’t need to set up product catalogs in Stripe’s dashboard – the price was calculated at checkout time based on how many credits the user wanted. One Stripe webhook handler for successful payments. Credit balance stored as a field on the user document.
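A sketch of what that looks like with the Stripe Node SDK. The credit price, metadata keys, URLs, and the addCredits helper are assumptions.

```typescript
// Sketch of credits-only billing with inline price_data, so no product
// catalog has to exist in the Stripe dashboard. Pricing, URLs, and the
// addCredits helper are illustrative assumptions.
import Stripe from "stripe";
import { addCredits } from "./credits"; // hypothetical helper: $inc on users.credits

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const CENTS_PER_CREDIT = 1; // assumed pricing: 100 credits = $1

export async function createCreditCheckout(userId: string, credits: number) {
  return stripe.checkout.sessions.create({
    mode: "payment", // one-off payment: no subscriptions, no plans
    line_items: [
      {
        quantity: 1,
        price_data: {
          currency: "usd",
          unit_amount: credits * CENTS_PER_CREDIT, // price computed at checkout time
          product_data: { name: `${credits} credits` },
        },
      },
    ],
    metadata: { userId, credits: String(credits) },
    success_url: "https://example.com/billing/success", // placeholder URLs
    cancel_url: "https://example.com/billing/cancel",
  });
}

// One webhook handler: on successful payment, add the purchased credits
// to the balance stored on the user document.
export async function handleWebhook(rawBody: Buffer, signature: string) {
  const event = stripe.webhooks.constructEvent(
    rawBody,
    signature,
    process.env.STRIPE_WEBHOOK_SECRET!,
  );
  if (event.type === "checkout.session.completed") {
    const session = event.data.object as Stripe.Checkout.Session;
    const { userId, credits } = session.metadata ?? {};
    if (userId && credits) {
      await addCredits(userId, Number(credits));
    }
  }
}
```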

Free-tier detection was a conditional check: if the user hadn’t purchased credits and was within the free allowance, skip the credit deduction. Admin credit grants used a simple endpoint that looked up users by email and added to their balance.

Per-model credit pricing added the cost dimension. Each LLM model had a credit cost per thousand tokens, configured in the model schema. When a request completed, we calculated the token count, multiplied by the model’s credit rate, and deducted from the user’s balance. DALL-E was disabled entirely for the LikeClaw variant – the cost per image generation didn’t fit the credit model. Replicate models got their own credit deduction logic.
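The arithmetic behind both pieces is small. A sketch of the deduction step, with field names like creditsPerKToken and freeTokensRemaining as illustrative assumptions rather than the real schema:

```typescript
// Sketch of per-model credit pricing plus the free-tier check. Field names
// and the users collection handle are illustrative assumptions.
import { users } from "./db"; // hypothetical handle to the existing users collection

interface ModelDoc {
  id: string;
  creditsPerKToken: number; // credit cost per 1,000 tokens, from the model schema
}

interface UserDoc {
  _id: string;
  credits: number;
  hasPurchasedCredits: boolean;
  freeTokensRemaining: number; // free allowance, in tokens
}

export function creditsForUsage(model: ModelDoc, tokens: number): number {
  // Round up so partial thousands still cost something.
  return Math.ceil((tokens / 1000) * model.creditsPerKToken);
}

export async function settleRequest(user: UserDoc, model: ModelDoc, tokens: number) {
  // Free-tier detection: no purchased credits and still inside the allowance
  // means the deduction is skipped entirely.
  if (!user.hasPurchasedCredits && user.freeTokensRemaining >= tokens) {
    await users.updateOne({ _id: user._id }, { $inc: { freeTokensRemaining: -tokens } });
    return;
  }
  const cost = creditsForUsage(model, tokens);
  await users.updateOne({ _id: user._id }, { $inc: { credits: -cost } });
}
```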

The Google Play in-app purchase verification endpoint was a future-proofing decision. We don’t have a mobile app yet, but the endpoint exists for when we do.

The LLM proxy that isn’t a proxy

The Fulldive LLM proxy was a standalone service that handled model routing, provider failover, token counting, and cost tracking. It was arguably the most important piece of shared infrastructure – every AI request from every product passed through it.

For LikeClaw, we replaced it with a local module that routes requests directly to OpenRouter. OpenRouter already handles multi-model routing, so we didn’t need to rebuild that logic. The local module handles authentication (adding the OpenRouter API key), request formatting, streaming response passthrough, and credit deduction.
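Stripped down, the module is little more than an authenticated fetch with the stream piped through. A sketch against OpenRouter's OpenAI-compatible chat endpoint; the callback shape and error handling are assumptions:

```typescript
// Sketch of the local "proxy": forward a chat request straight to OpenRouter,
// stream the response back, then settle credits. The onChunk callback and
// error handling are illustrative assumptions.
export async function proxyChat(
  body: { model: string; messages: unknown[] },
  onChunk: (chunk: Uint8Array) => void,
): Promise<void> {
  const upstream = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`, // the module adds the key
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ ...body, stream: true }),
  });
  if (!upstream.ok || !upstream.body) {
    throw new Error(`OpenRouter error: ${upstream.status}`);
  }

  // Pass the streamed response through untouched; the frontend already speaks it.
  const reader = upstream.body.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    onChunk(value);
  }
  // Credit deduction happens after the stream ends, once usage is known.
}
```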

The model schema needed adaptation. Dashboard v2’s model documents had evolved from the Fulldive format, and the LikeClaw variant needed to match the existing MongoDB documents without a migration. One commit – feat: adapt LLM model schema to match existing DB docs – handled the translation layer.

Cloudflare Pages for the frontend

Moving the frontend to Cloudflare Pages required more than just changing the deployment target. The entire build pipeline needed variant-specific configuration.

We created dedicated Vite build modes: likeclaw and likeclaw-dev. Each mode set the right API endpoint, feature flags, and brand assets. BrowserRouter’s basename was derived from Vite’s base config, so the app could run under any path prefix. Brand asset paths – logos, favicons, Open Graph images – were prefixed with BASE_URL so they’d resolve correctly regardless of where the frontend was hosted.
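A sketch of the Vite side, with the env variable name and the define flag as assumptions:

```typescript
// vite.config.ts sketch: per-variant build modes feeding the base path,
// which drives import.meta.env.BASE_URL, asset prefixes, and the router
// basename. VITE_BASE_PATH and __DEPLOYMENT_VARIANT__ are assumptions.
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  const env = loadEnv(mode, process.cwd());
  const isLikeclaw = mode === "likeclaw" || mode === "likeclaw-dev";
  return {
    // Everything downstream (asset URLs, router basename) derives from base.
    base: env.VITE_BASE_PATH ?? "/",
    define: {
      __DEPLOYMENT_VARIANT__: JSON.stringify(isLikeclaw ? "likeclaw" : "fulldive"),
    },
  };
});
```

Building with `vite build --mode likeclaw` (or `likeclaw-dev`) picks up the matching env file, and the app mounts `BrowserRouter` with `basename={import.meta.env.BASE_URL}` so routes and asset paths agree on the same prefix.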

The feat: configurable UI branding for likeclaw variant commit handled the visual identity. Different logo, different color accents, different product name in the header. Same React app, same components, same layout. The branding layer was thin by design.

Cloudflare Pages deployment was a config file and a build command. No Docker. No Kubernetes. No ingress controllers. Push to a branch and the frontend is live on a global CDN.

The tradeoffs of self-contained vs distributed

Four days gave us a working standalone product. But self-contained comes with costs.

Duplication. The local auth module and the external auth service solve the same problem differently. Bug fixes don’t automatically propagate. If we discover a session management vulnerability in one, we have to remember to fix it in the other.

Feature drift. The Fulldive billing system supports subscriptions, organization-level billing, and plan management. The LikeClaw billing system supports credits. Over time, features will diverge further. Eventually, the variants might share very little billing logic.

Testing surface. Every variant multiplies the test matrix. We now test the same codebase in two configurations. CI runs both. Staging environments exist for both. The DEPLOYMENT_VARIANT flag is simple, but the testing infrastructure it requires isn’t.

These are acceptable tradeoffs for our situation. We needed a standalone product fast. The variant pattern delivered that without forking the codebase – both variants live in the same repository, share the same core logic, and deploy from the same CI pipeline.

What four days taught us

The LikeClaw variant worked because of decisions we made months earlier. The interface boundaries between auth, billing, and LLM routing weren’t accidental – they reflected a separation of concerns that happened to make replacement possible.

If we’d tightly coupled authentication to a specific JWT library, we couldn’t have swapped it. If billing had been woven into the request lifecycle instead of isolated behind a service boundary, we couldn’t have replaced it with a credit check.

The lesson isn’t about the DEPLOYMENT_VARIANT pattern. It’s about building systems where the boundaries are in the right places. When the boundaries are right, you can replace what’s behind them in four days. When they’re wrong, you rewrite.

We’ve done both. This time, we got to replace.

Alexey Suvorov

CTO, AIWAYZ

10+ years in software engineering. CTO at Bewize and Fulldive. Master's in IT Security from ITMO University. Builds AI systems that run 100+ microservices with small teams.
