Every month, we get 2-3 inquiries that start the same way: "We have an app. It kind of works. Our developer left. We need someone to fix it and get it to production."
The story behind the story is usually worse. The "kind of works" means it works in the demo environment with test data. In production, with real users and real data volumes, it falls apart. The "developer left" means the relationship ended badly and there's no documentation, no deployment guide, and sometimes no access to the hosting accounts.
GuardianRx was one of these projects. It was a compliance tracking platform for pharmaceutical disposal operations — DEA-regulated, audit-heavy, with real legal consequences for errors. David K., the CEO, had invested $45,000 with a previous team over 6 months. What he had: a prototype that could demo the core workflow. What he didn't have: a product that could survive a DEA audit.
Step 1: The Codebase Audit (Don't Skip This)
Before we write a single line of code, we run a structured codebase audit. This costs $2,000-$5,000 and takes 1-2 weeks. It's the best money you'll spend on a rescue because it determines the strategy: targeted refactoring, partial rewrite, or full rewrite.
What We Assess
1. Data Model & Database
- Are the core entities correctly modeled? (users, resources, relationships)
- Is the data normalized appropriately? (over-normalization kills performance; under-normalization creates inconsistencies)
- Are there indexes on commonly queried fields?
- Is there migration history or was the schema built manually?
2. Architecture & Code Quality
- Separation of concerns: is business logic in the controllers? (common in junior code)
- Error handling: do failures propagate gracefully or crash the app?
- Authentication & authorization: proper implementation or roll-your-own?
- API design: consistent patterns or ad-hoc endpoint creation?
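To make the first check concrete: "business logic in the controllers" means the route handler validates, applies rules, and writes to the database all in one place. The usual fix is to move the rules into a plain function the handler calls. A minimal Node.js sketch — `createDisposalEvent` and its validation rules are hypothetical, not from any real codebase:

```javascript
// Before: everything inline in the Express-style route handler.
// app.post('/events', (req, res) => { /* validation + rules + DB writes all here */ });

// After: the rule lives in a plain, independently testable function.
function createDisposalEvent({ siteId, substance, quantity }) {
  if (!siteId) throw new Error('siteId is required');
  if (typeof quantity !== 'number' || quantity <= 0) {
    throw new Error('quantity must be a positive number');
  }
  // A real service would persist this; here we just return the normalized record.
  return { siteId, substance, quantity, status: 'pending_witness' };
}

// The route handler becomes a thin adapter between HTTP and the service.
function handleCreateEvent(req, res) {
  try {
    res.status(201).json(createDisposalEvent(req.body));
  } catch (err) {
    res.status(400).json({ error: err.message });
  }
}
```

The payoff is that the business rule can now be unit-tested without spinning up a server, which matters later when we add tests to a codebase that has none.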
3. Security
- SQL injection vulnerabilities (parameterized queries or raw string concatenation?)
- Authentication bypass potential (JWT validation, session management)
- Secrets management (hardcoded API keys? .env files committed to git?)
- Input validation (or lack thereof)
4. Infrastructure & DevOps
- Deployment process: automated CI/CD or manual SSH deploys?
- Environment configuration: reproducible or "works on my machine"?
- Monitoring: any alerting on errors, performance, or uptime?
- Backups: automated? Tested? Restorable?
5. Testing
- Test coverage: any automated tests at all?
- Critical path coverage: are the money-making features tested?
- Test quality: do tests actually assert behavior or just check that code runs?
GuardianRx Audit Results
Here's what we found in the GuardianRx codebase:
- Data model: Salvageable — Core entities (disposal events, witness sessions, compliance records) were correctly modeled. Some missing relationships, but fixable.
- Architecture: Needs restructuring — All business logic was in Express route handlers. No service layer. No middleware for auth. But the API endpoints mapped to real workflows.
- Security: Critical failures — No audit logging (DEA requires this). JWT tokens never expired. No role-based access. API keys hardcoded in frontend JavaScript.
- Infrastructure: Missing — No CI/CD. Deployed via SSH to a single EC2 instance. No backups. No monitoring.
- Testing: Zero — Not a single automated test.
Verdict: Targeted refactoring, not full rewrite. The data model was sound and 65+ API endpoints existed that mapped to real business workflows. Rewriting would have meant rebuilding 875 hours of work. Instead, we restructured the architecture, added the missing security and compliance layers, and built proper infrastructure around the existing code.
The Rewrite vs. Refactor Decision Framework
| Signal | Refactor | Rewrite |
|---|---|---|
| Data model | Core entities are correct | Fundamentally wrong relationships |
| Business logic | Logic is correct, just poorly organized | Logic is wrong (misunderstood domain) |
| Tech stack | Appropriate for the domain | Wrong tool (e.g., PHP for real-time video) |
| Existing features | 50+ endpoints that work | Few features, mostly broken |
| Timeline pressure | Need production in <10 weeks | Can wait 12-16 weeks |
The 70% rule
In our experience across 50+ projects, roughly 70% of broken MVPs are salvageable through targeted refactoring. The other 30% need partial or full rewrites — usually because the data model is fundamentally wrong or the tech stack can't support the core requirements (e.g., real-time collaboration built on a request-response framework with no WebSocket support).
The GuardianRx Rescue: 8 Weeks to Production
Here's what we actually did, week by week:
Week 1-2: Stabilization
- Set up CI/CD pipeline (GitHub Actions → AWS ECS)
- Created staging environment (production clone for safe testing)
- Rotated all compromised credentials (API keys, database passwords, JWT secrets)
- Added basic monitoring (Datadog for errors, uptime, and API response times)
- Wrote tests for the 5 most critical user flows
Week 3-4: Security & Compliance Layer
- Built comprehensive audit logging system (every record access logged with who, what, when, why)
- Implemented RBAC with 4 role levels matching DEA requirements
- Added WebRTC-based remote video witnessing (key compliance feature that was missing)
- Implemented biometric authentication for witness sessions
- Encrypted all sensitive data at rest (DEA record requirements)
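The audit-logging layer above can be sketched as an Express-style middleware: every request records who, what, when, and why before the handler runs. The field names, the `x-access-reason` header, and the in-memory `logStore` are illustrative, assuming an earlier auth middleware has already attached `req.user`:

```javascript
// In-memory sink for illustration; production writes to append-only storage.
const logStore = [];

// Express-style middleware signature: (req, res, next).
function auditLog(req, res, next) {
  logStore.push({
    who: req.user ? req.user.id : 'anonymous',
    what: `${req.method} ${req.path}`,
    when: new Date().toISOString(),
    why: req.headers['x-access-reason'] || 'unspecified',
  });
  next();
}
```

Mounting it once (e.g. `app.use(auditLog)` after authentication) gives the "every record access logged" guarantee without touching the 65+ existing handlers — one reason middleware was the right shape for this rescue.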
Week 5-6: Architecture Restructuring
- Extracted business logic from route handlers into service layer
- Built middleware for authentication, authorization, and audit logging
- Added input validation across all 65+ endpoints
- Optimized database queries (added indexes, rewrote N+1 queries)
- Automated 100% of DEA compliance checks (previously manual)
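The N+1 rewrite is worth showing, because it's the single most common performance bug we find in rescue projects: code that works with demo data and collapses at production volume. A sketch with a stand-in `db` object (table and column names are illustrative):

```javascript
// N+1 pattern: one round trip per site. Fine with 5 sites in a demo,
// catastrophic with 5,000 in production.
async function fetchEventsNaive(db, siteIds) {
  const events = [];
  for (const id of siteIds) {
    events.push(...(await db.query('SELECT * FROM disposal_events WHERE site_id = ?', [id])));
  }
  return events;
}

// Batched rewrite: a single round trip with an IN clause.
async function fetchEventsBatched(db, siteIds) {
  const placeholders = siteIds.map(() => '?').join(', ');
  return db.query(
    `SELECT * FROM disposal_events WHERE site_id IN (${placeholders})`,
    siteIds
  );
}
```

Same rows back, but query count drops from N to 1 — which is why these rewrites, plus indexes on the filtered columns, fixed the timeouts without any schema change.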
Week 7-8: Production Hardening & Launch
- Load testing (simulated 100 concurrent disposal sessions)
- Penetration testing (focused on compliance-critical flows)
- Automated backups with tested restore procedure
- Documentation for David's team (deployment guide, architecture overview, API docs)
- Production deployment with zero-downtime switchover
Result: 70% faster disposal operations (remote video witnessing eliminated travel time), 100% DEA compliance automated, and David had a product he could demo to regulators with confidence.
What Rescue Projects Actually Cost
| Rescue Type | Cost | Timeline | When to Choose |
|---|---|---|---|
| Codebase audit only | $2,000-$5,000 | 1-2 weeks | Before committing to any rescue |
| Targeted refactoring | $15,000-$35,000 | 6-10 weeks | Sound data model, poor code quality |
| Partial rewrite | $25,000-$45,000 | 8-12 weeks | Some modules salvageable, others not |
| Full rewrite | $30,000-$60,000+ | 10-16 weeks | Wrong architecture, wrong stack, wrong data model |
The most common mistake: jumping to a full rewrite because the code "looks bad." Bad code that implements correct business logic is worth more than you think. It represents hundreds of hours of domain learning that you'll have to repeat in a rewrite.
How to Avoid Needing a Rescue in the First Place
Prevention is cheaper than rescue. If you're about to hire a development team for your MVP:
- Ask for a production deployment plan before they write code. If they can't describe how the app gets from their laptop to production, that's a red flag.
- Require automated tests for critical paths. Not 100% coverage — just the flows that make money or handle compliance.
- Own your infrastructure accounts. AWS, domain registrar, GitHub — all in YOUR name, with your team having admin access.
- Get weekly deployable builds. If they can't deploy after week 2, something is wrong.
- Have someone else review the code. A $500-$1,000 mid-project review can catch structural problems before they become expensive.
We wrote more about this in our $35k MVP vs $200k MVP comparison — the difference between a professional MVP and a prototype that needs rescuing is architecture decisions, not features.
Frequently Asked Questions
Should I rewrite from scratch or refactor?
Refactor if the data model is sound and the business logic is correct but poorly implemented. Rewrite if the architecture fundamentally can't support your requirements. Roughly 70% of broken MVPs can be rescued through refactoring; full rewrites cost 2-3x more. Always start with a codebase audit to make a data-driven decision.
How much does it cost to rescue a broken MVP?
Audit: $2,000-$5,000 (1-2 weeks). Targeted refactoring: $15,000-$35,000 (6-10 weeks). Full rewrite: $30,000-$60,000+ (10-16 weeks). The audit determines which path makes financial sense.
What are the signs my MVP needs professional rescue?
Features that break in production with real data, no documentation or deployment guide, database timeouts on moderate volumes, security vulnerabilities blocking enterprise sales, zero automated tests, and original team unavailable or unresponsive.
How long does a typical rescue take?
Targeted refactoring: 6-10 weeks. Partial rewrite: 8-12 weeks. Full rewrite: 10-16 weeks. GuardianRx went from broken prototype to production in 8 weeks.
Can you rescue a codebase in an unfamiliar tech stack?
Usually yes for React/Node.js, Laravel/Vue, and Python/Django. For exotic stacks, the audit still works but the rescue might involve migrating to a more maintainable stack. The audit determines whether the current stack is part of the problem.
Next Steps
If your MVP is broken, the worst thing you can do is throw more money at the same approach. Start with an audit. Understand what you have. Then make a data-driven decision about rescue vs. rewrite.
- Book a free 30-minute assessment call — we'll discuss your situation and tell you honestly whether a rescue makes sense
- $35k MVP vs $200k MVP — what separates a professional MVP from a prototype
- HIPAA-Compliant App Development — if your rescue involves healthcare compliance
- SaaS MVP Development — when building new is the right call