In my experience, a tech project can lose 3 to 6 months before the very first PR. Not because of code, but because of incomplete framing: wrong problem, wrong scope, wrong assumptions.
As a tech partner, my role does not start at sprint 1. It starts much earlier, when a founder tries to turn an idea into something buildable. SaaS, web app, mobile app: the product type changes, but framing questions stay the same. Across multiple projects, this is how I approach that phase.
Because what gets decided before code is often more important than the code itself.
Why most tech projects drift before coding starts
The classic pattern is always the same: ambitious ideas, an oversized backlog, a stack chosen by habit, then a first iteration that ships many features and very little impact.
The naive approach usually means choosing "modern" technologies, listing 20 to 40 features, then starting development to "learn by building". The business problem surfaces immediately: when real user needs are not clarified, you optimize for production speed, not delivered value. Costs rise fast, trust drops fast, and you end up in the worst-case scenario: lots of code, very little impact.
Take INSTAT (Madagascar's National Institute of Statistics), where I worked as a developer in 2021, and which we will use as the running example in this article. The initial request was: "We want a system to recruit survey agents at scale."
This is exactly the kind of brief that leads to the wrong outcome without proper framing. The project was already moving toward building, in the MVP, an AI personality test for each survey agent. It looked impressive on paper, but it was neither aligned with the top business priority nor with the budget. The real challenge, much more practical, was to filter agents by geography and assign those already on site to drastically reduce logistics costs.
This kind of technical drift, where teams build complex functionality instead of solving the real business problem, is something I have seen across many teams and very different projects.
To avoid these expensive drifts and protect client budgets, I built a strict framing method. Before opening a single backlog item, we must clarify three things:
- What is the real problem?
- What do we build first? (MVP)
- How do we know it works? (KPI)
Frame the problem first
The starting point is never the stack. It is a testable business sentence:
"For [persona], I want to reduce [pain] from [current state] to [target state] within [timeframe]."
This sentence looks simple. In practice, it forces three decisions teams often have not made yet: who the real user is, which problem is measurable, and what outcome counts as success.
Concrete examples:
"For an SME HR manager (30-200 employees), I want to reduce initial CV screening from 4 days to 24 hours within 60 days of deployment."
"For an SME finance director, I want to reduce overdue invoices from 31% to 15% within 90 days."
"For a logistics operations manager, I want to reduce late deliveries from 22% to 10% within 12 weeks."
But the sentence alone is not enough. It gives direction, not scale. To know what we can realistically build within budget and timeline, we need contextual numbers: number of users, volume, and maximum acceptable cost.
| Project | Context | Max phase-1 cost |
|---|---|---|
| HR | 1,200 applications/month, 5 recruiters | $18,000 over 3 months |
| Finance | 3,500 invoices/month, 2 managers | $14,000 over 3 months |
| Logistics | 480 deliveries/day, 40 vehicles | $22,000 over 4 months |
For me, this combo, testable business sentence + scale, avoids the most expensive mistake: confusing requested features with the expected outcome. The request is often "a dashboard", while the outcome is "faster, more accurate decisions". Without this distinction from day one, you build screens instead of solving problems.
For INSTAT, the business sentence was: "For a survey coordinator at INSTAT, I want to reduce logistics deployment costs by 70% by prioritizing local survey-agent assignment for the next campaign." This sentence killed the AI personality test in the MVP. It made it obvious that the top priority was logistics and finance, not psychometrics.
MVP scope: decide together what is in and what is out
The recurring tension is this: product vision is often broad and valid, but phase 1 cannot carry everything.
My MVP definition:
An MVP (Minimum Viable Product) is not a smaller version of the final product. It is the minimum experience required to validate the business promise.
My role as a tech partner is to help design that experience so we do not build in a vacuum. Instead of writing a wish-list of features, we start from the indispensable user journey, the famous "Jobs To Be Done".
1. The indispensable user story
I start by describing the most basic user story that answers the problem we framed earlier, with a sentence like:
When [situation], I want [action], so that [benefit].
For INSTAT: "When I have a survey campaign to run, I want to filter qualified survey agents by city, so I can optimize logistics costs by assigning people already on site first."
Then we keep only the strictly indispensable steps for that story to work end-to-end. Anything outside this direct narrative goes into a "later" column. No "predictive matching", no "global dashboard". Just the core flow that validates the business hypothesis.
2. Prioritize what remains with RICE
Even with strict discipline, peripheral ideas will come back to the table ("It would still be nice to have PDF export"). To evaluate them without emotional debate, we use decision frameworks, and I particularly like RICE:
- Reach: How many users will this feature impact at launch?
- Impact: How strongly does it contribute to validating our primary KPI?
- Confidence: How confident are we in our impact and effort estimates?
- Effort: How many development days does it require?
This gives us an objective score. If a feature requires high technical effort but has low reach (for example, a complex multi-role admin space for a team of two founders), its score collapses. It is mathematically excluded from phase 1.
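As a minimal sketch, the standard RICE formula, score = (Reach × Impact × Confidence) / Effort, can be turned into a simple scoring script. The feature names and numbers below are illustrative, not real backlog data:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: int         # users impacted at launch
    impact: float      # contribution to the primary KPI (0.25 = minimal, 3 = massive)
    confidence: float  # confidence in our estimates (0.0 to 1.0)
    effort: float      # development effort in person-days

    @property
    def rice_score(self) -> float:
        # Standard RICE formula: effort divides the score, so a heavy
        # feature needs strong reach and impact to survive phase 1.
        return (self.reach * self.impact * self.confidence) / self.effort

# Illustrative backlog items
backlog = [
    Feature("Filter agents by city", reach=500, impact=3.0, confidence=0.9, effort=5),
    Feature("PDF export", reach=40, impact=0.5, confidence=0.8, effort=8),
    Feature("Multi-role admin space", reach=2, impact=1.0, confidence=0.5, effort=20),
]

for f in sorted(backlog, key=lambda f: f.rice_score, reverse=True):
    print(f"{f.name}: {f.rice_score:.2f}")
```

Run on these numbers, the core flow scores 270, PDF export 2, and the admin space 0.05: the debate settles itself.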
This double approach (strict narrative + RICE for exceptions) has three benefits:
- It depersonalizes decisions: we do not reject a founder's idea, we test it against the core story and an objective score.
- It protects budget: we validate the core business hypothesis before investing in comfort features.
- It accepts constraints: we decide together what will remain manual or imperfect at launch, and we own that choice.
A product that reliably covers the indispensable flow gets adopted. A "complete" product with 40% reliability gets bypassed.
KPI: define success before coding
A KPI (Key Performance Indicator) is the key metric that objectively tells us whether the product we are building actually solves the target problem. In practice, it is essential to define a few precise KPIs before writing the first line of code: we already decided what to build with the MVP, now we decide how we will measure success.
1. OMTM (One Metric That Matters)
Instead of a complex dashboard, we isolate the OMTM: the single metric that determines MVP success. The big advantage of strict framing is that this metric emerges naturally.
If INSTAT's promise is to optimize logistics costs, our OMTM is obvious: average logistics cost per survey.
It becomes a powerful decision tool during sprints. When a debate starts around adding an unplanned feature, the answer no longer depends on opinions or egos. We ask one question: Does this idea directly impact our OMTM? If not, it goes into the "later" backlog.
2. Counterbalance and health metrics
The OMTM should not be optimized blindly. If we minimize logistics costs by recruiting only inexperienced agents, data quality will collapse.
That is why I add a few secondary KPIs to frame the OMTM and monitor system health:
- Counterbalance metric: Rejection rate of completed surveys. If logistics cost drops but rejection rate explodes, OMTM success is an illusion.
- Technical health metric: Filtering response time. If the app freezes for 10 seconds on every city search, coordinators will bypass the tool.
These extra KPIs keep the OMTM in check so we gain business value without damaging the rest of the experience.
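To make this concrete, here is a sketch of how the OMTM and its guard metrics could be checked together each sprint. The thresholds and field names are hypothetical and would be agreed with the client before sprint 1:

```python
from dataclasses import dataclass

@dataclass
class CampaignMetrics:
    logistics_cost_per_survey: float  # OMTM, in dollars
    rejection_rate: float             # counterbalance: fraction of rejected surveys
    p95_filter_latency_ms: int        # technical health: city-filter response time

# Illustrative thresholds, not real INSTAT figures
OMTM_TARGET = 12.0          # target average logistics cost per survey
MAX_REJECTION_RATE = 0.08   # above this, OMTM success is an illusion
MAX_LATENCY_MS = 2000       # above this, coordinators will bypass the tool

def evaluate(m: CampaignMetrics) -> list[str]:
    """Return the list of alerts; an empty list means a healthy sprint."""
    alerts = []
    if m.logistics_cost_per_survey > OMTM_TARGET:
        alerts.append("OMTM off target: logistics cost too high")
    if m.rejection_rate > MAX_REJECTION_RATE:
        alerts.append("Counterbalance breached: survey rejection rate exploding")
    if m.p95_filter_latency_ms > MAX_LATENCY_MS:
        alerts.append("Health breached: filtering too slow")
    return alerts

print(evaluate(CampaignMetrics(10.5, 0.04, 800)))  # healthy sprint: no alerts
print(evaluate(CampaignMetrics(9.0, 0.15, 600)))   # cheap but rejected: one alert
```

The point is not the script itself, but the discipline: the OMTM never ships without its counterweights.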
Architecture, stack, and conventions: the how
Once we know what problem to solve, what to build, and how to measure success, we still need to decide how to build it. Not to over-engineer, but because most sprint 2 and sprint 3 blockers come from the same blind spot: an architecture that cannot handle field reality.
Architecture is driven by business constraints
Foundational architecture decisions for a tech product should always clarify at least four elements:
- Architectural style: Modular monolith, microservices, serverless, hexagonal architecture. For most early-stage products, a well-modularized monolith is the most professional choice.
- Communication paradigm: Synchronous (REST, GraphQL) vs asynchronous (message broker, event bus). If parts of your business flow can run later (email, PDF generation, analysis), an internal event bus keeps the system decoupled and much more resilient.
- Data strategy: Single database or polyglot persistence? Relational storage (PostgreSQL) for structured data, optionally object/document storage for files, and a separate search engine if needed. The MVP nightmare is constant data remodeling because relationships were not anticipated.
- API and interface contracts: Define how frontends, mobile clients, or partners talk to the system (API versioning, unified response format, deprecation policy).
These choices are not random. They are dictated by the constraints we defined earlier:
- Scale and budget drive architectural style. A tight MVP budget rules out microservices from day one because they inflate infrastructure cost. Start with a modular monolith.
- MVP scope drives data strategy and interface contracts. If the main user story serves a mobile client and a web dashboard, strict API contracts are required from day one.
- Expected growth drives scalability planning. This is not about over-architecting for millions of users who do not exist yet; it is about avoiding a technical wall in six months. If the project expects massive usage spikes (e.g. ticketing), the architecture should separate read/write flows early, or move toward serverless.
- Special constraints dictate communication patterns. In the INSTAT project, lack of field connectivity immediately required an offline-first architecture synchronized asynchronously with the server through events, automatically ruling out a purely synchronous setup.
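To illustrate the offline-first idea from the INSTAT constraint, here is a minimal sketch of the pattern: business events are written to a local queue first and flushed to the server only when connectivity returns. The class and event names are hypothetical; a real implementation would use durable local storage (e.g. SQLite) rather than an in-memory queue:

```python
import json
from collections import deque

class OfflineEventQueue:
    """Minimal offline-first sketch: record locally, sync asynchronously."""

    def __init__(self, send):
        self.pending = deque()  # would be durable storage in a real app
        self.send = send        # function that pushes one serialized event upstream

    def record(self, event_type: str, payload: dict) -> None:
        # Always write locally first; never block the user on the network.
        self.pending.append({"type": event_type, "payload": payload})

    def flush(self) -> int:
        """Called when the device regains connectivity; returns events synced."""
        synced = 0
        while self.pending:
            event = self.pending[0]
            try:
                self.send(json.dumps(event))
            except ConnectionError:
                break  # still offline; keep remaining events for the next flush
            self.pending.popleft()
            synced += 1
        return synced

# Usage: record assignments in the field, sync when back online
sent = []
queue = OfflineEventQueue(send=sent.append)
queue.record("agent_assigned", {"agent_id": 42, "city": "Antananarivo"})
queue.record("agent_assigned", {"agent_id": 7, "city": "Toamasina"})
print(queue.flush())  # 2
```

A purely synchronous setup would have frozen the app on every field action; this shape keeps coordinators productive with zero bars of signal.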
Stack is driven by human constraints
Once architecture is sketched (for example, "we need a monolith with a relational DB and an async worker"), we pick tools to build it. And here, human constraints often rule everything.
- Team size and seniority: Will the project start with one developer or a team of five? What language is the current team strongest in?
- Maintainability: If you rely on freelancers, will you find enough developers on this stack in six months?
The "perfect" stack on paper is a complete disaster if the team needs three months to master it. Choose the stack where the team can deliver value fastest today.
Conventions: the rules of the game set before development
Even with a strong architecture, a team without shared rules eventually gets stuck. Beyond tool choice, you need a conventions baseline to avoid endless debates and avoidable bugs.
For a founder, this can feel secondary, but it is actually what protects team velocity and reliability:
- Naming conventions: Define in advance how files, variables, and database tables are named.
- Code organization: Decide folder structure so each developer does not invent a personal architecture by sprint three.
- Error handling: Define strict rules to separate "user errors" (e.g. "Agent already assigned") from "system errors" (e.g. "Server is offline"). This leads to clear messages instead of opaque crash screens.
- Validation flow: Put automated rules in place before code reaches production (automated tests, mandatory review). This is your safety net to avoid breaking things.
This is not bureaucracy. It saves cognitive load so the team can focus exclusively on creating value and improving the OMTM during sprint execution.
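To show what the error-handling convention looks like in practice, here is a sketch. The exception names and the `assign_agent` rule are hypothetical, borrowed from the INSTAT example; the convention, not the names, is the point:

```python
class UserError(Exception):
    """Expected business-rule violation; its message is safe to show as-is."""

class TechnicalError(Exception):
    """Unexpected system failure; logged in full, user sees a generic message."""

def assign_agent(agent: dict, assigned_ids: set) -> None:
    # Hypothetical business rule from the INSTAT example
    if agent["id"] in assigned_ids:
        raise UserError("Agent already assigned")
    assigned_ids.add(agent["id"])

def handle(action, *args) -> str:
    """Single convention applied at every entry point (API route, job, CLI)."""
    try:
        action(*args)
        return "OK"
    except UserError as e:
        return f"Error: {e}"  # clear, actionable message for the user
    except Exception:
        # TechnicalError or anything unexpected: never leak internals
        return "Error: something went wrong on our side, please retry"

assigned = {1, 2}
print(handle(assign_agent, {"id": 1}, assigned))  # Error: Agent already assigned
print(handle(assign_agent, {"id": 3}, assigned))  # OK
```

Agreed once before sprint 1, this rule means every new endpoint fails the same way, and no one debates error formats in code review.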
Note: This section on architecture and conventions is a simplified overview (the technical "HOW" is more complex in reality). My goal is not to turn you into a senior developer, but to give you a practical lens to evaluate, from the outside, whether your tech team is asking the right questions and working the right way. As a founder, be uncompromising on the WHAT (problem and MVP) and the WHY (OMTM). For the HOW, trust a technical professional who can turn business constraints into a robust technology foundation.
Summary: move to execution
For me, framing a tech project means having:
- A clear business problem: a value sentence, not a vague feature.
- An explicit MVP scope: what we build, and especially what we refuse to build.
- OMTM and counterbalances: the compass to settle debates during development.
- Architecture decisions: deduced from business constraints, not technology fashion.
- Rules of engagement: so the tech team can deliver without friction.
After years of working in tech, I can say this: if you respect this process, your project is equipped to deliver concrete value without blowing up budget or timelines.
If you are building a tech product (SaaS, web app, mobile app) and need an experienced partner to challenge your vision, frame your MVP, and design solid architecture foundations, let's talk. My support starts with a free 30-minute framing session, with no commitment, designed to eliminate blind spots and save you months of development.

