From Idea to MVP: The Honest 7-Step Reality (Lessons from Building AgriSuite)

A Raw, Honest Reflection on How I Turned Chaos Into Clarity While Building AgriSuite Ecosystem


The Real Problem Nobody Talks About

You’re sitting in your apartment at midnight, surrounded by half-empty coffee cups and a laptop glowing with endless browser tabs. You’ve got an idea. It’s good. Actually, it’s really good.

But here’s what nobody tells you: most great ideas die, not because they’re bad, but because they became too ambitious before they became real.

I spent six months building what I thought would revolutionize agriculture in Vietnam. Features on top of features. A beautiful design system. Integrations with weather APIs, satellite imagery, AI crop disease detection—the works.

Then I showed it to a farmer.

He looked at my dashboard for thirty seconds and asked, “Can this just tell me when to water my field? And can it work without Wi-Fi?”

That question haunted me for weeks.

Not because it was simple. But because I’d spent half a year building everything except what he needed.

This is the story of how I learned to build backwards—from real problems to real solutions—and how that one shift changed everything.


Why This Matters: The MVP Myth vs. The MVP Reality

The MVP Myth

  • An MVP is a “stripped-down version” of your full vision
  • You should be 80% sure before building
  • Faster coding = faster success
  • Launch when it’s polished enough
  • One good launch beats ten small iterations

The MVP Reality

  • An MVP is a learning machine, not a product
  • You should be curious about one specific problem
  • Faster learning = faster product-market fit
  • Launch when you’re slightly embarrassed
  • A thousand tiny wins beat one perfect launch
  • Validation happens with humans, not hypotheticals

The difference between these two frameworks is everything.


Step 1: The Spark — Turning Frustration Into Direction

Every meaningful product starts the same way: someone is frustrated.

For me, it wasn’t a complicated realization. I was working with agricultural cooperatives in Vietnam, sitting in fields at sunrise, watching farmers make critical decisions based on:

  • A weather app they didn’t fully understand
  • Price spreadsheets shared on WhatsApp (with broken formulas half the time)
  • SMS alerts from five different suppliers (contradicting each other)
  • Handwritten notes in notebooks that got lost or ruined by rain

The chaos wasn’t glamorous. It wasn’t even a “billion-dollar problem” when I first saw it. But it was real.

The Most Important Insight

The spark doesn’t come from a market size report or a venture pitch template. It comes from noticing what people actually struggle with every day.

I didn’t spend three months researching the Vietnamese agricultural tech market. I spent three weeks in fields. That’s when the spark hit.

Lesson 1: An idea isn’t validated until you can articulate it in one sentence to someone who lives the problem. If they don’t nod, you haven’t found the real problem yet.

My original one-liner: “Farmers have fragmented data across multiple tools. This fragmentation leads to poor decisions and lost revenue.”

But after talking to a dozen farmers, it evolved to: “Farmers need simple, offline-first decision tools that tell them what to do today.”

See the difference? The second one is specific. Actionable. Real.


Step 2: Validation — Testing If Anyone Actually Cares

Here’s where most founders trip up: they validate the idea, not the pain point.

“Is agritech a good market?” Yes.

“Do farmers need software?” Yes.

“Will they pay for it?” Probably.

But those are the wrong questions.

The right question is: Will THIS farmer change how they work for THIS specific solution?

How I Actually Did Validation

I didn’t build prototypes in PowerPoint. I built them in Figma—low-fidelity wireframes that looked like sketches, not polished designs.

Then I did something radical: I printed them out.

On paper.

I sat with farmers in their homes and fields. They literally tapped on paper screens with their fingers. No keyboard. No mouse. No “instructions on how to use this prototype.”

I watched their confusion. I watched their hesitation. I watched what they tried to click that didn’t exist.

What I learned:

  • Farmers didn’t want AI insights. They wanted simple alerts.
  • They didn’t understand “predictive indexes.” They understood “spray on Friday.”
  • They didn’t care about “beautiful dashboards.” They cared about “works in areas with no Wi-Fi.”
  • The feature I spent two weeks designing (a financial planning module)? Zero interest.
  • The feature I dismissed as “too simple” (a single daily todo list)? Everyone asked for it.

Validation Tools I Wish I’d Known About

  • Figma (free tier): Clickable prototypes without code—perfect for field testing
  • Typeform or SurveyMonkey: Structured feedback from 20-30 farmers in parallel
  • Hotjar: Session recording and heatmaps to see where users get confused
  • Maze: Usability testing with A/B variants
  • Google Forms + Sheets: Free, offline-capable, perfect for rural areas with spotty internet

Lesson 2: Validation is about watching, not asking. People say one thing but do another. Watch what they do.


Step 3: Scope — The Ruthless Art of Cutting Features

My original AgriSuite roadmap had 12 modules.

I’m going to list them because I need you to feel the temptation I felt:

  1. Crop tracking & health monitoring ✅ (Kept)
  2. Market pricing & trends
  3. Weather forecasts ✅ (Kept)
  4. Soil analysis
  5. Certification management
  6. IoT sensor integration
  7. Financial planning & expenses
  8. Community forums
  9. Pest identification with AI
  10. Logistics & supply chain
  11. Insurance claims management
  12. AI-driven recommendations

The MVP shipped with 2 features.

Two.

Out of twelve.

Every module we cut was a module I believed in. Each felt like removing a limb from my vision. But here’s what I realized: every feature we didn’t build bought us two more weeks to make the core features perfect.

The Feature Priority Framework I Actually Used

Instead of “importance,” I ranked everything on a 2×2 matrix:

|             | High Effort | Low Effort  |
|-------------|-------------|-------------|
| High Impact | Do 3rd      | Do 1st      |
| Low Impact  | Cut         | Maybe later |

This forced brutal honesty. Three modules looked important in my head but scored “low impact × high effort.” They got cut immediately.

Four modules were “low impact × low effort”—I was tempted to include them. But every one I added slowed down iteration on the core.

The rule I still use today: For every new feature, you must remove something else.

This constraint forces trade-off thinking. Instead of “Can we build this?” you ask “Should we build this instead of making the core 10% better?”

Nine times out of ten, the answer is “no.”
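The matrix translates directly into code. Here is a small illustrative sketch of applying it to a backlog (the feature names and 1-10 scores are hypothetical, not AgriSuite’s real estimates):

```python
# Illustrative sketch of the impact/effort matrix as code.
# Feature names and 1-10 scores are hypothetical, not the real backlog.

def quadrant(impact: int, effort: int) -> str:
    """Map 1-10 impact/effort scores to a cell of the 2x2 matrix."""
    high_impact, high_effort = impact >= 6, effort >= 6
    if high_impact and not high_effort:
        return "Do 1st"
    if high_impact and high_effort:
        return "Do 3rd"
    if not high_impact and not high_effort:
        return "Maybe later"
    return "Cut"

backlog = [
    ("Daily task alerts", 9, 3),   # high impact, low effort
    ("Weather forecasts", 8, 7),   # high impact, high effort
    ("Community forums", 3, 4),    # low impact, low effort
    ("Insurance claims", 2, 9),    # low impact, high effort
]
for name, impact, effort in backlog:
    print(f"{name}: {quadrant(impact, effort)}")
```

Writing the scores down like this forces the same brutal honesty as the matrix on paper: a feature can’t hide behind “important” once it has numbers attached.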

Tools for Scoping and Prioritization

  • Jira: Ticket management with story points to estimate effort
  • Trello: Simpler than Jira, visual board of priorities
  • Asana: Best for showing feature roadmap and timeline
  • Linear: Modern, fast, built for product teams
  • Notion: Free, flexible, great for early-stage planning

Lesson 3: An MVP that does one thing exceptionally well outperforms an MVP that does ten things okay.


Step 4: Build — Moving From Slides to Code

The moment of truth: you actually have to build it.

This was the scariest phase for me because I could finally fail in public.

Our Tech Stack (and Why We Chose It)

Backend: Laravel

  • Why: Fast development, strong community, excellent documentation
  • Not why: Trendy, buzzword-compliant, or cutting-edge

Frontend: Flutter

  • Why: One codebase for iOS and Android, strong performance offline, works well in low-connectivity areas (crucial for Mekong Delta farming)
  • Not why: “Everyone uses it”

Database: PostgreSQL

  • Why: Reliable, handles geographic queries well, great for satellite imagery metadata

Hosting: AWS (with regional optimization for Vietnam)

Why this tech stack mattered: We chose tools built for our problem (offline-first mobile in rural areas), not tools that made us look smart at tech conferences.

The Build Phase Rules I Enforced

Rule 1: Ship working software every week. Not presentations. Not progress updates. Not “almost done.” Every sprint ended with something a farmer could actually use on their phone.

Rule 2: Instrument everything. We added analytics from Day 1, not Day 90. Mixpanel for user behavior, Sentry for errors. You can’t iterate on what you can’t measure.

Rule 3: Automate testing early. Unit tests and integration tests felt like overhead when we had no users. But they saved months later when refactoring. Write them from the beginning.

Rule 4: Document as you go. We kept a shared Notion page with architecture decisions. Why did we choose PostgreSQL over MongoDB? Why Flutter over React Native? Future-you will thank you.
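Rule 2 is the easiest to postpone and the costliest to skip. As a minimal, illustrative sketch (the `EventTracker` class and event names are hypothetical, not AgriSuite’s real code), instrumentation can start as a thin wrapper that every feature calls, so the analytics backend can be swapped later without touching the whole app:

```python
# Minimal sketch of "instrument everything": a thin wrapper every feature
# calls, so swapping the analytics backend (Mixpanel, Amplitude, ...)
# later touches one file, not the whole app. EventTracker is hypothetical.
import json
import time

class EventTracker:
    def __init__(self, sink):
        self.sink = sink  # any callable that receives a serialized event

    def track(self, user_id: str, event: str, **props):
        """Record one user action with a timestamp and free-form properties."""
        record = {"user_id": user_id, "event": event,
                  "ts": time.time(), "props": props}
        self.sink(json.dumps(record))
        return record

# On Day 1 the sink can simply append to a local log; later it becomes
# the real analytics client's send method.
log = []
tracker = EventTracker(log.append)
tracker.track("farmer_42", "watering_recommendation_viewed", field="north")
```

The point isn’t the wrapper itself; it’s that every feature ships with its events from the first sprint, so there is something to measure when iteration starts.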

| Tool        | Purpose                          | Best For                                      | Cost                        |
|-------------|----------------------------------|-----------------------------------------------|-----------------------------|
| Figma       | UI/UX design & prototyping       | Collaborative design, offline-ready comps     | Free tier / $12/month       |
| Firebase    | Backend-as-a-service             | Quick MVP without DevOps overhead             | Free tier / usage-based     |
| Supabase    | Open-source Firebase alternative | PostgreSQL backend with less vendor lock-in   | Free tier / $25/month       |
| Retool      | Internal admin dashboard         | Quick dashboards for farm data management     | Free tier / $10/user/month  |
| Bubble      | No-code web app builder          | Fast MVP if you want to avoid coding entirely | Free tier / $29/month       |
| FlutterFlow | No-code Flutter builder          | Mobile MVP without iOS/Android knowledge      | Free tier / $60/month       |
| Amplitude   | Analytics & user insights        | Understanding farmer behavior patterns        | Free tier / $999/month      |
| Mixpanel    | Event-based analytics            | Tracking specific farmer actions              | Free tier / $999/month      |

Lesson 4: Your MVP is proof of concept, not proof of perfection. Done is better than perfect at this stage.


Step 5: Feedback — Listening Without Defending

The moment farmers started using AgriSuite, everything I thought I knew was wrong.

Not just a little wrong. Catastrophically wrong.

What Actually Happened

We gave AgriSuite to 15 early farmers. 45 minutes later, I had a spreadsheet of bugs, complaints, and feature requests. But it wasn’t the bugs that surprised me.

Farmer Tuan couldn’t read the text. Our contrast ratio was fine by WCAG standards. But in direct sunlight in his field at 6 AM? Illegible.

Farmer Linh’s phone crashed. She was trying to enter data for three fields simultaneously across three devices (her phone, her son’s phone, her old tablet). We hadn’t thought about multi-device sync.

Farmer Minh ignored 80% of our features. He just wanted the daily alert. Scrolled past everything else.

My instinct was to defend: “That’s not how you’re supposed to use it.” “They need better phones.” “They should read the instructions first.”

But the most important lesson I learned: Feedback is data. Data doesn’t care about your excuses.

How I Actually Listened (Instead of Defending)

Phase 1: Watch silently. I sat with farmers. Literally. No explaining. No “let me show you how to use this.” Just watching them explore.

Phase 2: Document patterns, not complaints. Instead of “user said X,” I documented “user did X when Y happened.” Facts over feelings.

Phase 3: Separate signal from noise. One person complained about a feature? Noise. Three people? Signal. Five people? Critical bug.

Phase 4: Ask why, not what. Instead of “What feature do you want?” ask “What problem do you have right now?”

Farmer’s complaint: “This is too complicated.”
What I asked: “What specifically made you confused?”
What they said: “I have to tap four screens to enter today’s watering data.”
What I understood: The core flow has too many steps.

Tools for Collecting Structured Feedback

  • UserTesting.com: Pay users to record screen shares—invaluable for watching confusion points
  • Hotjar: Free session recording, heatmaps show where users click most
  • Vik (or Respondent): Recruit target users for interviews
  • Google Forms + Sheets: Simple, works offline, farmers can use on their phones
  • Figma Comments: If you’re iterating designs, gather feedback inline
  • Slack: If your early users are tech-comfortable, a private channel beats everything else

Lesson 5: The user is never wrong. Your understanding of the user might be.


Step 6: Iterate — Fix, Simplify, Perfect

Iteration is where most startups get it wrong.

They treat iteration as “add more features.” It’s not.

Iteration is “change something, measure if it worked, repeat.”

What We Actually Iterated On

Iteration 1: Information Hierarchy

Our AI crop health model showed: “Predictive crop health index: 7.4/10 (Risk of leaf spot: 34%)”

Farmers ignored it completely. So we changed it to: “⚠️ Your tomatoes might get sick. Spray on Friday.”

Adoption spiked from 12% to 87%.

The insight: We weren’t wrong about the feature. We were wrong about how we communicated it.
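The translation from model output to plain language is mechanical once you commit to it. A hedged sketch (the `plain_alert` function, thresholds, and wording are illustrative, not AgriSuite’s production rules):

```python
# Illustrative sketch of Iteration 1: turn a model's risk score into the
# kind of plain-language alert farmers acted on. Thresholds and wording
# are hypothetical, not AgriSuite's production rules.

def plain_alert(crop: str, disease_risk: float, spray_day: str) -> str:
    """Translate a 0-1 risk score into one actionable sentence."""
    if disease_risk >= 0.30:
        return f"Warning: your {crop} might get sick. Spray on {spray_day}."
    if disease_risk >= 0.15:
        return f"Keep an eye on your {crop} this week."
    return f"Your {crop} look healthy today."

# The 34% leaf-spot risk from the old dashboard becomes:
print(plain_alert("tomatoes", 0.34, "Friday"))
# → "Warning: your tomatoes might get sick. Spray on Friday."
```

Notice that the model can stay exactly the same; only the last step, score to sentence, changes.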

Iteration 2: Offline Sync

Farmers couldn’t trust the app because data sometimes disappeared. So we rebuilt the offline sync logic. Data now queues locally, syncs when Wi-Fi returns, and shows clear status indicators (“Synced,” “Syncing,” “Failed—will retry”).

Trust increased. Churn decreased.
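The queue-locally, sync-later pattern behind this iteration fits in a few lines. This is an illustrative state machine, not the actual Flutter implementation (`Record` and `SyncQueue` are hypothetical names):

```python
# Illustrative sketch of the queue-locally, sync-later pattern.
# The real client (Flutter + local storage) is more involved; this
# just shows the status state machine farmers see in the UI.
from dataclasses import dataclass, field

@dataclass
class Record:
    data: dict
    status: str = "Syncing"  # "Synced", "Syncing", or "Failed"

@dataclass
class SyncQueue:
    pending: list = field(default_factory=list)

    def save(self, data: dict) -> Record:
        """Always persist locally first, regardless of connectivity."""
        rec = Record(data)
        self.pending.append(rec)
        return rec

    def flush(self, upload) -> None:
        """Try to upload queued records; failures stay queued for retry."""
        for rec in self.pending:
            if rec.status == "Synced":
                continue
            try:
                upload(rec.data)
                rec.status = "Synced"
            except OSError:
                rec.status = "Failed"  # surfaced as "Failed—will retry"
```

Calling `flush` again once connectivity returns retries every “Failed” record, which is what made the status indicators trustworthy: data is never lost, only delayed.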

Iteration 3: Simplified Navigation

Original: 7-level navigation tree. Farmers got lost.

Revised: 3-level navigation with quick-access buttons for “today’s tasks.” Users now got to their key information in 2 taps instead of 6.

Iteration 4: Text Contrast & Font Size

This was humbling. We ran accessibility tests and passed WCAG AA. Then we tested in actual sunlight.

It failed completely. So we increased the contrast ratio by 40% and the font size by 20%. This single change moved retention from 31% to 68%.

The Iteration Cycle We Actually Used

Every two weeks:

  1. Release a small change to 10% of users
  2. Observe metrics for 3-4 days
  3. Decide if we expand to 50%, revert, or iterate further
  4. Document what we learned and why it mattered
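Step 1 of that cycle needs the *same* 10% of users to stay in the rollout, and to remain in it when expanding to 50%. Deterministic hashing is one common way to get that stability (a sketch; `in_rollout` and the feature name are hypothetical, not a real API):

```python
# Sketch of deterministic percentage rollout: hash user + feature so the
# same users stay in the bucket as the rollout grows from 10% to 50%.
# `in_rollout` and the feature name are illustrative, not a real API.
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Place user_id in one of 100 stable buckets; enable the first `percent`."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

users = [f"farmer_{i}" for i in range(1000)]
ten = {u for u in users if in_rollout(u, "new_nav", 10)}
fifty = {u for u in users if in_rollout(u, "new_nav", 50)}
assert ten <= fifty  # expanding the rollout never kicks anyone out
```

Feature-flag services like LaunchDarkly handle this bucketing for you; the point is that the 10% must be stable across releases, not re-randomized each time, or your before/after metrics mean nothing.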

This process kept us from:

  • Building features nobody used
  • Optimizing the wrong metrics
  • Making sweeping changes that broke things
  • Getting emotionally attached to “good ideas” that weren’t working

Tools That Made Iteration Possible

  • LaunchDarkly or Optimizely: Feature flags to A/B test changes
  • Fullstory: Session replay to see exactly what users do
  • Heap Analytics: Auto-tracks all user interactions
  • Segment: Unified event tracking across multiple platforms
  • Datadog: Infrastructure and app performance monitoring
  • Sentry: Error tracking and debugging
  • LogRocket: React/JavaScript session replay with network activity

Lesson 6: Iteration isn’t about building more. It’s about understanding what you already built.


Step 7: Launch & Learn — Embrace the Imperfect

Our MVP launch wasn’t a press release moment.

No TechCrunch article. No Product Hunt vote. No PR campaign.

We just… released it. Quietly.

Through a few agricultural extension offices, one cooperative officer who believed in our vision, and word-of-mouth.

By month one: 50 users. By month three: 200 users. By month six: 650 users.

Nothing viral. Nothing “hockey stick.” Just real people using real software to solve a real problem.

What “Launch” Actually Meant

Pre-Launch (Week 1): Release to 10 hand-picked farmers. Observe like hawks.

Soft Launch (Week 2-3): Release to 2 agricultural cooperatives (50 farmers). Still fixing bugs constantly.

Regional Launch (Week 4): Open to neighboring provinces. Expect the unexpected.

Public Launch (Month 2): Marketing push, blog posts, newsletter announcements.

What Surprised Us After Launch

1. Features we bet on flopped. The crop disease AI we’d spent weeks training? Farmers rarely used it. Turned out they already knew how to identify diseases. What they wanted: reminders.

2. Features we almost cut became core. A simple “daily task checklist” we added in week 3 became the most-loved feature. Farmers started sharing it with their friends.

3. Revenue assumptions were totally wrong. We assumed a per-farm pricing model. But farmers wanted per-person-on-farm pricing—because farm families use different accounts.

4. Our biggest growth vector wasn’t marketing. It was one farmer telling another farmer: “This saved me $200 last month.” Word-of-mouth beat every marketing channel we tried.

Launch Metrics That Actually Mattered

Don’t obsess over vanity metrics. Track these instead:

| Metric                   | Why It Matters                                                | Red Flag                                        |
|--------------------------|---------------------------------------------------------------|-------------------------------------------------|
| Activation Rate          | % of users who complete key action in first week              | Below 30% = redesign onboarding                 |
| Day 1 Retention          | % who return within 24 hours                                  | Below 50% = your MVP isn’t immediately valuable |
| Day 7 Retention          | % who return within a week                                    | Below 25% = search for the real problem         |
| NPS (Net Promoter Score) | Would they recommend it?                                      | Below 40 = not yet product-market fit           |
| Cost Per First Action    | How much did it cost to get a user to try the core feature?   | If rising = messaging isn’t resonating          |
| Feature Adoption Rate    | % of users using the core feature at least once               | Below 60% = core feature isn’t clear            |
| Churn Rate               | % of users inactive after 30 days                             | Above 15% = product isn’t solving real problem  |
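Two of these metrics can be computed directly from a raw event log. A sketch under an assumed minimal event schema of (user, event name, day number); the function names and events are illustrative:

```python
# Sketch of computing activation and retention from a raw event log.
# The (user, event_name, day) tuples are an assumed minimal schema.

def activation_rate(events, key_event, window_days=7):
    """% of users who fired key_event within window_days of first being seen."""
    first_seen, activated = {}, set()
    for user, name, day in sorted(events, key=lambda e: e[2]):
        first_seen.setdefault(user, day)
        if name == key_event and day - first_seen[user] <= window_days:
            activated.add(user)
    return 100 * len(activated) / len(first_seen) if first_seen else 0.0

def retention(events, within_days):
    """% of users who came back after day 0 but within `within_days`."""
    first_seen, returned = {}, set()
    for user, name, day in sorted(events, key=lambda e: e[2]):
        first_seen.setdefault(user, day)
        if 0 < day - first_seen[user] <= within_days:
            returned.add(user)
    return 100 * len(returned) / len(first_seen) if first_seen else 0.0

log = [
    ("a", "signup", 0), ("a", "got_recommendation", 0), ("a", "open", 1),
    ("b", "signup", 0),                      # signed up, never returned
    ("c", "signup", 2), ("c", "got_recommendation", 3),
]
assert round(activation_rate(log, "got_recommendation")) == 67  # 2 of 3 users
assert round(retention(log, within_days=1)) == 67               # a and c came back
```

If you instrumented from Day 1 (Step 4’s Rule 2), these numbers fall out of the data you already have.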

Lesson 7: Launching imperfectly with real users beats planning perfectly with your team.


The Tools That Actually Worked

After six months of building and iterating, here are the tools that genuinely moved the needle (and which ones we abandoned):

Indispensable Tools (We’d Do It Again)

Figma

  • Why: Designed MVPs together, iterated visually, farmers could give feedback on mockups
  • Cost: ~$12/month per editor
  • ROI: Saved weeks of development time

Git + GitHub

  • Why: Version control, code review, team collaboration
  • Cost: Free (for public repos)
  • ROI: Prevented catastrophic bugs, enabled rollbacks

Notion

  • Why: Shared product roadmap, design decisions, learnings doc
  • Cost: $10/month team
  • ROI: New team members understood context faster

Firebase/Supabase

  • Why: Real-time database without DevOps overhead
  • Cost: Free tier covered us until month 4, then ~$100/month
  • ROI: We didn’t have to hire a database engineer

Tools We Tried, Then Abandoned

Jira

  • Why we tried: “Industry standard”
  • Why we abandoned: Overkill for a 3-person team; Trello was 10× simpler

Figma Community

  • Why we tried: Free design components
  • Why we abandoned: Most templates were built for startups, not agriculture; we created custom components anyway

Firebase Remote Config

  • Why we tried: Control feature flags without redeploying
  • Why we abandoned: LaunchDarkly did this better, but adding it early added complexity we didn’t need

HubSpot

  • Why we tried: “One tool for all marketing”
  • Why we abandoned: Overkill when we had 50 users; Google Forms + Sheets worked better

Tech Stack Decision You Should Make Early

Choose based on your team’s strength, not hype:

  • Strengths in web? Build web-first (Bubble, Webflow), add mobile later
  • Strengths in mobile? Build mobile-first (Flutter, React Native), add web later
  • Strengths in both? Pick one and go deep. Don’t split your resources

We chose Flutter because one engineer knew it well. We could move fast. That speed was more valuable than “the optimal tech stack.”


Common MVP Failures and How to Avoid Them

Before I share my near-failure, let me show you what I learned from studying other product launches:

Case Study 1: Google Glass

The mistake: Building incredible technology for a problem nobody had yet.

The failure: $1,500 headsets, privacy concerns, social stigma (“Glassholes”), market rejection within 2 years.

The lesson: Technology doesn’t sell. Solving real problems sells. Make sure farmers (or your users) actually want this.

Case Study 2: Quibi

The mistake: Assuming user behavior instead of observing it.

The failure: Spent nearly $2 billion on premium short-form mobile video. Launched to a COVID-19 lockdown… when people were at home watching Netflix horizontally, not using phones vertically.

The lesson: Watch how people actually use products, not how you think they should.

Case Study 3: Amazon Fire Phone

The mistake: Copying competitors without understanding differentiation.

The failure: Added a 3D screen nobody wanted, couldn’t compete on price or ecosystem. $170M write-off.

The lesson: A new feature isn’t enough. Solve a problem better than alternatives. If you’re not better in a meaningful way, don’t launch.

Case Study 4: Juicero

The mistake: Over-engineering a solution to a problem that didn’t exist.

The failure: $400 Wi-Fi-connected juice press. Customers could squeeze juice manually just as well. Folded after investor pressure mounted.

The lesson: Simple beats over-engineered. Every time. Manual processes aren’t the enemy; poor decisions are.

How to Avoid These Failures

Before you build:

  1. Can you articulate the problem in one sentence?
  2. Have you seen 20 people struggle with this problem?
  3. Would they pay to solve it or find an alternative?
  4. Is there already a solution they use instead?

Before you launch:

  1. Do farmers know they have this problem?
  2. Have they tried alternatives?
  3. Would they switch their current process for yours?
  4. Will they tell others about it?

The Moment I Almost Quit

Three months after AgriSuite’s soft launch, I was broken.

Revenue: $0. Cash runway: 2 months. User growth: Stalled at 150. Competitor with $10M funding: Entered the market.

I sat in my apartment and wrote in my journal: “Maybe I’m solving a problem nobody wants. Maybe I should get a real job.”

I even updated my LinkedIn to say I was “open to opportunities.”

But then, the morning I planned to make calls about consulting gigs, my phone rang.

Farmer Tuan.

He’d been using AgriSuite for three weeks. He caught a pest outbreak on his tomato crop earlier than usual—using our app’s simple alerts. Instead of losing 40% of his harvest (like he normally would), he lost 5%.

He did the math: the 35% of the harvest he saved was worth $1,500.

With that money, he bought a new spraying pump for next season.

He wasn’t calling to say thanks.

He was calling to ask: “Can you make it work for my neighbor’s farm too?”

I’d been measuring success by the wrong metric.

I wasn’t building a $100M company. I was helping a farmer earn an extra $1,500—which, in the Mekong Delta, changes your family’s life.

One farmer. One problem solved. One genuine conversation.

That’s when I realized: MVP success isn’t about millions of users. It’s about one user whose problem you genuinely solved.

After that call, everything changed. Not because the product got better. But because I understood what I was actually building.


What I Wish I Knew

If I could go back to Day 1 of AgriSuite, here’s what I’d tell myself:

1. The Best MVPs Are Built on Empathy, Not Ego

Don’t fall in love with your features. Fall in love with your users’ problems.

The features you cut will sting. Cut them anyway. You’ll add them in v2 when you’ve proven the core insight.

2. Speed Matters, But Clarity Matters More

A slow MVP that solves one problem clearly beats a fast MVP that tries to solve ten problems confusingly.

I could have launched three months earlier with an even simpler version. I should have.

3. User Feedback Is Free R&D

Listen to it like your life depends on it. Because it does.

The moment a farmer says “I don’t understand,” don’t explain your design. Change your design.

4. An MVP That Solves One Pain Point Will Always Outperform an MVP Trying to Solve Ten

This is the hill I’ll die on.

Focus beats feature count. Every single time.

5. Launching Imperfectly Is Better Than Planning Perfectly

You learn in one week with real users what would take ten weeks of planning with your team.

I spent two weeks refining design details that farmers never even saw. I spent two hours talking to farmers that changed the entire product.

Which do you think was more valuable?

6. Your MVP Is Proof of Learning, Not Proof of Vision

An MVP isn’t a miniature version of your 10-year plan. It’s a test of your highest-confidence assumption.

For AgriSuite, that assumption was: “Farmers will use a mobile app if it saves them time and money on their core task (watering decision).”

We tested that specific assumption. Everything else was secondary.

7. Revenue Doesn’t Matter Yet; Learning Does

We launched AgriSuite for free. No payment processing. No monetization strategy.

We needed farmers to use it, love it, and tell others. Revenue could wait.

Too many founders focus on monetization before they’ve proven value. Don’t be that founder.


Your First Steps

If you’re standing at the start of your own journey—staring at a blank page, wondering if your idea is good enough, terrified of wasting time or money—here’s what I want you to do:

This Week

1. Articulate your assumption (30 minutes). Write it down: “I believe [target user] has [specific problem] that [your solution] will solve.”

If you can’t write it in one sentence, you don’t understand it yet.

2. Talk to 5 people with that problem (2-3 hours). Not surveys. Not emails. Real conversations.

Go where they are. Sit with them. Watch them work. Ask why.

3. Document their exact words (30 minutes). Write down the words they used to describe their problem. These phrases will end up in your product, your marketing, your pitch.

This Month

4. Build a low-fidelity prototype (1-2 weeks). Use Figma. Use paper. Use anything but code.

Show it to 10 more people. Watch them get confused.

5. Measure one thing (ongoing). Don’t measure “happiness.” Measure one specific behavior: Did they try the core feature? Did they come back? Did they tell someone else?

6. Build the absolute minimum (1-2 weeks). Not the MVP. The Minimum Minimum Viable Product.

For AgriSuite, that was literally:

  • Login screen
  • Enter field data
  • Get today’s watering recommendation

That’s it. Four screens.

This Quarter

7. Launch to 20 real users

Not for money. For learning.

Ship it. Watch what breaks. Fix it. Repeat.

8. Find one user who loves it

Not everyone will. That’s okay.

Find one. Make them love it so much they tell their friend.


Final Thoughts — The MVP Mindset is Everything

Building AgriSuite taught me more about people than technology.

Every iteration was a conversation. Every bug fix was a step closer to trust. Every feature we removed made space for something better.

The MVP mindset isn’t just for startups building new products. It’s a way of thinking about problems:

  • Test your highest-confidence assumption first
  • Learn from real people, not hypotheticals
  • Change based on what you see, not what you feel
  • Ship when you’re uncomfortable, not when you’re ready
  • Celebrate your first real user, not your 10,000th

And here’s the truth I’ve come to believe:

You never really stop building your MVP. It just evolves with you.

Your version 2.0 will be built the same way: test, learn, iterate, repeat.

Your version 3.0 will follow the same pattern.

The products that scale aren’t the ones with the most features or the most funding or the smartest team.

They’re the ones that listened hard, learned fast, and stayed obsessed with solving one problem really, really well.


Key Takeaways for Your MVP Journey

| Phase      | Key Focus                  | Common Mistake                       | Success Metric                                   |
|------------|----------------------------|--------------------------------------|--------------------------------------------------|
| Spark      | Find the real problem      | Falling in love with the solution    | Can you articulate the problem in one sentence?  |
| Validation | Test with humans           | Testing assumptions, not behaviors   | Do 5+ people say “yes, I have this problem”?     |
| Scope      | Cut ruthlessly             | Adding features “just in case”       | Are you solving ONE problem exceptionally well?  |
| Build      | Ship working software      | Perfect code over imperfect learning | Does it work for one user in real conditions?    |
| Feedback   | Listen without defending   | Explaining instead of learning       | Are you changing based on what you hear?         |
| Iterate    | Change one thing at a time | Changing everything at once          | Can you measure the impact of each change?       |
| Launch     | Embrace imperfect          | Waiting for polish                   | Do your users tell others about it?              |

This post is part of my ongoing documentation of building products that matter. Subscribe for more raw, honest reflections on startups, product development, and entrepreneurship in emerging markets.