Building a product that solves a personal pain point, and how that flips validated learning on its head

POMOS was born out of a personal frustration from this summer’s internship: I couldn’t find a Pomodoro timer that I liked. You can read the story here.

I originally wanted to see whether people would pay for Pomodoro tagging, so I set up a Google Analytics event-tracking JavaScript snippet to count how many visitors clicked “Sign Up” for the paid Premium account. Only one person clicked the Premium “Sign Up” button during the couple of weeks I ran this test, and who knows whether they actually wanted to buy or just wanted to see what my payment page looked like. Hypothesis rejected? Then I realized it was a little absurd that I was withholding one of POMOS’s core differentiating features from free users: how would they ever know what POMOS was really capable of? How would they ever know that POMOS had value worth paying for?
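
The tracking itself was nothing fancy: just fire a Google Analytics event when the button is clicked. A minimal sketch of that kind of snippet, assuming the analytics.js-style `ga()` API (the element id and event labels here are made up for illustration, not the ones I actually used):

```javascript
// Fire a Google Analytics event when the Premium "Sign Up" button is clicked.
// The ga() function is passed in as a parameter so the logic is easy to test;
// in the page it would just be window.ga from the standard analytics.js snippet.
function trackPremiumClick(ga) {
  ga('send', 'event', 'Premium', 'click', 'Sign Up button');
}

// Wiring it up in the page (the #premium-signup id is hypothetical):
// document.getElementById('premium-signup')
//   .addEventListener('click', function () { trackPremiumClick(window.ga); });
```

With events like this flowing in, the GA dashboard becomes the scoreboard for the experiment.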

So I polished up the Pomodoro tagging feature, which I had been working on for the past few weeks, and released it. I capped the number of tags free users could have, so they could get a taste of POMOS’s most important feature without having to pay. I also implemented payments (through Stripe) that same day and pushed that to production as well. Now users could experience the value of POMOS, and I could measure whether they actually thought it had value by the number of subscription sign-ups. A better hypothesis test/experiment.
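
The free-tier cap is simple server-side logic. A sketch of the idea (the limit of 5 and the field names are placeholders for illustration; the post doesn’t state the actual numbers POMOS used):

```javascript
// Sketch of a free-tier tag cap. FREE_TAG_LIMIT is a placeholder value,
// not the actual limit POMOS shipped with.
var FREE_TAG_LIMIT = 5;

function canAddTag(user) {
  // Premium subscribers get unlimited tags; free users are capped,
  // so they get a taste of tagging without paying.
  if (user.isPremium) return true;
  return user.tags.length < FREE_TAG_LIMIT;
}
```

A check like this would run before creating a tag, with the UI prompting an upgrade when it returns false.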

Then it hit me: even if all my validated learning experiments failed, and absolutely no one visited the site or found it useful, I still would’ve built POMOS. Because it solves a personal problem that I have. I wouldn’t have taken no for an answer until I built the product that I would want to use. The beauty of building something that solves a problem close to your heart is that often, someone else in the world will have the same problem. Sure, I was collecting all this data on which pages users visited, how many times users went to the “Pricing” page, how many times users clicked the “Sign Up” button for the paid Premium account, etc. But I was always developing the next feature while the experiment was running, and I would release the feature regardless of what my experiment results told me, because I was building something that I wanted to use.

Validated learning in the traditional sense of “is this product valuable?”, where you decide before spending time developing something no one wants, was thrown out the window: I already knew it would be valuable to me, so I built the product no matter what. But running experiments still has a role in answering questions like “do other people find this product valuable?” and “how do other people find it valuable?”.

I’m still a rookie at this whole web dev, building-products thing. But I’ve already learned so much just from doing it, things you can’t learn from books alone.

Using a launch page as an MVP to achieve validated learning

I set up a launch page on LaunchRock for Pomos and pointed my domain at it. The launch page contained only a short description and a box where people could submit their email addresses.

The point of this page was to test my hypothesis: would people find Pomos useful? And to do it in a way that would let me learn without spending too much time and energy building the product first.

In the past (e.g. with predictd), I made the mistake of spending months on development, then launching the product, only to find that no one used it. Part of this was my newbie web development skills; part of it was building first and exploring the market second.

To use an entrepreneur’s time more efficiently, Eric Ries introduces two key concepts in his book, The Lean Startup. One is validated learning: coming up with a hypothesis about the company’s growth or value (or both), and testing that hypothesis by collecting the relevant metrics. Often, this data collection is done with a minimum viable product (MVP), a barebones product that is just good enough to get the data you need.

The growth hypothesis is how the entrepreneur thinks the website will grow (referrals, ads/searches, etc.). The value hypothesis is what value the entrepreneur thinks the site provides. I decided to test some value hypotheses first.

The first value hypothesis was “people think the idea of Pomos is useful, and want to use Pomos to implement the Pomodoro Technique and boost their productivity”. I tested this hypothesis with the launch page, an MVP of sorts. Page views were rather low (I don’t have much of a social network reach…), but the conversion rate was decent (~10%). I received only a little feedback, even after reaching out to the first signups: a few people were looking forward to it, and several who didn’t provide their emails said they were wary of giving out their email for something they couldn’t even use yet. Good point. I didn’t want to submit the launch page to blogs, sign up for AdWords, etc., because there wasn’t even a product yet.

I decided to continue developing Pomos (which I had actually been doing while waiting for my launch page stats). I considered my value hypothesis supported by the feedback and conversion rate I received. But the biggest factor was that Pomos scratched my own itch: I wished for an app like Pomos every day at my internship this summer, and I often found myself wishing I had it even while I was developing it (you can read more about how I came up with the idea for Pomos on the Pomos Help page).

The second value hypothesis is currently being tested. Once the data is in, I will write another post on it.

That’s how I used a launch page as an MVP to achieve validated learning, and why I decided to develop and launch Pomos.