Analytics are test-driven development for product people
Here’s the premise: if you’ve set your analytics up properly, you know which metrics define success. Knowing what success looks like means you’ve chosen the goal of your product ahead of time (which, by the way, is a really damn good idea). And if you can measure success, you know when your product goes from “not working” to “working”.
That’s test-driven development. Engineers use it all the time, and I’d like to suggest that product people should too.
Engineers face a number of issues with TDD: writing tests in advance takes time, it demands a lot of upfront discipline before you even start coding, and it restricts agility (because you may end up scrapping a bunch of tests as scope changes). It works well if you can get everyone on board, but it can be murder trying to get there.
Fortunately, these are mostly non-issues for product, because the whole point is that doing this in advance enables agility in meeting the goal. That is, while engineers write many tests for many problems, each with only one solution, product people write a single test that they can pass in a variety of ways. So here’s the drill:
1) Decide on the single most important thing you want your users to do. For example, sitting down with Estimize recently, the goal that came out of the conversation was getting as many estimates from qualified professionals as possible. Note “qualified professionals” – be careful about tossing out overly simplistic goals like just “user signups”, or you’ll end up like Mint, with plenty of registered users and no active ones. Be specific in your goal, like “people who register and continue to log in at least once a month for three months” (there’s a sketch of that metric right after this list).
2) Decide how to measure that behavior. For RentHackr, where the goal is as many leases as possible, it’s fairly simple: the number of leases with valid data. That means writing a quick script that counts the leases and eliminates invalid ones (bogus rents, which are pretty easy to spot by comparing against market rates in the area); a sketch of that kind of script also follows the list. This measurement step is critical, but it should be fairly easy if you wrote a specific goal, as mentioned above. If you didn’t, and you find yourself including a lot of junk data, go back to Step 1.
3) Iterate, test, repeat. Once you’ve got a test in place, you can start building features against it. Put out a feature, let it run long enough to get a reasonable sample, and see whether the metric actually moved. Debug. Repeat. This keeps you from releasing features just because your CEO “thinks that would be cool” and instead forces you to release features focused on what your product is actually supposed to be doing (once again, product viewpoint for the win).
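To make Step 1 concrete, here’s a minimal sketch of that retention metric in Python. It’s illustrative only: the function name is made up, and it assumes you can pull each user’s login timestamps from wherever you store them.

```python
from datetime import datetime

def is_retained(login_dates, months=3):
    """Did this user log in at least once in each of the last
    `months` calendar months, including the current one?"""
    # Bucket the user's logins by (year, month).
    active_months = {(d.year, d.month) for d in login_dates}
    # Walk backwards from the current month; every bucket must be active.
    now = datetime.utcnow()
    year, month = now.year, now.month
    for _ in range(months):
        if (year, month) not in active_months:
            return False
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    return True

# The headline metric is then just a count:
# retained = sum(is_retained(logins) for logins in logins_by_user.values())
```

The point isn’t the code, it’s that the goal is now unambiguous: a user either passes or fails, just like a test.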
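And here’s what Step 2’s “quick script” might look like for the RentHackr example. Again, a sketch under assumptions: I’m inventing the data layout and the market-rate lookup, and the 50% tolerance is an arbitrary placeholder.

```python
def count_valid_leases(leases, market_rate_for, tolerance=0.5):
    """Count leases whose rent falls within `tolerance` of the local
    market rate; anything further out is treated as bogus and excluded.
    `leases` is a list of dicts and `market_rate_for` is a lookup
    function, both stand-ins for however the real data is stored."""
    valid = 0
    for lease in leases:
        market = market_rate_for(lease["zip_code"])
        if market and abs(lease["rent"] - market) <= tolerance * market:
            valid += 1
    return valid

# Step 3's "test" is then a before/after comparison:
#   before = count_valid_leases(old_snapshot, market_rate_for)
#   after  = count_valid_leases(new_snapshot, market_rate_for)
#   assert after > before, "that feature didn't move the metric"
```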
(Thanks to the DataGotham conference for triggering this line of thought.)
I’m going to go as far as to say that in the next few years, analytics will be a part of development in the same way testing is. Instead of writing tests up front, you write analytics capture up front, then tests, and work backwards to the functionality. I’m going to put a new term out there – A.D.D. 🙂
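If you want a picture of what that might look like, here’s a toy sketch: declare the analytics event first, then write the feature backwards from it. The track() stub and the event name are made up for illustration.

```python
# Declare the event the feature is supposed to produce, before the
# feature exists. track() and "lease_submitted" are made-up stand-ins.
def track(event_name, **properties):
    """Stub analytics call; point this at your real analytics backend."""
    print(f"[analytics] {event_name} {properties}")

# Then the feature is written to make the event fire:
def submit_lease(user_id, rent, zip_code):
    # ... persist the lease ...
    track("lease_submitted", user_id=user_id, rent=rent, zip_code=zip_code)
```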