
Within any one particular A/B test, changes in the true propensity to convert over time don't matter: you're randomly apportioning folks into the alternatives, so the only systematic difference between the samples is whether they're exposed to A or B. Over the course of an A/B testing regimen, though, that guarantee doesn't hold, and you should be skeptical of results reported like "CR went from 10% to 15%", since some of that movement may be environmental drift rather than your changes. That said, a string of winning A/B tests is virtually statistically certain to be driving true wins, and clients love hearing the change-over-time framing, so even if you're getting a tailwind from the environment one tends to report the numbers anyhow.
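
To make that concrete, here's a minimal simulation (every number in it is invented for illustration, not from any real test): the baseline conversion rate drifts upward over six months, which inflates a naive before/after comparison, while the randomized within-test comparison still recovers the true lift.

    import random
    random.seed(0)

    DAYS, VISITORS_PER_DAY = 180, 2000
    TRUE_LIFT = 0.01  # B genuinely converts 1 point better than A

    def baseline(day):
        # Environmental tailwind: propensity drifts from 10% to 13%
        return 0.10 + 0.03 * day / DAYS

    a_conv = a_n = b_conv = b_n = 0
    for day in range(DAYS):
        p = baseline(day)
        for _ in range(VISITORS_PER_DAY):
            if random.random() < 0.5:  # random assignment per visitor
                a_n += 1
                a_conv += random.random() < p
            else:
                b_n += 1
                b_conv += random.random() < p + TRUE_LIFT

    # The drift cancels within the test: B - A comes out near TRUE_LIFT,
    # even though raw CR "went up" ~3 points over the period regardless.
    print(f"A: {a_conv / a_n:.3f}  B: {b_conv / b_n:.3f}")

The change-over-time story would credit the regimen with that 3-point drift as well; the A-versus-B comparison doesn't.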

It is a poor allocation of resources for most clients to keep running the original, unoptimized pages just to satisfy one's curiosity about what portion of an improvement observed over, say, 6 ~ 24 months is due to one's own effort and what portion is extrinsic to the business. There's both a direct financial cost to maintaining old code branches and a regret cost of not serving visitors the best you're capable of. Round numbers: to add one sentence of clarification to this blog post, Team Obama would have had to give up ~$5 million in donations.
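
For a sense of scale on that regret cost, a back-of-the-envelope with entirely hypothetical figures (none of these numbers come from the Obama campaign or any client):

    # All figures invented: the regret cost of keeping a 10% holdout on
    # the unoptimized page for a year, purely to measure attribution.
    visitors_per_year = 10_000_000
    value_per_conversion = 50.0   # assumed average donation
    cr_optimized, cr_original = 0.15, 0.10
    holdout_share = 0.10

    lost = (visitors_per_year * holdout_share
            * (cr_optimized - cr_original) * value_per_conversion)
    print(f"Foregone donations from the holdout: ${lost:,.0f}")
    # -> $2,500,000 per year, spent on satisfying curiosity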



I suppose it is fair to say that, when listing the sins of marketing, reporting accurate increases in metrics that one cannot be sure one is responsible for would appear rather low on the page.



