I'm also one of the developers on a hobby project called http://TIGdb.com (Jeff Lindsay is the other; he has written the majority of the website). We don't have a big Continuous Deploy infrastructure, but we also don't have the users and business requirements of IMVU.
We started with the usual completely manual deploys and hard-to-set-up sandboxes, and have been iterating toward a fully automated setup ever since. The entire time, we've been committing and deploying often. Our users are patient, because we're giving them something they can't get elsewhere, and we're giving it to them for free. As we introduce regressions, we'll post-mortem them (probably using the Five Whys technique) and slowly evolve a system to prevent them. If the site is a success, we'll have evolved a world-class deploy system. If the site never makes it that big, we won't have wasted time on infrastructure. It's classic lean startup thinking (even though TIGdb is really just a hobby project).
Just curious - who maintains the Selenium tests, and how big is the development / "QA" team?
I've never worked in a team big enough to devote resources to maintaining all of the following kinds of tests:
* unit
* functional
* AND acceptance
* plus writing the actual code
IMHO, a neutral third-party group like QA should be responsible for writing & maintaining acceptance tests.
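To make the distinction between those test kinds concrete, here's a minimal Python sketch. The `Cart` class and both tests are hypothetical, purely for illustration - nothing here comes from IMVU's or TIGdb's actual codebase. The idea is that a unit test isolates one method, while a functional/acceptance-style test walks through a whole user-visible flow (the level at which a real suite might use Selenium to drive a browser):

```python
# Hypothetical shopping-cart module, for illustration only.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, qty=1):
        if qty < 1:
            raise ValueError("qty must be >= 1")
        self.items.append((item, qty))

    def total_quantity(self):
        return sum(qty for _, qty in self.items)


# Unit test: one method of one class, in isolation.
def test_add_rejects_zero_quantity():
    cart = Cart()
    try:
        cart.add("book", 0)
    except ValueError:
        return  # expected
    raise AssertionError("expected ValueError for qty=0")


# Functional/acceptance-style test: a complete user-visible flow.
# On a real website, this is the level Selenium tests operate at.
def test_adding_items_updates_the_total():
    cart = Cart()
    cart.add("book")
    cart.add("pen", 2)
    assert cart.total_quantity() == 3
```

A test runner such as pytest would pick up both functions by their `test_` prefix; the maintenance question above is about who owns the second kind once there are hundreds of them.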