
It's not a matter of distributed systems per se, but of systems where only parts of the codebase are updated on prod at the same time. You could imagine a huge release where the whole codebase gets pushed to prod at once; then the whole issue of versioning APIs disappears. However, that would require more discipline (probably unachievable at Google scale), so most companies prefer the slowly burning garbage fire of versioned APIs and backward/forward compatibility of messages.
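
For illustration, a minimal sketch of what backward/forward-compatible message handling tends to look like in practice, assuming JSON-encoded messages; the Order type, its fields, and the default are all hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Order is the message schema. New fields must be optional with
    // sane defaults, so old producers (which omit them) and old
    // consumers (which ignore them) keep working across a release.
    type Order struct {
        ID       string `json:"id"`
        Amount   int    `json:"amount"`
        Currency string `json:"currency,omitempty"` // added in v2
    }

    func decode(raw []byte) (Order, error) {
        var o Order
        if err := json.Unmarshal(raw, &o); err != nil {
            return o, err
        }
        if o.Currency == "" { // old message: apply the v1 default
            o.Currency = "USD"
        }
        return o, nil
    }

    func main() {
        oldMsg := []byte(`{"id":"42","amount":100}`) // produced before the release
        newMsg := []byte(`{"id":"43","amount":100,"currency":"EUR"}`)
        for _, m := range [][]byte{oldMsg, newMsg} {
            o, _ := decode(m)
            fmt.Printf("%+v\n", o)
        }
    }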


While atomic big-bang releases have their benefits (and drawbacks), I don't think there's any way to avoid dealing with data backward compatibility to at least some degree. Old versions of data always exist somewhere in your system: in databases, message queues, replay logs, caches, retry loops at external parties, even on the wire at the moment of the release in some cases. While explicit, long-term API versioning may not be needed under some release processes, a strategy for coping with old data becomes necessary past a certain scale. Migrating all extant data (not just your RDBMS) at the same time as the big-bang release is not practical.
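
One common coping strategy is to stamp every stored record with a schema version and upcast old records at read time (sometimes called lazy migration or upcasting), instead of migrating the queue or log in lockstep with the release. A minimal sketch, assuming JSON-encoded records; the Envelope and UserV2 types and the v1 fallback email are hypothetical:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Envelope carries an explicit schema version so a consumer can
    // upcast old records pulled from a queue or replay log.
    type Envelope struct {
        Version int             `json:"v"`
        Payload json.RawMessage `json:"payload"`
    }

    // UserV2 is the current in-memory shape.
    type UserV2 struct {
        Name  string `json:"name"`
        Email string `json:"email"`
    }

    // upcast converts any stored version to the current shape.
    func upcast(e Envelope) (UserV2, error) {
        switch e.Version {
        case 1:
            var v1 struct {
                FullName string `json:"full_name"`
            }
            if err := json.Unmarshal(e.Payload, &v1); err != nil {
                return UserV2{}, err
            }
            // Fill fields that v1 never had with a sentinel default.
            return UserV2{Name: v1.FullName, Email: "unknown@example.com"}, nil
        case 2:
            var v2 UserV2
            err := json.Unmarshal(e.Payload, &v2)
            return v2, err
        default:
            return UserV2{}, fmt.Errorf("unknown schema version %d", e.Version)
        }
    }

    func main() {
        old := Envelope{Version: 1, Payload: json.RawMessage(`{"full_name":"Ada"}`)}
        u, err := upcast(old)
        fmt.Println(u, err)
    }

The point is that the upcast path stays in the codebase for as long as version-1 records can still surface, which is usually far longer than the release itself.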




