
I think if you compare the cost of getting a dual power supply server, to the cost of getting two servers, you will conclude that buying two servers gives you more reliability per dollar.

Really you should compare the TCO, including the sysadmin time to configure replication vs. the time to configure failover.

What I would like to be able to do is set up two Linux servers such that any change to the file system on one is written to the other. This can be done with inotify and rsync, but I suspect it would be hard to configure rsync to both impose a low enough load on a heavily used database machine and "keep up" well enough that the slave system has a consistent snapshot.

This is called a dual-ported array plus GFS2 plus Heartbeat.
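For the Heartbeat piece of that stack, the v1-style configuration is a one-line haresources entry. A sketch, with a made-up node name and address; this only covers the floating IP and the database service, not the shared GFS2 mount itself:

```
# /etc/ha.d/haresources
# Format: <preferred node> <resource1> <resource2> ...
# node1 normally owns the floating IP and the database init script;
# if it dies, the peer node takes both over.
node1 IPaddr::192.168.1.100/24 postgresql
```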



I have just spent some time reading up on GFS2. It sounds interesting, but it also seems aimed at "SAN" style storage, which is expensive. The acronyms that gaius posted above were also interesting to google, but I don't want to buy storage from a place like EMC or NetApp; I have some drives from Fry's that should be more than sufficient.

Here is what I want to do:

I will take two consumer grade computers less than a couple of years old, and install an extra gigabit ethernet card in each one. I will connect the extra cards with a crossover cable. On the "master" machine, I will run Postgres or MySQL, with the data directory on a separate partition if the replication scheme is at the file system level. On the "slave" machine, I will not have the database server started up. Slow option with a lot of overhead: when the data dir changes, inotify kicks off an rsync that copies the data dir from the master to the slave. Better option: inotify kicks off something that doesn't have to scan the whole data directory on each side to find the changed data and move it.
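The slow option can be sketched with inotifywait (from inotify-tools) driving rsync over the crossover link. The data directory path and the slave's crossover-link address are assumptions:

```shell
#!/bin/sh
# Sketch of the "slow option": re-rsync the data dir whenever it changes.
# DATADIR and the slave's crossover-link IP are made-up defaults.
DATADIR=${DATADIR:-/var/lib/postgresql/data}
SLAVE=${SLAVE:-10.0.0.2}

sync_once() {
    # rsync only transfers changed files, but it still scans both
    # trees on every run -- this is the overhead worried about above.
    # $1: source dir, $2: destination (local path or host:path)
    rsync -a --delete "$1/" "$2/"
}

watch_loop() {
    # Block until something in the data dir changes, then resync.
    while inotifywait -r -e modify,create,delete,move "$DATADIR" >/dev/null 2>&1; do
        sync_once "$DATADIR" "root@$SLAVE:$DATADIR"
    done
}
```

Run watch_loop on the master. Note the consistency caveat still applies: rsync copies files while the database may be mid-write, so the slave copy is only safe to start from if the master's writes were quiesced or the database can crash-recover from it.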

On a failure of the master, either I would manually move a cable to make the system start using the slave, or IPs could be shuffled remotely, or one of the failover systems could do it automatically. For my particular application, it would be fine if I had to ssh in and change a DNS entry to make our web servers start using the slave database; and because I would not want the database server on the slave to start up and begin writing to a disk that might still be receiving syncs from the master, I would prefer the failover be manual for now.
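A hedged sketch of that manual promotion, run on the slave only once you are sure no rsync from the master can still land in the data directory. The zone file path, IP addresses, and init script name are all hypothetical:

```shell
#!/bin/sh
# Manual failover sketch. Everything here (zone path, IPs, service
# script) is hypothetical -- adapt to the real environment.

repoint_zone() {
    # Swap the database A record from the dead master to the slave.
    # $1: zone file  $2: master IP  $3: slave IP
    sed -i "s/$2/$3/g" "$1"
}

promote_slave() {
    # Start the database only after confirming no sync can still
    # write into its data directory.
    /etc/init.d/postgresql start
    repoint_zone /etc/bind/db.example.com 192.0.2.10 192.0.2.11
    # Remember to bump the zone serial and reload the nameserver,
    # and keep the record's TTL short or clients will lag behind.
}
```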

Aside from an inotify / rsync solution that seems limited and kludgy, does anyone have any tips on how to go about this?



