
I have just spent some time reading up on GFS2. It sounds interesting. But it also seems aimed at "SAN" style storage, which is expensive. The acronyms that gaius posted above were also interesting to google, but I don't want to buy storage from a place like EMC or NetApp, I have some drives from Fry's that should be more than sufficient.

Here is what I want to do:

I will take two consumer-grade computers less than a couple of years old and install an extra gigabit ethernet card in each one. I will connect the extra cards with a crossover cable. On the "master" machine, I will run Postgres or MySQL, with the data directory on a separate partition if the replication scheme is at the file system level. On the "slave" machine, I will not have the database server started up. Slow option with a lot of overhead: when the data dir changes, inotify kicks off an rsync that copies the data dir from the master to the slave. Better option: inotify kicks off something that doesn't have to scan the whole data file on each side to find the changed data and move it.
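For what it's worth, the "slow option" can be sketched in a few lines of shell with inotifywait (from the inotify-tools package). The paths and the slave address here are made up; adjust to your layout. Note rsync's delta algorithm already avoids retransmitting unchanged blocks within a file, so this is less wasteful than a full copy, though it still stats the whole tree on every event:

```shell
#!/bin/sh
# Hypothetical paths/addresses -- substitute your own.
MASTER_DATADIR=/var/lib/postgresql/data
SLAVE=10.0.0.2   # slave's address on the crossover link

# Block until something under the data dir changes, then resync.
# -r watches recursively; the event list covers writes, new files,
# deletions, and renames.
while inotifywait -r -e modify,create,delete,move "$MASTER_DATADIR"; do
    # -a preserves permissions/ownership/times; --delete removes files
    # on the slave that no longer exist on the master.
    rsync -a --delete "$MASTER_DATADIR"/ "root@$SLAVE:$MASTER_DATADIR"/
done
```

The usual caveat: rsyncing a live database's data directory gives you a crash-consistent copy at best, so the slave would come up as if the master had lost power mid-write.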

On a failure of the master, either I would manually move a cable to make the system start using the slave, or IPs could be shuffled remotely, or one of the failover systems could do it automatically. For my particular application, it would be fine if I had to ssh in and change a DNS entry to make our web servers start using the slave database. And because I would not want the database server on the slave to start up and start writing to a disk that might also be receiving syncs from the master, I would prefer that failover be manual for now.
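If you go the IP-shuffling route rather than DNS, the manual failover is roughly the following, run on the slave (the address, prefix, and interface are placeholders; the gratuitous ARP via arping makes switches and peers update their caches so traffic moves over quickly):

```shell
#!/bin/sh
# Manual failover sketch -- run on the slave, as root.
# 10.0.0.10/24 and eth0 are hypothetical; use your floating service IP.

# First, make sure the dead master can no longer write to the slave
# (stop any sync jobs), then claim the service IP.
ip addr add 10.0.0.10/24 dev eth0

# Gratuitous ARP so neighbors learn the new MAC for the service IP.
arping -c 3 -U -I eth0 10.0.0.10

# Only now start the database; it will recover from the last sync.
/etc/init.d/postgresql start
```

This requires root and a quiesced master, so it is deliberately not something you would automate until you trust the fencing story.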

Aside from an inotify/rsync solution that seems limited and kludgy, does anyone have any tips on how to go about this?


