
Ever move a real big website?

edited June 2008 in Technology
I'm looking at a potential job of moving a very large website to a new server. The file system is about 10 GB, and there are nine databases, the smallest around 50 MB and the largest around 780 MB.

Because just about everything on the site relies on the databases, moving the files is the easiest part of the job. I will have shell access to both machines, so transferring the files won't take much work; I can get the two servers to talk to each other and copy the files directly. The problem comes in with the databases.
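For the files, something along these lines is what I have in mind. This is only a sketch: the paths, the user, and the hostnames are placeholders, and it assumes rsync and SSH are available on both boxes.

    # Run from the old server; pushes the web root to the new one over SSH.
    # -a preserves permissions and timestamps, -z compresses, --delete keeps
    # the destination in sync on repeated runs.
    rsync -az --delete --progress /var/www/ deploy@new-server:/var/www/

    # A second pass just before cutover only copies what changed, so the
    # final sync during the downtime window stays quick.
    rsync -az --delete /var/www/ deploy@new-server:/var/www/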

Because this is a production server, we want the downtime to be as short as possible. The old server is about ten years old and very slow; the backup procedure can easily take 30+ minutes. Rather than do an SQL dump, I would like to try setting up the new server as a slave of the existing one and letting replication keep the databases on both machines in sync on its own.
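If I go the slave route, this is roughly what I picture. Again only a sketch: the hostnames, the repl user and password, and the log file/position are placeholders, binary logging (log-bin) would have to be enabled on the old server first, and the new server would still need an initial snapshot of the data taken at that log position.

    # On the old server (master): create a replication user and note the
    # current binary log file and position.
    mysql -u root -p -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret'; SHOW MASTER STATUS;"

    # On the new server (slave), after loading the initial snapshot:
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='old-server', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; START SLAVE;"

    # Check that the slave is catching up.
    mysql -u root -p -e "SHOW SLAVE STATUS\G"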

Has anyone tried doing this? Is it worth the pain? Should I just do the sqldump, transfer the dump file and restore it to the new machine?

Comments

  • Just do the SQL dump. Master-slave replication isn't meant for this purpose. Downtime should be zero, no matter how long the procedure takes: get the second server up, change the DNS, wait for the DNS to switch over, then take down the original server. (Rough commands below.)
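Roughly, with made-up hostnames and credentials (a sketch, not gospel):

    # On the old server: dump every database in one shot and compress it.
    mysqldump -u root -p --all-databases | gzip > all-dbs.sql.gz

    # Copy the dump across and load it on the new server.
    scp all-dbs.sql.gz deploy@new-server:/tmp/
    gunzip < /tmp/all-dbs.sql.gz | mysql -u root -p

    # Lower the DNS TTL on the record well before the move so the switchover
    # propagates quickly, then update the A record to the new server's IP.
    # Check propagation afterwards:
    dig +short www.example.com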