Have you ever seen the show Hoarders on the A&E channel? I often think of this show when we start digging into customers’ backups and backup strategies.
I once visited an organization and found that the database administrator was backing up his SQL database hourly and nightly and keeping copies on the server for a year. Meanwhile, the application developer was keeping multiple copies of the application (including the database), and the network administrator was backing up everything (including the database) and keeping it for three years! They were tripling every backup they ran.
There are a hundred different ways to back up your data and just as many concerns about the viability of those backups in a disaster. The first question that needs to be answered is: why are we backing up the data at all? Next, we need to figure out what we are backing up. Is it just stored data, or is it everything needed to stand the environment back up, such as operating systems and installed applications? Lastly, what are management’s expectations? How much downtime (the recovery time objective, or RTO) and how much data loss (the recovery point objective, or RPO) can they tolerate? For example, hourly backups mean you could lose up to an hour of data. In many meetings, it comes down to aligning management’s expectations with the capabilities of the IT department.
I want to take some time to talk about three example backup strategies that can be used at various levels in the organization to ensure that customers are efficiently and safely storing the data they actually need to run their business, not just hoarding everything. These aren’t going to work for everyone, but at least this is a place to start.
ONE
One obvious solution is to manually copy the files somewhere. This sounds silly, but you would be surprised how often I see it. The customer has a batch file on a scheduler copying the data to a mounted offsite volume. So the data is getting offsite, but what the heck are you going to do if your building burns down? It would take days, if not weeks, to restore it all. The bigger problem is that there is no verification, confirmation, or alerting when things don’t work.
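To make that concrete, here is a minimal sketch (in Python) of what even a bare-bones scripted copy should add: a checksum check on every file and an alert when anything fails. The paths, email addresses, and mail relay are all hypothetical placeholders.

```python
import hashlib
import shutil
import smtplib
from email.message import EmailMessage
from pathlib import Path

SOURCE = Path(r"D:\data")            # hypothetical source folder
DEST = Path(r"\\offsite\backups")    # hypothetical mounted offsite volume

def sha256(path: Path) -> str:
    """Hash a file so each copy can be verified byte-for-byte."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def alert(error: str) -> None:
    """Email an operator; swap in your real monitoring system here."""
    msg = EmailMessage()
    msg["Subject"] = "Backup copy FAILED"
    msg["From"] = "backup@example.com"   # hypothetical addresses
    msg["To"] = "admin@example.com"
    msg.set_content(error)
    with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical relay
        smtp.send_message(msg)

try:
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        if sha256(src) != sha256(dst):
            raise IOError(f"Checksum mismatch copying {src}")
except Exception as exc:
    alert(str(exc))
    raise
```

Even with verification and alerting bolted on, you still face the long restore times; all this tells you is that the copy itself succeeded.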
TWO
The next way to back up your data is to use backup software, or agent-based backup. I put operating-system-based tools like NTBackup, along with Backup Exec, Veeam, Arcserve, Commvault, Carbonite, and others, in this category. They usually have agents that can quiesce applications like SQL Server or Exchange to create consistent, non-corrupt copies and get them moved elsewhere. They can write to almost any kind of media and usually have some sort of alerting, verification, and scheduling built in. This is a step in the right direction. However, these also have long restore times.
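As an illustration of what “application-consistent with verification” looks like in practice, here is a sketch that drives SQL Server’s native BACKUP command from Python via pyodbc. The server name, database name, and file path are hypothetical; note that pyodbc needs autocommit enabled because BACKUP cannot run inside a transaction.

```python
import pyodbc

# Hypothetical server/database; autocommit=True is required because
# BACKUP/RESTORE cannot run inside a transaction.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

backup_file = r"E:\backups\Sales_full.bak"  # hypothetical path

# Full backup with page checksums for corruption detection.
cursor.execute(
    f"BACKUP DATABASE [Sales] TO DISK = N'{backup_file}' WITH CHECKSUM, INIT"
)
while cursor.nextset():  # drain progress messages so the backup completes
    pass

# Confirm the backup file is actually readable before declaring success.
cursor.execute(f"RESTORE VERIFYONLY FROM DISK = N'{backup_file}' WITH CHECKSUM")
while cursor.nextset():
    pass

print("Backup completed and verified")
```

The agent-based products listed above do essentially this (plus scheduling and alerting) for you, across many applications at once.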
THREE
Finally, we come to hardware-based backups like SAN snapshots. In this case, the backup work is offloaded to the centralized storage array, so there is no CPU hit to your applications. We set up NetApp as the local data store, create the local backups via snapshots, and then replicate compressed copies of the data and the virtual machines to a disaster recovery (DR) site.
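For a rough idea of the moving parts, here is a sketch that triggers a local snapshot and a replication update over NetApp ONTAP’s SSH admin CLI. This assumes ONTAP 9 command syntax and a SnapMirror relationship that already exists; the cluster address, SVM, volume, and snapshot names are all placeholders.

```python
import subprocess

def run_ontap(command: str) -> None:
    """Run an ONTAP CLI command over SSH (admin@cluster1 is a placeholder)."""
    subprocess.run(["ssh", "admin@cluster1", command], check=True)

# Take a near-instant, array-side snapshot of the production volume.
run_ontap("volume snapshot create -vserver svm_prod -volume vol_sql "
          "-snapshot hourly_backup")

# Push only the changed blocks to the DR site over the existing
# SnapMirror relationship.
run_ontap("snapmirror update -destination-path svm_dr:vol_sql_dr")
```

Because the array does the copying and only changed blocks cross the wire, the application servers never feel the backup running.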
Once you’ve decided on a method of backing up your data, the next step (and this is mandatory) is to test it. I call this the fire drill. Every quarter, you need to stand up your entire environment and verify that (1) you have not lost any data and (2) you can actually bring everything back up. Many companies skip this step, which can be a devastating mistake. Getting data back online is only the first step in having a DR plan.
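Part of the fire drill can be automated. Here is a minimal sketch of check (1): hashing every file in production and in the restored copy, then flagging anything missing or mismatched. Both paths are hypothetical.

```python
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

production = tree_digest(Path("/data/production"))  # hypothetical paths
restored = tree_digest(Path("/mnt/dr_restore"))

missing = production.keys() - restored.keys()
mismatched = {p for p in production.keys() & restored.keys()
              if production[p] != restored[p]}

if missing or mismatched:
    raise SystemExit(
        f"Fire drill FAILED: {len(missing)} files missing, "
        f"{len(mismatched)} files corrupted"
    )
print("Fire drill check (1) passed: restored data matches production")
```

Check (2), actually standing the environment back up, still takes people walking through the runbook; that is exactly why the drill has to happen on a schedule.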
Bottom line:
An experienced consultant can help you work through these issues and recommend the right backup strategy, which will ultimately be the foundation of your DR plan. Learn more about Zumasys’ Disaster Recovery solution.