I am currently investigating some performance issues we are having backing up to disk using Veeam and then backing up to tape using Symantec Backup Exec 2010 R3. We have recently added some more critical servers to our Veeam backup job and are now running into problems.
We use a physical Windows 2008 32-bit server as our backup server: a single quad-core CPU with 4GB RAM. It has a direct 4Gb fibre connection to a NexSAN SATABoy with dual controllers, which is our backup storage device. This is configured in RAID 6 as one 11TB volume. The dual controllers on the SATABoy support 4Gb, and the backup server has two 8Gb fibre cards. The solution is designed to be fault tolerant, so any faults should be alleviated by alternative paths.
We have approx 25 VMs which we back up using Veeam B&R v6. We have one Veeam job which kicks off at 8pm, and we use the backup-from-SAN option, but we find the processing rate to be poor: approx 40-50MB/s. On a good day this can get to about 65MB/s. All our VMs bar 2 have Changed Block Tracking enabled, so incremental backups should be faster.
The backup server's fibre plugs into 2 SAN fibre switches (with redundant paths), and the ESX hosts also connect into these fibre switches. When Veeam backs up and restores, it should use the fibre connectivity to read directly from the SAN, eliminating any host involvement and giving better performance.
I am trying to understand why we are getting these bottlenecks, because when we back up to tape we are backing up data which sits on the SATABoy, which is directly fibre-connected to our Windows backup server, and again we get poor performance from Backup Exec. The tape library is LTO-5, connected via a 6Gb/s SAS card in the backup server. We found that when we upgraded from LTO-4 to LTO-5 the performance was exactly the same. Basically we are getting 4,300MB/min when, with the design of our backup infrastructure, we should be getting at least double this.
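For scale, Backup Exec reports its job rates in MB per minute, so the figures in this thread convert to MB/s as follows. A quick sanity check, assuming LTO-5's roughly 140MB/s native (uncompressed) drive speed; the rates are the ones quoted in this post:

```python
# Convert Backup Exec's MB/min figures to MB/s and compare them
# against LTO-5's approximate native (uncompressed) drive speed.

LTO5_NATIVE_MBS = 140.0  # approx. native LTO-5 throughput, MB/s

def mb_per_min_to_mb_per_s(rate_mb_min):
    """Convert a Backup Exec MB/min rate to MB/s."""
    return rate_mb_min / 60.0

san_rate = mb_per_min_to_mb_per_s(4300)    # tape job reading from the SATABoy
local_rate = mb_per_min_to_mb_per_s(8000)  # tape job reading from local SAS disks

print("SAN-sourced:   %.1f MB/s (%.0f%% of LTO-5 native)"
      % (san_rate, 100 * san_rate / LTO5_NATIVE_MBS))
print("Local-sourced: %.1f MB/s (%.0f%% of LTO-5 native)"
      % (local_rate, 100 * local_rate / LTO5_NATIVE_MBS))
```

The local-disk run comes close to saturating the drive, which points at reads from the SATABoy, not the tape path, as the bottleneck.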
Things we have tried
• Copying the data we normally back up from the SATABoy to the backup server's local disks, which are 450GB SAS disks. Backing up to tape from that location is a lot quicker, taking half the time; the rate doubles to about 8,000MB/min.
• Copying a file/folder from the SATABoy to local disk on the backup server gets approx 60MB/s (this is over fibre!!). As a test we unplugged the Ethernet to make sure the copy wasn't going over Ethernet; the copy carried on after we pulled the cables, which proved it is using the fibre connection.
• We have done a direct fibre connection from the SATABoy to the backup server and noticed a slight improvement, but nowhere near what we expected. We achieved about 115MB/s going direct, bypassing the SAN fibre switches.
• We have also updated the server firmware and all drivers. It's an IBM server, so we used UpdateXpress to update all components of the server.
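The file-copy test above can be put on a firmer footing with a crude sequential-read benchmark run on the backup server against the volume under test. A minimal sketch, with one big caveat: a real run needs a test file larger than RAM (or a tool that bypasses the cache), since Windows will otherwise serve a recently written file straight from memory and flatter the numbers:

```python
import os
import tempfile
import time

def sequential_read_mbs(path, block_size=1024 * 1024):
    """Read a file sequentially in block_size chunks; return MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = max(time.time() - start, 1e-9)
    return (total / (1024.0 * 1024.0)) / elapsed

# Self-contained demo against a ~64MB temporary file; in practice,
# point sequential_read_mbs() at a large file on the SATABoy volume.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(1024 * 1024) * 64)
    test_path = f.name

rate = sequential_read_mbs(test_path)
print("Sequential read: %.1f MB/s" % rate)
os.remove(test_path)
```

Comparing the same test against the local SAS disks and against the SATABoy volume would quantify the gap the file copy hinted at.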
Things we are looking to try next
• Upgrade the firmware on the SATABoy
• Upgrade the firmware on the SAN fibre switches
• Upgrading the server to 64-bit Windows 2008 R2, installing approx another 16GB of RAM and maybe another processor. Ideally we don't want to do this, as it's a massive piece of work and we don't see what significant gain we would get from it.
I am hoping someone who has had a similar problem can assist. I apologise in advance that it's not all directly related to Veeam, but I believe that if I can sort out the issues with Veeam and/or the hardware, the rest will also get sorted.
You seem to have identified the problem yourself. The SATA box isn't very quick, while the SAS drives show the expected sort of throughput.
You could try changing the stripe size on the SATABoy (and also the NTFS allocation unit size, if that's what it is formatted with). When implementing this sort of solution you really need to run some IOMeter tests to get an idea of the performance of the various components.
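As a rough stand-in for a full IOMeter run, the sensitivity to I/O size (which is what stripe size and NTFS allocation unit size affect) can be seen by timing reads of the same file at several block sizes. A minimal sketch only; real stripe-size tuning should be measured with IOMeter against the volume itself, since filesystem caching will flatter these figures:

```python
import os
import tempfile
import time

def read_throughput_mbs(path, block_size):
    """Sequentially read `path` in `block_size` chunks; return MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        chunk = f.read(block_size)
        while chunk:
            total += len(chunk)
            chunk = f.read(block_size)
    elapsed = max(time.time() - start, 1e-9)
    return (total / (1024.0 * 1024.0)) / elapsed

# Demo against a ~32MB temporary file; replace with a large file
# on the volume whose stripe/allocation unit size is being tuned.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (32 * 1024 * 1024))
    path = f.name

results = {}
for kb in (4, 64, 256, 1024):  # common allocation unit / stripe sizes
    results[kb] = read_throughput_mbs(path, kb * 1024)
    print("%4dKB blocks: %.1f MB/s" % (kb, results[kb]))
os.remove(path)
```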