Performance considerations for larger UrBackup server instances

If you are planning to set up a larger UrBackup server instance, you will find some performance hints in this post. “Large” is difficult to define in this context, because it depends on the number of clients, the number and size of the files in the backups, and the backup intervals.

If you plan on setting up a “larger” UrBackup instance, you should keep the following things in mind:

  • UrBackup uses an internal (SQLite) database. This database could cause performance problems, especially with a large number of backed-up files and full file backups. The database should be stored on storage suited for databases.
  • UrBackup has some tuning options but is nevertheless well optimized by default. You should only need to tune UrBackup in special circumstances. You will find information about the tuning options in the administration manual.
  • There are many platform options, each with its own caveats, so you should read up on the platform-specific performance considerations. For example, you should not run FreeNAS virtualized.

Your system will almost certainly be IO limited. On a running system you can verify this with the performance monitor on Windows, iostat on Linux, and zpool iostat on FreeBSD. Often the limit is random read/write performance (input/output operations per second, IOPS).
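As a quick sketch of what such a check might look like on Linux and FreeBSD (the 5-second interval is just an example):

    # Linux: extended per-device statistics, refreshed every 5 seconds
    iostat -x 5
    # A device pinned near 100 %util while throughput stays low points
    # to a random-IO (IOPS) bottleneck rather than a bandwidth limit.

    # FreeBSD/FreeNAS with ZFS: per-vdev statistics every 5 seconds
    zpool iostat -v 5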

If you want maximum IO performance, the following should therefore be the case:

  • The UrBackup database should be on an SSD. This should be a no-brainer, as the database does not get too large and SSDs are way faster than spinning disks at random reads/writes: a Samsung 840 Pro, for example, manages about 97K IOPS, roughly 900 times more than a spinning disk.
  • The UrBackup database should not be on the same disk as the backup storage.
  • The UrBackup database should not be on a RAID-5, as this is not optimal for databases.
  • If the database is still the bottleneck (since it is on a separate device, you can determine this with iostat or an equivalent tool; see the sketch after this list), you can use the “file entry cache” (see manual). This cache should be on a separate SSD, otherwise it will only cause more IO on the same device.
  • Save the filesystem metadata of your backup storage on an SSD and only the actual data on a spinning disk RAID-5/6 to get the maximum performance. This is only possible with btrfs on Linux.
  • Avoid full file backups. When doing a full file backup UrBackup has to read every file, calculate its hash and look that hash up in the database. This causes a lot of IO on the database, the backup storage and the client. UrBackup can run an unlimited number of incremental file backups without any full file backups.
  • Tune the maximum number of concurrent backups so that overall throughput is maximized.
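To see whether the database device is the bottleneck, you can compare per-device statistics side by side. A minimal sketch, assuming the database lives on sdb and the backup storage on sdc (device names are hypothetical):

    # Extended statistics for just these two devices, every 5 seconds
    iostat -x sdb sdc 5
    # If sdb shows high utilization and long waits while sdc is mostly
    # idle, the database is the limiting device and the "file entry
    # cache" on a separate SSD may help.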

Ways to absolutely kill performance:

  • Save the UrBackup database on ZFS/btrfs on a spinning disk. Databases that use WAL or intent logging (including SQLite3) are a known pathological case for copy-on-write file systems: the database files get horribly fragmented. Btrfs has background defragmentation for this (currently disabled by default, because it is not yet stable), but ZFS does not. Common mitigations are sketched after this list.
  • Save the UrBackup database on a RAID-5. See http://www.baarf.com/.
  • A lot of full file backups.
  • Enable ZFS deduplication without having enough RAM for the dedup table. A way to estimate the table size beforehand is sketched after this list.
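If the database has to live on btrfs, a commonly used mitigation (not from this post, so treat it as an assumption) is to disable copy-on-write for the database directory before any files are created, and to defragment already fragmented files once:

    # Mark a still-empty directory No_COW; files created inside it
    # inherit the attribute (the path is an example):
    mkdir -p /mnt/btrfs/urbackup-db
    chattr +C /mnt/btrfs/urbackup-db
    lsattr -d /mnt/btrfs/urbackup-db   # should list the 'C' attribute

    # One-off recursive defragmentation of existing files:
    btrfs filesystem defragment -r /mnt/btrfs/urbackup-db

Note that chattr +C only affects files created after the attribute is set, which is why the directory should still be empty.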
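For ZFS deduplication, you can get a rough size estimate before enabling it. The following simulates dedup on an existing pool and prints the projected dedup table (DDT) histogram; “tank” is a placeholder pool name:

    # Simulate deduplication without actually enabling it:
    zdb -S tank
    # Each DDT entry needs on the order of a few hundred bytes of RAM,
    # so multiply the reported number of allocated blocks accordingly.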

4 thoughts on “Performance considerations for larger UrBackup server instances”

  1. What’s the best way to migrate a current database to SSD on Windows? Create a shortcut to the database file on SSD or something along those lines?

    • I guess you could replace C:\Program Files\UrBackupServer\urbackup with a junction to an SSD (and copy the files to the SSD).

      Or you can install the complete server to the SSD and copy the old database into the urbackup folder.
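
      A minimal sketch of the junction variant on Windows (paths are examples, not from the post); stop the UrBackup Server service, e.g. via services.msc, before moving the files:

        rem Copy the database folder to the SSD, then replace it with a junction
        robocopy "C:\Program Files\UrBackupServer\urbackup" "D:\urbackup" /E /COPYALL
        rmdir /S /Q "C:\Program Files\UrBackupServer\urbackup"
        mklink /J "C:\Program Files\UrBackupServer\urbackup" "D:\urbackup"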

  2. Hello,

    this post is a bit old, but I may find some help here. I am running UrBackup 2.3.8 on a Ganeti LVM instance with an ext4 FS. The backup folder is an NFSv4 mount directly from the host (a Ganeti node with KVM hypervisor). It is an mdadm RAID 5 disk.
    The backup VM has 6 vCPUs and 12 GB RAM, yet a full file backup of one of the servers (about 140 GB) takes weeks.
    I tested the network speed; it was constant at 16-17 Mbit/s, and the I/O of the NFS mount was pretty good, at least with dd: it took around 10 seconds to write a 1 GB file.
    I just installed UrBackup without any further tweaks except turning off the background priority, and set the backup dir to the NFS mount.

    Can you give some hints?

    Thanks for the blog

    • Correction: the RAID 5 is my data storage; the backup mount is a single WD 2 TB SATA 6 Gb/s drive with 128 MB cache.
