Performance considerations for larger UrBackup server instances

If you are planning to set up a larger UrBackup server instance, this post collects some hints about performance. “Large” is difficult to define in this context, because it depends on the number of clients, the number and size of the files in the backups, and the backup intervals.

If you plan on setting up a “larger” UrBackup instance, you should keep the following things in mind:

  • UrBackup uses an internal (SQLite) database. This database can cause performance problems, especially with a large number of backed-up files and with full file backups. It should be stored on storage suited for databases (a quick way to check its size is sketched after this list).
  • UrBackup has some tuning options, but is nevertheless pretty well optimized by default. You should only have to tune it in special circumstances. You will find information about the tuning options in the administration manual.
  • There are many platform options and each has its own performance pitfalls, so you should read up on the platform-specific considerations. For example, you should not run FreeNAS virtualized.
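
Before deciding where to put the database, it helps to know how big it actually is. Below is a minimal Python sketch; the path and file name are assumptions based on a default Linux install (/var/urbackup) and may differ on your system. Run it while the server is stopped to avoid lock contention.

    # Rough size check of the UrBackup server database (sketch, not official tooling).
    # The path below is an assumption for a default Linux install; adjust as needed.
    import os
    import sqlite3

    DB_PATH = "/var/urbackup/backup_server.db"  # assumed default location

    if not os.path.exists(DB_PATH):
        raise SystemExit("database not found; adjust DB_PATH for your install")

    # On-disk size, including SQLite's side files if present.
    on_disk = sum(
        os.path.getsize(DB_PATH + suffix)
        for suffix in ("", "-wal", "-journal")
        if os.path.exists(DB_PATH + suffix)
    )
    print(f"on-disk size: {on_disk / 1024**2:.1f} MiB")

    # Logical size as SQLite sees it (pages * page size).
    con = sqlite3.connect(DB_PATH)
    pages = con.execute("PRAGMA page_count").fetchone()[0]
    page_size = con.execute("PRAGMA page_size").fetchone()[0]
    con.close()
    print(f"logical size: {pages * page_size / 1024**2:.1f} MiB")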

Your system will almost certainly be IO-limited. If you already have a running system, you can verify this with the performance monitor on Windows, iostat on Linux and zpool iostat on FreeBSD. Often the limit is random read/write performance (Input/Output operations per second, IOPS).
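
On Linux, iostat -x with a short interval already shows this directly; if you want to script the check instead, here is a minimal sketch using the third-party psutil package (pip install psutil) to sample per-disk IOPS:

    # Sample per-disk IOPS over a short interval (sketch; requires the psutil package).
    import time
    import psutil

    INTERVAL = 5  # seconds; a longer interval smooths out bursts

    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters(perdisk=True)

    for disk, b in before.items():
        a = after[disk]
        read_iops = (a.read_count - b.read_count) / INTERVAL
        write_iops = (a.write_count - b.write_count) / INTERVAL
        print(f"{disk}: {read_iops:.0f} read IOPS, {write_iops:.0f} write IOPS")

A spinning disk that hovers around 100–200 IOPS while its MB/s throughput stays low is saturated by random IO, not by bandwidth.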

If you want maximum IO performance, the following should therefore be the case:

  • The UrBackup database should be on an SSD. This should be a no-brainer: the database does not get too large, and SSDs are far faster than spinning disks at random IO. A Samsung 840 Pro, for example, does about 97K random IOPS, roughly 900 times what a typical spinning disk manages (~100 IOPS).
  • The UrBackup database should not be on the same disk as the backup storage.
  • The UrBackup database should not be on a RAID-5. Every small write on RAID-5 turns into a read-modify-write of the parity stripe, which is exactly the wrong trade-off for a database.
  • If the database is still the bottleneck (since it is on its own device, you can confirm this with iostat or an equivalent), you can enable the “file entry cache” (see manual). This cache should be on a separate SSD; otherwise it only causes more IO on the same device.
  • To get maximum performance, save the filesystem metadata of your backup storage on an SSD and only the actual data on a spinning-disk RAID-5/6. This is only possible with btrfs on Linux.
  • Avoid full file backups. For a full file backup, UrBackup has to read every file, calculate its hash value and look that value up in the database, which causes a lot of IO on both the database and the backup storage (and on the client); the sketch after this list illustrates the pattern. UrBackup can run an unlimited number of incremental file backups without any full file backups.
  • Tune the maximum number of concurrent backups so that total throughput is maximized.
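
To make the cost of full file backups concrete, here is a deliberately simplified Python sketch of the IO pattern they imply: read every file in full, hash it, and look the hash up in the database. This is illustrative only; the schema, table name and choice of SHA-256 are assumptions, not UrBackup's actual implementation.

    # Illustrative IO pattern of a full file backup: full read + hash + DB lookup
    # per file. NOT UrBackup's real code; schema and hash choice are made up.
    import hashlib
    import os
    import sqlite3

    con = sqlite3.connect(":memory:")  # stands in for the server database
    con.execute("CREATE TABLE file_entries (hash BLOB PRIMARY KEY, path TEXT)")

    def full_file_backup(root: str) -> None:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(path, "rb") as f:  # full read: IO on client and storage
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                # One random lookup per file: IO on the database device.
                row = con.execute(
                    "SELECT path FROM file_entries WHERE hash = ?", (h.digest(),)
                ).fetchone()
                if row is None:
                    con.execute(
                        "INSERT INTO file_entries VALUES (?, ?)", (h.digest(), path)
                    )
        con.commit()

Every file costs a full read plus at least one random database access; multiply that by millions of files and you get exactly the random-IO load the bullets above try to keep away from spinning disks. An incremental backup, by contrast, can skip unchanged files based on their metadata.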

Ways to absolutely kill performance:

  • Save the UrBackup database on ZFS/btrfs on a spinning disk. Databases that use write-ahead or intent logging (including SQLite3) are a known pathological case for copy-on-write file systems: the database files get horribly fragmented. Btrfs has a background defragmentation for that (currently disabled by default, because it is not stable), but ZFS does not.
  • Save the UrBackup database on a RAID-5. See http://www.baarf.com/.
  • Run a lot of full file backups.
  • Enable ZFS deduplication without having enough RAM for the dedup table; a rough estimate of the required RAM is sketched below.
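
For the last point, a back-of-the-envelope calculation shows how quickly the dedup table outgrows RAM. The figure of roughly 320 bytes of memory per dedup-table entry is a commonly cited rule of thumb, not an exact number; the real cost varies with the ZFS version and block size.

    # Rough ZFS dedup-table RAM estimate (rule of thumb, not an exact figure).
    BYTES_PER_DDT_ENTRY = 320     # commonly cited approximation
    pool_bytes = 10 * 1024**4     # example: 10 TiB of deduplicated data
    avg_block = 64 * 1024         # example: 64 KiB average block size

    entries = pool_bytes / avg_block
    ram_gib = entries * BYTES_PER_DDT_ENTRY / 1024**3
    print(f"~{ram_gib:.0f} GiB of RAM just for the dedup table")  # ~50 GiB here

If the table does not fit in RAM (or at least in L2ARC), every write has to fetch dedup-table entries from the pool itself, turning sequential backup writes into random reads on spinning disks.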