Status of next major version

I’m releasing UrBackup Server 0.26.1 and Client 0.40.1 soon. They contain only minor bug fixes and additionally a Russian translation.

The next major version, which will probably be 1.0, will have the following new features:

First of all, you will be able to start and stop backups from the server web interface.

Then I reorganized the settings, both on the server web interface and on the client. New among them is a bandwidth throttling feature which can limit the bandwidth usage of the backup server, both globally and per client; a sketch of the underlying idea follows below.
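
Throttling like this is typically implemented with a token bucket: a sender may only transmit as many bytes as there are tokens, and tokens refill at the configured rate. A minimal C++ sketch of the idea (illustrative only, not UrBackup's actual code; the class and names are made up):

    #include <algorithm>
    #include <chrono>
    #include <cstddef>

    // Hypothetical token-bucket throttle: a sender asks how many bytes
    // it may transmit; tokens refill at the configured rate (bytes/s).
    class TokenBucketThrottle
    {
    public:
        using Clock = std::chrono::steady_clock;

        explicit TokenBucketThrottle(double bytes_per_sec)
            : rate(bytes_per_sec), tokens(bytes_per_sec), last(Clock::now()) {}

        // Returns how many of the requested bytes may be sent right now.
        size_t allow(size_t requested)
        {
            refill();
            size_t granted = std::min(requested, static_cast<size_t>(tokens));
            tokens -= static_cast<double>(granted);
            return granted;
        }

    private:
        void refill()
        {
            Clock::time_point now = Clock::now();
            double elapsed = std::chrono::duration<double>(now - last).count();
            last = now;
            // Never accumulate more than one second's worth of burst.
            tokens = std::min(rate, tokens + elapsed * rate);
        }

        double rate;   // configured limit in bytes per second
        double tokens; // currently available bytes
        Clock::time_point last;
    };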

I added a few features to the new internet mode, described in the last post. By default UrBackup does not do full file backups or image backups over an internet connection, but this can be enabled. The total global backup speed and the backup speed for each client can be set separately from the local backup speeds. You can, e.g., use this on the client to prevent UrBackup from using all your bandwidth. In addition to being able to encrypt the transfer over the internet, UrBackup can now also compress it.
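
For the compression part, a sketch of what compressing a transfer buffer could look like with zlib (illustrative; I am not describing UrBackup's exact scheme here, and `deflate_buffer` is a made-up helper):

    #include <zlib.h>
    #include <cstddef>
    #include <vector>

    // Compress a buffer before sending it over the internet connection;
    // the receiving side would use uncompress() accordingly.
    std::vector<unsigned char> deflate_buffer(const unsigned char* data, size_t n)
    {
        uLongf out_len = compressBound(n);
        std::vector<unsigned char> out(out_len);
        // Z_BEST_SPEED trades ratio for low CPU use, which suits a
        // stream that is already limited by upload bandwidth.
        if (compress2(out.data(), &out_len, data, n, Z_BEST_SPEED) != Z_OK)
            return std::vector<unsigned char>();
        out.resize(out_len);
        return out;
    }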

There is a new feature which lets you archive certain backups at certain intervals. Archived backups are not deleted during cleanups until they are no longer archived. In addition to the automated archiving, you can also manually archive and un-archive file backups simply by clicking on them. For now only file backups can be archived.

Those are the major improvements. There are some minor ones as well.

Everything except the internet mode is ready for testing, so if anyone wants to help, send me a mail at martin@urbackup.org or drop by in the forums and I will upload the appropriate builds.

Internet Mode

Currently I’m working on a new internet mode for UrBackup. This means that with the upcoming new version you will be able to back up clients to a server on the internet.
This communication is of course encrypted and authenticated. It uses shared-key encryption with AES-256 in CFB mode. It should be easy to configure: you just need to supply the server with its internet name/IP and the ports the clients should connect to. These settings, as well as random keys, are then pushed to the clients via the local (trusted) network. They can be entered manually on the client side as well. Then the key is pushed from the client to the server.
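
For illustration, encrypting a message with AES-256 in CFB mode looks roughly like this with Crypto++ (a simplified sketch, not UrBackup's actual code; key distribution and message framing are left out):

    #include <cryptopp/aes.h>
    #include <cryptopp/modes.h>
    #include <cryptopp/osrng.h>
    #include <cryptopp/secblock.h>
    #include <string>

    // Encrypt 'plain' with a 32-byte shared key; the random IV is
    // prepended to the ciphertext (it does not need to be secret).
    std::string encrypt_cfb(const std::string& plain,
                            const CryptoPP::SecByteBlock& key)
    {
        CryptoPP::AutoSeededRandomPool rng;
        CryptoPP::byte iv[CryptoPP::AES::BLOCKSIZE];
        rng.GenerateBlock(iv, sizeof(iv));

        CryptoPP::CFB_Mode<CryptoPP::AES>::Encryption enc(key, key.size(), iv);

        std::string cipher(plain.size(), '\0');
        enc.ProcessData(reinterpret_cast<CryptoPP::byte*>(&cipher[0]),
                        reinterpret_cast<const CryptoPP::byte*>(plain.data()),
                        plain.size());

        return std::string(reinterpret_cast<const char*>(iv), sizeof(iv)) + cipher;
    }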

If the client is not in the local network, it tries to connect to the internet server, provided you entered something (e.g. a DNS name or IP address) there. Both sides then check whether they have the same shared key, and if they do, a normal connection is established, as if the client were in the local network, and backups can be performed.

I’ll now implement special options for disabling image backups and full file backups for clients connected via the internet. Then I will implement a special block-based (rsync-like) file transfer mode which will be used for those clients and which transfers less data in some scenarios.
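
The core of such rsync-like transfers is a weak rolling checksum that can slide over the file one byte at a time to find blocks the other side already has. A sketch of the classic rolling sum used by rsync (illustrative, not UrBackup's implementation):

    #include <cstddef>
    #include <cstdint>

    // Rolling checksum over a window of 'len' bytes: two running sums
    // that can be updated in O(1) when the window slides forward.
    struct RollingSum
    {
        uint32_t a = 0, b = 0;
        size_t len = 0;

        void init(const unsigned char* buf, size_t n)
        {
            a = b = 0;
            len = n;
            for (size_t i = 0; i < n; ++i)
            {
                a += buf[i];
                b += static_cast<uint32_t>(n - i) * buf[i];
            }
        }

        // Slide the window one byte: 'out' leaves, 'in' enters.
        // Unsigned wrap-around keeps the arithmetic correct mod 2^32.
        void roll(unsigned char out, unsigned char in)
        {
            a += in;
            a -= out;
            b += a;
            b -= static_cast<uint32_t>(len) * out;
        }

        uint32_t digest() const { return (b << 16) | (a & 0xffff); }
    };

When the weak checksum of a window matches a block hash sent by the other side, a strong hash (e.g. MD5) confirms the match, and only the non-matching data has to be transferred.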

Then you can look forward to backup archival and more detailed backup retention capabilities, which I’ll be working on next.

I’m better than you, Explorer

A long time ago I fixed a bug where UrBackup Server on Windows could not back up files with a path name longer than 255 characters. It’s here:
https://sourceforge.net/apps/mantisbt/urbackup/view.php?id=2
Yesterday I reinstalled a test server, and today I wanted to delete the old UrBackup backup folder. It threw error messages like the ones you see in the screenshot.
Apparently Windows Explorer (even in Windows Server 2008 R2) cannot delete files with path names longer than 255 characters. And contrary to the error message, you cannot move or rename them either. You have to install some alternative file manager to get rid of these files, or shorten directory names such that the path length is smaller than 256 characters. Or let UrBackup delete them. In my case I just left the folder there. I do not care. It’s a test server anyway.
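For completeness: programs can get past this limit by prefixing an absolute path with "\\?\", which makes the wide-character Windows APIs accept paths up to roughly 32,767 characters. A minimal sketch (illustrative; I am not claiming this is exactly what UrBackup does internally):

    #include <windows.h>
    #include <string>

    // Delete a file whose path exceeds MAX_PATH: the "\\?\" prefix
    // disables the path-length check. The path must be absolute and
    // must use backslashes.
    bool delete_long_path(const std::wstring& absolute_path)
    {
        std::wstring extended = L"\\\\?\\" + absolute_path;
        return DeleteFileW(extended.c_str()) != FALSE;
    }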

Well done, Explorer.

No BerkeleyDB backend

My previous announcement that there will be a Berkeley DB backend was too hasty. I ran into some (for now) unsolvable problems. I posted them in the official Oracle forums but seem to be getting no reply there:
https://forums.oracle.com/forums/thread.jspa?threadID=2307258&tstart=15

The final nail in the coffin was that the advertised increase in concurrency was not there. In my tests it performed even worse than SQLite in WAL journal mode. That and the perceived instability (I had a database corruption once) cast a pretty bad light on Berkeley DB. Maybe the SQL layer for Berkeley DB which I used is not stable yet?

On the plus side, the tables I had to denormalize cause a significant speed increase for SQLite as well, so all this work was not for nothing.
Maybe I will revisit Berkeley DB in a few months/years.

MSI installers with next version

I finally bit the bullet and worked on MSI installers for Windows. As anticipated, it was not easy. I used WiX.

They do have some advantages over an installer distributed as an “exe”:

  • One can add the Microsoft Visual Studio runtime as a “merge module”, thus avoiding having to launch its installer manually
  • Apparently centralized installation on domain computers is easier

On the negative side:

  • No shared 32/64-bit MSIs are possible. That means the user has to select the right one before downloading
  • You cannot add custom commands as easily as in NSIS

I think I will only publish 64-bit MSIs for now. Most Windows servers should be 64-bit now anyway, and I will still publish the “old” installers for users of older and 32-bit systems.

Optional BerkeleyDB database backend

In the upcoming version of UrBackup Server you will be able to choose BerkeleyDB as the database backend instead of SQLite. I am still deliberating whether I will make it the default on Windows. Not on Linux though, as the BerkeleyDB version UrBackup needs is not yet in e.g. Debian stable.

The advantage of BerkeleyDB over SQLite is that it is built for higher concurrency. So if you want to run a lot of simultaneous backups, you should definitely use it. The new SQLite compatibility layer of BerkeleyDB also made it very easy to add this alternative backend. (The BerkeleyDB people do not like you calling it a backend. They say it is a SQLite frontend for BerkeleyDB.)
It is not as robust as SQLite though. For example, it has some problems when the filesystem the database is saved on is at its capacity limit. In my case it slowed down to a crawl. Also, if you do not adjust the database parameters correctly, it may throw “out of memory” errors. I am still testing what the correct parameters are. If you just set them really high, it needs a lot of memory.
For example, it said “Lock table out of lock entries”, but increasing the number of lockers such that this error did not occur any more resulted in 1 GB more memory usage. This is simply too much. I then tracked the problem down to a table join in which BerkeleyDB seems to need a disproportionately large number of locks. Denormalizing a table such that this join is no longer necessary solved that problem, I think.
BerkeleyDB may also be slower in situations where no concurrency is involved, as it has a much more fine-grained locking system and in such situations locks too much. So using it with only a few clients will cause unnecessary locking overhead.
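
For reference, the lock-table limits mentioned above are configured on the Berkeley DB environment before it is opened. A sketch using the Berkeley DB C API (the numbers are made-up examples, not the values UrBackup will ship with):

    #include <db.h>

    // Raise the lock-table limits before opening the environment: too
    // low and you get the "Lock table out of lock entries" error
    // mentioned above; too high and the lock region wastes memory.
    int open_tuned_env(DB_ENV** envp, const char* home)
    {
        DB_ENV* env;
        int ret = db_env_create(&env, 0);
        if (ret != 0)
            return ret;

        env->set_lk_max_locks(env, 100000);   // example value
        env->set_lk_max_lockers(env, 10000);  // example value
        env->set_lk_max_objects(env, 100000); // example value

        ret = env->open(env, home,
                        DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                        DB_INIT_MPOOL | DB_INIT_TXN, 0);
        if (ret != 0)
        {
            env->close(env, 0);
            return ret;
        }
        *envp = env;
        return 0;
    }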

I plan on automatically converting the database as soon as the BerkeleyDB plugin is loaded into UrBackup Server. So the only inconvenience should be a long database reconstruction time during which the database is converted.
The table denormalization and some index rebuilding will also take place on upgrade. This also took a lot of time on my test system.
Once you have switched, converting back will be a manual job. I think I’ll have to write about that in detail once the new version is released.

New multiple-volume image backup feature and why you should not use it

I recently added a setting to configure which volume(s) UrBackup should back up as an image. I added this because it was requested in the forums. Feature requests work! Try them as well!

I will now move on to modifying the restore CD to allow one to select the volume to restore. This will not be much work, so expect it soon with UrBackup Client 0.39, Server 0.25 and Restore CD 0.3.

Nevertheless, I felt a little bit uneasy while adding this feature, because in my opinion it is not really necessary.

Incremental file backups are usually really fast. Incremental image backups currently are not. So you should use file backups whenever possible. As an additional benefit, restoring only part of the data is much easier with file backups than with image backups, where you have to mount the image. If you use Windows and want to be able to do a bare metal restore, though, having an image backup of your system volume is unavoidable. Restoring from files is not possible without reinstalling.

I will soon talk about why you do not need image backups at all if you use Linux.

So, because file backups are faster and more convenient, and you only need an image backup of your system volume, backing up any other volume as an image is suboptimal.

But UrBackup no longer stops you from doing it. It is your choice now.

Bare metal restore for Windows Server 2008 R2 now working

A few days ago, while I was testing the image capability of UrBackup on XenServer (another story), I noticed that Windows Server 2008 R2 creates a 100 MB partition called “System Reserved”. On closer inspection, a partition like this also exists on my Windows 7 (x64) machine. Windows 7 apparently keeps some files for booting there, so that it can still repair the main C: partition in case something happens to it.
One user already mentioned that such a partition exists and that UrBackup should also include it in the image backups, but I couldn’t reproduce the problem until now (I asked him which Windows he was using and he didn’t answer).
For the partition not to be created and used, you have to partition manually during the Windows setup.

This is a major bug/limitation of UrBackup, because it causes restored images of operating systems with this partition not to be bootable, so I decided to fix the problem in Server 0.24/Client 0.38 (further delaying said versions). If this partition exists, it gets downloaded to the server and is associated with the normal image backup of the “C” partition. If you restore an image, the “System Reserved” partition will be restored after the “C” partition, if it exists on the server, thus making a restored image of the operating systems mentioned above bootable.

Write fail

So UrBackup decided yesterday that it was time for a full backup on my laptop. I have a lot of small files, and just going over them and getting their file sizes and change dates usually takes half an hour. This is one reason why the incremental backup in UrBackup is so great: it does not have to go over all files.

Anyway, after 8 hours it was finally finished and the server was happily sorting the files into the database (much faster now – see the previous entry).
Then it basically started the full backup again! It could not copy the list of all transferred files to the right location, because all the space on the drive where these lists are saved had been used up by temporary files. Now, you probably should not store your temporary files on the same drive as, e.g., the UrBackup database (as that drive can fill up often). But this is nevertheless a problem, which I fixed.

Now it’s backing up again. 4 hours and going. Hopefully there will be no other error at the end.

Keep in mind that this is not a serious bug: the full backup was complete, no data was lost and no subsequent errors occurred. The bug only causes the backup to be worthless as a base for subsequent incremental backups (they have to back up more). It also only occurs if you have less space in the temporary folder than the files being backed up take up, and if your temporary folder is on the same drive as your application data, which is not a good idea anyway.

SQLite is paranoid by default

UrBackup uses SQLite to save information about your backups. The data ranges from small stuff, such as settings and when a file/image backup was done and where it is saved, to an entry for every copied or, in some cases, linked file in a file backup. If you have a lot of files there will be a lot of entries of the last kind, resulting in a very large SQLite file.
Maybe you have heard of SQLite. It is used practically everywhere right now. Firefox saves its bookmarks with it, and Android phones and the iPhone save their settings with it.
Here is a list of well-known companies which use SQLite: http://sqlite.org/famous.html

Lately SQLite got a new journalling mode, write-ahead logging (WAL), which allows simultaneous reads and writes. This journalling technique is also used in stand-alone databases like PostgreSQL, thus making SQLite competitive in the concurrency area as well. It works by appending changes to a separate file. After some accumulation of changes they are applied to the database (a checkpoint). Read access is blocked only while these changes are being applied.

Now, I thought that appending the changes to that file would not require the data to actually be written to disk, and that the write performance of SQLite would therefore be greatly improved (the appended data can be kept in system buffers until the checkpoint occurs).
There was some speed improvement, but not that much. The reason is that SQLite is paranoid by default and enforces that every change to the database is actually written to disk after some amount of data was saved. This is reasonable if the data is important, but in this case a backup interrupted by a power outage is worthless (incomplete) anyway.
Of course you can change this default behaviour by executing the following on each connection:

PRAGMA synchronous=NORMAL;

This will be done in the new version of both the UrBackup server and client, greatly speeding up anything write-related. Especially the statistics calculations are far faster now.
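
For illustration, setting this up with the sqlite3 C API might look like this (a sketch; error handling is trimmed):

    #include <sqlite3.h>
    #include <cstdio>

    // Open a database in WAL journal mode with relaxed syncing. With
    // synchronous=NORMAL, WAL content is only synced to disk at
    // checkpoints instead of at every commit.
    sqlite3* open_db(const char* path)
    {
        sqlite3* db = nullptr;
        if (sqlite3_open(path, &db) != SQLITE_OK)
        {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return nullptr;
        }
        sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);
        sqlite3_exec(db, "PRAGMA synchronous=NORMAL;", nullptr, nullptr, nullptr);
        return db;
    }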