UrBackup Server/Client/Restore 2.0.0 beta was recently released. This marks the beginning of the UrBackup 2.0 beta phase.
See the forums for download links and discussion.
With UrBackup 2.0, UrBackup no longer has any major limitations. If you still find some, please start a discussion in the forums. The next beta version will also properly support sparse file backups.
Currently the next major UrBackup version is quite close to being finished. A few major areas are still work in progress. Once they are done and I have done some overall testing, I will release a beta version.
The major changes in the new version are:
- Completely reworked file deduplication and file backup statistics calculation. This should be much faster, more scalable and more reliable now.
- The copy-on-write image backups on btrfs mentioned in the last post, synthetic full backups for the VHD/VHDZ file format, and settings for basing image backups on the last full or the last incremental backup (differential/incremental).
- File backups now include file metadata such as file modification times, ACLs, alternate data streams, etc.
- Backup of streaming data, e.g. the output of “mysqldump”/“pg_dump”. I plan to add basic backup scripts for popular open source databases to the client.
- A new file restore feature which restores file backups and properly restores the file metadata.
- The ACLs/file permissions are used to let users directly access their backups on the web interface from Explorer on the clients (via right-click -> Access/Restore backups).
- Proper backup of symbolic links. Symbolic links which point to folders/files that are part of the backup are backed up as symbolic links; symbolic links which point outside the selected backup set are followed or not followed depending on a setting.
- The web interface has been bootstrapified (http://getbootstrap.com/ – mombojuice did the work) and looks much more modern now
- Simultaneous image and file backups
- Backup and restore EFI boot sector and partition on UEFI systems. Restore CD that boots with UEFI firmware
- Client for Mac OS X
- Forward secrecy for Internet clients via ECDH and Internet client security improvement by using AES-GCM
- Switch from DSA to ECDSA for client update and server identity signatures
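The symbolic-link rule from the list above can be sketched as follows. This is only an illustrative Python sketch (UrBackup itself is written in C++), and the `follow_outside` flag is a hypothetical stand-in for the setting mentioned:

```python
import os

def classify_symlink(link_path, backup_root, follow_outside):
    """Decide how a backup run should treat a symlink (illustrative only)."""
    target = os.path.realpath(link_path)
    root = os.path.realpath(backup_root)
    # Links pointing at files/folders inside the backup set are
    # preserved as symlinks in the backup.
    if target == root or target.startswith(root + os.sep):
        return "store as symlink"
    # Links pointing outside the selected backup set are followed
    # (contents copied) or skipped, depending on the setting.
    return "follow" if follow_outside else "skip"
```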
Still to do:
- Lots of testing and bug fixing
- Backup and restore of file meta-data on Mac OS X and Linux
- Symbolic link backup handling on Mac OS X and Linux
- Automatic client update for Mac OS X like for Windows
- Restoring files which are in use on Windows (via restarting)
- Update Documentation
Remaining UrBackup limitations (to be addressed in a subsequent version):
- Recognize hard links and back up the files only once
- Backup only used areas of sparse files
- Continuous file backup
Once the “to do”s are done, this will be a big step forward for UrBackup. Having streaming file backups plus incremental, differential, synthetic full and full image backups basically allows you to implement pretty much every backup strategy with UrBackup. The only thing missing is continuous file backup, and I have already started work on that.
For example, you could use UrBackup instead of Time Machine on Mac OS X and do a full system restore via the file restore feature (this is not implemented at all – it is just an example of what it might be able to do). You probably don’t even need an image of your Windows system partition, but could restore it via the file backup restore (albeit inefficiently, because the hard links in C:\windows\winsxs are not handled properly).
I’m releasing UrBackup Server 0.26.1 and Client 0.40.1 soon. They contain only minor bug fixes, plus a Russian translation.
The next major version, which will probably be 1.0, will have the following new features:
First of all you will be able to start and stop backups from the server web interface.
Then I reorganized the settings, both on the server web interface and on the client. You can also see the new bandwidth throttling feature which can limit the bandwidth usage of the backup server, both globally and for each client.
I added a few features to the new internet mode described in the last post. By default UrBackup does not do full file backups or image backups over an internet connection, but this can be enabled. The total global backup speed and the backup speed for each client can be set separately from the local backup speeds. You can e.g. use this on the client to prevent UrBackup from using all your bandwidth. In addition to encrypting the transfer over the internet, UrBackup can now also compress it.
There is a new feature which lets you archive certain backups at certain intervals. Archived backups are not deleted during cleanups until they are un-archived. In addition to the automated archival you can also manually archive and un-archive certain file backups simply by clicking on them. For now only file backups can be archived.
Those are the major improvements. There are some minor ones as well.
Everything except the internet mode is ready for testing, so if anyone wants to help send me a mail at email@example.com or drop by in the forums and I will upload the appropriate builds.
Currently I’m working on a new internet mode for UrBackup. This means that you will be able to back up clients to a server on the internet with the upcoming new version.
This communication is of course encrypted and authenticated. It uses shared-key encryption with AES-256 in CFB mode. It should be easy to configure: you just need to supply the server with its internet name/IP and the ports the clients should connect to. These settings, as well as random keys, are then pushed to the clients via the local (trusted) network. They can also be entered manually on the client side. The key is then pushed from the client to the server.
If the client is not in the local network, it tries to connect to the internet server, provided something (e.g. a DNS name or IP address) was entered there. Both sides then check whether they have the same shared key; if they do, a normal connection is established, as if the client were in the local network, and backups can be performed.
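Such a “do both sides have the same key?” check can be done without ever sending the key itself, e.g. via an HMAC challenge-response. The following Python sketch shows the general idea only – it is not UrBackup’s actual wire protocol:

```python
import hashlib
import hmac
import os

def make_challenge():
    # The server sends a fresh random nonce to the connecting client.
    return os.urandom(32)

def respond(shared_key, challenge):
    # The client proves it knows the key by returning HMAC(key, challenge);
    # the key itself never travels over the wire.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    # The server recomputes the expected response and compares in
    # constant time to avoid leaking information through timing.
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```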
I’ll now implement special options to disable image backups and full file backups for clients connected via the internet. Then I will implement a special, block-based (rsync-like) file transfer mode which will be used for those clients and which transfers less data in some scenarios.
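A block-based transfer mode boils down to comparing per-block hashes with what the server already has and sending only the blocks that differ. Here is a simplified Python sketch of that idea (the block size is an arbitrary choice here, and real rsync-style transfers additionally use rolling checksums so insertions do not shift every subsequent block):

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; the real block size is an implementation detail

def block_hashes(data):
    """Hash each fixed-size block of a file's contents (server side)."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes, new_data):
    """Return (index, block) pairs the client actually has to transfer."""
    out = []
    for i in range(0, len(new_data), BLOCK_SIZE):
        idx = i // BLOCK_SIZE
        block = new_data[i:i + BLOCK_SIZE]
        # Send the block only if it is new or its hash differs.
        if idx >= len(old_hashes) or hashlib.sha256(block).digest() != old_hashes[idx]:
            out.append((idx, block))
    return out
```

If only one block of a large file changed, only that block (plus the hash list) crosses the wire instead of the whole file.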
Then you can look forward to backup archival and more detailed backup retention capabilities, which I’ll be working on next.
You know those parents who love their child so much that they do not see how bad the child really is. I think you have to have a similar relationship with GNU/Linux if you really want to use it as your desktop operating system. That does not mean that it is generally bad. Just like the child, it has its strong points, e.g. the kernel. I am an avid fan of Linus Torvalds’ autocratic management of kernel development. And have no doubt, it is autocratic. He decides in which direction the kernel moves, and the success the kernel has had is, in my opinion, largely caused by his pragmatic style.
One could say that the success of the kernel part of GNU/Linux was caused by his strong leadership. And in areas where the operating system does less well, there is a lack of leadership.
Take the window managers, for example. Mainly there are KDE and Gnome. They have different UI frameworks, and it is already kind of sacrilegious to use a KDE application in Gnome because it uses more memory. On top of that, the application won’t have the same style.
Of course they have different systems for start menu entries, tray icons, settings and pretty much everything else you can think of. Thankfully there is kind of a standardization body named FreeDesktop.org. The problem – as with every standardization process – is that it moves slowly and the resulting standard does not cover all useful scenarios. Thus new features are sometimes still not accessible in a common way.
We are talking about fragmentation within the operating system: in order to make a GNU/Linux application which has a UI and a native look & feel, you now need to do everything twice – once with GTK and once with Qt (used by KDE).
But it does not end there: you have to think of the zillion other window managers out there – XFCE, Unity, Fluxbox, you name it. Thankfully most are based on either GTK or Qt. Nevertheless, in each one of those your application may not display its tray icon correctly.
And as you perhaps know: The UrBackup Client displays a tray icon.
Anticipating all these complications, I am using a cross-platform toolkit for the UrBackup client: wxWidgets. Theoretically it is available for both Qt and GTK. As with every level of abstraction this gives you slightly less power, but the application is simple, right?
Well, try to show such a balloon popup on Windows and we can talk. But otherwise it really worked mostly well.
So I compiled the client on Debian and checked that everything was working. And it was. Then – to test it on a more popular desktop distribution – I downloaded Ubuntu.
The tray icon did not show up. It turns out Unity has a whitelist of apps that are allowed to show tray icons. This annoys many users, as e.g. Skype won’t show up any more. You can of course allow tray icons by editing some arcane setting somewhere. But this is not something the end user should have to do, right?
But it gets worse: After I edited that setting to allow all applications, it still did not work.
It turns out they did not like the FreeDesktop.org standard any more and made their own. In order for the tray icon to work I would have to use a separate library (libappindicator) to display it. Libappindicator only works with Unity on Ubuntu, so I would have to make and release a different version of my application just for Ubuntu. Not acceptable.
I’ll repeat: I’m using libwxgtk2.8, which is officially part of Ubuntu, to display a tray icon. This does not work because wxWidgets uses the FreeDesktop.org protocol to display the tray icon, which Ubuntu decided to abandon. The wxWidgets guys (understandably) do not seem to want to fix that issue in wxWidgets 2.9 either, probably because they do not want to implement something only for Ubuntu.
Simultaneously, the Ubuntu fork Mint, which does not use Unity, is becoming more popular. So perhaps this specific problem will resolve itself that way. The issue certainly seems to have caused some waves: http://blogs.gnome.org/bolsh/2011/03/11/lessons-learned/
Bottom line of that article is that the FreeDesktop.org standardization process is broken. And this is just one example of the kind of fragmentation we developers have to think about.
Compare that to Windows, where a program that displayed tray icons in Windows 95 can probably still display them in Windows 7. After 14 years! Too easy.
I said at the beginning that strong leadership is needed, just like for the Linux kernel. This strong leadership would have to coordinate efforts across different window managers and different distributions. This is difficult because the distribution is the thing the user sees and holds responsible when something is not working. This is also the reason why you install packages from your distribution: doing it any other way will probably cause something to not work or even break. Because only distributors are accountable, it is very difficult to establish something like FreeDesktop.org – an inter-distribution standardization body. There is simply no incentive to play along nicely, especially since standardization processes tend to be lengthy and difficult and you want your distribution to be progressive and modern.
The only way I see this dilemma being solved is by having one distribution which the majority of GNU/Linux (desktop) users use. I hoped this could be Ubuntu, led by Mark Shuttleworth. But Ubuntu is sadly moving in the wrong direction at the moment.
On top of that, it is moving too fast. Given some time and persuasion, Gnome probably would have adopted the libappindicator interface and I would not have this problem now.
Unity is – in my opinion – really crappy. I did not even find out how to switch applications without alt+tabbing, and had to use the Windows key to start one (start menu, where are you?). If this is someone’s idea of usability and end user friendliness, then I give up all remaining hope for Ubuntu.
Given all that, I have decided not to start building any packages for Ubuntu/Debian. The support matrix would just be too large, and I already named one issue. If you really love Linux so much that you use it as a desktop operating system, I leave it to you to grab the source code and build it yourself – no guarantees that it works on your specific distribution with your window manager. Thankfully the back-end part – the part that does the backups – is not dependent on any flaky UI/window manager stuff and so should be there to stay. If the frontend does not work for you (i.e. it does not display the tray icon), you can always set the directories it backs up on the server.
I hope that some time in the future someone from a distribution picks up that code and builds working packages for that distribution. But that someone won’t be me. This far, and no further! Sorry, Linux. I will still love you. But only as my server child. Not a desktop one.
I finally bit the bullet and worked on MSI installers for Windows. As anticipated, it was not easy. I used WiX.
MSIs do have some advantages over an installer distributed as an “exe”:
- One can add the Microsoft Visual C++ runtime as a “merge module”, thus avoiding having to run its installer manually
- Apparently installing centralized on domain computers is easier
On the negative side:
- No combined 32/64-bit MSIs are possible. That means the user has to select the right one before downloading
- You cannot add custom commands as easily as in NSIS
I think I will only publish 64-bit MSIs for now. Most Windows servers should be 64-bit by now anyway, and I will still publish the “old” installers for users of older and 32-bit systems.
I recently added a setting to configure which volume(s) UrBackup should back up as an image. I added this because it was requested in the forums. Feature requests work! Try them as well!
I will now continue on to modifying the restore CD to allow one to select the volume to restore. This will not be much work, so expect it soon with UrBackup Client 0.39, Server 0.25 and Restore CD 0.3.
Nevertheless I felt a little bit uneasy while adding this feature, because it is not really necessary in my opinion.
Incremental file backups are usually really fast. Incremental image backups currently are not, so you should use file backups whenever possible. As an additional benefit, restoring only part of the data is much easier with file backups than with image backups, where you have to mount the image. If you want to be able to do a bare-metal restore and you use Windows, though, having an image backup of your system volume is unavoidable: restoring from files is not possible without reinstalling.
I will soon talk about why you do not need image backups at all if you use Linux.
So, because file backups are faster and more convenient, and you only need an image backup of your system volume, backing up any other volume as an image is suboptimal.
But UrBackup no longer stops you from doing it. It is your choice now.
A few days ago, while I was testing the image capability of UrBackup on XenServer (another story), I noticed that Windows Server 2008 R2 creates a 100 MB partition called “System Reserved”. On closer inspection, such a partition also exists on my Windows 7 (x64) installation. Windows 7 apparently keeps some boot files there so that it can still repair the main C: partition in case something happens to it.
One user had already mentioned that such a partition exists and that UrBackup should include it in the image backups, but I couldn’t reproduce the problem until now (I asked him which Windows he was using and he didn’t answer).
In order for the partition not to be created and used, you have to partition manually during the Windows setup.
As this is a major bug/limitation of UrBackup – it causes restored images of operating systems with this partition not to be bootable – I decided to fix the problem in Server 0.24/Client 0.38 (further delaying said versions). If this partition exists, it gets downloaded to the server and is associated with the normal image backup of the “C” partition. If you restore an image, the “System Reserved” partition will be restored after the “C” partition (if it exists on the server), thus making a restored image of the operating systems mentioned above bootable.
UrBackup uses SQLite to save information about your backups. The data ranges from small stuff, such as settings and when a file/image backup was done and where it is saved, to an entry for every copied (or, in some cases, linked) file in a file backup. If you have a lot of files there will be a lot of entries in the latter case, resulting in a very large SQLite file.
Maybe you have heard of SQLite. It is used practically everywhere right now: Firefox saves its bookmarks with it, and Android phones and the iPhone their settings.
Here is a list of well-known companies which use SQLite: http://sqlite.org/famous.html
Lately SQLite got a new journalling mode, write-ahead logging (WAL), which allows simultaneous reads and writes. This kind of journalling is also used in stand-alone databases like PostgreSQL, thus making SQLite competitive in the concurrency area as well. It works by appending changes to the database to a separate file. After some changes have accumulated, they are applied to the database (a checkpoint). Read access is blocked only during this application of changes.
Now I thought that appending the changes to that file does not require the data to actually be written to disk, and that because of this the write performance of SQLite would be greatly improved (the appended data can be kept in system buffers until the checkpoint occurs).
There was some speed improvement, but not that much. The reason is that SQLite is paranoid by default and enforces that every change to the database is actually written to disk after some data was saved. This is reasonable if the data is important, but in this case a backup interrupted by a power outage is worthless (incomplete) anyway.
Of course you can change this default behaviour by executing a pragma such as “PRAGMA synchronous=NORMAL;” on each connection. This will be done in the new version of both the UrBackup server and client, greatly speeding up anything write-related. Especially the statistics calculations are far faster now.
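For illustration, here is the combination of WAL journalling and relaxed syncing using Python’s built-in sqlite3 module (chosen for brevity – UrBackup itself uses the SQLite C API, where the same pragmas apply; the table is a made-up example):

```python
import os
import sqlite3
import tempfile

# WAL requires an on-disk database, so create a throwaway file.
db_path = os.path.join(tempfile.mkdtemp(), "backup_info.db")
conn = sqlite3.connect(db_path)

# Switch to write-ahead logging: changes are appended to a separate
# "-wal" file and applied to the main database at checkpoints,
# so readers and a writer can work concurrently.
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]

# Relax the default "fsync after every transaction" behaviour. With WAL,
# NORMAL syncs only around checkpoints - acceptable here, since a backup
# interrupted by a power outage would be discarded anyway.
conn.execute("PRAGMA synchronous=NORMAL;")

conn.execute("CREATE TABLE files (path TEXT, size INTEGER)")
conn.executemany("INSERT INTO files VALUES (?, ?)",
                 [("a.txt", 123), ("b.txt", 456)])
conn.commit()
```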
I just noticed that the new unstable UrBackup client crashed on Windows XP. Looking at the logfile did not give any indication of where the error came from. I could, however, say where it did not come from, because only one thread crashed and the other ones continued writing debug info into the logfile.
To get some more information on where this error was happening, I installed userdump on the XP laptop (http://support.microsoft.com/kb/241215). This gave me a nice image of the process memory at the point of failure. Sadly, Visual Studio cannot display that much information about the dump file; for example, I could not get it to display a call stack. I went on to install WinDbg, which is part of the Windows SDK. It had all the information needed to pinpoint the problem and showed a call stack including line numbers in the source files. It was, however, mysterious how it got the locations of the source files and the line number of the error, because of course I had used a release build on the XP laptop, and release builds do not include debug information. Strange.
Even though it could display everything quite fine, WinDbg complained about missing debug information. Which is, as explained, only natural. But then why could it show the call stack with function names?
Analyzing the information WinDbg provided did not help: The error was at a position where normally no error should occur. I double checked.
So whatever magic WinDbg does, it must be wrong. Right? I continued by pointing WinDbg at the right debug information, but that did not change the position of the error in the code. I was just in the process of collecting all the right DLLs to get a debug build to run on the XP laptop when the day-saving thing happened: I cleaned the whole project and rebuilt everything. The universal advice for every computer-related problem: “Have you tried turning it off and on again?”. Of course it worked perfectly after that.
Visual Studio must have done something wrong when calculating what it had to rebuild, leaving the XP build target outdated while running with a more recent host process. This caused a function which now has a parameter to be called without one, which then caused the memory access error.
Once again the solution was easy but finding the solution was hard.