Understanding obnam performance

Lionel Bouton lionel-subscription at bouton.name
Thu Jun 26 01:51:20 BST 2014

On 25/06/2014 20:38, Jason F. McBrayer wrote:
> Hi. I'm trying out obnam, coming from duplicity. I'm primarily
> interested in changing because of space usage. Duplicity needs
> periodic full backups to protect against the risk of corrupt
> incremental backup chains, and I try to keep at least two full backups
> and the incrementals of the most recent full backup at all times. I
> get the impression that my space usage will converge at a lower level
> while keeping me some older backups with obnam.
> So far, everything is fine, except that my nightly backups take a lot
> longer with obnam than with duplicity. I am backing up about 400GB to
> a 1 TB external drive. With both apps, full backups take a long time
> -- around 18 hours for duplicity and 24 hours for obnam, but, again, I
> don't have to keep doing full backups with obnam. My problem is with
> the nightly incrementals to which I am accustomed. In both cases,
> they're started by scripts in /etc/cron.daily (this is on Fedora). The
> jobs start around 3 AM, and duplicity consistently finishes by 6 AM.
> Obnam tends to finish between 12 PM and 2 PM. Both keep the machine
> they're running on quite busy, so I'd rather not use it interactively
> while either is running.
> It looks like the biggest part of the slowdown is scanning maildirs
> with lots of messages, but that's just an impression I get from
> watching the backups' interactive display from the first few runs
> before I put it in cron.
> Is this just an expected tradeoff, or is there something I can do to
> tune obnam's performance for my use-case?

I've just tested obnam 1.8 to get recent performance results. Earlier
tests suggest that 1.7.4 behaved similarly.

Tuning lru-size and/or upload-queue-size can make a significant
difference in performance.

Here are some test results for this setup:
- data to back up stored on a btrfs volume on an SSD: ~155,000 files, 3.66 GiB,
- local system: 64-bit Linux, Python 2.7.5, OpenSSH 6.6p1 with HPN patches,
- local CPU: Intel(R) Core(TM) i5-3317U @ 1.70GHz (mostly idle),
- remote system: 64-bit Linux, OpenSSH 6.6p1 with HPN patches,
repository data on ext4 on a standard 7200rpm SATA disk, large memory
(everything should fit in memory; only writes should hit the disk),
- very minimal changes to the backed-up data during tests, so successive
backups only check for differences and transfer nearly no content,
- backup over Wi-Fi (~1ms RTT, max speed over sftp ~3MB/s).

I use this command line, without any configuration file:

    obnam --repository=sftp://obnam@<servers>/~/repo \
        --compress-with=deflate --client-name=<myclient> \
        backup <list_of_directories>

During testing I added --lru-size <l> --upload-queue-size <q> with
different <l> and <q> values.
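To compare several <l>/<q> combinations, a simple shell loop around the
backup command can record wall-clock times. This is only a sketch: the
function below uses a stand-in command where the real obnam invocation
(with the placeholders from above filled in) would go.

```shell
# Hypothetical benchmark loop: times one backup run per parameter pair.
backup_cmd() {
    l=$1; q=$2
    # Stand-in for the real command, e.g.:
    # obnam --repository=sftp://obnam@<servers>/~/repo \
    #     --compress-with=deflate --client-name=<myclient> \
    #     --lru-size "$l" --upload-queue-size "$q" \
    #     backup <list_of_directories>
    sleep 0
}

results=""
for l in 256 1024; do
    for q in 128 512; do
        start=$(date +%s)
        backup_cmd "$l" "$q"
        end=$(date +%s)
        # Record elapsed wall-clock seconds for this combination.
        results="$results l=$l,q=$q:$((end - start))s"
    done
done
echo "$results"
```

Each run then appears as an l=…,q=…:<seconds> entry, which makes it easy
to build a table like the one below.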

The resident memory of the obnam process grows steadily (probably
filling caches) until it reaches a fairly stable ceiling during the
backup (cache full, or nothing new to put in the cache). It rises again
rapidly at the very end (during commits/unlock/...). The value reported
below is the maximum witnessed near the end of the backup, obtained
either from the RES column of the htop utility or the RSS column of
"ps aux".
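The measurement can be automated instead of watched by hand. The sketch
below polls a process's RSS (in KiB, as ps reports it) once per second
and keeps the maximum seen; it is demonstrated on a short-lived sleep
process rather than a real obnam run.

```shell
# Poll a process's resident set size until it exits; print the max seen.
sample_max_rss() {
    pid=$1
    max=0
    while kill -0 "$pid" 2>/dev/null; do
        # 'ps -o rss=' prints the RSS in KiB with no header.
        rss=$(ps -o rss= -p "$pid" 2>/dev/null | tr -d ' ')
        rss=${rss:-0}
        [ "$rss" -gt "$max" ] && max=$rss
        sleep 1
    done
    echo "$max"
}

# Demonstration on a stand-in process; use the obnam PID in practice.
sleep 2 &
max_rss=$(sample_max_rss $!)
echo "max RSS: ${max_rss} KiB"
```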

Each combination was tested at least twice, unless it was judged
uninteresting after the first run. Timings seem consistent enough given
the systems involved (the system hosting the repository is often busy),
and memory usage is very consistent across runs.

The default values, as fetched from __init__.py, are l=256 and q=128.

|     Conditions     |      Time       |  Memory  | Number of runs |
|--------------------|-----------------|----------|----------------|
| default values     | 22m21s - 24m51s | ~260M    |       2        |
| l=10000, q=default | 13m45s - 15m03s | ~332M    |       2        |
| l=default, q=250   | 08m23s - 10m29s | ~278M    |       5        |
| l=default, q=350   | 02m42s - 02m49s | 272-276M |       2        |
| l=default, q=400   | 02m13s - 02m18s | 268-272M |       3        |
| l=default, q=500   | 02m10s - 02m16s | 267-272M |       3        |
| l=default, q=512   | 02m13s - 02m14s | 265-269M |       2        |
| l=512,    q=512    | 01m55s - 02m06s | 322-326M |       3        |
| l=768,    q=512    | 01m55s - 01m58s | 397-418M |       3        |
| l=1024,   q=512    | 01m53s - 01m55s | 403-418M |       3        |
| l=2048,   q=512    | 01m55s - 01m59s | 408-410M |       3        |
| l=4096,   q=512    | ~01m58s         | ~419M    |       1        |
| l=default, q=600   | 02m14s - 02m26s | 269-272M |       4        |
| l=default, q=750   | 02m13s - 02m15s | 266-272M |       2        |
| l=default, q=1000  | 02m19s - 02m20s | ~266M    |       2        |
| l=default, q=10000 | 02m23s - 02m35s | ~266M    |       2        |

So in my configuration, when nearly no data changes between backups,
--lru-size=1024 --upload-queue-size=512 is at least 11x faster than the
default configuration.
--upload-queue-size has the greatest effect, with no adverse side
effect (memory usage remains at the same level).
For a little extra boost, at a small cost in memory usage, I can
increase --lru-size to 1024.
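For permanent use, these settings can go into a configuration file
instead of the command line. Assuming obnam's usual ini-style
configuration with a [config] section whose keys mirror the long option
names (the file name and placeholders here are illustrative):

```shell
# Write a hypothetical obnam configuration file carrying the tuned
# values; key names mirror the long command-line options.
cat > obnam.conf <<'EOF'
[config]
repository = sftp://obnam@<servers>/~/repo
compress-with = deflate
client-name = <myclient>
lru-size = 1024
upload-queue-size = 512
EOF
```

The file could then be passed to obnam (e.g. via its --config option,
as cliapp-based tools typically accept), keeping the cron script's
command line short.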

Note that obnam was using 100% CPU most of the time in the fastest
configuration; replacing --verbose with --quiet didn't change the
running time.

Best regards,

Lionel Bouton
