Hello @lucas_diniz
Welcome to the Bacularis User Group.
Thank you for kind words about Bacularis 🙂
Regarding the restore issue, there are two places in this wizard where a problem can occur for large backups. Neither is caused directly by Bacularis, but both are visible in Bacularis.
The two factors that determine the severity of the problems are:
1) type of Bacula Catalog database
2) number of files in backup
To prepare data for restore, Bacularis uses the Bacula Bvfs interface (Bacula Virtual File System). Bvfs provides many advantages to the restore process, but it also has disadvantages, such as the issues you experienced with backups containing a large number of files. Restore in Bconsole does not use Bvfs, which is why you did not experience these problems there.
The first place is in the second step of the wizard, where Bacula builds the Bvfs cache. This is exactly what your attached screenshot shows. If your Bacula catalog is MySQL/MariaDB, building the Bvfs cache will be slower than with PostgreSQL, which is why I mentioned the database type in point 1). With many files the request can time out, as you correctly noticed. The number of files in the backup also determines how long it takes.
The second place is the moment when you start the restore. The Bvfs part responsible for that has a hardlinks algorithm that causes performance problems with many files. It does not matter whether your backup contains hardlinks or not; the algorithm runs for all files anyway.
Some time ago I researched the hardlinks algorithm problem a bit. It was added to Bacula in version 11.0. A workaround is to create an index in the database, which should help:
CREATE INDEX file_jobid_fileindex_idx ON File (JobId, FileIndex);
(creating the index can take some time if you have many database records)
As for a final solution, the Bacula developers know about the problem. I hope a fix will come soon.
Regarding Bvfs and restore performance with many files, I would propose optimizing the database as much as possible. I don't know which database type you use, but you can find some hints in this documentation chapter:
https://bacularis.app/doc/brief/optimization.html#database
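For example, with PostgreSQL a simple first step (just a sketch, not a full tuning guide; see the chapter above for more) is to reclaim dead rows and refresh the planner statistics for the catalog tables that Bvfs queries heavily:

```
-- Reclaim dead rows and refresh planner statistics
-- for the largest catalog tables used by Bvfs.
VACUUM ANALYZE File;
VACUUM ANALYZE Path;
VACUUM ANALYZE PathVisibility;
```

With fresh statistics the query planner can choose better plans for the Bvfs queries, which often makes a noticeable difference on large File tables.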
It is also useful to speed up building the Bvfs cache by building it asynchronously just after the backup, in a Job Runscript. This way, when you need to restore something and go to the restore wizard, the cache is already prepared and you don't need to wait at all. Building the Bvfs cache is an incremental process that can be run anytime you want. For example, you can create a post-backup script in a Runscript that does:
echo '.bvfs_update jobid=YOUR_JOB_ID' | bconsole
You could use the Bvfs update command directly in the Runscript Console=XXXX directive, but Bacula had a problem with executing the .bvfs_update command in the Console directive. I don't know if it has already been fixed, so I propose putting .bvfs_update in a script run via Command=XXXX. But you can try both, of course.
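As a minimal sketch of the Command= variant (the job name and script path here are just placeholders, adjust them to your setup), the Job resource could contain:

```
Job {
  Name = "backup-example"
  ...
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    # %i is substituted by Bacula with the JobId of the finished job
    Command = "/etc/bacula/scripts/update_bvfs_cache.sh %i"
  }
}
```

and the script itself:

```
#!/bin/sh
# update_bvfs_cache.sh - incrementally update the Bvfs cache
# for the job whose JobId is passed as the first argument.
echo ".bvfs_update jobid=$1" | bconsole
```

With this in place, the Bvfs cache for each new job is built right after the backup finishes, so the restore wizard finds it already prepared.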
On the Bacularis side, we have been considering for a long time adding a Bconsole-based restore to the restore wizard, but so far it has not been decided.
Best regards,
Marcin Haba (gani)