Hello from France,
I hope all is well with you.
I'm wondering about S3 migration jobs and restore jobs...
When a job is migrated, a local copy is first written to the cache (/path/cache/client/S3_file/part.1, etc.) before the upload; that local copy is not necessarily kept on the server after the migration, once the data is on S3 (S3/path/S3_file/part.1, etc.).
However, restoring files via the GUI with the restore job does not work: the restore does not seem to read the volume directly from S3, but instead looks for the part in the local cache directory:
director-sd JobId 158243: Warning: acquire.c:234 Read open Cloud device "S3_DEVICE_client" (/opt/data/cache/client) Volume "S3_DATA_client_date" failed: ERR=cloud_dev.c:1471 Could not open(/opt/data/cache/client/S3_DATA_client_date/part.1,OPEN_READ_ONLY,0640): ERR=No such file or directory
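For what it is worth, I assume the parts are still present in the bucket. I believe a bconsole check along these lines should show what the cloud driver sees for that volume, both in the cache and on S3 (just a sketch using my storage name and the volume from the error above, I have not verified the exact syntax for my version):
cloud list storage=S3_STORAGE_client volume=S3_DATA_client_date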
Here is the relevant configuration in bacula-sd.conf:
Cloud {
Name = "S3_cloud"
Driver = "S3"
HostName = "s3_hostname"
BucketName = "bucket_name"
AccessKey = "XXX"
SecretKey = "XXX"
Protocol = "HTTPS"
UriStyle = "Path"
TruncateCache = "No"
Upload = "EachPart"
}
Device {
Name = "S3_DEVICE_client"
Description = "Device for client logical volume before migration"
MediaType = "S3_MEDIA_client"
DeviceType = "Cloud"
ArchiveDevice = "/opt/data/cache/client"
RemovableMedia = no
RandomAccess = yes
AutomaticMount = yes
LabelMedia = yes
AlwaysOpen = no
MaximumVolumeSize = 5000000000
MaximumFileSize = 1000000000
MaximumConcurrentJobs = 5
MaximumPartSize = 5000000000
Cloud = "S3_cloud"
}
The migration job uses the S3 device/media, with a MigrationTime on the source pool and the client's S3 pool as NextPool... Here is my bacula-dir.conf:
Storage {
Name = "S3_STORAGE_client"
Description = "S3 logical bucket storage for client"
Address = "bacula_server"
Password = "XXX"
Device = "S3_DEVICE_client"
MediaType = "S3_MEDIA_client"
MaximumConcurrentJobs = 5
TlsEnable = yes
TlsPskEnable = no
TlsRequire = yes
TlsCaCertificateFile = "/etc/ssl/bacula/bacula_ca.crt"
TlsCertificate = "/etc/ssl/bacula/bacula_server.crt"
TlsKey = "/etc/ssl/bacula/bacula_server.key"
}
Pool {
Name = "Full_client"
Description = "Pool for client full backups"
PoolType = "Backup"
LabelFormat = "Full_${Job}_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}_${Hour:p/2/0/r}:${Minute:p/2/0/r}:${Second:p/2/0/r}"
LabelType = "Bacula"
UseVolumeOnce = yes
PurgeOldestVolume = no
ActionOnPurge = "Truncate"
MaximumVolumeBytes = 5000000000
VolumeRetention = 950400
MigrationTime = 604800
NextPool = "S3_client"
Storage = "STORAGE_client"
AutoPrune = yes
Recycle = no
Catalog = "MyCatalog"
FileRetention = 950400
JobRetention = 950400
}
Pool {
Name = "S3_client"
Description = "Pool used to migrate full oldest client volumes on S3"
PoolType = "Backup"
LabelFormat = "S3_${Job}_${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}_${Hour:p/2/0/r}:${Minute:p/2/0/r}:${Second:p/2/0/r}"
LabelType = "Bacula"
UseVolumeOnce = yes
PurgeOldestVolume = no
ActionOnPurge = "Truncate"
MaximumVolumeBytes = 5000000000
VolumeRetention = 15724800
Storage = "S3_STORAGE_client"
AutoPrune = yes
Recycle = no
Catalog = "MyCatalog"
FileRetention = 15724800
JobRetention = 15724800
}
Job {
Name = "MIGRATION_client"
Description = "Job to migrate oldest full jobs of client on S3"
Type = "Migrate"
Messages = "Standard"
Storage = "STORAGE_client"
Pool = "Full_client"
NextPool = "S3_client"
Client = "client"
Fileset = "MIGRATION_fileset"
Schedule = "SCHEDULE_migration"
PruneJobs = yes
PruneFiles = yes
PruneVolumes = yes
PurgeMigrationJob = yes
Enabled = yes
MaximumConcurrentJobs = 5
Priority = 10
SelectionType = "PoolTime"
AllowDuplicateJobs = no
}
Job {
Name = "RESTORE_S3_client"
Type = "Restore"
Messages = "Standard"
Storage = "S3_STORAGE_client"
Pool = "S3_client"
Client = "client"
Fileset = "RESTORE_fileset"
Where = "/tmp/restore"
MaxWaitTime = 1200
Enabled = yes
MaximumConcurrentJobs = 5
Priority = 10
}
So, after the migration, the files in the server cache and the old jobs' data/metadata are purged and deleted from disk and from the Catalog, so that only the migrated job's information is kept on S3 and in the database.
I have tried several restore jobs and always get the same error... Do I have to manually re-create the remote S3 file locally (in the cache) before I can use the restore job? Or do I just need to keep the directory tree in the cache folder if I want to be able to restore the files?
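If the parts really do have to be staged back into the cache before restoring, I guess the manual equivalent would be something like this in bconsole (again only my assumption, untested; the storage and volume names are taken from my config and from the error above):
cloud download storage=S3_STORAGE_client volume=S3_DATA_client_date
and then run the restore, possibly followed by a cloud truncate afterwards to empty the cache again.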
Best regards from here,
Romain