This topic describes the processes for restoring Dremio from a backup: the CLI command
dremio-admin restore, restoring a project from AWS, and other general information about restoring Dremio.
Dremio metadata and user-uploaded files can be backed up and restored. A restore does not restore the contents of the distributed cache, such as the acceleration cache, downloaded files, and query results.
A backup can only be restored using the same version of Dremio that the backup was created on.
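The same-version rule can be enforced with a small guard before attempting a restore. The function below is a hypothetical sketch (Dremio does not ship it); the version strings are supplied by the operator, and dremio-admin itself performs the authoritative check:

```shell
# Hypothetical guard: refuse to restore when the backup's Dremio version
# differs from the currently installed one. Both values are supplied by
# the operator; dremio-admin performs the authoritative check.
check_versions() {
    backup_ver="$1"
    current_ver="$2"
    if [ "$backup_ver" = "$current_ver" ]; then
        echo "versions match: safe to restore"
    else
        echo "version mismatch: backup=$backup_ver installed=$current_ver" >&2
        return 1
    fi
}

check_versions "24.3.0" "24.3.0"
```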
<dremio_home>/bin/dremio-admin restore -d <BACKUP_PATH> [additional options]
To obtain a list of restore options on the command line:
./dremio-admin restore -h
* -d, --backupdir: backup directory path, for example /mnt/dremio/backups or hdfs://$namenode:8020/dremio/backups
* -h, --help: show usage
* -r, --restore: restore Dremio metadata
* -v, --verify: verify backup contents
./dremio-admin restore -d /tmp/dremio_backup
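The recommended sequence, verify first and restore only on success, can be scripted. The wrapper below is a sketch; a stub function stands in for dremio-admin here so the control flow can be demonstrated without a live installation:

```shell
# Sketch: run a restore only if verification succeeds first.
restore_with_verify() {
    admin="$1"    # path to dremio-admin (or a stand-in)
    backup="$2"   # backup directory path
    if "$admin" restore -d "$backup" -v; then
        "$admin" restore -d "$backup" -r
    else
        echo "backup verification failed; not restoring" >&2
        return 1
    fi
}

# Stub standing in for dremio-admin, so the flow can be shown here:
fake_admin() { echo "dremio-admin $*"; }
restore_with_verify fake_admin /tmp/dremio_backup
```

In production you would pass the real path, e.g. restore_with_verify /opt/dremio/bin/dremio-admin /tmp/dremio_backup.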
The following are step-by-step instructions for restoring Dremio from a backup.
1. Make sure all cluster nodes are shut down.
2. On the master node, create a copy of <DREMIO_LOCAL_DATA_PATH> (for example, /data/dremio/, depending on your setup).
3. On the master node, delete the contents of <DREMIO_LOCAL_DATA_PATH>, then create an empty directory named db under <DREMIO_LOCAL_DATA_PATH> that is readable and writable by the user running the restore tool and by the Dremio daemon.
4. On the master node, run the following command, located under <DREMIO_HOME>/bin/, to verify the backup (the -v option verifies backup contents):
$ ./dremio-admin restore -d <BACKUP_FOLDER_PATH> -v
5. If the above step is successful, run the following command, also located under <DREMIO_HOME>/bin/ (the -r option initiates a restore):
$ ./dremio-admin restore -d <BACKUP_FOLDER_PATH> -r
Look for the confirmation message. For example:
... Restored from backup at /tmp/dremio_backup_2017-02-23_18.25, dremio tables 14, uploaded files 1
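Step 3 above, recreating an empty db directory with the right permissions, can be sketched as follows. The temporary directory here stands in for your actual <DREMIO_LOCAL_DATA_PATH>:

```shell
# Stand-in for <DREMIO_LOCAL_DATA_PATH>; use your real path in production.
DREMIO_LOCAL_DATA_PATH=$(mktemp -d)

# Recreate an empty metadata directory called "db"...
mkdir -p "$DREMIO_LOCAL_DATA_PATH/db"
# ...readable and writable by the user running the restore tool.
# In production, also chown it to the user the Dremio daemon runs as.
chmod u+rwx "$DREMIO_LOCAL_DATA_PATH/db"

ls -ld "$DREMIO_LOCAL_DATA_PATH/db"
```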
Should you need to roll back to a previous version of your Dremio project using AWS, this section describes the steps necessary for restoring from a backup.
This process applies only if you previously created a backup, either manually or automatically. Alternatively, you can create a cloned environment to test new versions of Dremio against your current project.
If you’re attempting to roll back from an upgrade, ensure that the stack you’re creating is using the version of Dremio your project was previously using.
Once these steps are complete, the project has been rolled back to the previous version.
This section outlines the process for cloning a copy of a Dremio instance. This is useful when administrators wish to create a testing environment for upgrading to a new version of Dremio.
Prior to this, you must have created a manual backup of your Dremio project and copied it to the new instance, for example to the /tmp folder. Also, ensure that the original ownership and group (dremio:dremio) of the backup folder is preserved.
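Preserving ownership and permissions when copying the backup can be done with cp -a. The snippet below demonstrates mode preservation on throwaway files; in production the source would be your real backup folder, run as root, with an explicit chown -R dremio:dremio afterwards:

```shell
# Demonstration with throwaway files; in production the source would be
# your real backup folder and you would run the copy as root.
SRC=$(mktemp -d)
touch "$SRC/backup_info"
chmod 640 "$SRC/backup_info"

cp -a "$SRC" "$SRC.copy"   # -a preserves mode, timestamps, and (as root) owner/group

# In production, restore the expected ownership explicitly, e.g.:
#   sudo chown -R dremio:dremio /tmp/<backup_folder>
ls -l "$SRC.copy/backup_info"
```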
Stop the Dremio service: sudo service dremio stop.
Edit the dremio.conf file located in /etc/dremio/ and add the following line:
provisioning.migration.enabled = "true"
Remove the existing local data:
sudo rm -rf /var/lib/dremio/db/*
sudo rm -rf /var/lib/dremio
Run the restore tool, where -r initiates a restore:
sudo -u dremio ./dremio-admin restore -d <BACKUP_FOLDER_PATH> -r
Restored from backup at /tmp/dremio_backup, dremio tables 14, uploaded files 1.
If the project you created in step #3 was created using a paid edition, skip to step #13. If the project was created using a free edition and you want to enable enterprise features, you may optionally add a license key here. To do so, obtain a License Activation Key from your Dremio Account Executive, complete the following steps, and then skip to step #13.
Save the license key to a file (for example, license.txt) and take note of the directory path to include in the next step.
sudo -u dremio ./dremio-admin add-license -f <LICENSE_FILE_PATH>
Start the Dremio service: sudo service dremio start.
You may see some warnings in the server.log file that look like:
WARN c.d.s.r.MaterializationCache - couldn't expand materialization
This can be corrected by refreshing all of the reflections you have, or by simply waiting until the reflections refresh themselves automatically.
When Dremio is running on an edge node (with a Hadoop client installed) and dremio-admin restore -v or -r is performed, the restore tool by default looks for the backup path in HDFS and reports that the file does not exist. The folder/file does indeed not exist in Hadoop; it exists only on the local filesystem.
The restore fails with the following stack trace:
Error Message: java.io.FileNotFoundException: File /tmp/dremiobackup does not exist.
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:901)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:112)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:961)
        at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:958)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:958)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1580)
        at com.dremio.dac.util.BackupRestoreUtil.scanInfoFiles(BackupRestoreUtil.java:191)
        at com.dremio.dac.util.BackupRestoreUtil.validateBackupDir(BackupRestoreUtil.java:230)
        at com.dremio.dac.cmd.Restore.main(Restore.java:81)
verify failed java.io.FileNotFoundException: File /tmp/dremiobackup does not exist
To work around this, prefix the backup path with file:/// to point to the local filesystem. For example, use the following command instead:
./dremio-admin restore -d file:///tmp/dremiobackup/dremio_backup_2019-04-22_20.30 -r
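The scheme prefix is what disambiguates which filesystem the path is resolved against. The toy function below mirrors that resolution logic; it is an illustration, not Dremio's actual code:

```shell
# Toy model of how a path's scheme selects the target filesystem.
resolve_fs() {
    case "$1" in
        file://*) echo "local filesystem" ;;
        hdfs://*) echo "HDFS" ;;
        *)        echo "default filesystem (fs.defaultFS, HDFS on an edge node)" ;;
    esac
}

resolve_fs "/tmp/dremiobackup"          # no scheme: resolved against the default FS
resolve_fs "file:///tmp/dremiobackup"   # explicit scheme: always local disk
resolve_fs "hdfs://namenode:8020/dremio/backups"
```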