Version: current [25.x]

Polaris Catalog (Preview; Enterprise edition)

Dremio supports Snowflake’s managed Polaris Catalog as an Iceberg catalog source. With this connector, you can connect to and read from internal and external Polaris catalogs.

Prerequisites

You will need the catalog Service URI, Client ID, and Client Secret from the Snowflake setup. For a walkthrough of the Snowflake setup, refer to Query a table in Polaris Catalog using a third-party engine.
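As a rough pre-flight check outside Dremio, you can confirm the Service URI and credentials by requesting an OAuth token from the catalog. The sketch below only constructs the request; the /v1/oauth/tokens path and the PRINCIPAL_ROLE:ALL scope come from the Apache Iceberg REST catalog specification that Polaris implements and may vary by deployment, and the URI and credential values are placeholders.

```python
from urllib.parse import urlencode

# Placeholder catalog Service URI copied from the Snowflake setup.
catalog_uri = "https://<account>.snowflakecomputing.com/polaris/api/catalog"

# Polaris exposes the Iceberg REST catalog API; its OAuth2 token
# endpoint is /v1/oauth/tokens relative to the catalog URI (assumption
# worth verifying for your deployment).
token_url = catalog_uri.rstrip("/") + "/v1/oauth/tokens"

# Client-credentials grant using the Client ID and Client Secret from
# the Snowflake setup (placeholder values shown).
payload = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<your_client_id>",
    "client_secret": "<your_client_secret>",
    "scope": "PRINCIPAL_ROLE:ALL",
})
print(token_url)
```

POSTing this form body to the token URL (for example with curl or requests) should return a bearer token if the credentials are valid.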

Configuring Polaris Catalog as a Source

To add a Polaris Catalog source:

  1. On the Datasets page, to the right of Sources in the left panel, click the Add Source icon.

  2. In the Add Data Source dialog, under Metastores, select Polaris Catalog.

    The New Polaris Catalog dialog box appears, which contains the following tabs:

    • General: Create a name for your Polaris Catalog source, specify the endpoint URI and Polaris Catalog, and set the authentication.

    • Advanced Options: Use catalog properties and credentials to set up storage authentication and authorization.

    • Reflection Refresh: (Optional) Set a policy to control how often reflections are refreshed and expired.

    • Metadata: (Optional) Specify dataset handling and metadata refresh.

    • Privileges: (Optional) Add privileges for users or roles.

    Refer to the following sections for guidance on how to edit each tab.

General

To configure the source connection:

  1. For Name, enter a name for the source.

    note

    The name you enter must be unique in the organization. Choose a name that is easy for users to reference; the name cannot be edited after the source is created. The name cannot exceed 255 characters and may contain only the following characters: 0-9, A-Z, a-z, underscore (_), and hyphen (-).

  2. Enter the name of the Polaris Catalog.

  3. For Endpoint URI, specify the catalog service URI.

  4. In the Authentication section, use the Client ID and Client Secret created during the configuration of a service connection in Snowflake's Polaris Catalog.

  5. (Optional) For Allowed Namespaces, add each namespace and, if you want to include its entire subtree, select the accompanying option. Tables are organized into namespaces, which can be at the top level or nested within one another. Namespace names cannot contain periods or spaces.
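The naming rules above can be pre-checked before creating the source. This is a client-side sketch of the documented constraints (1-255 characters from 0-9, A-Z, a-z, underscore, hyphen for the source name; no periods or spaces in namespace names); Dremio itself enforces these rules in the UI.

```python
import re

# Source names: 1-255 chars, limited to 0-9, A-Z, a-z, underscore, hyphen.
SOURCE_NAME = re.compile(r"^[0-9A-Za-z_-]{1,255}$")

def valid_source_name(name: str) -> bool:
    return bool(SOURCE_NAME.match(name))

# Namespace names: non-empty, no periods or spaces.
def valid_namespace(name: str) -> bool:
    return name != "" and "." not in name and " " not in name

print(valid_source_name("polaris_prod"))  # True
print(valid_source_name("polaris prod"))  # False: space not allowed
print(valid_namespace("sales.eu"))        # False: periods not allowed
```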

Advanced Options

To set the advanced options:

  1. (Optional) Enable Asynchronous Access for Parquet Datasets is selected by default; uncheck the box to deactivate it. When enabled, Dremio uses asynchronous access and local caching when possible so that asynchronous requests do not wait for data to return from your storage. Activating this option can enable faster query times.

  2. For Catalog Properties and Catalog Credentials, you must provide the storage authentication manually; this preview version of Polaris Catalog does not support vended credentials.

    Dremio supports Amazon S3, Azure Storage, and Google Cloud Storage (GCS) as object storage services. For acceptable storage authentication configurations, see the following catalog properties and credentials for each service option.

    Amazon S3 Access Key

    | Type | Name | Value | Description |
    |---|---|---|---|
    | property | fs.s3a.aws.credentials.provider | org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider | Required value for a Polaris Catalog source |
    | credential | fs.s3a.access.key | <your_access_key> | AWS access key ID used by the S3A file system |
    | credential | fs.s3a.secret.key | <your_secret_key> | AWS secret key used by the S3A file system |

    Amazon S3 Assumed Role

    | Type | Name | Value | Description |
    |---|---|---|---|
    | property | fs.s3a.assumed.role.arn | arn:aws:iam::*******:role/OrganizationAccountAccessRole | AWS ARN for the role to be assumed |
    | property | fs.s3a.aws.credentials.provider | com.dremio.plugins.s3.store.STSCredentialProviderV1 | Required value for a Polaris Catalog source |
    | property | fs.s3a.assumed.role.credentials.provider | org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider | Use only if the credential provider is AssumedRoleCredentialProvider; lists credential providers to authenticate with the STS endpoint and retrieve short-lived role credentials |
    | credential | fs.s3a.access.key | <your_access_key> | AWS access key ID used by the S3A file system |
    | credential | fs.s3a.secret.key | <your_secret_key> | AWS secret key used by the S3A file system |

    Azure Storage with Microsoft Entra ID

    | Type | Name | Value | Description |
    |---|---|---|---|
    | property | fs.azure.account.auth.type | OAuth | |
    | property | fs.azure.account.oauth2.client.id | <your_client_ID> | Client ID from App Registration within the Azure Portal |
    | property | fs.azure.account.oauth2.client.endpoint | https://login.microsoftonline.com/<ENTRA ID>/oauth2/token | Microsoft Entra ID from the Azure Portal |
    | credential | fs.azure.account.oauth2.client.secret | <your_client_secret> | Client secret from App Registration within the Azure Portal |

    Azure Storage Shared Key

    | Type | Name | Value | Description |
    |---|---|---|---|
    | credential | fs.azure.account.key | <your_account_key> | Storage account key |

    Google Cloud Storage (GCS) Using Default Credentials

    | Type | Name | Value | Description |
    |---|---|---|---|
    | property | dremio.gcs.use_keyfile | false | Required value for a Polaris Catalog source |

    Google Cloud Storage (GCS) Using KeyFile

    | Type | Name | Value | Description |
    |---|---|---|---|
    | property | dremio.gcs.clientId | <your_client_ID> | Client ID from GCS |
    | property | dremio.gcs.projectId | <your_project_ID> | Project ID from GCS |
    | property | dremio.gcs.clientEmail | <your_client_email> | Client email from GCS |
    | property | dremio.gcs.privateKeyId | <your_private_key_ID> | Private key ID from GCS |
    | property | dremio.gcs.use_keyfile | true | Required value for a Polaris Catalog source |
    | credential | dremio.gcs.privateKey | <your_private_key> | Private key from GCS |
  3. Under Cache Options, review the following table and edit the options to meet your needs.

    | Cache Option | Description |
    |---|---|
    | Enable local caching when possible | Selected by default, along with asynchronous access for cloud caching. Uncheck the checkbox to disable this option. For more information about local caching, see Columnar Cloud Cache. |
    | Max percent of total available cache space to use when possible | Specifies the disk quota, as a percentage, that a source can use on any single executor node when local caching is enabled. The default is 100 percent of the total disk space available on the mount point provided for caching. Either enter a percentage manually in the value field or use the arrows to the far right to adjust it. |
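As a worked example of the storage authentication in step 2, the Amazon S3 Access Key option maps to one catalog property and two catalog credentials, sketched here as plain key/value pairs; the angle-bracket values are placeholders you replace with your own keys.

```python
# Entries for the Catalog Properties field in the Advanced Options tab.
catalog_properties = {
    # Required value for a Polaris Catalog source using S3 access keys.
    "fs.s3a.aws.credentials.provider":
        "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider",
}

# Entries for the Catalog Credentials field (placeholder values).
catalog_credentials = {
    "fs.s3a.access.key": "<your_access_key>",   # AWS access key ID
    "fs.s3a.secret.key": "<your_secret_key>",   # AWS secret key
}
```

The other storage options in the tables above follow the same pattern: rows marked "property" go in Catalog Properties, and rows marked "credential" go in Catalog Credentials.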

Reflection Refresh

You can set the policy that controls how often reflections are scheduled to be refreshed automatically, as well as the time limit after which reflections expire and are removed. See the following options.

| Option | Description |
|---|---|
| Never refresh | Prevents automatic reflection refresh. The default is to refresh automatically. |
| Refresh every | How often to refresh reflections, specified in hours, days, or weeks. Ignored if Never refresh is selected. |
| Set refresh schedule | Specify the daily or weekly schedule. |
| Never expire | Prevents reflections from expiring. The default is to expire automatically after the time limit below. |
| Expire after | The time limit after which reflections expire and are removed from Dremio, specified in hours, days, or weeks. Ignored if Never expire is selected. |

Metadata

The following settings control metadata handling.

Dataset Handling

  • Remove dataset definitions if underlying data is unavailable (default). If this box is not checked and the underlying files in a folder are removed, or the folder or source becomes inaccessible, Dremio does not remove the dataset definitions. This option is useful when files are temporarily deleted and replaced with a new set of files.

Metadata Refresh

These are the optional Metadata Refresh parameters:

  • Dataset Discovery: The refresh interval for fetching top-level source object names such as databases and tables.

    | Parameter | Description |
    |---|---|
    | Fetch every | Sets how often Dremio fetches object names. Specify the frequency in minutes, hours, days, or weeks; the default is 1 hour. |
  • Dataset Details: The metadata that Dremio needs for query planning, such as information about fields, types, shards, statistics, and locality. Use the following parameters to control how dataset details are fetched.

    | Parameter | Description |
    |---|---|
    | Fetch mode | Dremio fetches details only for previously queried objects in a source. By default, this is set to Only Queried Datasets. |
    | Fetch every | Sets how often Dremio fetches dataset details. Specify the frequency in minutes, hours, days, or weeks; the default is 1 hour. |
    | Expire after | Sets when dataset details expire. Specify the expiry in minutes, hours, days, or weeks; the default is 3 hours. |

Privileges

You have the option to grant privileges to specific users or roles. See Access Controls for additional information about privileges.

To grant access to a user or role:

  1. For Privileges, enter the user name or role name that you want to grant access to and click the Add to Privileges button. The added user or role is displayed in the USERS/ROLES table.

  2. For the users or roles in the USERS/ROLES table, toggle the checkmark for each privilege you want to grant on the Dremio source that is being created.

  3. Click Save after setting the configuration.

Updating a Polaris Catalog Source

To update a Polaris source:

  1. On the Datasets page, under Metastores in the panel on the left, find the name of the source you want to edit.

  2. Right-click the source name and select Settings from the list of actions. Alternatively, click the source name and then click the Settings icon at the top right corner of the page.

  3. In the Source Settings dialog, edit the settings you wish to update. Dremio does not support updating the source name. For information about the settings options, see Configuring Polaris Catalog as a Source.

  4. Click Save.

Deleting a Polaris Catalog Source

note

If the source is in a bad state (for example, Dremio cannot authenticate to the source or the source is otherwise unavailable), only users who belong to the ADMIN role can delete the source.

To delete a Polaris source:

  1. On the Datasets page, click Sources > Metastores in the panel on the left.

  2. In the list of data sources, hover over the name of the source you want to remove and right-click.

  3. From the list of actions, click Delete.

  4. In the Delete Source dialog, click Delete to confirm that you want to remove the source.

note

Deleting a source causes all downstream views that depend on objects in the source to break.