
Manage Engines

An engine is a Dremio entity that manages compute resources. Each engine has one or more replicas that are created for executing queries. An engine replica consists of a group of executor instances defined by the engine capacity.

When you signed up for Dremio, an organization and a project were automatically created. Each new project has a preview engine. The preview engine, by default, will scale down after 1 hour without a query. As the name suggests, it provides previews of queries and datasets. Unlike other engines, the preview engine cannot be disabled.

If an engine is created with a minimum replica of 0, it remains idle until the first query runs. No executor instances run initially. When you run a query, Dremio allocates executors to your project and starts the engine. Engines automatically start and stop based on query load.

Sizes

Dremio provides a standard executor, which is used in all of our query engine sizes. Query engine sizes are differentiated by the number of executors in a replica. For each size, Dremio provides a default query concurrency, as shown in the table below.

| Replica Size | Executors per Replica | DCUs | Default Concurrency | Max Concurrency |
|---|---|---|---|---|
| 2XSmall | 1 | 16 | 2 | 20 |
| XSmall | 1 | 32 | 4 | 40 |
| Small | 2 | 64 | 6 | 60 |
| Medium | 4 | 128 | 8 | 80 |
| Large | 8 | 256 | 10 | 100 |
| XLarge | 16 | 512 | 12 | 120 |
| 2XLarge | 32 | 1024 | 16 | 160 |
| 3XLarge | 64 | 2048 | 20 | 200 |
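The table above can be read as a simple lookup. The following sketch (an illustration only, not a Dremio API; the figures are copied from the table) shows how size and replica count determine an engine's total footprint:

```python
# Illustrative lookup of the engine size table above (not a Dremio API).
# Each entry: (executors per replica, DCUs, default concurrency, max concurrency)
ENGINE_SIZES = {
    "2XSmall": (1, 16, 2, 20),
    "XSmall":  (1, 32, 4, 40),
    "Small":   (2, 64, 6, 60),
    "Medium":  (4, 128, 8, 80),
    "Large":   (8, 256, 10, 100),
    "XLarge":  (16, 512, 12, 120),
    "2XLarge": (32, 1024, 16, 160),
    "3XLarge": (64, 2048, 20, 200),
}

def engine_footprint(size: str, replicas: int) -> dict:
    """Total executors and DCUs for an engine running `replicas` replicas.

    Concurrency figures in the table are per replica, so an engine with
    several replicas can run proportionally more concurrent jobs.
    """
    executors, dcus, default_conc, _max_conc = ENGINE_SIZES[size]
    return {
        "executors": executors * replicas,
        "dcus": dcus * replicas,
        "default_concurrency": default_conc * replicas,
    }
```

For example, a Medium engine scaled to 2 replicas runs 8 executors in total.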

States

An engine can be in one of the following states.

| State | Description |
|---|---|
| Running | Represents an enabled engine (replicas are provisioned automatically or run per the configured minimum number of replicas). You can use this engine for running queries. |
| Adding Replica | Represents an engine that is scaling up (adding a replica). |
| Removing Replica | Represents an engine that is scaling down (removing a replica). |
| Disabling | Represents an engine that is being disabled. |
| Disabled | Represents a disabled engine (no engine replicas have been provisioned dynamically, or there are no active replicas). You cannot use this engine for running queries. |
| Starting Engine | Represents an engine that is starting (transitioning from the disabled state to the enabled state). |
| Stopping Engine | Represents an engine that is stopping (transitioning from the enabled state to the disabled state). |
| Stopped | Represents an enabled engine that has been stopped (zero replicas running). |
| Deleting | Represents an engine that is being deleted. |

Autoscaling

The autoscaling capability dynamically manages your query workload based on parameters that you set for the engine. Dremio starts and stops engine replicas as needed, monitoring replica health to provide seamless query execution.

The following table describes the engine parameters along with their role in autoscaling.

| Parameter | Description |
|---|---|
| Size | The number of executors that make up an engine replica. |
| Max Concurrency | The maximum number of jobs that can run concurrently on an engine replica. |
| Last Replica Auto-Stop | The time to wait before deleting the last replica if the engine is not in use. Not valid when the minimum number of engine replicas is 1 or higher. The default value is 2 hours. |
| Enqueued Time Limit | If no resources are available, the query waits for the period set by this parameter. When the limit is exceeded, the query is canceled and you are notified with a "timeout during slot reservation" error. The default value is 5 minutes. |
| Query Runtime Limit | The time a query can run before it is canceled. The default value is 5 minutes. |
| Drain Time Limit | The time that an engine replica continues to run after the engine is resized, disabled, or deleted before it is terminated and any running queries fail. The default value is 30 minutes. If no queries are running on a replica, the replica is terminated without waiting for the drain time limit. |
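The way the enqueued and runtime limits interact over a query's lifetime can be sketched as a pure function. This is an illustrative model only, not Dremio's actual implementation; the function name and return strings are invented for the example:

```python
def query_outcome(wait_minutes: float, run_minutes: float,
                  enqueued_limit: float = 5, runtime_limit: float = 5) -> str:
    """Decide a query's fate from the two time limits described above.

    All values are in minutes; the defaults mirror the documented defaults.
    Illustrative sketch only -- not Dremio's actual control-plane logic.
    """
    if wait_minutes > enqueued_limit:
        # No replica slot freed up in time: "timeout during slot reservation".
        return "canceled: enqueued time limit exceeded"
    if run_minutes > runtime_limit:
        return "canceled: query runtime limit exceeded"
    return "completed"
```

A query that waits 2 minutes and runs for 3 completes; one that waits 6 minutes never starts.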

For a query that is submitted to an engine, the control plane assigns an engine replica to that query. Replicas are created dynamically and assigned to queries based on the query workload: the control plane observes the workload and the currently active replicas to determine whether to scale up or down. A replica remains assigned to a query until execution completes. For a given engine, Dremio Cloud does not scale replicas above the configured maximum or below the configured minimum.
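The scaling decision described above can be modeled as clamping the workload's replica demand to the configured bounds. This is a sketch under stated assumptions (that demand is simply jobs divided by per-replica concurrency), not Dremio's actual algorithm:

```python
import math

def desired_replicas(active_jobs: int, max_concurrency: int,
                     min_replicas: int, max_replicas: int) -> int:
    """Replicas needed so active jobs fit within per-replica concurrency,
    clamped to the engine's configured bounds. Illustrative sketch only."""
    needed = math.ceil(active_jobs / max_concurrency) if active_jobs else 0
    return max(min_replicas, min(max_replicas, needed))
```

With 25 jobs and a per-replica concurrency of 10, an engine bounded at 0-5 replicas scales to 3; an idle engine with a minimum of 1 keeps one replica running.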

Monitor Engine Health

The Dremio Cloud control plane monitors engine health and manages unhealthy replicas to provide a seamless query execution experience. Replica nodes send periodic heartbeats to the control plane, which uses them to determine liveness. If a heartbeat is not received from a replica node, the control plane marks that node as unhealthy and replaces it with a healthy one.
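The liveness check works roughly like the following sketch. The timeout value and function names are invented for illustration; the document does not specify Dremio's actual heartbeat interval:

```python
import time

HEARTBEAT_TIMEOUT_SECS = 30  # invented value for illustration

def find_unhealthy(last_heartbeat: dict, now: float = None) -> list:
    """Return replica nodes whose last heartbeat is older than the timeout.

    `last_heartbeat` maps node id -> epoch seconds of that node's most
    recent heartbeat. Sketch of the liveness idea only, not Dremio's code.
    """
    now = time.time() if now is None else now
    return [node for node, ts in last_heartbeat.items()
            if now - ts > HEARTBEAT_TIMEOUT_SECS]
```

Nodes returned by such a check would be replaced with healthy ones by the control plane.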

View All Engines

To view engines:

  1. In the Dremio Cloud application, click the Project Settings icon in the side navigation bar.
  2. Select Engines in the project settings sidebar to see the list of engines in the project. On the Engines page, you can also filter engines by status: click the Status dropdown list to see the different statuses.

Add an Engine

To add a new engine:

  1. On the Project Settings page, select Engines in the project settings sidebar. The Engines page lists the engines created for the project. Every engine created in a project is created in the cloud account associated with that project.
  2. Click the Add Engine button on the top-right of the Engines page to create a new engine.
  3. In the Add Engine dialog, for Engine, enter a name.
  4. (Optional) For Description, enter a description.
  5. (Optional) For Size, select the size of the engine. The size designates the number of executors.
  6. (Optional) For Max Concurrency per Replica, enter the maximum number of jobs that can be run concurrently on this engine.

The following parameters are for Engine Replicas:

  1. For Min Replicas, enter the minimum number of engine replicas that Dremio Cloud keeps running at any given time. Set it to 0 to enable auto-stop, or to 1 or higher to guarantee low-latency query execution. The default number of minimum replicas is 0.
  2. For Max Replicas, enter the maximum number of engine replicas that Dremio Cloud scales up to. The default number of maximum replicas is 1.
tip

You can use these settings to control costs and ensure that excessive replicas are not spun up.

  1. Under Advanced Configuration, for Last Replica Auto-Stop, enter the time to wait before deleting the last replica if the engine is not in use. The default value is 2 hours, and the minimum value is 1 minute.
note

The last replica auto-stop is not valid when the minimum number of engine replicas is 1 or higher.

The following parameters are for Time Limit:

  1. For Enable Enqueued Time Limit, check the box.
  2. For Enqueued Time Limit, enter the time a query waits before being cancelled. The default value is 5 minutes.
caution

You should not set the enqueued time limit to less than one minute, which is the typical time needed to start a new replica. Changing this setting does not affect queries that are currently running or queued.

  1. (Optional) For Enable Query Time Limit, check the box to limit how long a query can run before it is canceled.
  2. (Optional) For Query Runtime Limit, enter the time a query can run before it is canceled. The default query runtime limit is 5 minutes.
  3. For Drain Time Limit, enter the time (in minutes) that an engine replica continues to run after the engine is resized, disabled, or deleted before it is terminated and the running queries fail. The default value is 30 minutes. If there are no queries running on a replica, the engine is terminated without waiting for the drain time limit.
  4. Click Save and Launch. This action saves the configuration, enables this engine, and allocates the executors.

Edit an Engine

To edit an engine:

  1. On the Project Settings page, select Engines in the project settings sidebar.
  2. On the Engines page, hover over the row of the engine that you want to edit and click the Edit Engine icon that appears next to the engine. The Edit Engine dialog opens.

Alternatively, you can click the engine to go to the engine's page. Click the Edit Engine button on the top-right of the page.

note

You cannot edit the Engine name parameter.

  1. For Description, enter a description.
  2. For Size, select the size of the engine. The size designates the number of executors.
  3. For Max Concurrency per Replica, enter the maximum number of jobs that can be run concurrently on this engine.

The following parameters are for Engine Replicas:

  1. For Min Replicas, enter the minimum number of engine replicas that Dremio keeps running at any given time. Set this value to 0 to enable auto-stop, or to 1 or higher to ensure low-latency query execution.
  2. For Max Replicas, enter the maximum number of engine replicas that Dremio scales up to.
  3. Under Advanced Configuration, for Last Replica Auto-Stop, enter the time to wait before deleting the last replica if the engine is not in use. The default value is 2 hours.
note

The last replica auto-stop is not valid when the minimum number of engine replicas is 1 or higher.

The following parameters are for Time Limit:

  1. For Enable Enqueued Time Limit, check the box.
  2. For Enqueued Time Limit, enter the time a query waits before being canceled. The default value is 5 minutes.
caution

You should not set the enqueued time limit to less than one minute, which is the typical time needed to start a new replica. Changing this setting does not affect queries that are currently running or queued.

  1. (Optional) For Enable Query Time Limit, check the box to limit how long a query can run before it is canceled.
  2. (Optional) For Query Runtime Limit, enter the time a query can run before it is canceled. The default query runtime limit is 5 minutes.
  3. For Drain Time Limit, enter the time (in minutes) that an engine replica continues to run after the engine is resized, disabled, or deleted before it is terminated and any running queries fail. The default value is 30 minutes. If no queries are running on a replica, the engine is terminated without waiting for the drain time limit.
  4. Click Save.

Disable an Engine

You can disable an engine that is not being used.

To disable an engine:

  1. On the Project Settings page, select Engines in the project settings sidebar. The list of engines in this project is displayed.
  2. Disable the engine by using the toggle in the Enabled column.
  3. Confirm that you want to disable the engine.

Enable an Engine

To enable a disabled engine:

  1. On the Project Settings page, select Engines in the project settings sidebar. The list of engines in this project is displayed.
  2. Enable the engine by using the toggle in the Enabled column.
  3. Confirm that you want to enable the engine.

Delete an Engine

You can permanently delete an engine if it is not in use (this action is irreversible). If queries are running on the engine, Dremio waits for the drain time limit for the running queries to complete before deleting the engine.

caution

An engine that has a routing rule associated with it cannot be deleted. Delete the rules before deleting the engine.

To delete an engine:

  1. On the Project Settings page, select Engines in the project settings sidebar. The list of engines in this project is displayed.
  2. On the Engines page, hover over the row of the engine that you want to delete and click the Delete icon that appears next to the engine.
  3. Confirm that you want to delete the engine.

Configure External Engines

Dremio's Open Catalog is built on Apache Polaris, providing a standards-based, open approach to data catalog management. At its core is the Iceberg REST interface, which enables seamless integration with any query engine that supports the Apache Iceberg REST catalog specification. This open architecture means you can connect industry-standard engines such as Apache Spark, Trino, and Apache Flink directly to Dremio.

By leveraging the Iceberg REST standard, the Open Catalog acts as a universal catalog layer that query engines can communicate with using a common language. This allows organizations to build flexible data architectures where multiple engines can work together, each accessing and managing the same Iceberg tables through Dremio's centralized catalog.

Apache Spark is a unified analytics engine for large-scale data processing, widely used for ETL, batch processing, and data engineering workflows.

Prerequisites

This example uses Spark 3.5.3 with Iceberg 1.9.1. For other versions, ensure compatibility between Spark, Scala, and Iceberg runtime versions. Additional prerequisites include:

  • The following JAR files downloaded to your local directory: authmgr-oauth2-runtime-0.0.5.jar, iceberg-spark-runtime-3.5_2.12-1.9.1.jar, and iceberg-aws-bundle-1.9.1.jar (the JARs referenced in the scripts in this section).
  • Docker installed and running.
  • Your Dremio catalog name – The default catalog in each project has the same name as the project.
  • If authenticating with a PAT, you must generate a token. See Personal Access Tokens for step-by-step instructions.
  • If authenticating with an identity provider (IDP), your IDP or other external token provider must be configured as a trusted OAuth external token provider in Dremio.
  • You must have an OAuth2 client registered in your IDP configured to issue tokens that Dremio accepts (matching audience and scopes) and with a client ID and client secret provided by your IDP.

Authenticate with a PAT

You can authenticate your Apache Spark session with a Dremio personal access token using the following script. Replace <personal_access_token> with your Dremio personal access token and replace <catalog_name> with your catalog name.

In addition, you can adjust the volume mount paths to match where you've downloaded the JAR files and where you want your workspace directory. The example uses $HOME/downloads and $HOME/workspace.

Spark with PAT Authentication
#!/bin/bash
export CATALOG_NAME="<catalog_name>"
export DREMIO_PAT="<personal_access_token>"

docker run -it \
-v $HOME/downloads:/opt/jars \
-v $HOME/workspace:/workspace \
apache/spark:3.5.3 \
/opt/spark/bin/spark-shell \
--jars /opt/jars/authmgr-oauth2-runtime-0.0.5.jar,/opt/jars/iceberg-spark-runtime-3.5_2.12-1.9.1.jar,/opt/jars/iceberg-aws-bundle-1.9.1.jar \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.polaris.type=rest \
--conf spark.sql.catalog.polaris.cache-enabled=false \
--conf spark.sql.catalog.polaris.warehouse=$CATALOG_NAME \
--conf spark.sql.catalog.polaris.uri=https://catalog.dremio.cloud/api/iceberg \
--conf spark.sql.catalog.polaris.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=vended-credentials \
--conf spark.sql.catalog.polaris.rest.auth.type=com.dremio.iceberg.authmgr.oauth2.OAuth2Manager \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.token-endpoint=https://login.dremio.cloud/oauth/token \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.grant-type=token_exchange \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.client-id=dremio-catalog-cli \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.scope=dremio.all \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.token-exchange.subject-token="$DREMIO_PAT" \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.token-exchange.subject-token-type=urn:ietf:params:oauth:token-type:dremio:personal-access-token
note

In this configuration, polaris is the catalog identifier used within Spark. This identifier is mapped to your actual Dremio catalog via the spark.sql.catalog.polaris.warehouse property.

Authenticate with an IDP

You can authenticate your Apache Spark session using an external token provider that has been integrated with Dremio.

Using this configuration:

  • Spark obtains a user-specific JWT from the external token provider.
  • Spark connects to Dremio and exchanges the JWT for an access token.
  • Spark connects to the Open Catalog using the access token.
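The exchange in the second step follows the OAuth 2.0 token exchange grant (RFC 8693). The following sketch shows the shape of the form body such an exchange posts to the token endpoint; it only builds the payload, and the function name is invented for illustration (the real exchange is handled by the auth manager JAR):

```python
# Sketch of an RFC 8693 token-exchange request body, mirroring the
# oauth2 settings used in this section's Spark script. Illustrative only;
# the actual exchange is performed by the OAuth2Manager auth manager.
def token_exchange_payload(subject_jwt: str, scope: str = "dremio.all") -> dict:
    """Build the form fields for exchanging an IDP-issued JWT
    for a Dremio access token."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_jwt,
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "scope": scope,
    }
```

A payload like this would be form-encoded and POSTed to the Dremio token endpoint (https://login.dremio.cloud/oauth/token), and the returned access token is then used against the Open Catalog.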

Using the following script, replace <catalog_name> with your catalog name, <idp_url> with the location of your external token provider, and <idp_client_id> and <idp_client_secret> with the credentials issued by the external token provider.

In addition, you can adjust the volume mount paths to match where you've downloaded the JAR files and where you want your workspace directory. The example uses $HOME/downloads and $HOME/workspace.

Spark with IDP Authentication
#!/bin/bash
export CATALOG_NAME="<catalog_name>"
export IDP_URL="<idp_url>"
export CLIENT_ID="<idp_client_id>"
export CLIENT_SECRET="<idp_client_secret>"

docker run -it \
-v $HOME/downloads:/opt/jars \
-v $HOME/workspace:/workspace \
apache/spark:3.5.3 \
/opt/spark/bin/spark-shell \
--jars /opt/jars/authmgr-oauth2-runtime-0.0.5.jar,/opt/jars/iceberg-spark-runtime-3.5_2.12-1.9.1.jar,/opt/jars/iceberg-aws-bundle-1.9.1.jar \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.polaris.type=rest \
--conf spark.sql.catalog.polaris.cache-enabled=false \
--conf spark.sql.catalog.polaris.warehouse=$CATALOG_NAME \
--conf spark.sql.catalog.polaris.uri=https://catalog.dremio.cloud/api/iceberg \
--conf spark.sql.catalog.polaris.io-impl=org.apache.iceberg.aws.s3.S3FileIO \
--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=vended-credentials \
--conf spark.sql.catalog.polaris.rest.auth.type=com.dremio.iceberg.authmgr.oauth2.OAuth2Manager \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.issuer-url=$IDP_URL \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.grant-type=device_code \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.client-id=$CLIENT_ID \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.client-secret=$CLIENT_SECRET \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.scope=dremio.all \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.impersonation.enabled=true \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.impersonation.token-endpoint=https://login.dremio.cloud/oauth/token \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.impersonation.scope=dremio.all \
--conf spark.sql.catalog.polaris.rest.auth.oauth2.token-exchange.subject-token-type=urn:ietf:params:oauth:token-type:jwt

Usage Examples

With these configurations, polaris is the catalog identifier used within Spark. This identifier is mapped to your actual Dremio catalog via the spark.sql.catalog.polaris.warehouse property. Once Spark is running and connected to your Dremio catalog:

List namespaces
spark.sql("SHOW NAMESPACES IN polaris").show()
Query a table
spark.sql("SELECT * FROM polaris.your_namespace.your_table LIMIT 10").show()
Create a table
spark.sql("""
CREATE TABLE polaris.your_namespace.new_table (
id INT,
name STRING
) USING iceberg
""")

Troubleshoot

If your engines are not scaling up or down as expected, you can review the engine events to see the error that is causing the issue.

To view engine events:

  1. On the Project Settings page, select Engines in the project settings sidebar. The list of engines in this project is displayed.
  2. On the Engines page, click the engine that you want to investigate.
  3. On the engine details page, click the Events tab to view the scaling events and the status of each event.
  4. If any scaling problems persist, contact Dremio Support.