
23.0.0+ Release Notes (October 2022)



Breaking Changes

  • Dremio 23.0.0+ supports only MapR 6.2.0. If you are running MapR 5.2.x or 6.1.x, you must upgrade to MapR 6.2.0 before upgrading to Dremio 23. Dremio releases up to and including 22.x do not support MapR 6.2.0; only MapR 5.2.x and 6.1.x are supported in releases prior to Dremio 23.

    NOTE: MapR 6.2.0 supports only JDK 11. JDK 8 is not supported.

  • Dremio can now read MAP data from Parquet files. You must run ALTER TABLE <table_name> FORGET METADATA on tables containing MAP data that you have previously queried. This feature is enabled by default. If you prefer the previous behavior of representing MAP data as STRUCT, set the corresponding support key to OFF under Settings > Support > Support Keys.

  • A preview job is no longer automatically triggered when you click on a dataset. If you have permissions to edit the dataset, you will see the original SQL in the SQL Editor. If you do not have permissions to edit the dataset, you will continue to see a SELECT * statement pre-populated in the SQL Editor.

  • In previous releases, Dremio supported a maximum of 800 leaf columns in a table, though that value was configurable with the support key store.plugin.max_metadata_leaf_columns. If you used this support key and have upgraded to v23.0, reset the key so that you can use the maximum of 6,400 that is enabled with wide table support. See Creating and Querying Wide Tables for more information and limitations.
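
As a sketch, the two post-upgrade commands referenced in the breaking changes above look like this in Dremio SQL. The table path is illustrative, and the ALTER SYSTEM syntax assumes support keys can be managed from SQL in your deployment:

```sql
-- Re-read metadata for a table containing MAP data that was queried
-- before the upgrade (table path is illustrative).
ALTER TABLE samples."parquet_maps" FORGET METADATA;

-- Reset the leaf-column support key so the new 6,400-column maximum
-- from wide table support takes effect.
ALTER SYSTEM RESET "store.plugin.max_metadata_leaf_columns";
```

Support keys can also be reset in the UI under Settings > Support > Support Keys.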

Known Issues

  • In this release, Dremio does not support Iceberg tables written with equality deletes.
  • DML operations (INSERT, UPDATE, DELETE, MERGE) are not supported on tables with MAP columns. CTAS is supported on tables with MAP columns.


  • If a user was actively logged in to Dremio during the upgrade to version 23.0.0, pages under Settings will throw an Unexpected Error until the user logs out and logs back in.


  • DX-55884: expanded view and default reflections

What's New

  • This release of Dremio supports a semi-structured MAP data type that allows you to query map data from Apache Parquet files, Apache Iceberg, and Delta Lake. The MAP data type is a collection of key-value pairs and is useful for holding sparse data. See Data Types for more information.
  • Dremio now supports LISTAGG, which is an aggregate function that concatenates a list of strings and places a separator between them. See LISTAGG for more information.
  • The Jobs Profile page includes a number of enhancements so you can quickly find the most expensive execution steps in a query, understand details of each execution step and its impact on query time, memory consumption, data volume, and the effect on upstream and downstream data volume, and identify system or data issues that are causing a query to be slow or expensive. See Viewing Query Profiles for more information.
  • Azure Data Lake Storage (ADLS) Gen1 is now supported as a source on Dremio's AWS Edition. For more information, see Azure Data Lake Storage Gen1.
  • Elasticsearch is now supported as a source on Dremio's AWS Edition. For more information, see Elasticsearch.
  • In this release, embedded Nessie historical data that is not used by Dremio is purged on a periodic basis to improve performance and avoid future upgrade issues. The maintenance interval can be modified with the nessie.kvversionstore.maintenance.period_minutes Support Key, and you can perform maintenance manually using the nessie-maintenance admin CLI command.
  • Time travel queries by TIMESTAMP are now supported on Iceberg tables. See Querying Apache Iceberg Tables for more information.
  • Dremio now supports wide tables. See Creating and Querying Wide Tables for more information and limitations.

  • Added a new Admin CLI command, dremio-admin remove-duplicate-roles, that will remove duplicate LDAP groups or local roles and consolidate them into a single role. For more information, see Remove Duplicate Roles.


  • Dremio now supports connecting to Amazon S3 sources using an AWS PrivateLink URL. For more information, see Amazon S3.

  • Similar to Encrypting the LDAP Bind Password, Dremio now supports the same encryption mechanism for wire encryption setup for the following fields in dremio.conf: keyStorePassword, keyPassword, and trustStorePassword.


  • Iceberg tables written with positional deletes are now supported.

  • Starting in Dremio 23.0.0, customers who select the dremio/dremio-ee Docker image will receive an Eclipse Temurin-based image for JDK, either JDK 8 or JDK 11. Dremio will no longer provide Docker images based on openjdk:jdk-8 for future Dremio versions, as that base image has been officially deprecated. Older dremio/dremio-ee image versions will remain available.


  • Added a button that allows you to quickly copy the ID of a job on the job details page.
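
The new LISTAGG aggregate noted above can be sketched as follows. The employees table and its columns are illustrative:

```sql
-- Concatenate employee names per department, separated by ", ".
-- (Hypothetical "employees" table with "department" and "name" columns.)
SELECT department,
       LISTAGG(name, ', ') AS members
FROM employees
GROUP BY department;
```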

Issues Fixed

  • When multiple metadata refresh jobs ran concurrently on the same dataset, one or more jobs could fail with ConcurrentModificationException.
  • Added table snapshot ID in the plan digest for Iceberg table scans so that the planner can distinguish between two different versions of the same table.
  • Added validation to the REST endpoint so that reflections cannot be configured to expire more quickly than the refresh period.
  • When promoting Iceberg tables, Dremio now correctly previews underlying table content for the latest snapshot, excluding delete files.

  • When promoting Iceberg tables backed by external catalogs, users would see an unhelpful Failed to get iceberg metadata error. The message now provides more information about using a data source configured for the catalog.


  • After upgrading to Dremio 22.1.1, some coordinator nodes failed to start due to a failure in connecting to S3-compatible storage (sources or distributed storage configuration) that required path style access.

  • Following the upgrade to Dremio v22, Support Keys of type DOUBLE would no longer accept decimal values.


  • Fixed an issue that was causing REFRESH REFLECTION and REFRESH DATASET jobs to hang when reading Iceberg metadata using Avro reader.
    DX-56556, DX-56244
  • Fixed an issue that was causing the status of a cancelled job to show as RUNNING or PLANNING.
  • Fixed a bug in Yarn-based deployments where certain properties that were meant for customizing Dremio executor containers were also being passed on to the Application Master container.
  • In some deployments, using a large number of REST API-based queries that returned large result sets could create memory issues and lead to cluster instability.
  • Following the upgrade to Dremio 22, some queries to Hive 2 metastore external tables with data in S3 were running considerably slower than before.

  • In some scenarios, invalid metadata about partition statistics was leading to inaccurate rowcount estimates for tables. The result was slower than expected query execution or out of memory issues. For each table included in a query where this behavior appears, perform an ALTER TABLE <table-name> FORGET METADATA, then re-promote the resulting file or folder to a new table. This will ensure that the table is created with the correct partition statistics.


  • During the reflection matching phase, for the filter pattern in some queries the planner could generate row expression nodes exponentially and exhaust heap memory.

  • Fixed an issue with concurrent metadata refresh requests that could result in the following error: StatusRuntimeException: ABORTED: Tried to create a dataset that already exists.

  • Changes made to the columns displayed on the Jobs page, or the order of the columns, were not being saved after leaving the page.
  • In some environments, Dremio was unable to read a Parquet statistics file in Hive during logical planning, and the query was cancelled because planning phase exceeded 60 seconds.

  • Fixed an issue that was causing the error GandivaException: Failed to make LLVM module due to Function double abs(double) not supported yet for certain case expressions used as input arguments.


  • When a materialization took too long to deserialize, the job updating the materialization cache entry could hang and block all reflection refreshes.
DX-54176, DX-54174, DX-54214
  • This release includes a number of fixes that resolve potential security issues.
  • For some users, when clicking on certain items on the Settings page, they were being redirected to the Dremio home screen.

  • Automatic reflection refreshes were failing with the following error: StatusRuntimeException: UNAVAILABLE: Channel shutdown invoked


  • In rare cases, an issue in the planning phase could result in the same query returning different results depending on the query context.

  • Profiles for some reflection refreshes included unusually long setup times for WRITER_COMMITTER.


  • Wait time for WRITER_COMMITTER was excessive for some reflection refreshes, even though no records were affected.
  • After changing the engine configuration, some queries were failing with an IndexOutOfBoundsException error.
  • When skipping the current record from any position, Dremio was not ignoring line delimiters inside quotes, resulting in unexpected query results.
  • Following the upgrade to Dremio 21.2, some Delta Lake tables could not be queried, and the same tables could not be formatted again after being unpromoted.

  • Fixed an issue handling CONVERT_FROM during reflection matching when the materialization cache was enabled.


  • On occasion, projecting complex data types would result in a Schema change exception.
  • Some queries on Parquet datasets in an ElasticSearch source were failing with a SCHEMA_CHANGE error, though there had been no changes to the schema.
  • In some cases, deleted reflections were still being used to accelerate queries if the query plan had been cached previously.

  • Reflection refreshes were failing on ElasticSearch views that used the CONTAINS keyword.


  • When a query that used a reflection was executed multiple times, some of the jobs used the reflection and some did not.

  • Clicking Edit Original SQL for a view in the SQL editor was producing a generic Something went wrong error.


  • Fixed issue that was causing the LENGTH function to return incorrect results.
  • Fixed an issue that was causing metadata refresh on some datasets to fail continuously.

  • Some queries were failing with INVALID_DATASET_METADATA ERROR: Unexpected mismatch of column names if duplicate columns resulted from a join because Dremio wasn't specifying column names.


  • When unlimited splits were enabled, users were seeing an Access denied error for queries run against Parquet files if impersonation was enabled on the source.
  • Fixed an issue causing the error "Offset vector not large enough for records" when copying list columns.

  • Some queries that used the FLATTEN() function were showing results for a Preview, but no data was returned when using Run.


  • Removed the 'unsafe-eval' directive from the content security policy.
  • Dremio no longer includes server name and version in the response header.
  • Fixed an issue with external LDAP group name case sensitivity, which was preventing users from accessing Dremio resources to which they had been given access via their group/role membership.
  • If issues were encountered when running queries against a view, Dremio was returning an error that was unhelpful. The error returned now includes the root cause and identifies the specific view requiring attention.

  • When using the Catalog API to create a folder in a space, if the folder already existed in the space, the API was returning the HTTP/1.1 500 Internal Server Error instead of HTTP/1.1 409 Conflict.


  • Row count estimates for some Delta Lake tables were changing extensively, leading to single-threaded execution plans.

  • When a Hive source was added or modified, shared library files created in a new directory under /tmp were not being cleaned up and leading to disk space issues.


  • In some cases, queries using the < operator would fail when trying to decode a timestamp column in a Parquet file.

  • JDBC clients could not see parent objects (folders, spaces, etc.) unless they had explicit SELECT privileges on those objects, even if they had permissions on a child object.


  • Fixed an issue in the scanner operator that could occur when a parquet file had multiple row-groups, resulting in a query failure and the following system error: Illegal state while reusing async byte reader
  • When promoting a folder using the REST API, incremental refresh settings were not being returned in the POST response.
  • Frequent, consecutive requests to the Job API endpoint to retrieve a Job's status could result in an UNKNOWN StatusRuntimeException error.
  • Parentheses were missing in the generated SQL for a view when the query contained UNION ALL in a subquery, and the query failed to create the view.
  • Upgraded the third-party XML parsing library stax2-api (used while parsing XML responses from S3) from 3.1.4 to 4.2, as required by woodstox-core:5.2.1.
  • Updated the PostgreSQL JDBC Driver to version 42.4.1 to address CVE-2022-31197.
  • Updated com.squareup.okhttp3:okhttp to version 4.9.2.
  • Updated the Freemarker library to version 2.3.31. While Dremio was not subject to any vulnerabilities in the previous version, the version was updated to comply with security and development best practices.
  • Updated the Apache Xerces Java library to version 2.12.2.
  • Updated to version 3.19.4.
  • Updated to version 2.9.0.

23.0.1 Release Notes (October 2022)

Issues Fixed

  • In some cases, queries against a table that was promoted from text files containing Windows (CRLF) line endings were failing or producing an Only one data line detected error.

23.1.0 Release Notes (November 2022)

Breaking Changes

  • If you previously installed the community Snowflake connector from Dremio Hub, you must remove it and the existing driver. For more information, see Snowflake.

What's New

  • Table location (locationUri) for Hive and Glue sources is now supported for Iceberg Tables. See Creating Apache Iceberg Tables for more information.
  • This release includes a new SQL function, ARRAY_CONTAINS which returns whether a list contains a given value. For more information, see ARRAY_CONTAINS.

  • In this release, a new source connector allows you to query data from other Dremio clusters. For more information, see Connecting to Another Dremio Software Cluster.

  • This release adds support for a new connector that allows querying data from Snowflake data warehouses. If you previously installed the community connector from Dremio Hub, you must remove it and the existing driver. For more information, see Snowflake.

  • If you specify an alias for a column or expression in the SELECT clause, you can now refer to that alias elsewhere in the query. For more information, see Table SQL Statements.

  • SELECT statements now support a new QUALIFY clause, which allows you to filter the results of window functions. For more information, see SELECT.

  • This release includes performance improvements for incremental metadata refreshes on partitioned Parquet tables.
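
The new QUALIFY clause described above can be sketched as follows. The orders table and its columns are illustrative:

```sql
-- Keep only the two highest-value orders per customer by filtering on
-- a window function directly with QUALIFY (hypothetical schema).
SELECT customer_id, order_id, amount
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) <= 2;
```

Without QUALIFY, the same filter would require wrapping the window function in a subquery or CTE.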


Issues Fixed

  • The queries.log file was showing zero values for inputRecords, inputBytes, outputRecords, outputBytes, and metadataRetrieval, even though valid values were included in the job profile.
  • For Parquet sources on Amazon S3, files were being automatically formatted/promoted even though the auto-promote setting had been disabled.
  • When saving a view, data lake sources were showing up as a valid location for the view, but such sources should not have been allowed as a destination when saving a view.
  • Following the upgrade to Dremio 20.x, is_member(table.column) was returning zero results on views that used row-level security.
  • Improved reading of double values from ElasticSearch to maintain precision.
  • Fixed an issue that was causing queries to fail when adding or subtracting an integer to or from a TIMESTAMP.
  • Following the upgrade to Dremio 22.1.2, when promoting JSON files to tables and building views from those tables, queries against the views were failing with a NullPointerException.
  • The width of the Tag field for datasets has been expanded to ensure that the full name of a tag will be displayed.
  • Reflection footprint was 0 bytes when created on a view using the CONTAINS function on an Elasticsearch table. The reflection could not be used in queries and sys.reflection output showed CANNOT_ACCELERATE_SCHEDULED.
  • To address potential security concerns, AWSE CloudFormation now enforces IMDSv2: HTTP tokens are required and the instance metadata endpoint is enabled.
  • An error in schema change detection logic was causing refresh metadata jobs for Hive tables to be triggered at all times, even if there were no changes in the table.
  • Updated org.apache.parquet:parquet-format-structures to address a potential security vulnerability [CVE-2021-41561].
  • Dremio was generating unnecessary exchanges with multiple unions, and changes have been made to set the proper parallelization width on JDBC operators and reduce the number of exchanges.
  • Fixed an issue that was causing COALESCE queries containing NULLIF calls to not get pushed down to Oracle.
  • On catalog entities, ownership granted to a role was not being inherited by users in that role.
  • If you clicked on a job to view details, your position on the page was reset when clicking the Back button or the Jobs link on the page header. Your position on the main Jobs page is now maintained in these scenarios.
  • Some queries using a filter condition with flatten field under a multi-join were generating a NullPointerException.
  • In Dremio 22.0.x, users who were not assigned the ADMIN role were getting 0-byte files when attempting to download query results, while downloads were working as expected in previous releases.
  • CONVERT_FROM() did not support all ISO 8601 compliant date and time formats.
  • The AWSE activation page was no longer showing the expiration date for a license key.
  • An aggregate reflection that matched was not being chosen due to a cost difference generated during pre-logical optimization.
  • Fixed an issue that was affecting the accuracy of cost estimations for queries against Delta tables (i.e., some queries were showing very high costs).
  • Fixed an issue that was causing an error when using the Tableau OAuth sign-in method with the "oauth+ldap" mode.
  • Formatting and comments in a view definition were not being preserved as they had been entered in the SQL Editor.
  • If Dremio was stopped while a metadata refresh for an S3 source was in progress, some datasets within the source were getting unformatted/deleted.
  • Fixed an issue where Glue tables with large numbers of columns and partitions would not return results for all partitions in the table. Before this fix will take effect, you will need to refresh metadata via ALTER TABLE REFRESH METADATA.
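
The metadata refresh mentioned in the last fix can be issued as follows. The table path is illustrative:

```sql
-- Refresh metadata so the Glue partition fix takes effect
-- (hypothetical Glue source and table path).
ALTER TABLE glue_source.sales."orders" REFRESH METADATA;
```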

23.1.2 Release Notes (Enterprise Edition Only, January 2023)

What's New

  • Added support for timestamp to bigint coercion in Hive-Parquet tables.

Issues Fixed

  • Some queries using multiple CONVERT_FROM functions on different JSON data type columns were failing to read with an Unable to find the referenced field error.
  • In some queries, the ConvertFromJson operator was invoked multiple times on the same column, resulting in slow query performance.
  • Fixed an issue with Decimal functions that was leading to bad results when exec.preferred.codegenerator was set to java.
  • In some cases, with the Arrow Flight SQL ODBC driver, users were getting an error when testing the connection to Microsoft Excel in the ODBC Administrator on Windows.
  • Fixed an issue causing incorrect values to be returned for boolean columns during filtering at parquet scan.
  • Some queries were failing for MongoDB v4.9+ sharded collections because MongoDB would use UUID instead of namespace.
  • When opening a reflection to view details under Settings > Reflections, an error indicating that the reflection did not exist could be displayed, even though the reflection was valid.
  • When copying a view definition and pasting into the SQL editor, the pasted SQL was incorrect because newlines were not being retained.
  • After offloading a column with type DOUBLE and offloading again to change the type to VARCHAR, the column type was still DOUBLE and read operations on the table failed with an exception.
  • LIKE was returning null results when using ESCAPE if the escaped character was one of the Perl Compatible Regular Expressions (PCRE) special characters.
  • For tables created from a folder of files, the jobs count on the Datasets page was incorrect as it always showed 0.
  • Fixed an issue that was causing queries against sys.reflections to fail with a FlightRuntimeException error.
  • In some cases, a MERGE query with an INSERT clause was inserting columns in the wrong order.
  • Fixed an issue that was affecting fragment scheduling efficiency under heavy workloads, resulting in high sleep times for some queries.
  • Heap usage on some coordinator nodes was growing over time, requiring a periodic restart to avoid out of memory errors.
  • Fixed an issue that was creating a race condition, causing REFRESH REFLECTION and REFRESH DATASET jobs to hang when reading Iceberg metadata.
  • Moved from strict matching of types to coercion to compatible types (for example, INT and BIGINT -> BIGINT) to address an issue with forgotten Elasticsearch mappings during refresh.
  • Fixed an issue that was resulting in repeated role lookups during privilege checks and causing performance issues.
  • Updated org.apache.calcite.avatica:avatica-core to version 1.22.0 to address potential security issues [CVE-2022-36364].

23.1.30 Release Notes (Enterprise Edition Only, March 2023)

What's New

  • This release includes a new SQL function, COL_LIKE, which tests whether an expression column matches a pattern column. For more information, see COL_LIKE.
  • In this release, Dremio supports reading Parquet files using ZSTD compression.
  • This release adds support for reading TIME and TIMESTAMP microseconds in Parquet files. Microseconds are truncated and the value is stored as milliseconds.
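
The new COL_LIKE function described above can be sketched as follows. The rules table and its columns are illustrative:

```sql
-- COL_LIKE tests each row's value against a per-row pattern column,
-- unlike LIKE, which takes a constant pattern (hypothetical schema).
SELECT value_col, pattern_col
FROM rules
WHERE COL_LIKE(value_col, pattern_col);
```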

Issues Fixed

  • When trying to share a SQL script with another user in Dremio's AWS Edition, sharing failed with a generic "Something went wrong" error.
  • In some cases, XML responses from AWS Glue were not being handled properly and causing queries to fail.
  • Fixed an issue that was causing queries to fail if certain expression splits contained CAST AS UNION.
  • If a subquery expression was used after an aggregate and the same expression was duplicated in a WHERE clause, a validation exception was being encountered.
  • Fixed an issue with the Jobs page that could lead to high heap memory usage when the content of the SQL query was unusually large.
  • Metadata refresh queries that were cancelled because metadata was already available no longer show as failed.
  • Fixed an issue that was causing slow query performance if the query contained an ORDER BY clause.

23.1.40 Release Notes (Enterprise Edition Only, March 2023)

Issues Fixed

  • Fixed an issue where the planner would attempt to add implicit casting for identical data types, causing an error.
  • In some cases, allocator information was not being included in the profile for queries running into out of memory errors.
  • Following the upgrade to Dremio 22.1.7, Power BI Desktop and Gateway may not have been able to connect to Dremio via Azure Active Directory.
  • Queries were failing against views and time series collections on MongoDB sharded collections.
  • When unlimited splits were enabled, partition pruning was failing due to complex filter conditions that were unable to transform a query to its normalized form using CNF.
  • When configuring Azure Active Directory for Power BI, added an additional field for User Claim Mapping due to a change in the AAD token version Dremio supports. For more information, see Configuring Azure Active Directory for Power BI.

23.1.50 Release Notes (Enterprise Edition Only, April 2023)

Known Issues

  • In some cases, if a deployment has a large number of sources, the SQL Runner can be considerably unresponsive when loading the page or typing in the editor.

Issues Fixed

  • In some cases, the JVM's direct memory allocator was triggering garbage collection when there was sustained and high usage of direct memory, which was causing performance issues.


  • Container probes have been updated to support Kubernetes versions greater than 1.20. Dremio V2 helm charts did not have timeoutSeconds configured for readiness probes and were failing if the check took more than one second.


  • When using the ui.whitelabel.url support key to apply a custom logo in Dremio, the logo was not being displayed in the side navigation bar.
  • Pushdowns were not working for UUID data types on a PostgreSQL source. This change maps PostgreSQL's UUID type to Dremio's VARCHAR. Comparison operators (=, >, <, <=, >=, !=) between two UUIDs and between a UUID and a VARCHAR will now be pushed down.
  • At times, using the CONCAT function with non-string operands would yield results that were truncated incorrectly.
  • Expression operator names were being used as intermediate field names. In some queries, the multiplication operator * was later treated as a SELECT * statement, which was causing physical planning to fail to find the field.
  • When querying INFORMATION_SCHEMA tables as a non-admin user from JDBC/ODBC applications, the query was taking much longer than when performed by an admin user.
  • When unlimited splits were enabled, partition pruning was failing due to complex filter conditions that were unable to transform a query to its normalized form using CNF.
  • If a view used in a raw reflection contained the CONVERT_FROM() function, trying to access the view would result in a planning error.
  • In some cases, removing and adding privileges for a user on a space was failing with a "Failed to create SPACE with: Role not found" error.
  • Fixed some issues that were causing performance issues with the REGEXP_LIKE SQL function.
  • Some queries were performing poorly if the query contained an ORDER BY clause.
  • A Zookeeper class was missing from the JDBC jar in some earlier releases, resulting in a ClassNotFound exception.
  • When using the GROUP BY expression with an aliased column of the same name as the original column, a validation error was indicating that the column was not being grouped.
  • Reflections that had zero rows were not available for substitution.
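
As an illustration of the PostgreSQL UUID pushdown fix above, a comparison like the following is now pushed down to the source. The source, table, and column names are hypothetical:

```sql
-- The UUID-to-VARCHAR comparison in the WHERE clause is now evaluated
-- by PostgreSQL rather than in Dremio (illustrative schema).
SELECT *
FROM pg_source.public.users
WHERE id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11';
```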

23.2.0 Release Notes (Enterprise Edition Only, June 2023)

Issues Fixed

  • Added more security around DML permission checks to ensure that users can access data only according to their privileges.
  • Improved permission validation around view-based query execution.
  • In this release, the plan cache is user-specific for increased security, and it will be utilized when the same query is executed by the same user.

23.2.1 Release Notes (Enterprise Edition Only, July 2023)

What's New

  • The COL_LIKE SQL function has been updated to improve performance.


  • This release includes some changes to improve logical planning performance and query planning times in certain scenarios.


  • For Azure Blob Storage and Data Lake Gen 2 sources, you can now enable checksum-based verification to ensure data integrity during network transfers. To enable this option, set the corresponding support key to true.


Issues Fixed

  • Privilege changes for entities via SQL were not being captured in the audit log.


  • Made an update to ensure that custom (whitelabel) logos will be left-aligned on the Dremio login page.


  • GRANT commands on catalog entities were failing with Role/User <ID> not found if existing user or role grantees were no longer present in the system.


  • Adding a CAST to an Oracle index column was leading to a missed partition key and resulting in an expensive and slow query.


  • Fixed an issue that was causing an exception in the BRIDGE_FILE_READER_RECEIVER SQL operator for some queries.


  • If a Hive source connection was slow, Dremio was repeatedly pinging the source to add databases.


  • Some queries that included multiple levels of nested fields were failing.


  • For some queries on Oracle sources, an interval of time was being processed incorrectly, resulting in the following error: (full) year must be between -4713 and +9999


  • If a corrupted or malformed checkpoint file was encountered during metadata refresh, queries would fail with a Metadata read Failed error.


  • Queries that utilized external reflections were not being logged in queries.json.


  • Fixed an issue that was causing Failed to decode column and uncompressed_page_size errors during reflection refresh.


  • When parsing CSV, Dremio now allows multi-character strings to be used as field delimiter, quote, quote escape, and comment. Previously, only single characters were supported for these.


  • Fixed query concurrency issues that could lead to "Selected table has no columns" errors.


  • In some cases, Dremio nodes could reboot unexpectedly due to queries that contained deeply nested functions.


  • Resolved a frequent internal hash collision in the C3 eviction path that disabled the cloud columnar cache (C3), potentially causing performance degradation.


23.2.2 Release Notes (Enterprise Edition Only, July 2023)

What's New

  • Added support for full outer joins that resolve to a true join condition.
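
A full outer join with an always-true condition can be sketched as follows. The table names are illustrative:

```sql
-- Full outer join where the join condition resolves to TRUE, pairing
-- every row combination while preserving unmatched rows on both sides
-- (hypothetical tables t1 and t2).
SELECT a.x, b.y
FROM t1 a
FULL OUTER JOIN t2 b ON TRUE;
```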

Issues Fixed

  • Top-level CASE statements intended to return a boolean were not being rewritten correctly, resulting in an error for some SQL Server queries.


  • Fixed an issue that was causing invalid SQL comparison syntaxes in SQL Server queries if nested CASE statements were encountered.


  • The Dremio vectorized reader could not read invalid Parquet files where a dictionary page offset was used if no dictionary page was present.


  • Fixed a single-stream processing issue and improved execution times when using UNION ALL with planner.unionall_distribute_all_children set to "true".


  • When unlimited splits were enabled, partition pruning was failing due to complex filter conditions that were unable to transform a query to its normalized form using CNF.


  • Fixed query concurrency issues that could lead to "Selected table has no columns" errors.


  • Dremio was generating unnecessary exchanges with multiple unions, and changes have been made to set the proper parallelization width on JDBC operators and reduce the number of exchanges.


  • When unlimited splits were enabled and running incremental metadata refreshes on a file-based table, running subsequent raw reflections would fail with a DATA_READ error.


  • Fixed an issue that could result in duplicate column names being written by the planner when an expression in the project included a field named *.


23.2.3 Release Notes (Enterprise Edition Only, October 2023)

What's New

  • When run by PUBLIC users, the data returned by the apiv2/user/<userid> internal API is limited to only information that is required to search users and assign privileges.


  • In this release, the NOT IN clause is supported with correlated subqueries.
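
The NOT IN clause with a correlated subquery can be sketched as follows. The customers and orders tables are illustrative:

```sql
-- Correlated NOT IN: the subquery references the outer row's region
-- (hypothetical schema).
SELECT c.id, c.name
FROM customers c
WHERE c.id NOT IN (
  SELECT o.customer_id
  FROM orders o
  WHERE o.region = c.region
);
```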


Issues Fixed

  • In some cases, if a deployment had a large number of sources, the SQL Runner was considerably unresponsive when loading the page or typing in the editor.


  • Removed stack trace information from REST API payload JSON parsing error message.


  • In some cases, running ALTER TABLE <table_path> FORGET METADATA against a view could result in the view being deleted instead of the command failing with an error.


  • Dremio queries in some Tableau executors would start failing after the access token had expired and been renewed.


  • Top-level CASE statements intended to return a boolean were not being rewritten correctly, resulting in an error for some SQL Server queries.


  • Fixed an issue in C3 recovery logic that was causing C3 to be disabled at startup on some nodes.


  • Incorrect dates were returned when passing a date that was prior to the start of the Gregorian calendar using TO_DATE and referencing a data source.


  • Fixed an issue that was causing increased startup time for the Jobs service when the jobs history table was very large.


  • Fixed the following issues with acceleration information in job profiles when the plan cache was used:
      ◦ Acceleration information was missing for a prepared query.
      ◦ Plan cache usage was missing for a prepared query.
      ◦ Acceleration information was missing when the query was not accelerated but reflections were considered.
      ◦ Canonicalized user query alternatives were missing.
      ◦ Matching hints were missing for reflections that were only considered.


  • Fixed an issue that was causing certain queries to fail when using hash joins, leading to an unexpected restart of an executor.


  • Some queries that contained large IN conditions were failing with a stack overflow error.


  • In Dremio versions later than v20, casting +Infinity was returning an error.


  • Fixed a number of issues that were affecting proper handling of inferred partition columns; specifically, FOR PARTITIONS (...) was not working for inferred partition columns.
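
  For context, a partition-scoped metadata refresh uses the FOR PARTITIONS clause of ALTER TABLE ... REFRESH METADATA. A minimal sketch, assuming a hypothetical table sales with an inferred partition column region:

  ```sql
  -- Refresh metadata only for the matching partition.
  -- The table name, column name, and value are illustrative assumptions.
  ALTER TABLE sales REFRESH METADATA FOR PARTITIONS (region = 'EMEA')
  ```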


  • Deleting a space or folder that contained a user-defined function was resulting in an error.


  • Fixed an issue with the default Jobs results cleanup path that was resulting in disk space issues and unexpected restarts on some cluster nodes.


23.2.4 Release Notes (January 2024) Enterprise

What's New

  • Added the option to force parallelism in write queries by using a round-robin exchange before the writer, even when the input is single-threaded. To use this option, enable the support key planner.writer.round_robin.
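
A sketch of enabling the key, assuming it is exposed through Dremio's ALTER SYSTEM SET syntax (support keys can also be toggled under Settings > Support > Support Keys in the UI):

```sql
-- Enables round-robin parallel writes cluster-wide.
ALTER SYSTEM SET "planner.writer.round_robin" = true
```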

Issues Fixed

  • Changing the size of an existing EC2 engine in Dremio's AWS Edition no longer resets the engine type.


  • Split assignment for tables in Delta Lake format no longer results in a NullPointerException.


  • Fixed an issue that was causing the error "Gandiva code generation is handled during build" for CASE WHEN queries that contain nested Java/Gandiva functions.


  • Row-level runtime filtering is disabled for reflection refresh jobs so that views no longer return incorrect results due to an incorrect match to a single Starflake reflection.


  • Impersonation users who do not have access to a remote table cannot view local versions of the remote table that are created by reflections.


  • Cancelled queries now immediately stop reading data from data sources.


  • Fixed an issue with node endpoint checks that could cause coordinators to restart.


  • C3 system stats for storage_plugins, mount_points, datasets, and objects were missing due to an internal error. They have been enabled again.


  • To increase coordinator stability, the plan cache size was decreased to 1,000 entries (from 10,000), and the cache expiration time was decreased to 8 hours (from 10 hours).


  • Fixed an issue that could cause a BSONRecordReader crash for MongoDB data sources.


  • Dremio now reads indices with date fields formatted in either Joda-Time or java.time without the use of a prefix.


  • Resolved a path traversal security issue that bypassed folder-level role-based access control (RBAC).


  • Updated snappy-java to 1.10.5 to address a potential security issue [CVE-2023-43642].


  • LIMIT queries can now be executed in parallel rather than using only one thread for execution.


  • Removed an errant dependency check that prevented some engines from starting or scaling replicas.