4.0 Release Notes

What's New

Cloud Columnar Cache (C3)

Dremio provides a local (per executor node) cache for Parquet files. Cloud columnar caching is implemented for the following data sources:

  • Amazon S3
  • ADLS (Gen 1)
  • Azure Storage (ADLS Gen 2) - v2 only

See Cloud Cache and Configuring Cloud Cache for more information about cloud caching. See Amazon S3, ADLS, and Azure Storage for specific data source configuration information.

Multi-Cluster Isolation

Dremio provides the ability to isolate workloads by defining multiple, separate clusters of nodes and routing workloads to specific clusters through WLM queue configuration. See Workload Management for more information.

AWS Security

Configurable IAM Role-based Access

Dremio supports configurable IAM role-based access to S3 buckets. In addition to access key/secret authentication, S3 sources can now use customizable IAM roles from EC2 instance metadata for access. See Amazon S3 for data source configuration information.

Note that full S3 bucket access is not required for IAM roles.

AWS Secrets Manager

Dremio supports AWS Secrets Manager for the following data sources:

  • Redshift
  • Oracle
  • PostgreSQL

This feature is configurable in the General tab when adding or modifying these data sources.

AWS KMS Encryption

Hive 3.1

Dremio now supports Hive 3.1. Additionally, Hive ACID tables now use v2 of the specification.

Enhancements

Dremio UI Copy Result Sets

Dremio allows you to copy result sets from the display window to the clipboard via a button on the results table. A maximum of 5,000 records can be copied.

Amazon S3

Whitelisting Buckets

Dremio provides functionality for whitelisting S3 buckets. See Amazon S3 for data source configuration information.

Metadata Query Limit

A limit can be set on the number of tables returned for "get tables" metadata requests from client applications. By default, the limit is set to 0 (disabled).

  • Users can set the maximum number of tables returned with the MaxMetadataCount property. For JDBC, set the value as a connection property. For ODBC, set the value as an advanced property.
  • Administrators can define the default maximum with the client.max_metadata_count support key. If a connection property is specified, it overrides the support key set on the server.
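
As a hedged sketch (the host, port, and the value 500 below are assumptions for illustration): a JDBC client would append the property to its connection URL, e.g. jdbc:dremio:direct=localhost:31010;MaxMetadataCount=500, while an administrator could set the server-wide default with the support key:

```sql
-- server-wide default for "get tables" metadata requests (assumed value)
ALTER SYSTEM SET "client.max_metadata_count" = 500
```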

SQL Query

The following enhancements or behavioral changes are applicable:

  • For all relational databases, the to_date function is now pushed down when used anywhere in a query.
  • For Elasticsearch, scale_float type is supported.
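
For example (the source, table, and column names below are hypothetical), the to_date call in both the projection and the filter is now pushed down to the relational source:

```sql
SELECT order_id,
       TO_DATE(order_ts, 'YYYY-MM-DD') AS order_date
FROM postgres_src.sales.orders
WHERE TO_DATE(order_ts, 'YYYY-MM-DD') >= DATE '2019-01-01'
```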

Decimal Support

Dremio supports decimal-to-decimal mappings for relational database sources and MongoDB. Existing relational database and MongoDB data sources will now map decimal to decimal. See Data Types, as well as the data source-specific data types, for more information.

Functionality Changes

The following functionality changes are applicable:

  • Dremio no longer supports Avro and SequenceFile outside of Hive.
  • Dremio no longer supports IBM DB2 and HBase. DB2 and HBase were deprecated in 3.1.3 and removed in 4.0.
  • Dremio's Intercom chat/Ask Dremio is now disabled.

Deprecations

The following features are deprecated in 4.0 and will be removed in a future release.

  • PDFS - Dremio now requires external storage to be configured for Distributed Storage sources and to replace local PDFS.
  • MapR (Community Edition)
  • Voting and Automatic Thresholds for Reflections
  • Elastic - Lucene search queries using the CONTAINS syntax; Painless scripts with aggregate pushdown; Versions 5 and 6
  • MongoDB - Aggregation Pipeline Pushdowns
  • Single Node with single process installations - Single node installations will require separate coordinator and executor processes.
  • SQL Functionality:
    • CONVERT_FROM - Will replace with a function that enforces a fixed schema
    • Mixed Types - Remove mixed types within a column and enforce casting to a common value

Not Supported

  • Windows installation
  • MacOS installation

Upgrade Notes

For additional upgrade notes, see Installing and Upgrading.

Upgrade Process Time

The upgrade process may take a prolonged amount of time, depending on the length of the reflection refresh cycle and on the use of decimals in relational sources.

Reflections Out-of-Sync

For RDBMS data sources, upgrading from Dremio 3.3 to 4.0 causes external reflections to become out of sync. This is expected behavior for external reflections on RDBMS sources. Work around this behavior by dropping and recreating your external reflections.

Decimal Upgrade Behavior

When you upgrade, decimal columns in RDBMS and MongoDB sources now show as decimal in Dremio. Reflections with decimal data types will be invalid until refreshed. See RDBMS Decimal Support for more information about decimals.

Amazon S3 Distributed Store

The following exception may occur when using Amazon S3 as a distributed store and an EC2 Metadata Authentication mechanism.

Exception

java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.SharedInstanceProfileCredentialsProvider not found

See Configuring Distributed Storage for more information.
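
One possible remedy (an assumption based on standard Hadoop S3A configuration, not an official prescription; follow Configuring Distributed Storage for the authoritative steps) is to point the S3A credentials provider at a class present in the bundled libraries, for example in core-site.xml:

```xml
<!-- core-site.xml: use the standard instance-profile credentials provider
     instead of the missing SharedInstanceProfileCredentialsProvider
     (property name and class are assumptions for illustration) -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.InstanceProfileCredentialsProvider</value>
</property>
```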

Dremio 2.0.3 or lower

Upgrading from Dremio 2.0.3 or lower to Dremio 4.0 is not supported.

  1. Upgrade to Dremio 3.x first.
  2. Then, upgrade to 4.0.

Fixed Issues

Unable to access Hive tables backed by Azure Storage.
Resolved by adding ADLS and Azure properties to the Hive configuration.

For MongoDB, an incorrect pushdown occurs when ISODate is used with FILTER.
Resolved by relaxing restrictions around allowed pushdowns for MongoDB.

For NAS data sources, adding the source fails when a forward slash is at the end of the path.
Resolved to take into account a trailing forward slash in the path.

For ADLS data sources, timeouts may occur if caching is disabled.
Resolved by improving socket/thread usage when caching is disabled.

Intermittent permission errors may occur when multiple ADLS sources are configured.
Resolved a file system caching issue.

Dremio Wiki has an XSS security bug.
Resolved the XSS security issue by upgrading some internal modules.

Data is never loaded when previewing results for a failed job or running a query which fails.
Resolved by displaying the actual error message instead of a spinner.

The REFRESH METADATA SQL query does not work with Azure Storage.
Resolved by fixing the Azure Storage plug-in PDS METADATA REFRESH trigger.

For the RDBMS plugins, if the date_trunc() function is used in the query it cannot be pushed down.
Resolved by adding support for the date_trunc() function in the RDBMS plugins.

Supported SSL cipher suites updated.
Resolved by updating the supported cipher suites to align with recommended cipher suite list from OWASP TLS Cipher String Cheat Sheet.

Need to monitor heap usage on executor nodes to prevent outages.
Resolved by improving internal management of queries and heap usage along with improved error and exception messaging.

Direct memory usage from the ORC reader can sometimes cause issues.
Fixed Netty direct memory usage from the ORC reader.

ODBC driver cannot handle dots in column names.
Resolved by relaxing dot validation on column names in Dremio.

Raw reflections with negative values for the partitioned column can lead to incorrect values.
Resolved by ensuring that the data from these partitions is also read as part of the query.

For MongoDB, timestamp filters with strings are not pushed down.
Resolved by coercing strings to timestamp when pushing down to MongoDB.

Aggregate queries on text files with newlines within a quoted field behave incorrectly.
Resolved by correcting count queries with .csv files.

For Azure AD, the secret is not being retrieved correctly.
Resolved by updating how the secret is fetched.

ORC pushdowns are ignoring floating point literals.
Resolved by adding decimal case to ORC literal filters.

For RDBMS sources, when an unsupported column type was encountered subsequent columns would be incorrectly fetched.
Resolved by correcting detection of unsupported columns.

For Minio, data sources are not promoted for newly added folders.
Resolved by improving how unique properties are passed.

For SQL Server, new data sources do not have the advanced option "Verify server certificate" enabled by default.
Resolved by enabling "Verify server certificate" by default.

Reflection refreshes on ACID tables with only delta files return empty results.
Resolved so that queries that are re-run with new deltas before a metadata refresh return consistent results.

sys.queries always returns empty results.
Resolved by removing sys.queries from the list of valid system tables.

On occasion, the log/archive/queries.json file may be overwritten with random contents.
Resolved so that tracker.log archiving no longer overwrites archived queries.json logs.

For Teradata sources, queries are making unnecessary calls to retrieve metadata.
Resolved by improving the metadata retrieval process.

For Teradata data sources, previews for VDSs/queries with UNIONs fail.
Resolved by correcting the Teradata SQL LIMIT with UNION functionality.

For ADLS data sources, "too many open files" errors may occur when reading Parquet files.
Resolved by improving socket/thread usage when caching is disabled.

For SQL queries, with transitive join enabled, filter pushdown does not occur on both tables when the join key is computed.
Resolved by detecting that the filter and projections are on equivalent expressions. To enable, planner.experimental.transitivejoin must be set to on. Default: off
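
To turn the behavior on system-wide, the support key can be set like any other Dremio option (syntax assumed from Dremio's standard support-key handling):

```sql
-- enable the experimental transitive-join filter pushdown (off by default)
ALTER SYSTEM SET "planner.experimental.transitivejoin" = true
```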

For SQL queries, aggregate JOINs on reflections do not work with timestamp fields.
Resolved by improving pushdown rules.

Enqueued jobs do not show their queue.
Jobs UI now shows the queue name of enqueued jobs.

For SQL queries, column name conflict resolution does not occur at every level.
When joining columns through the UI, if columns in two separate tables had the same name but different casing (e.g., DEPARTMENTID and department_id), the columns were not automatically renamed despite their names being equivalent.

Resolved so that JOINs through the UI detect and automatically resolve case-insensitive column name conflicts. The original names are preserved, with "_X" (where X is an integer) appended to the name. For example, when joining tables with columns DEPARTMENT_ID and department_id through the UI, department_id becomes department_id_0. Additional conflicting department_id columns would become department_id_1, department_id_2, and so on.

4.0.1 Release Notes

Dremio Community 4.0 Docker Image Issue

We discovered an issue with the Docker image for Community Edition version 4.0 that is used in Kubernetes, Azure AKS and AWS EKS deployments.

Community Edition 4.0 Docker images accidentally incorporated elements of the Enterprise Version, which can cause Dremio to not be able to issue queries and potentially corrupt the configuration. Dremio Community version 4.0.1 was just released to correct the issue.

Users deploying via YARN, AWS CloudFormation, Azure ARM or Linux RPM and tar installations were not affected.

Community users who upgraded from 3.x to 4.0 using Docker, Kubernetes, AKS or EKS are recommended to:

  1. Upgrade to Dremio Community version 4.0.1
  2. Restore the configuration from a backup created prior to upgrading to 4.0.

[info] Note

It is highly recommended to restore from backup after upgrading to 4.0.1. If no backup is available, the upgrade will work provided no new sources were added.

Community users who created a new system with Dremio Community 4.0 between Sept 12 and Sept 18 using Docker, Kubernetes, AKS or EKS are recommended to:

  1. Delete the new system created with Dremio Community 4.0
  2. Re-install a new system using Dremio Community 4.0.1

[info] Note

New systems created with Dremio Community 4.0 using Docker, Kubernetes, AKS or EKS will continue to function, but will not be able to upgrade to 4.0.1 or later versions.

We apologize for the inconvenience and appreciate your continued support.

4.0.2 Release Notes

Enhancements in 4.0.2

Cloud Cache for HDFS

Dremio now provides cloud columnar caching for HDFS. See HDFS for more information.

See Cloud Cache and Configuring Cloud Cache for more information about cloud caching.

Async Reading for HDFS

The HDFS data source now supports asynchronous reading.

Admin repair-acls Command

Dremio added an Admin command, repair-acls, that helps repair ACLs. The command performs a dry run and prints entities that are missing ACLs. See Repair ACLs for more information.

Downloading Result Sets

Downloaded jobs run much faster: instead of rerunning the original query, Dremio now downloads the results directly from the configured distributed storage. See Configuring Distributed Storage for more information.

This enhancement affects only non-default configurations (that is, not PDFS). Dremio allows you to download result sets in one of the following formats:

  • JSON
  • CSV
  • Parquet

See Data Curation for more information.

Job Results Systems Table

A new system table, sys.job_results, has been added that allows you to query job results using the sys.job_results.<job id> path. See Job Details for more information.
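
For example (the quoting style is an assumption; substitute an actual job ID for the placeholder):

```sql
SELECT * FROM sys.job_results."<job id>"
```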

Partition Information on Physical Datasets

Dremio now shows partition information on columns for a physical dataset. See Dataset Concepts for more information.

RDBMS

  • Dremio now allows the RDBMS connector to push down date/string comparisons.
  • Dremio now supports partial acceleration for queries against relational sources.

Relational Planning

Dremio introduces a new planning mode, called Relational Planning, which enhances the JDBC pushdown phase in the Dremio query planner. Previously, queries against relational sources could only be accelerated when every dataset was covered by a reflection. With this enhancement, partial substitution on relational sources is now available. In addition, queries against relational sources are optimized in the Dremio query planner before they are pushed down.

Fixed Issues in 4.0.2

On RDBMS environments, queries sometimes do not consider reflections.
Resolved by reporting errors when reflections are not being considered.

For Hive, when querying ORC files, sometimes ORC runs out of heap space.
Resolved by improving usage of Hadoop buffer.

On Oracle, the DATE-to-string EQUALS comparison filter sometimes fails.
Resolved by improving some varchar and datetime comparisons for ARP.

For MongoDB, refreshing metadata on collections causes the corresponding raw reflection to not be chosen for substitution.
Resolved the issue by relaxing some internal conditions.

Command-clicking the View Details link on a job does not open a new tab.
Resolved so that command-click and right-click work as expected for normal links.

For Teradata, accessing a dataset corresponding to a view would cause an exception.
Resolved by falling back to catalog metadata when normal metadata is not available.

A query failed with PLAN ERROR: Unable to convert the value of null and type VARBINARY.
Resolved by improving the handling of null varbinary literals in RexToExpr.

For MongoDB, Dremio can throw the following error when scales of the values change: Failure while attempting to read metadata for [TABLE NAME].
Resolved by automatically adjusting decimal scale during schema learning in the MongoDB source connector.

