This topic describes Hive data source considerations and Dremio configuration.
Dremio and Hive
Dremio supports the following:
- Hive 2.1
- Hive 3.x
The following data sources are supported:
- S3 -- See S3 on Amazon EMR Configuration for more information about S3-backed Hive tables on Amazon EMR.
- Hive external tables backed by the HBase storage handler
The following formats are supported:
- Apache Iceberg
- Apache Parquet
- Delta Lake
- Text, including CSV (Comma-separated values)
In addition, the following interfaces and file-reading capabilities are supported:
- Hive table access using Hive's out-of-the-box SerDes interface, as well as custom SerDes or InputFormat/OutputFormat.
- Reading any Hive-supported file format using Hive's own readers, even if Dremio does not support the format natively.
Dremio does not support Hive views. However, you can create and query virtual datasets (Dremio views) instead.
This section provides information about Hive configuration.
Adding additional elements to Hive plugin classpaths
Hive plugins can be extended to utilize additional resource files and classes. These can be added as either directories or JAR files. Note that any resources that are part of the server's classpath are not exposed to the Hive plugin.
The location in which to place these additional elements depends on the Hive version and the Dremio distribution. The location is based on what is referred to as the hive-plugin-id, which combines the Hive version with the Dremio edition; for example, hive3-ee identifies the Hive 3 plugin in Enterprise mode, as used in the examples below.
To add additional classpath elements, which may be JARs or resource directories, do the following on every node:
1. Create a directory <dremio-root>/plugins/connectors/<hive-plugin-id>.d/ (note the addition of the .d suffix).
2. For JARs, place them directly in the directory created in step 1.
3. For resource directories, copy the directory into the directory created in step 1.
4. Ensure the directory and its contents are readable by the Dremio process user.
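The steps above can be sketched as shell commands. This is illustrative only: DREMIO_ROOT, the plugin id hive3-ee, and the JAR and directory names are placeholders, and the sketch creates stand-in files so it is self-contained.

```shell
# Sketch only: DREMIO_ROOT defaults to a scratch directory here;
# point it at your real <dremio-root> in practice.
DREMIO_ROOT="${DREMIO_ROOT:-$(mktemp -d)}"
PLUGIN_DIR="$DREMIO_ROOT/plugins/connectors/hive3-ee.d"

# Step 1: create the plugin classpath directory (note the .d suffix).
mkdir -p "$PLUGIN_DIR"

# Step 2: place JAR files directly in the directory
# (custom-serde.jar stands in for your real JAR).
touch custom-serde.jar
cp custom-serde.jar "$PLUGIN_DIR/"

# Step 3: copy resource directories into it.
mkdir -p my-conf-dir
cp -r my-conf-dir "$PLUGIN_DIR/"

# Step 4: make sure the Dremio process user can read everything.
chmod -R a+r "$PLUGIN_DIR"
```

Remember to repeat this on every node and restart Dremio for the plugin to pick up the new classpath elements.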
You can also create a symbolic link to a JAR or directory instead of copying it into the directory created in step 1.
Hive plugins do not use elements present in the main Dremio server classpath. This includes any Hadoop/Hive configuration files such as core-site.xml and hive-site.xml that the user may have added themselves.
You can add these files to the Hive plugin classpath by following the instructions above.
For example, in Enterprise mode you can place configuration files for the Hive 3 plugin in <dremio-root>/plugins/connectors/hive3-ee.d/conf.
An easy way to use the same configuration as Dremio is a symbolic link. From <dremio-root>, run:

```shell
ln -s <dremio-root>/conf plugins/connectors/hive3-ee.d/conf
```

(Use an absolute path as the link target so the link resolves correctly from inside the plugin directory.)
Grant user impersonation privileges: to grant the Dremio service user the privilege to connect from any host and to impersonate a user belonging to any group, modify the core-site.xml file with the following values.
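A minimal sketch of such a core-site.xml fragment, using the standard Hadoop proxyuser properties and assuming the Dremio service runs as the user dremio (substitute your actual service user):

```xml
<property>
  <name>hadoop.proxyuser.dremio.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.dremio.groups</name>
  <value>*</value>
</property>
```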
Grant more restrictive user impersonation privileges: to make these properties more restrictive, pass actual hostnames and group names in the core-site.xml file.
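For example, restricting impersonation to two known hosts and one group (the hostnames and group name below are placeholders, and dremio again stands in for your service user):

```xml
<property>
  <name>hadoop.proxyuser.dremio.hosts</name>
  <value>host1.example.com,host2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.dremio.groups</name>
  <value>etl-users</value>
</property>
```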
By default, Dremio utilizes its own estimates for Hive table statistics when planning queries.
However, if you want to use Hive's own statistics, do the following:
1. Set the store.hive.use_stats_in_metastore parameter to true.
2. Run the ANALYZE TABLE COMPUTE STATISTICS command for the relevant Hive tables in Hive. This step is required so that all of the tables that Dremio interacts with have up-to-date statistics.
ANALYZE TABLE <Table1> [PARTITION(col1,...)] COMPUTE STATISTICS;
If you are using a Hive source with an HA metastore (multiple Hive metastores), you need to specify the hive.metastore.uris parameter and value in the hive-site.xml file.
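For example, a hive-site.xml entry listing two metastore URIs (the hostnames are placeholders; hive.metastore.uris accepts a comma-separated list):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore1.example.com:9083,thrift://metastore2.example.com:9083</value>
</property>
```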
Configuration is primarily accomplished through either the General or Advanced Options.
- Name -- Hive source name
- Connection -- Hive connection and security
- Hive Metastore Host -- IP address. Example: 220.127.116.11
- Port -- Port number. Default: 9083
- Enable SASL -- Check this box to enable SASL. If you enable SASL, specify the Hive Kerberos Principal.
Authorization -- Authorization type for the client. When adding a new Hive source, you have the following client options for Hive authorization:
Storage Based with User Impersonation -- A storage-based authorization in the Metastore Server, commonly used to add authorization to metastore server API calls. Dremio uses user impersonation to implement storage-based authorization.
- When Allow VDS-based Access Delegation is enabled (default), the owner of the view is used as the impersonated username.
- When Allow VDS-based Access Delegation is disabled (unchecked), the query user is used as the impersonated username.
SQL Based -- Not Currently Supported
Ranger Based -- An Apache Ranger plug-in that provides a security framework for authorization.
Ranger Service Name -- This field corresponds to the security profile in Ranger.
Ranger Host URL -- This field is the path to the actual Ranger server.
The following options allow you to specify impersonation users and Hive connection properties.
For example, to add a new Hive source, you can specify a single metastore host by adding a hive.metastore.uris parameter and value in the Hive connection properties. This connection property overrides the value specified in the Hive source.
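As a sketch, the connection property could be entered as a name/value pair like this (the hostname is a placeholder):

```
Name:  hive.metastore.uris
Value: thrift://metastore.example.com:9083
```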
Multiple Hive Metastore Hosts: If you need to specify multiple Hive metastore hosts, update the hive-site.xml file. See Hive Metastores for more information.
Impersonation User Delegation -- Specifies whether an impersonation username is As is (Default), Lowercase, or Uppercase
Connection Properties -- Name and value of each Hive connection property.
To connect to a Kerberized Hive source, add the following connection property in the Advanced Options:
| Name | Value |
| --- | --- |
| yarn.resourcemanager.principal | Name of the Kerberos principal for the YARN resource manager. |
Never refresh -- Specifies how often to refresh based on hours, days, weeks, or never.
Never expire -- Specifies how often to expire based on hours, days, weeks, or never.
- Remove dataset definitions if underlying data is unavailable (Default).
If this box is not checked and the underlying files under a folder are removed or the folder/source is not accessible, Dremio does not remove the dataset definitions. This option is useful in cases when files are temporarily deleted and put back in place with new sets of files.
Dataset Discovery -- Refresh interval for top-level source object names such as names of DBs and tables.
- Fetch every -- Specify fetch time based on minutes, hours, days, or weeks. Default: 1 hour
Dataset Details -- The metadata that Dremio needs for query planning such as information needed for fields, types, shards, statistics, and locality.
Fetch mode -- Specify either Only Queried Datasets, or All Datasets. Default: Only Queried Datasets
Only Queried Datasets -- Dremio updates details for previously queried objects in a source.
This mode increases query performance because less work is needed at query time for these datasets.
All Datasets -- Dremio updates details for all datasets in a source. This mode increases query performance because less work is needed at query time.
Fetch every -- Specify fetch time based on minutes, hours, days, or weeks. Default: 1 hour
Expire after -- Specify expiration time based on minutes, hours, days, or weeks. Default: 3 hours
Authorization -- Used when impersonation is enabled. Specifies the maximum amount of time that Dremio caches authorization information before expiring it.
- Expire after - Specifies the expiration time based on minutes, hours, days, or weeks. Default: 1 day
You can specify which users can edit. Options include:
- All users can edit.
- Specific users can edit.
For More Information
See Hive Data Types for information about mapping to Dremio data types.