SELECT
Dremio supports querying with standard SELECT statements. You can query tables and views that are contained in connected sources and Arctic catalogs.
When working with Apache Iceberg tables, you can query a table's metadata as well as run queries by snapshot ID.
Dremio supports reading positional deletes and equality deletes for Apache Iceberg v2 tables. Dremio does not support reading global equality deletes from Apache Iceberg v2 tables in which the partition spec for the delete file is unpartitioned. Dremio performs writes using copy-on-write by default and supports writes using merge-on-read if specified in the Iceberg table properties.
[ WITH ... ]
SELECT [ ALL | DISTINCT ]
{ *
| <column_name1>, <column_name2>, ... }
FROM { <table_name> | <view_name>
| TABLE ( <iceberg_metadata> ( <table_name> ) )
| UNNEST ( <list_expression> ) [ WITH ORDINALITY ] }
[ { PIVOT | UNPIVOT } ( <expression> ) ]
[ WHERE <condition> ]
[ GROUP BY <expression> ]
[ QUALIFY <expression> ]
[ ORDER BY <column_name1>, <column_name2>, ... [ DESC ] ]
[ LIMIT <count> ]
[ AT { { REF[ERENCE] | BRANCH | TAG | COMMIT } <reference_name>
[ AS OF <timestamp> ]
| { SNAPSHOT <snapshot_id> | <timestamp> } } ]
| <function_name>
[ AT { REF[ERENCE] | BRANCH | TAG | COMMIT } <reference_name> ]
[ AS OF <timestamp> ]
}
Parameters
[ WITH ... ] String Optional
Defines a common table expression (CTE), which is a named subquery. For more information, read WITH.
[ ALL | DISTINCT ] String Optional
Specifies the result set that is returned. Similar to the asterisk (*), ALL returns all the values in the result set. DISTINCT eliminates duplicates from the result set. If you do not specify an option, the default is ALL.
*
Indicates that you want to query all columns in the table.
<column_name1>, <column_name2>, ... String
The name of the column(s) that you want to query.
FROM { <table_name> | <view_name> } String
The name of the table or view that you want to select from.
FROM TABLE ( <iceberg_metadata> ( <table_name> ) ) String
The <table_name> is the name of the Iceberg table that you want to select from; the name must be enclosed in single quotes. For <iceberg_metadata>, Iceberg includes helpful system-table references that provide easy access to Iceberg-specific information about tables, including:
- The data files for a table
- The history of a table
- The manifest files for a table
- The partition-related statistics for a table
- The snapshots for a table
Supported Iceberg metadata clauses include:
- TABLE ( table_files ( '<table_name>' ) ): Query an Iceberg table's data file metadata using the table_files() function. Dremio returns records that have these fields:

| Column | Data Type | Description |
|---|---|---|
| file_path | VARCHAR | Full file path and name |
| file_format | VARCHAR | Format, for example, PARQUET |
| partition | VARCHAR | Partition information |
| record_count | BIGINT | Number of rows |
| file_size_in_bytes | BIGINT | Size of the file |
| column_sizes | VARCHAR | List of columns with the size of each column |
| value_counts | VARCHAR | List of columns with the number of records with a value |
| null_value_counts | VARCHAR | List of columns with the number of records as NULL |
| nan_value_counts | VARCHAR | List of columns with the number of records as NaN |
| lower_bounds | VARCHAR | List of columns with the lower bound of each |
| upper_bounds | VARCHAR | List of columns with the upper bound of each |
| key_metadata | VARCHAR | Key metrics |
| split_offsets | VARCHAR | Split offsets |

- TABLE ( table_history ( '<table_name>' ) ): Query an Iceberg table's history metadata using the table_history() function. Dremio returns records that have these fields:

| Column | Data Type | Description |
|---|---|---|
| made_current_at | TIMESTAMP | The timestamp at which the Iceberg snapshot was made current |
| snapshot_id | VARCHAR | The Iceberg snapshot ID |
| parent_id | VARCHAR | The parent snapshot ID, null if it does not exist |
| is_current_ancestor | BOOLEAN | Whether the snapshot is part of the current history; shows abandoned snapshots |

- TABLE ( table_manifests ( '<table_name>' ) ): Query an Iceberg table's manifest file metadata using the table_manifests() function. Dremio returns records that have these fields:

| Column | Data Type | Description |
|---|---|---|
| path | VARCHAR | Full path and name of the manifest file |
| length | BIGINT | Size in bytes |
| partition_spec_id | VARCHAR | ID of the partition specification |
| added_snapshot_id | VARCHAR | ID of the snapshot in which the manifest was added |
| added_data_files_count | BIGINT | Number of new data files added |
| existing_data_files_count | BIGINT | Number of existing data files |
| deleted_data_files_count | BIGINT | Number of data files removed |
| partition_summaries | VARCHAR | Partition information |

- TABLE ( table_partitions ( '<table_name>' ) ): Query statistics related to the partitions of an Iceberg table. Dremio returns records that have these fields:

| Column | Data Type | Description |
|---|---|---|
| partition | CHARACTER VARYING | The partition key |
| record_count | INTEGER | The number of records in the partition |
| file_count | INTEGER | The number of data files in the partition |
| spec_id | INTEGER | The ID of the partition specification on which the partition is based |

- TABLE ( table_snapshot ( '<table_name>' ) ): Query an Iceberg table's snapshot metadata using the table_snapshot() function. Dremio returns records that have these fields:

| Column | Data Type | Description |
|---|---|---|
| committed_at | TIMESTAMP | The timestamp at which the Iceberg snapshot was committed |
| snapshot_id | VARCHAR | The Iceberg snapshot ID |
| parent_id | VARCHAR | The parent snapshot ID, null if it does not exist |
| operation | VARCHAR | The Iceberg operation (for example, append) |
| manifest_list | VARCHAR | List of manifest files for the snapshot |
| summary | VARCHAR | Additional attributes (records added, etc.) |
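For example, a minimal sketch that lists the data files of a hypothetical Iceberg table named myTable (the table name is a placeholder):
SELECT file_path, record_count, file_size_in_bytes
FROM TABLE( table_files( 'myTable' ) );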
FROM UNNEST ( <list_expression> ) [ WITH ORDINALITY ] String Optional
Expands a LIST into a table with a single row for each element in the LIST. WITH ORDINALITY returns an additional column with the offset of each element, which can be used with the ORDER BY clause to order the rows by their offsets. UNNEST cannot unnest a correlated variable.
{ PIVOT | UNPIVOT } ( <expression> ) String Optional
PIVOT converts a set of data from rows into columns. UNPIVOT converts a set of data from columns into rows. The expression consists of the following parts:
- pivot_clause: The aggregation to compute over the data.
- pivot_for_clause: The column(s) to group by and pivot on.
- pivot_in_clause: Filters the values of the pivot_for_clause column(s). Each value in this clause becomes a separate column.
This keyword is applied to a SELECT statement. The syntax does not support an alias between the table/subquery and either the PIVOT or UNPIVOT clause. For example, SELECT (name, dept FROM employees) <alias> PIVOT <query> is not supported.
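As a sketch of how the three clause parts fit together, assuming a hypothetical sales table with region and amount columns:
SELECT *
FROM sales
PIVOT (
  SUM(amount)                             -- pivot_clause: the aggregation to compute
  FOR region                              -- pivot_for_clause: the column to pivot on
  IN ('EAST' AS east, 'WEST' AS west)     -- pivot_in_clause: each value becomes a column
);
A fuller PIVOT/UNPIVOT example against a sample dataset appears in the Examples section below.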
WHERE <condition> Boolean Optional
Filters your query and extracts only the records that fulfill a specified condition. The following operators can be used: =, >, <, >=, <=, { <> | != }, BETWEEN, LIKE, IN. Additionally, <condition> can include logical operators, such as AND, OR, and NOT.
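For example, using IN and LIKE against the sample zips.json dataset shown in the examples below (the filter values are illustrative):
SELECT city, state
FROM Samples."samples.dremio.com"."zips.json"
WHERE state IN ('MA', 'RI') AND city LIKE 'A%';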
GROUP BY <expression> String Optional
Groups rows with the same group-by-item expressions and computes aggregate functions (such as COUNT(), MAX(), MIN(), SUM(), AVG()) for the resulting group. A GROUP BY expression can be one or more column names, a number referencing a position in the SELECT list, or a general expression.
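For example, a position number can stand in for the first item in the SELECT list (using the same sample dataset as the examples below):
SELECT state, COUNT(city)
FROM Samples."samples.dremio.com"."zips.json"
GROUP BY 1;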
QUALIFY <expression> Boolean Optional
Filters the results of window functions. To use QUALIFY, at least one window function must be present in either the SELECT statement or within the QUALIFY expression. The expression filters the result after aggregates and window functions are computed; it can also contain window functions. The Boolean expression can be the result of a subquery.
ORDER BY <column_name1>, <column_name2>, ... [ DESC ] String Optional
Sorts the result by the specified column(s). By default, the records are sorted in ascending order. Use DESC to sort the records in descending order.
LIMIT <count> Integer Optional
Constrains the maximum number of rows returned by the query. Must be a non-negative integer.
AT { REF[ERENCE] | BRANCH | TAG | COMMIT } <reference_name> String Optional
Specifies a reference to run the query against or a reference at which the UDF exists. When this parameter is omitted, the current reference is used.
- REF: Identifies a reference to run the query against, which can be a branch, tag, or commit.
- BRANCH: Identifies the branch reference to run the query against.
- TAG: Identifies the tag reference to run the query against.
- COMMIT: Identifies the commit reference to run the query against. Commit hashes must be enclosed in double quotes (for example, "ff2fe50fef5a030c4fc8e61b252bdc33c72e2b6f929d813833d998b8368302e2").
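For example, reading a hypothetical table at a tag (the catalog, table, and tag names are placeholders):
SELECT *
FROM myCatalog.demo_table AT TAG v2_release;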
AS OF <timestamp> String Optional
Changes the commit reference point to the provided timestamp. Can only be applied to REF, BRANCH, and TAG. <timestamp> may be any SQL expression that resolves to a single timestamp type value, for example CAST( DATE_SUB(CURRENT_DATE,1) AS TIMESTAMP ) or TIMESTAMP '2022-07-01 01:30:00.000'.
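For example, reading a hypothetical table from a branch as it existed at a point in time (the catalog, table, and branch names are placeholders):
SELECT *
FROM myCatalog.demo_table AT BRANCH main
AS OF TIMESTAMP '2023-05-01 00:00:00.000';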
AT SNAPSHOT <snapshot_id> String Optional
Applies to Iceberg and Delta Lake tables only. A time-travel query that enables you to specify an earlier version of a table to read. A snapshot ID is obtained through either the table_history() or table_snapshot() metadata function and must be enclosed in single quotes.
AT <timestamp> String Optional
Available for Iceberg and Delta Lake table queries only. Changes the commit reference point to the most recent Iceberg snapshot as of the provided timestamp. <timestamp> may be any SQL expression that resolves to a single timestamp type value, for example CAST( DATE_SUB(CURRENT_DATE,1) AS TIMESTAMP ) or TIMESTAMP '2022-07-01 01:30:00.000'.
<function_name> String Enterprise
The name of an existing UDF. To run SELECT <function_name>, users need the USAGE privilege on the Arctic catalog and the SELECT privilege on the UDF.
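For example, a sketch that runs a UDF on a branch (my_udf, its argument, and the branch name are placeholders for an existing UDF and reference):
SELECT my_udf(100) AT BRANCH main;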
Examples
Query an existing table in a data lake source
SELECT *
FROM Samples."samples.dremio.com"."zips.json";
SELECT city
FROM Samples."samples.dremio.com"."zips.json";
SELECT DISTINCT city
FROM Samples."samples.dremio.com"."zips.json";
SELECT *
FROM Samples."samples.dremio.com"."zips.json"
WHERE state = 'MA' AND city = 'AGAWAM';
SELECT passenger_count, trip_distance_mi, fare_amount,
RANK() OVER (PARTITION BY passenger_count ORDER BY trip_distance_mi) AS pc_rank
FROM "NYC-taxi-trips"
QUALIFY pc_rank = 1;
SELECT passenger_count, trip_distance_mi, fare_amount
FROM "NYC-taxi-trips"
QUALIFY RANK() OVER (PARTITION BY passenger_count ORDER BY trip_distance_mi) = 1;
SELECT COUNT(city), city, state
FROM Samples."samples.dremio.com"."zips.json"
GROUP BY state, city
ORDER BY COUNT(city) DESC;
WITH cte_quantity (Total)
AS (
SELECT SUM(passenger_count) as Total
FROM Samples."samples.dremio.com"."NYC-taxi-trips" where passenger_count > 2
GROUP BY pickup_datetime
)
SELECT AVG(Total) average_pass
FROM cte_quantity;
ALTER DATASET Samples."samples.dremio.com"."SF weather 2018-2019.csv" REFRESH METADATA auto promotion FORCE UPDATE;
SELECT * FROM (
SELECT EXTRACT(YEAR FROM CAST(F AS DATE)) as "YEAR",
EXTRACT(MONTH FROM CAST(F AS DATE)) as "MONTH",
K as MAX_TEMP
FROM Samples."samples.dremio.com"."SF weather 2018-2019.csv"
where F <> 'DATE'
)
PIVOT (
max(MAX_TEMP) for "MONTH" in (1 as JAN, 2 as FEB, 3 as MAR, 4 as APR, 5 as MAY, 6 as JUN, 7 as JUL, 8 as AUG, 9 as SEP, 10 as OCT, 11 as NOV, 12 as "DEC")
)
UNPIVOT (
GLOBAL_MAX_TEMP for "MONTH" in (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, "DEC")
)
ORDER BY "YEAR", "MONTH";
SELECT index,
UPPER(array_item)
FROM UNNEST ( ARRAY [ 'a', 'b', 'c' ]) WITH ORDINALITY AS my_table ( array_item, index)
ORDER BY index;
SELECT *
FROM myCatalog.demo_table AT REF main_branch;
SELECT *
FROM myCatalog.demo_view AT COMMIT "7f643f2b9cf250ce1f5d6ff4397237b705d866fbf34d714";
SELECT *
FROM myTable AT TIMESTAMP '2022-01-01 17:30:50.000';
SELECT *
FROM myTable AT SNAPSHOT '5393090506354317772';
SELECT *
FROM TABLE(table_history('myTable'))
WHERE snapshot_id = 4593468819579153853;
SELECT count(*)
FROM TABLE(table_snapshot('myTable'))
GROUP BY snapshot_id;
Column Aliasing
If you specify an alias for a column or an expression in the SELECT clause, you can refer to that alias elsewhere in the query, including in the SELECT list or in the WHERE clause.
SELECT c_custkey AS c, lower(c)
FROM "customer.parquet";
SELECT c_custkey AS c, lower(c)
FROM (
SELECT c_custkey, c_mktsegment AS c
FROM "customer.parquet");
SELECT c_name AS n, n
FROM (
SELECT c_mktsegment AS n, c_name
FROM "customer.parquet")
AS MY_TABLE
WHERE n = 'BUILDING';
SELECT c_custkey
FROM (
SELECT c_custkey, c_name AS c
FROM "customer.parquet" )
WHERE c = 'aa';
SELECT *
FROM (
SELECT c_custkey AS c, c_name
FROM "customer.parquet" )
JOIN "orders.parquet" ON c = o_orderkey;
SELECT c_custkey AS c
FROM "customer.parquet"
JOIN "orders.parquet" ON c = o_orderkey;
Distributing Data Evenly Across Execution Engines During Joins
You can use a BROADCAST hint if a query profile indicates that data involved in a join of two tables is heavily skewed and overloading one or more execution engines. The hint forces an even distribution of the data across all execution engines.
These hints are ignored for nested-loop joins and are not supported on views.
A BROADCAST hint must be used immediately after the name of a table.
/*+ BROADCAST */
SELECT *
FROM T1 /*+ BROADCAST */
INNER JOIN t2 ON t1.key = t2.key
INNER JOIN t3 ON t2.key = t3.key;
SELECT *
FROM T1
INNER JOIN (select key, max(cost) cost from t2 /*+ BROADCAST */) T2 ON t1.key = t2.key
INNER JOIN t3 ON t2.key = t3.key;
Querying information about rejected records in files used by a COPY INTO operation for which ON_ERROR was set to 'continue' or 'skip_file'
Queries use the copy_errors() function.
SELECT *
FROM TABLE( copy_errors( '<table_name>' [, '<query_id>' ] ) )
<table_name> String
The name of the target table on which the COPY INTO operation was performed.
<query_id> String Optional
The ID of the job that ran the COPY INTO operation. You can obtain this ID from the SYS.PROJECT.COPY_ERRORS_HISTORY system table. If you do not specify an ID, the default value is the ID of the last job started by the current user to run COPY INTO on the target table.
The records returned consist of these fields:
| Column | Data Type | Description |
|---|---|---|
| job_id | string | The ID of the job that ran the COPY INTO operation. |
| file_name | string | The full path of the file where the validation error was encountered. |
| line_number | long | The number of the line (physical position) in the file where the error was encountered. |
| row_number | long | The row (record) number in the input file. |
| column_name | string | The name of the column where the error was encountered. |
| error | string | A message describing the error. |
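For example, a sketch that inspects the rejected records from the most recent COPY INTO job on a hypothetical target table (the table name is a placeholder; omitting the query ID defaults to the last COPY INTO job started by the current user on that table):
SELECT file_name, line_number, column_name, error
FROM TABLE( copy_errors( 'my_target_table' ) )
ORDER BY file_name, line_number;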