
Google Cloud Platform - GKE

This topic describes the deployment architecture of Dremio on Google Kubernetes Engine (GKE).

Architecture

[Diagram: Dremio deployment architecture on GKE]

Requirements

  • GKE version 1.12.7 or later
  • Kubernetes cluster on GKE (see Setting up a GKE Cluster below for instructions)
  • Worker node instance type (minimum): e2-highmem-16 (16 vCPU, 128 GB memory)
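
To confirm that your target zone offers a suitable GKE version and that the e2-highmem-16 machine type is available there, checks along these lines can be run with the gcloud CLI (the zone us-central1-a is only an example):

  # List the GKE versions available in the example zone (replace with your zone).
  gcloud container get-server-config --zone us-central1-a

  # Confirm the e2-highmem-16 machine type exists in that zone.
  gcloud compute machine-types describe e2-highmem-16 --zone us-central1-a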

Setting up a GKE Cluster

To set up a Kubernetes cluster on GKE, use the Google Cloud console or the gcloud CLI (a CLI sketch follows these steps). To set it up in the Google Cloud console:

  1. Sign in to the Google Cloud console at https://console.cloud.google.com/.
  2. Go to the GKE page at https://console.cloud.google.com/kubernetes/ and click CREATE.
    1. Choose the standard option and proceed to the next screen.
    2. In the Node Pool section, click on the default pool and set the number of nodes to 5.
    3. In the Nodes section, select the machine type e2-highmem-16 (16 vCPU, 128 GB memory).
    4. Configure additional options as necessary and create the cluster.
  3. Connect to the cluster. See Using kubectl to interact with GKE for more information.
  4. Install Helm. See Helm Install for more information.
  5. Begin Deploying Dremio.
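
For the CLI path mentioned above, the same cluster can be created with gcloud. This is a minimal sketch: the cluster name dremio-cluster and the zone us-central1-a are placeholders, and networking, node pools, and other options should be set to match your environment.

  # Create a 5-node GKE cluster of e2-highmem-16 machines (name and zone are examples).
  gcloud container clusters create dremio-cluster \
    --zone us-central1-a \
    --machine-type e2-highmem-16 \
    --num-nodes 5

  # Fetch credentials so kubectl talks to the new cluster.
  gcloud container clusters get-credentials dremio-cluster --zone us-central1-a

  # Verify the nodes are ready before installing Helm and deploying Dremio.
  kubectl get nodes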

Deploying Dremio

To deploy Dremio on GKE, follow the steps in Installing Dremio on Kubernetes in the dremio-cloud-tools repository on GitHub.
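
As a rough outline of what those steps involve, the deployment is driven by the Helm chart in that repository. The chart path and values file shown here are assumptions based on the repository layout; follow the repository's instructions for the authoritative commands.

  # Clone the repository that contains the Dremio Helm chart.
  git clone https://github.com/dremio/dremio-cloud-tools.git
  cd dremio-cloud-tools/charts

  # Review the chart's values (coordinator/executor counts, memory, storage class),
  # then install the release; "dremio" is an example release name.
  helm install dremio dremio_v2 -f dremio_v2/values.yaml

  # Watch the pods come up.
  kubectl get pods -w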

High Availability

High availability depends on the Kubernetes infrastructure. If a pod goes down for any reason, Kubernetes brings up a replacement pod (a kubectl sketch for inspecting the objects involved follows the list below).

  • The Dremio master-coordinator and secondary-coordinator pods are each managed as StatefulSets. If the master-coordinator pod goes down, it recovers with its associated persistent volume, so Dremio metadata is preserved.
  • Each Dremio executor pod is part of a StatefulSet with an associated persistent volume. If an executor pod goes down, it recovers data from that persistent volume when it restarts. Secondary-coordinator pods are not associated with persistent volumes.
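
A quick way to see these objects on a running deployment is with kubectl. The exact StatefulSet and volume claim names depend on the Helm release name and chart version, so treat the output as the source of truth rather than any particular name:

  # List the coordinator and executor StatefulSets created by the chart.
  kubectl get statefulsets

  # Persistent volume claims backing the master coordinator and executors.
  kubectl get pvc

  # Watch Kubernetes replace a failed pod; the new pod reattaches its volume.
  kubectl get pods -w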

Load Balancing

Load balancing distributes the workload of Dremio's web (UI and REST API) clients and ODBC/JDBC clients. All web and ODBC/JDBC clients connect to a single endpoint (the load balancer) rather than directly to individual pods. The load balancer then distributes these connections across the available coordinator (master-coordinator and secondary-coordinator) pods.
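
On GKE this endpoint is typically exposed as a Kubernetes Service of type LoadBalancer. The service name dremio-client and the ports 9047 (web) and 31010 (ODBC/JDBC) reflect Dremio's usual defaults, but verify them against your deployment:

  # Find the external IP assigned by the GKE load balancer (service name is an example).
  kubectl get service dremio-client

  # The web UI and REST API typically listen on port 9047, ODBC/JDBC clients on 31010,
  # e.g. http://<EXTERNAL-IP>:9047 for the UI.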

For More Information