Configure Model Providers
You configure model providers for your organization when deploying Dremio to enable AI features. After you configure at least one model provider, you must set a default model provider, and you can optionally set an allowlist of available models. Dremio uses the default provider for all AI Agent interactions, while anyone writing AI functions can use any model on the allowlist. By default, the CALL MODEL privilege is granted to all users for every new model provider, so users can continue to use the AI Agent without interruption if the default changes.
Dremio-Provided LLM
Dremio provides all organizations with an out-of-the-box model provider so that all users can begin engaging with the AI Agent and AI functions without any other configuration required. For US deployments, the LLM calls are routed to a US endpoint; for EU deployments, the LLM calls are routed to an EU endpoint. Once you have added your own model provider and set it as the new default, the Dremio-Provided LLM will no longer be used. If you delete all other model providers, then the Dremio-Provided LLM will revert to the organization's default model provider. This model provider cannot be deleted.
Supported Model Providers
Dremio supports configuration of the following model providers. Dremio recommends using a medium or high tier enterprise-grade reasoning model for the best performance and experience.
| Service | Connection Method(s) |
|---|---|
| OpenAI | Organization ID + API Key + Region (US, EU, or Global). Region selection adheres to OpenAI Regional Data Controls. |
| Anthropic | API Key |
| Google Gemini | API Key |
| Amazon Bedrock | Choose one of the available connection methods (see Configure AWS Bedrock as a Model Provider) |
| Azure OpenAI | Resource Name + Directory ID + Application ID + Client Secret Value |
| OpenAI-Compatible API | API endpoint URL, and optionally Organization ID and API Key |
Dremio does not validate the model associated with an AWS Bedrock model ID or Azure OpenAI deployment name upon model provider creation. Using an unsupported model can lead to runtime errors or a higher rate of inaccurate responses.
Default Model Provider
To delete the default model provider, you must first assign a new default, unless you are deleting the last remaining model provider that you have configured. To change the default model provider, you must have the MODIFY privilege on both the current default and the proposed new default model provider.
Add Model Provider
For steps on adding an AWS Bedrock model provider, see Configure AWS Bedrock as a Model Provider.
For steps on adding an OpenAI-Compatible API model provider, see Configure an OpenAI-Compatible API Model Provider.
For all other model providers, follow these steps to add a model provider in the Dremio console:
- Click the icon in the side navigation bar and select Organization.
- Select AI Configurations from the organization settings sidebar.
- Click Add model provider.
Configure an OpenAI-Compatible Model Provider
The OpenAI-Compatible API provider lets you connect Dremio to any service that implements the OpenAI API specification, including self-hosted models and third-party providers not listed above, without waiting for Dremio to certify specific models.
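To illustrate what "implements the OpenAI API specification" means in practice, the sketch below builds (but does not send) a chat completions request of the kind such an endpoint must accept. This is not Dremio's internal client; the base URL reuses the placeholder `https://api.example.com/v1` from the steps below, and the API key and model ID are made-up values.

```python
import json
from urllib import request


def build_chat_request(base_url, api_key, model, prompt, org_id=None):
    """Construct an OpenAI-style chat completions request without sending it."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    if org_id:
        # Header name used by OpenAI for organization scoping.
        headers["OpenAI-Organization"] = org_id
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(url, data=body, headers=headers, method="POST")


# Inspect the request a compatible client would POST (placeholder values).
req = build_chat_request("https://api.example.com/v1", "sk-test", "my-model", "hello")
```

Any endpoint that accepts this shape of request at `<base URL>/chat/completions` is a candidate for the OpenAI-Compatible API provider.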
To configure an OpenAI-compatible model provider:
- Click the icon in the side navigation bar and select Organization.
- Select AI Configurations from the organization settings sidebar.
- Click Add model provider.
- In the Add model provider dialog, select OpenAI-Compatible API as the model provider service.
- For Name, enter a name for the model provider.
- For API endpoint URL, enter the base URL of your OpenAI-compatible endpoint (e.g., https://api.example.com/v1).
- (Optional) For Organization ID, enter your provider's organization ID.
- (Optional) For API Key, enter your API key.
- For Default Model ID, enter the model ID to use as the default.
- (Optional) For Allowed Model IDs for AI Functions, enter the model IDs to make available for AI functions.
- (Optional) Expand Advanced Configuration and enter JSON in Model parameters (JSON) to customize model behavior. See Model Parameters.
- Click Add.
Model Parameters
The Model parameters (JSON) field under Advanced Configuration controls how models behave when called by Dremio. You can set defaults that apply to all models, and override them for specific models by exact name or regex pattern.
| Field | Description |
|---|---|
| parameters | Default parameters applied to all models |
| per_model | Parameters for a specific model, matched by exact model name |
| per_model_pattern | Parameters for models whose name matches a regex pattern |
Supported Parameters
| Parameter | Type | Description |
|---|---|---|
| temperature | Number | Controls response randomness. Range: 0.0–1.0. Lower values produce more deterministic output. |
| max_tokens | Integer | Maximum number of tokens to generate; synonym for max_completion_tokens. Use this for inference servers that do not accept max_completion_tokens. |
| max_completion_tokens | Integer | Maximum number of tokens to generate; synonym for max_tokens. Use this for inference servers that do not accept max_tokens. |
| top_p | Number | Nucleus sampling parameter. Range: 0.0–1.0. |
| send_user_identifier | Boolean | Whether Dremio sends a user identifier to the provider with each request. Defaults to true. Set to false for providers that do not accept the user field (for example, Mistral). |
When the same parameter is defined in multiple places, the most specific definition takes effect: per_model overrides per_model_pattern, which overrides parameters.
```json
{
  "parameters": {
    "temperature": 0.2,
    "max_tokens": 8192
  },
  "per_model_pattern": {
    ".*mistral.*": {
      "send_user_identifier": false
    }
  },
  "per_model": {
    "gpt-4": {
      "top_p": 0.95,
      "max_tokens": 4096
    }
  }
}
```
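The precedence rules above can be sketched in a few lines. This is an illustrative model of the merge order (parameters, then per_model_pattern, then per_model), not Dremio's implementation; in particular, whether patterns must match the whole model name or only a substring is an assumption here (fullmatch is used).

```python
import json
import re

# The example configuration from above.
CONFIG = json.loads("""
{
  "parameters": {"temperature": 0.2, "max_tokens": 8192},
  "per_model_pattern": {".*mistral.*": {"send_user_identifier": false}},
  "per_model": {"gpt-4": {"top_p": 0.95, "max_tokens": 4096}}
}
""")


def resolve_params(config, model_id):
    """Merge defaults, then pattern matches, then exact matches (most specific wins)."""
    merged = dict(config.get("parameters", {}))
    for pattern, overrides in config.get("per_model_pattern", {}).items():
        if re.fullmatch(pattern, model_id):
            merged.update(overrides)
    merged.update(config.get("per_model", {}).get(model_id, {}))
    return merged
```

With the example configuration, gpt-4 gets max_tokens 4096 (exact match overrides the default 8192), while a model named mistral-large keeps the defaults but adds send_user_identifier: false.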