Dremio
Concept Mapping
The table below maps Dremio concepts to their corresponding DataHub entities:
Source Concept | DataHub Concept | Notes |
---|---|---|
Physical Dataset/Table | Dataset | Subtype: Table |
Virtual Dataset/Views | Dataset | Subtype: View |
Spaces | Container | Mapped to DataHub’s Container aspect. Subtype: Space |
Folders | Container | Mapped as a Container in DataHub. Subtype: Folder |
Sources | Container | Represented as a Container in DataHub. Subtype: Source |
Important Capabilities
Capability | Status | Notes |
---|---|---|
Asset Containers | ✅ | Enabled by default |
Data Profiling | ✅ | Optionally enabled via configuration |
Descriptions | ✅ | Enabled by default |
Detect Deleted Entities | ✅ | Optionally enabled via stateful_ingestion.remove_stale_metadata |
Domains | ✅ | Supported via the domain config field |
Extract Ownership | ✅ | Enabled by default |
Platform Instance | ✅ | Enabled by default |
Table-Level Lineage | ✅ | Enabled by default |
This plugin integrates with Dremio to extract and ingest metadata into DataHub. The following types of metadata are extracted:
Metadata for Spaces, Folders, Sources, and Datasets:
- Includes physical and virtual datasets, with detailed information about each dataset.
- Extracts metadata about Dremio's organizational hierarchy: Spaces (top-level), Folders (sub-level), and Sources (external data connections).
Schema and Column Information:
- Column types and schema metadata associated with each physical and virtual dataset.
- Extracts column-level metadata, such as names, data types, and descriptions, if available.
Lineage Information:
- Dataset-level and column-level lineage tracking:
- Dataset-level lineage shows dependencies and relationships between physical and virtual datasets.
- Column-level lineage tracks transformations applied to individual columns across datasets.
- Lineage information helps trace the flow of data and transformations within Dremio.
Ownership and Glossary Terms:
- Metadata related to ownership of datasets, extracted from Dremio’s ownership model.
- Glossary terms and business metadata associated with datasets, providing additional context to the data.
- Note: Ownership information is only available in the Cloud and Enterprise editions; it is not available in the Community edition.
Optional SQL Profiling (if enabled):
- Table, row, and column statistics can be profiled and ingested via optional SQL queries.
- Extracts statistics about tables and columns, such as row counts and data distribution, for better insight into the dataset structure.
Setup
This integration pulls metadata directly from the Dremio APIs.
You'll need a running Dremio instance with access to the relevant datasets, and API access must be enabled with a valid token.
The token should have sufficient permissions to read metadata and retrieve lineage.
Steps to Get the Required Information
Generate an API Token:
- Log in to your Dremio instance.
- Navigate to your user profile in the top-right corner.
- Select Generate API Token to create an API token for programmatic access.
Permissions:
- The token should have read-only or admin permissions that allow it to:
- View all datasets (physical and virtual).
- Access all spaces, folders, and sources.
- Retrieve dataset and column-level lineage information.
Verify External Data Source Permissions:
- If Dremio is connected to external data sources (e.g., AWS S3, relational databases), ensure that Dremio has access to the credentials required for querying those sources.
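Once you have a token, it is worth confirming it can reach the catalog before running a full ingestion. This is a minimal sketch, assuming Dremio's v3 REST catalog endpoint at `/api/v3/catalog` and a Bearer-style `Authorization` header; adjust for your Dremio edition if its authentication scheme differs:

```python
import urllib.request

def catalog_request(hostname: str, port: int, token: str, tls: bool = True) -> urllib.request.Request:
    """Build a request against Dremio's REST catalog endpoint (assumed /api/v3/catalog)."""
    scheme = "https" if tls else "http"
    url = f"{scheme}://{hostname}:{port}/api/v3/catalog"
    # Bearer-style header; some Dremio Software versions use a different scheme
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

req = catalog_request("localhost", 9047, "my-token")
print(req.full_url)  # https://localhost:9047/api/v3/catalog
# urllib.request.urlopen(req) should return the top-level catalog on success
```

If the request succeeds, the token can at least list the catalog; lineage and ownership reads may still require broader permissions.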
CLI-based Ingestion
Install the Plugin
The `dremio` source works out of the box with `acryl-datahub`.
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
```yaml
source:
  type: dremio
  config:
    # Coordinates
    hostname: localhost
    port: 9047
    tls: true

    # Credentials with personal access token (recommended)
    authentication_method: PAT
    password: pass

    # OR credentials with basic auth
    # authentication_method: password
    # username: user
    # password: pass

    # For a Dremio Cloud instance
    # is_dremio_cloud: True
    # dremio_cloud_project_id: <project_id>

    include_query_lineage: True
    ingest_owner: true

    # Optional
    source_mappings:
      - platform: s3
        source_name: samples

    # Optional
    schema_pattern:
      allow:
        - "<source_name>.<table_name>"

sink:
  # sink configs
```
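The `source_mappings` option in the recipe attributes datasets found under a Dremio source to their native platform. As a hedged sketch of what that implies for the emitted identifiers (the dataset name `samples.nyc_trips` is made up for illustration), DataHub dataset URNs take this shape:

```python
def dataset_urn(platform: str, name: str, env: str = "PROD") -> str:
    """Standard DataHub dataset URN shape."""
    return f"urn:li:dataset:(urn:li:dataPlatform:{platform},{name},{env})"

# With `platform: s3` / `source_name: samples`, datasets under the Dremio
# source "samples" are attributed to the s3 platform rather than dremio:
print(dataset_urn("s3", "samples.nyc_trips"))
# urn:li:dataset:(urn:li:dataPlatform:s3,samples.nyc_trips,PROD)
```

This is why mapping a source to its underlying platform lets cross-platform lineage stitch together: both the Dremio view and the native connector produce the same upstream URN.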
Config Details
Note that a `.` is used to denote nested fields in the YAML recipe.
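For instance, the flattened key `profiling.enabled` in the reference table corresponds to this nesting in a recipe:

```yaml
source:
  type: dremio
  config:
    profiling:
      enabled: true   # shown as profiling.enabled in the table
```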
Field | Description |
---|---|
authentication_method string | Authentication method: 'password' or 'PAT' (Personal Access Token) Default: PAT |
disable_certificate_verification boolean | Disable TLS certificate verification Default: False |
domain string | Domain for all source objects. |
dremio_cloud_project_id string | ID of Dremio Cloud Project. Found in Project Settings in the Dremio Cloud UI |
dremio_cloud_region Enum | One of: "US", "EU" Default: US |
hostname string | Hostname or IP Address of the Dremio server |
include_query_lineage boolean | Whether to include query-based lineage information. Default: False |
ingest_owner boolean | Ingest Owner from source. This will override Owner info entered from UI Default: True |
is_dremio_cloud boolean | Whether this is a Dremio Cloud instance Default: False |
max_workers integer | Number of worker threads to use for parallel processing Default: 20 |
password string | Dremio password or Personal Access Token |
path_to_certificates string | Path to SSL certificates Default: /vercel/path0/metadata-ingestion/venv/lib/python3.... |
platform_instance string | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details. |
port integer | Port of the Dremio REST API Default: 9047 |
tls boolean | Whether the Dremio REST API port is encrypted Default: True |
username string | Dremio username |
env string | The environment that all assets produced by this connector belong to Default: PROD |
dataset_pattern AllowDenyPattern | Regex patterns for tables and views to filter in ingestion. Specify regex to match the entire table name in dremio.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'dremio.public.customer.*' Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
dataset_pattern.ignoreCase boolean | Whether to ignore case sensitivity during pattern matching. Default: True |
dataset_pattern.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
dataset_pattern.allow.string string | |
dataset_pattern.deny array | List of regex patterns to exclude from ingestion. Default: [] |
dataset_pattern.deny.string string | |
profile_pattern AllowDenyPattern | Regex patterns for tables to profile Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
profile_pattern.ignoreCase boolean | Whether to ignore case sensitivity during pattern matching. Default: True |
profile_pattern.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
profile_pattern.allow.string string | |
profile_pattern.deny array | List of regex patterns to exclude from ingestion. Default: [] |
profile_pattern.deny.string string | |
schema_pattern AllowDenyPattern | Regex patterns for schemas to filter Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
schema_pattern.ignoreCase boolean | Whether to ignore case sensitivity during pattern matching. Default: True |
schema_pattern.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
schema_pattern.allow.string string | |
schema_pattern.deny array | List of regex patterns to exclude from ingestion. Default: [] |
schema_pattern.deny.string string | |
source_mappings array | Mappings from Dremio sources to DataHub platforms and datasets. |
source_mappings.DremioSourceMapping DremioSourceMapping | Any source that produces dataset urns in a single environment should inherit this class |
source_mappings.DremioSourceMapping.platform ❓ string | Source connection made by Dremio (e.g. S3, Snowflake) |
source_mappings.DremioSourceMapping.source_name ❓ string | Alias of platform in Dremio connection |
source_mappings.DremioSourceMapping.platform_instance string | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details. |
source_mappings.DremioSourceMapping.env string | The environment that all assets produced by this connector belong to Default: PROD |
usage BaseUsageConfig | The usage config to use when generating usage statistics Default: {'bucket_duration': 'DAY', 'end_time': '2024-12-24... |
usage.bucket_duration Enum | Size of the time window to aggregate usage stats. Default: DAY |
usage.end_time string(date-time) | Latest date of lineage/usage to consider. Default: Current time in UTC |
usage.format_sql_queries boolean | Whether to format sql queries Default: False |
usage.include_operational_stats boolean | Whether to display operational stats. Default: True |
usage.include_read_operational_stats boolean | Whether to report read operational stats. Experimental. Default: False |
usage.include_top_n_queries boolean | Whether to ingest the top_n_queries. Default: True |
usage.start_time string(date-time) | Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration ). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'. |
usage.top_n_queries integer | Number of top queries to save to each table. Default: 10 |
usage.user_email_pattern AllowDenyPattern | regex patterns for user emails to filter in usage. Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True} |
usage.user_email_pattern.ignoreCase boolean | Whether to ignore case sensitivity during pattern matching. Default: True |
usage.user_email_pattern.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
usage.user_email_pattern.allow.string string | |
usage.user_email_pattern.deny array | List of regex patterns to exclude from ingestion. Default: [] |
usage.user_email_pattern.deny.string string | |
profiling ProfileConfig | Configuration for profiling Default: {'enabled': False, 'operation_config': {'lower_fre... |
profiling.enabled boolean | Whether profiling should be done. Default: False |
profiling.include_field_distinct_count boolean | Whether to profile for the number of distinct values for each column. Default: True |
profiling.include_field_distinct_value_frequencies boolean | Whether to profile for distinct value frequencies. Default: False |
profiling.include_field_histogram boolean | Whether to profile for the histogram for numeric fields. Default: False |
profiling.include_field_max_value boolean | Whether to profile for the max value of numeric columns. Default: True |
profiling.include_field_mean_value boolean | Whether to profile for the mean value of numeric columns. Default: True |
profiling.include_field_min_value boolean | Whether to profile for the min value of numeric columns. Default: True |
profiling.include_field_null_count boolean | Whether to profile for the number of nulls for each column. Default: True |
profiling.include_field_quantiles boolean | Whether to profile for the quantiles of numeric columns. Default: False |
profiling.include_field_sample_values boolean | Whether to profile for the sample values for all columns. Default: True |
profiling.include_field_stddev_value boolean | Whether to profile for the standard deviation of numeric columns. Default: True |
profiling.limit integer | Max number of documents to profile. By default, profiles all documents. |
profiling.max_workers integer | Number of worker threads to use for profiling. Set to 1 to disable. Default: 20 |
profiling.offset integer | Offset in documents to profile. By default, uses no offset. |
profiling.profile_table_level_only boolean | Whether to perform profiling at table-level only, or include column-level profiling as well. Default: False |
profiling.query_timeout integer | Time before cancelling Dremio profiling query Default: 300 |
profiling.operation_config OperationConfig | Experimental feature. To specify operation configs. |
profiling.operation_config.lower_freq_profile_enabled boolean | Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling. Default: False |
profiling.operation_config.profile_date_of_month integer | Number between 1 to 31 for date of month (both inclusive). If not specified, defaults to Nothing and this field does not take effect. |
profiling.operation_config.profile_day_of_week integer | Number between 0 to 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take effect. |
stateful_ingestion StatefulStaleMetadataRemovalConfig | Base specialized config for Stateful Ingestion with stale metadata removal capability. |
stateful_ingestion.enabled boolean | Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False Default: False |
stateful_ingestion.remove_stale_metadata boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. Default: True |
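Putting a few of these options together, a config fragment might look like the following; the patterns and names are illustrative, not recommendations:

```yaml
source:
  type: dremio
  config:
    hostname: localhost
    authentication_method: PAT
    password: <token>
    dataset_pattern:
      allow:
        - "dremio.public.customer.*"   # tables starting with customer, per the description above
      deny:
        - ".*_staging"                 # hypothetical: skip staging tables
    profiling:
      enabled: true
      profile_table_level_only: true   # table stats only, no column profiles
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true      # soft-delete entities missing from this run
```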
The JSONSchema for this configuration is inlined below.
{
"title": "DremioSourceConfig",
"description": "Base configuration class for stateful ingestion for source configs to inherit from.",
"type": "object",
"properties": {
"platform_instance": {
"title": "Platform Instance",
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details.",
"type": "string"
},
"env": {
"title": "Env",
"description": "The environment that all assets produced by this connector belong to",
"default": "PROD",
"type": "string"
},
"stateful_ingestion": {
"$ref": "#/definitions/StatefulStaleMetadataRemovalConfig"
},
"hostname": {
"title": "Hostname",
"description": "Hostname or IP Address of the Dremio server",
"type": "string"
},
"port": {
"title": "Port",
"description": "Port of the Dremio REST API",
"default": 9047,
"type": "integer"
},
"username": {
"title": "Username",
"description": "Dremio username",
"type": "string"
},
"authentication_method": {
"title": "Authentication Method",
"description": "Authentication method: 'password' or 'PAT' (Personal Access Token)",
"default": "PAT",
"type": "string"
},
"password": {
"title": "Password",
"description": "Dremio password or Personal Access Token",
"type": "string"
},
"tls": {
"title": "Tls",
"description": "Whether the Dremio REST API port is encrypted",
"default": true,
"type": "boolean"
},
"disable_certificate_verification": {
"title": "Disable Certificate Verification",
"description": "Disable TLS certificate verification",
"default": false,
"type": "boolean"
},
"path_to_certificates": {
"title": "Path To Certificates",
"description": "Path to SSL certificates",
"default": "/vercel/path0/metadata-ingestion/venv/lib/python3.10/site-packages/certifi/cacert.pem",
"type": "string"
},
"is_dremio_cloud": {
"title": "Is Dremio Cloud",
"description": "Whether this is a Dremio Cloud instance",
"default": false,
"type": "boolean"
},
"dremio_cloud_region": {
"title": "Dremio Cloud Region",
"description": "Dremio Cloud region ('US' or 'EU')",
"default": "US",
"enum": [
"US",
"EU"
],
"type": "string"
},
"dremio_cloud_project_id": {
"title": "Dremio Cloud Project Id",
"description": "ID of Dremio Cloud Project. Found in Project Settings in the Dremio Cloud UI",
"type": "string"
},
"domain": {
"title": "Domain",
"description": "Domain for all source objects.",
"type": "string"
},
"source_mappings": {
"title": "Source Mappings",
"description": "Mappings from Dremio sources to DataHub platforms and datasets.",
"type": "array",
"items": {
"$ref": "#/definitions/DremioSourceMapping"
}
},
"schema_pattern": {
"title": "Schema Pattern",
"description": "Regex patterns for schemas to filter",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"dataset_pattern": {
"title": "Dataset Pattern",
"description": "Regex patterns for tables and views to filter in ingestion. Specify regex to match the entire table name in dremio.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'dremio.public.customer.*'",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"usage": {
"title": "Usage",
"description": "The usage config to use when generating usage statistics",
"default": {
"bucket_duration": "DAY",
"end_time": "2024-12-24T09:36:54.302401+00:00",
"start_time": "2024-12-23T00:00:00+00:00",
"queries_character_limit": 24000,
"top_n_queries": 10,
"user_email_pattern": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"include_operational_stats": true,
"include_read_operational_stats": false,
"format_sql_queries": false,
"include_top_n_queries": true
},
"allOf": [
{
"$ref": "#/definitions/BaseUsageConfig"
}
]
},
"profile_pattern": {
"title": "Profile Pattern",
"description": "Regex patterns for tables to profile",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"profiling": {
"title": "Profiling",
"description": "Configuration for profiling",
"default": {
"enabled": false,
"operation_config": {
"lower_freq_profile_enabled": false,
"profile_day_of_week": null,
"profile_date_of_month": null
},
"limit": null,
"offset": null,
"profile_table_level_only": false,
"include_field_null_count": true,
"include_field_distinct_count": true,
"include_field_min_value": true,
"include_field_max_value": true,
"include_field_mean_value": true,
"include_field_median_value": false,
"include_field_stddev_value": true,
"include_field_quantiles": false,
"include_field_distinct_value_frequencies": false,
"include_field_histogram": false,
"include_field_sample_values": true,
"max_workers": 20,
"query_timeout": 300
},
"allOf": [
{
"$ref": "#/definitions/ProfileConfig"
}
]
},
"max_workers": {
"title": "Max Workers",
"description": "Number of worker threads to use for parallel processing",
"default": 20,
"type": "integer"
},
"include_query_lineage": {
"title": "Include Query Lineage",
"description": "Whether to include query-based lineage information.",
"default": false,
"type": "boolean"
},
"ingest_owner": {
"title": "Ingest Owner",
"description": "Ingest Owner from source. This will override Owner info entered from UI",
"default": true,
"type": "boolean"
}
},
"additionalProperties": false,
"definitions": {
"DynamicTypedStateProviderConfig": {
"title": "DynamicTypedStateProviderConfig",
"type": "object",
"properties": {
"type": {
"title": "Type",
"description": "The type of the state provider to use. For DataHub use `datahub`",
"type": "string"
},
"config": {
"title": "Config",
"description": "The configuration required for initializing the state provider. Default: The datahub_api config if set at pipeline level. Otherwise, the default DatahubClientConfig. See the defaults (https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/src/datahub/ingestion/graph/client.py#L19).",
"default": {},
"type": "object"
}
},
"required": [
"type"
],
"additionalProperties": false
},
"StatefulStaleMetadataRemovalConfig": {
"title": "StatefulStaleMetadataRemovalConfig",
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"type": "object",
"properties": {
"enabled": {
"title": "Enabled",
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"default": false,
"type": "boolean"
},
"remove_stale_metadata": {
"title": "Remove Stale Metadata",
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"default": true,
"type": "boolean"
}
},
"additionalProperties": false
},
"DremioSourceMapping": {
"title": "DremioSourceMapping",
"description": "Any source that produces dataset urns in a single environment should inherit this class",
"type": "object",
"properties": {
"platform_instance": {
"title": "Platform Instance",
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details.",
"type": "string"
},
"env": {
"title": "Env",
"description": "The environment that all assets produced by this connector belong to",
"default": "PROD",
"type": "string"
},
"platform": {
"title": "Platform",
"description": "Source connection made by Dremio (e.g. S3, Snowflake)",
"type": "string"
},
"source_name": {
"title": "Source Name",
"description": "Alias of platform in Dremio connection",
"type": "string"
}
},
"required": [
"platform",
"source_name"
],
"additionalProperties": false
},
"AllowDenyPattern": {
"title": "AllowDenyPattern",
"description": "A class to store allow deny regexes",
"type": "object",
"properties": {
"allow": {
"title": "Allow",
"description": "List of regex patterns to include in ingestion",
"default": [
".*"
],
"type": "array",
"items": {
"type": "string"
}
},
"deny": {
"title": "Deny",
"description": "List of regex patterns to exclude from ingestion.",
"default": [],
"type": "array",
"items": {
"type": "string"
}
},
"ignoreCase": {
"title": "Ignorecase",
"description": "Whether to ignore case sensitivity during pattern matching.",
"default": true,
"type": "boolean"
}
},
"additionalProperties": false
},
"BucketDuration": {
"title": "BucketDuration",
"description": "An enumeration.",
"enum": [
"DAY",
"HOUR"
],
"type": "string"
},
"BaseUsageConfig": {
"title": "BaseUsageConfig",
"type": "object",
"properties": {
"bucket_duration": {
"description": "Size of the time window to aggregate usage stats.",
"default": "DAY",
"allOf": [
{
"$ref": "#/definitions/BucketDuration"
}
]
},
"end_time": {
"title": "End Time",
"description": "Latest date of lineage/usage to consider. Default: Current time in UTC",
"type": "string",
"format": "date-time"
},
"start_time": {
"title": "Start Time",
"description": "Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.",
"type": "string",
"format": "date-time"
},
"top_n_queries": {
"title": "Top N Queries",
"description": "Number of top queries to save to each table.",
"default": 10,
"exclusiveMinimum": 0,
"type": "integer"
},
"user_email_pattern": {
"title": "User Email Pattern",
"description": "regex patterns for user emails to filter in usage.",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"allOf": [
{
"$ref": "#/definitions/AllowDenyPattern"
}
]
},
"include_operational_stats": {
"title": "Include Operational Stats",
"description": "Whether to display operational stats.",
"default": true,
"type": "boolean"
},
"include_read_operational_stats": {
"title": "Include Read Operational Stats",
"description": "Whether to report read operational stats. Experimental.",
"default": false,
"type": "boolean"
},
"format_sql_queries": {
"title": "Format Sql Queries",
"description": "Whether to format sql queries",
"default": false,
"type": "boolean"
},
"include_top_n_queries": {
"title": "Include Top N Queries",
"description": "Whether to ingest the top_n_queries.",
"default": true,
"type": "boolean"
}
},
"additionalProperties": false
},
"OperationConfig": {
"title": "OperationConfig",
"type": "object",
"properties": {
"lower_freq_profile_enabled": {
"title": "Lower Freq Profile Enabled",
"description": "Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling.",
"default": false,
"type": "boolean"
},
"profile_day_of_week": {
"title": "Profile Day Of Week",
"description": "Number between 0 to 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take affect.",
"type": "integer"
},
"profile_date_of_month": {
"title": "Profile Date Of Month",
"description": "Number between 1 to 31 for date of month (both inclusive). If not specified, defaults to Nothing and this field does not take affect.",
"type": "integer"
}
},
"additionalProperties": false
},
"ProfileConfig": {
"title": "ProfileConfig",
"type": "object",
"properties": {
"enabled": {
"title": "Enabled",
"description": "Whether profiling should be done.",
"default": false,
"type": "boolean"
},
"operation_config": {
"title": "Operation Config",
"description": "Experimental feature. To specify operation configs.",
"allOf": [
{
"$ref": "#/definitions/OperationConfig"
}
]
},
"limit": {
"title": "Limit",
"description": "Max number of documents to profile. By default, profiles all documents.",
"type": "integer"
},
"offset": {
"title": "Offset",
"description": "Offset in documents to profile. By default, uses no offset.",
"type": "integer"
},
"profile_table_level_only": {
"title": "Profile Table Level Only",
"description": "Whether to perform profiling at table-level only, or include column-level profiling as well.",
"default": false,
"type": "boolean"
},
"include_field_null_count": {
"title": "Include Field Null Count",
"description": "Whether to profile for the number of nulls for each column.",
"default": true,
"type": "boolean"
},
"include_field_distinct_count": {
"title": "Include Field Distinct Count",
"description": "Whether to profile for the number of distinct values for each column.",
"default": true,
"type": "boolean"
},
"include_field_min_value": {
"title": "Include Field Min Value",
"description": "Whether to profile for the min value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_max_value": {
"title": "Include Field Max Value",
"description": "Whether to profile for the max value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_mean_value": {
"title": "Include Field Mean Value",
"description": "Whether to profile for the mean value of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_stddev_value": {
"title": "Include Field Stddev Value",
"description": "Whether to profile for the standard deviation of numeric columns.",
"default": true,
"type": "boolean"
},
"include_field_quantiles": {
"title": "Include Field Quantiles",
"description": "Whether to profile for the quantiles of numeric columns.",
"default": false,
"type": "boolean"
},
"include_field_distinct_value_frequencies": {
"title": "Include Field Distinct Value Frequencies",
"description": "Whether to profile for distinct value frequencies.",
"default": false,
"type": "boolean"
},
"include_field_histogram": {
"title": "Include Field Histogram",
"description": "Whether to profile for the histogram for numeric fields.",
"default": false,
"type": "boolean"
},
"include_field_sample_values": {
"title": "Include Field Sample Values",
"description": "Whether to profile for the sample values for all columns.",
"default": true,
"type": "boolean"
},
"max_workers": {
"title": "Max Workers",
"description": "Number of worker threads to use for profiling. Set to 1 to disable.",
"default": 20,
"type": "integer"
},
"query_timeout": {
"title": "Query Timeout",
"description": "Time before cancelling Dremio profiling query",
"default": 300,
"type": "integer"
}
},
"additionalProperties": false
}
}
}
Starter Recipe for a Dremio Cloud Instance
```yaml
source:
  type: dremio
  config:
    # Authentication details
    authentication_method: PAT              # Use a Personal Access Token for authentication
    password: <your_api_token>              # Replace <your_api_token> with your Dremio Cloud API token
    is_dremio_cloud: True                   # Set to True for Dremio Cloud instances
    dremio_cloud_project_id: <project_id>   # Provide the project ID for Dremio Cloud

    # Enable query lineage tracking
    include_query_lineage: True

    # Optional
    source_mappings:
      - platform: s3
        source_name: samples

    # Optional
    schema_pattern:
      allow:
        - "<source_name>.<table_name>"

sink:
  # Define your sink configuration here
```
Code Coordinates
- Class Name: `datahub.ingestion.source.dremio.dremio_source.DremioSource`
- Browse on GitHub
Questions
If you've got any questions on configuring ingestion for Dremio, feel free to ping us on our Slack.