Overview
Tool Name
dbt_action
Purpose
The dbt_action tool lets you operate DBT projects programmatically. Use it to initialize projects, run and test models, compile SQL, install packages, generate documentation, and coordinate multi-environment deployments.
Functions Available
dbt_action
Executes DBT operations for project management, model execution, testing, docs, snapshots, seeds, and dependency management. Controlled by the action parameter.
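The exact invocation format is not documented on this page; as a minimal sketch, assuming the tool accepts a JSON-style parameter object, initializing a new project might look like this (the project name and adapter are hypothetical):

```python
# Hypothetical dbt_action payload for creating a new project.
# The call shape is an assumption; parameter names follow the
# input table later on this page.
init_request = {
    "action": "init",
    "project_name": "analytics",      # hypothetical project name
    "database_adapter": "snowflake",  # any adapter supported by DBT
}

# Illustrative validation helper, not part of the tool itself.
def validate(payload: dict) -> dict:
    if "action" not in payload:
        raise ValueError("action is required")
    return payload

print(validate(init_request)["action"])  # → init
```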
Key Features
Run and Test
Execute model runs and tests with targets, selectors, and parallel threads.
Compile and Document
Compile SQL for inspection and generate searchable documentation sites.
Packages and Artifacts
Install dependencies, seed data, snapshot state, and clean build outputs.
Multi-Env Targets
Switch profiles and targets for development, staging, and production.
Selective Execution
Use selectors to run only what you need for faster iteration.
Input Parameters for Each Function
dbt_action
Parameters
| Name | Definition | Format |
| --- | --- | --- |
| action | Operation to perform. Common values: init, run, test, compile, docs, seed, snapshot, clean, deps. | String (required) |
| project_path | Path to the DBT project directory. | String |
| profiles_dir | Directory containing profiles.yml. | String |
| target | Target environment defined in profiles.yml. | String |
| models | Specific model or list of models to run or test. | String or List |
| select | DBT selection syntax to include nodes. | String |
| exclude | DBT selection syntax to exclude nodes. | String |
| vars | Variables to pass to DBT. | Object |
| threads | Degree of parallelism. | Integer |
| full_refresh | Force a full refresh for incremental models. | Boolean |
| fail_fast | Stop on the first failure. | Boolean |
| store_failures | Persist test failures to the database. | Boolean |
| project_name | Name for a new project (init only). | String |
| database_adapter | Adapter for new projects (init only). | String |
Use select and exclude to scope execution for quick feedback during development.
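As an illustration of scoped execution, a run payload might combine select, exclude, and threads (parameter names come from the table above; the payload shape itself and the paths are assumptions):

```python
# Hypothetical dbt_action payload: build one model and its downstream
# dependents, skip slow marts, and bound parallelism.
scoped_run = {
    "action": "run",
    "project_path": "/projects/analytics",  # hypothetical path
    "profiles_dir": "~/.dbt",
    "target": "dev",
    "select": "stg_orders+",  # standard DBT graph selection: model plus dependents
    "exclude": "tag:slow",    # standard DBT tag selection
    "threads": 4,
}
```

The trailing `+` in `stg_orders+` is DBT's graph operator for "this node and everything downstream of it", which keeps iteration fast while still rebuilding affected models.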
Use Cases
- Automated Transformations: Schedule run in CI to keep curated models fresh.
- Data Quality Gates: Execute test on pull requests and block merges on failure.
- Documentation Portals: Generate docs and publish the site for analysts and stakeholders.
- Incremental Model Hygiene: Trigger full_refresh during backfills or schema changes.
- Dependency Management: Pin versions and run deps to ensure reproducible builds.
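A quality-gate use case can be sketched as a single test payload (hypothetical shape; `state:modified+` is standard DBT selection syntax that assumes a comparison state artifact is available in CI):

```python
# Hypothetical CI quality gate built on the test action.
# fail_fast stops at the first failing test; store_failures persists
# failing rows to the warehouse for later inspection.
quality_gate = {
    "action": "test",
    "target": "ci",               # hypothetical CI target in profiles.yml
    "select": "state:modified+",  # test only changed models and their dependents
    "fail_fast": True,
    "store_failures": True,
}
```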
Workflow/How It Works
- Step 1: Initialize or Point to a Project. Use action: init for a new project, or set project_path for an existing one.
- Step 2: Configure Profiles and Targets. Provide profiles_dir and target for warehouse connections.
- Step 3: Install Packages. Run deps to install packages from packages.yml.
- Step 4: Run and Test. Execute run with select or models, then test to validate results.
- Step 5: Compile and Document. Use compile to inspect SQL, then docs to build the catalog and site.
- Step 6: Operate and Maintain. Use seed, snapshot, and clean as needed for datasets and state.
Integration Relevance
- data_connector_tools for connectivity checks and schema exploration.
- git_action to version models, macros, and tests.
- file_manager_tools to store artifacts and publish generated docs.
- project_manager_tools to track DBT tasks and milestones.
- databricks_action when models rely on Databricks execution.
Configuration Details
- Ensure profiles.yml contains correct credentials for each target.
- Tune threads based on warehouse capacity and model complexity.
- Pin package versions in packages.yml for reproducible builds.
- Keep environment variables and vars consistent across environments.
DBT runs execute SQL on your warehouse. Choose the correct target, set threads responsibly, and use full_refresh only when necessary.
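One way to keep vars consistent across environments is to define them once and reuse them in every payload, so only the target differs between dev and prod (the helper and payload shape are illustrative assumptions):

```python
# Define shared vars once and inject them into every hypothetical
# dbt_action payload, so dev and prod never drift apart.
SHARED_VARS = {"start_date": "2024-01-01", "lookback_days": 7}

def payload(action: str, target: str, **extra) -> dict:
    # Every environment gets identical vars; only the target changes.
    return {"action": action, "target": target, "vars": SHARED_VARS, **extra}

dev = payload("run", "dev", select="staging")
prod = payload("run", "prod")
assert dev["vars"] == prod["vars"]  # same vars, different targets
```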
Limitations or Notes
- Requires DBT CLI availability in the execution environment.
- Large graphs can consume significant compute; scope runs with select.
- Adapter features vary by warehouse; validate per adapter.
- Network and warehouse quotas can impact execution times.
Output
- Command Execution: Logs with status, timings, and summary counts.
- Model Results: Per-model status and row counts.
- Test Results: Pass or fail with failure details.
- Docs and Compile: Generated site files and compiled SQL outputs.
- Errors: Structured messages with hints for resolution.