Overview

Tool Name

databricks_action

Purpose

The databricks_action tool provides programmatic control of Databricks workspaces. It enables notebook CRUD, execution, imports and exports, permissions management, and job or cluster interactions so you can orchestrate Databricks work without using the UI.

Functions Available

  1. databricks_action: Performs Databricks operations, including notebook create, read, update, delete, run, list, import, export, and status checks through the REST API.
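
The tool takes a single request object per call. A minimal, hypothetical sketch of one invocation is shown below; the field names follow the parameter table later in this document, but the wrapper shape (how your agent framework passes the request) is an assumption, and all identifiers are placeholders.

    # Hypothetical databricks_action request; the exact calling convention
    # depends on your agent framework.
    request = {
        "action": "create_notebook",
        "workspace_url": "https://adb-1234567890123.4.azuredatabricks.net",
        "access_token": "<personal-access-token>",
        "notebook_path": "/Shared/etl/run_daily",
        "language": "PYTHON",
        "notebook_content": "print('hello from an automated notebook')",
    }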

Key Features

Notebook Lifecycle

Create, read, update, list, and delete notebooks programmatically in workspace folders.
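
Notebook creation corresponds to the Databricks Workspace API import endpoint. A minimal sketch of the same operation outside the tool, assuming a PYTHON notebook in SOURCE format; the host, token, and path are placeholders.

    import base64
    import requests

    host = "https://adb-1234567890123.4.azuredatabricks.net"   # workspace_url
    headers = {"Authorization": "Bearer <personal-access-token>"}

    source = "print('hello from an automated notebook')\n"
    resp = requests.post(
        f"{host}/api/2.0/workspace/import",
        headers=headers,
        json={
            "path": "/Shared/etl/run_daily",
            "format": "SOURCE",
            "language": "PYTHON",
            "content": base64.b64encode(source.encode()).decode(),
            "overwrite": True,
        },
    )
    resp.raise_for_status()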

Execute & Monitor

Run notebooks on clusters with parameters and monitor run state, logs, and results.
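
Running a notebook on a cluster corresponds to a one-time run submission in the Databricks Jobs API. A minimal sketch, assuming Jobs API 2.1 and an existing cluster; the host, token, cluster id, and parameter values are placeholders.

    import requests

    host = "https://adb-1234567890123.4.azuredatabricks.net"
    headers = {"Authorization": "Bearer <personal-access-token>"}

    # Submit a one-time parameterized notebook run on an existing cluster.
    resp = requests.post(
        f"{host}/api/2.1/jobs/runs/submit",
        headers=headers,
        json={
            "run_name": "run_daily manual trigger",
            "timeout_seconds": 3600,
            "tasks": [{
                "task_key": "run_daily",
                "existing_cluster_id": "0401-123456-abcd123",
                "notebook_task": {
                    "notebook_path": "/Shared/etl/run_daily",
                    "base_parameters": {"run_date": "2024-01-31", "env": "dev"},
                },
            }],
        },
    )
    resp.raise_for_status()
    run_id = resp.json()["run_id"]   # use this to monitor the run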

Import & Export

Move notebooks across environments in SOURCE, HTML, JUPYTER, or DBC formats.
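
A cross-environment copy reduces to an export followed by an import. A minimal sketch using the Workspace API, assuming the notebook is a PYTHON SOURCE file; both hosts and tokens are placeholders.

    import requests

    src = {"host": "https://adb-111.azuredatabricks.net", "token": "<src-token>"}
    dst = {"host": "https://adb-222.azuredatabricks.net", "token": "<dst-token>"}
    path = "/Shared/etl/run_daily"

    # Export returns the notebook as base64-encoded content.
    exported = requests.get(
        f"{src['host']}/api/2.0/workspace/export",
        headers={"Authorization": f"Bearer {src['token']}"},
        params={"path": path, "format": "SOURCE"},
    )
    exported.raise_for_status()

    # Import the same content into the destination workspace.
    imported = requests.post(
        f"{dst['host']}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {dst['token']}"},
        json={
            "path": path,
            "format": "SOURCE",
            "language": "PYTHON",
            "content": exported.json()["content"],
            "overwrite": True,
        },
    )
    imported.raise_for_status()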

Permissions & Sharing

Manage access to notebooks and align with workspace sharing policies.

Jobs and Clusters

Target specific clusters or jobs for scheduled and batch execution flows.
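
Targeting an existing job rather than an ad-hoc run maps onto the Jobs API run-now endpoint. A minimal sketch, assuming an already-defined job; the job id and parameter values are placeholders.

    import requests

    host = "https://adb-1234567890123.4.azuredatabricks.net"
    headers = {"Authorization": "Bearer <personal-access-token>"}

    # Trigger an existing job immediately, overriding its notebook parameters.
    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers=headers,
        json={"job_id": 987654, "notebook_params": {"run_date": "2024-01-31"}},
    )
    resp.raise_for_status()
    run_id = resp.json()["run_id"]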

Input Parameters for Each Function

1. databricks_action

Parameters

  • action (String, required): Databricks operation to perform. Common values: create_notebook, run_notebook, get_notebook, delete_notebook, list_notebooks, export_notebook, import_notebook.
  • workspace_url (String, required): Databricks workspace base URL. Example: https://adb-1234567890123.4.azuredatabricks.net.
  • access_token (String, required): Personal access token used for API authentication.
  • notebook_path (String): Workspace path to the target notebook. Example: /Shared/etl/run_daily.
  • notebook_content (String): Notebook source for create or update actions.
  • cluster_id (String): Cluster identifier to run the notebook on.
  • notebook_format (String): Import or export format. One of SOURCE, HTML, JUPYTER, DBC.
  • language (String): Language for new notebooks. One of PYTHON, SCALA, SQL, R.
  • parameters (Object): Key-value pairs passed to the run.
  • timeout_seconds (Integer): Maximum runtime before the execution is marked timed out.

Genesis Tip
When running production jobs, pass parameters for date ranges or environment flags instead of hardcoding them in notebook cells.
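
Following that tip, a hypothetical run_notebook request carries the date range and environment flag in parameters; the notebook then reads them at run time instead of embedding them in cells. All values below are placeholders.

    # Hypothetical request: dates and environment arrive as parameters
    # rather than being hardcoded in notebook cells.
    request = {
        "action": "run_notebook",
        "workspace_url": "https://adb-1234567890123.4.azuredatabricks.net",
        "access_token": "<personal-access-token>",
        "notebook_path": "/Repos/team/project/etl/run_daily",
        "cluster_id": "0401-123456-abcd123",
        "parameters": {
            "start_date": "2024-01-01",
            "end_date": "2024-01-31",
            "env": "prod",
        },
        "timeout_seconds": 7200,
    }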

Use Cases

  1. Automated data processing: Run ELT notebooks on a schedule and collect run logs for audit.
  2. ML training orchestration: Kick off parameterized training runs and export resulting notebooks or metrics.
  3. Cross-environment promotion: Export a notebook as SOURCE or DBC, review in Git, then import into another workspace.
  4. Operational monitoring: Poll execution status, capture stdout and error traces, and surface results to stakeholders.
IMPORTANT: Notebook execution can modify data and incur compute costs. Use non-production clusters for testing and set timeout_seconds conservatively.
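
For operational monitoring (use case 4 above), a run is polled until it reaches a terminal state. A sketch against the Jobs API, assuming a run_id from an earlier submission; the 30-second interval and one-hour client-side deadline are arbitrary choices that echo the advice to keep timeouts conservative.

    import time
    import requests

    host = "https://adb-1234567890123.4.azuredatabricks.net"
    headers = {"Authorization": "Bearer <personal-access-token>"}
    run_id = 123456                  # returned when the run was submitted
    deadline = time.time() + 3600    # conservative client-side timeout

    # Poll the Jobs API until the run reaches a terminal life-cycle state.
    while time.time() < deadline:
        state = requests.get(
            f"{host}/api/2.1/jobs/runs/get",
            headers=headers,
            params={"run_id": run_id},
        ).json()["state"]
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            print(state.get("result_state"), state.get("state_message"))
            break
        time.sleep(30)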

Workflow/How It Works

  1. Authenticate: Provide workspace_url and access_token.
  2. Choose an action: For example create_notebook, run_notebook, or export_notebook.
  3. Provide context: Set notebook_path, optional cluster_id, parameters, and timeout_seconds as needed.
  4. Execute and observe: Receive job or run identifiers, then fetch logs or results until completion.
  5. Promote or share: Export artifacts, store with file_manager_tools, and link runs back to tasks or tickets.
Use consistent workspace paths like /Repos/team/project/ to simplify automation and reviews.

Integration Relevance

  • data_connector_tools: Open connections and run SQL inside notebooks that the action executes.
  • file_manager_tools: Capture notebook exports or logs and attach them to missions or tasks.
  • project_manager_tools: Track notebook work as todos with linked runs and outcomes.
  • git_action: Version notebook source, diffs, and promotion artifacts.
  • dbt_action: Coordinate dbt transformations before or after a Databricks notebook run.

Configuration Details

  • Generate a Databricks personal access token with the permissions required for your operations.
  • Use full workspace_url including protocol and host.
  • Ensure cluster_id points to a running or auto-start cluster that your token can access.
  • Align notebook_format with your import or export destination.
  • Parameter names in parameters must match widgets or parameter parsing in the notebook.
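
The last point is easiest to see from the notebook side. A short sketch of reading passed parameters with widgets; dbutils is available inside Databricks notebooks, and the widget names must match the keys sent in parameters.

    # Inside the notebook: declare widgets with defaults, then read the values
    # supplied as run parameters (base_parameters) at execution time.
    dbutils.widgets.text("run_date", "2024-01-01")
    dbutils.widgets.text("env", "dev")

    run_date = dbutils.widgets.get("run_date")
    env = dbutils.widgets.get("env")
    print(f"Processing {run_date} against {env}")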

Limitations or Notes

  1. Requires valid workspace access and API permissions.
  2. API rate limits and cluster capacity can throttle concurrent runs.
  3. Large exports or logs may take time to transfer.
  4. Some advanced workspace features are not exposed in REST endpoints.
  5. Tokens expire and must be rotated according to your security policy.

Output

  • Create and update: Confirmation with notebook metadata and path.
  • Run execution: Run or job identifier, life-cycle state, result summary, and log access.
  • Listing: Arrays of notebooks with paths, languages, and modification timestamps.
  • Import and export: Content in the requested notebook_format and related download details.
  • Errors: HTTP status, Databricks error code, message, and troubleshooting hints.
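
For orientation, run-execution output generally mirrors the Jobs API run object. A trimmed, hypothetical sketch of the kind of fields to expect; the exact wrapping depends on the action and the tool's response format.

    # Hypothetical, trimmed run-status payload; field names follow the Jobs API
    # run object, but the tool's exact response wrapping may differ.
    run_status = {
        "run_id": 123456,
        "state": {
            "life_cycle_state": "TERMINATED",
            "result_state": "SUCCESS",
            "state_message": "",
        },
        "run_page_url": "https://adb-1234567890123.4.azuredatabricks.net/#job/987654/run/1",
    }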