> **Building with AI coding agents?** If you're using an AI coding agent, install the official Scalekit plugin. It gives your agent full awareness of the Scalekit API — reducing hallucinations and enabling faster, more accurate code generation.
>
> - **Claude Code**: `/plugin marketplace add scalekit-inc/claude-code-authstack` then `/plugin install <auth-type>@scalekit-auth-stack`
> - **GitHub Copilot CLI**: `copilot plugin marketplace add scalekit-inc/github-copilot-authstack` then `copilot plugin install <auth-type>@scalekit-auth-stack`
> - **Codex**: run the bash installer, restart, then open Plugin Directory and enable `<auth-type>`
> - **Skills CLI** (Windsurf, Cline, 40+ agents): `npx skills add scalekit-inc/skills --list` then `--skill <skill-name>`
>
> `<auth-type>` / `<skill-name>`: `agent-auth`, `full-stack-auth`, `mcp-auth`, `modular-sso`, `modular-scim` — [Full setup guide](https://docs.scalekit.com/dev-kit/build-with-ai/)

---

# Databricks Workspace

<div class="grid grid-cols-5 gap-4 items-center">
 <div class="col-span-4">
  Connect to Databricks Workspace APIs using a Service Principal with OAuth 2.0 client credentials to manage clusters, jobs, notebooks, SQL, and more.
 </div>
 <div class="flex justify-center">
  <img src="https://cdn.scalekit.com/sk-connect/assets/provider-icons/databricks-1.svg" width="64" height="64" alt="Databricks Workspace logo" />
 </div>
</div>

Supported authentication: Service Principal (OAuth 2.0 client credentials)
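Scalekit handles token acquisition for you, but it can help to see what the client-credentials exchange looks like under the hood. The sketch below only assembles the request a HTTP client would send; the workspace host, client ID, and secret are placeholders, and the `/oidc/v1/token` endpoint and `all-apis` scope follow Databricks' machine-to-machine OAuth convention.

```python
import base64
from urllib.parse import urlencode

def build_token_request(workspace_host: str, client_id: str, client_secret: str) -> dict:
    """Build the OAuth 2.0 client-credentials token request for a
    Databricks service principal. Returns the pieces a HTTP client
    would need; nothing is sent over the network here."""
    credentials = f"{client_id}:{client_secret}".encode()
    return {
        # Workspace-level token endpoint used for machine-to-machine OAuth.
        "url": f"https://{workspace_host}/oidc/v1/token",
        "headers": {
            "Authorization": "Basic " + base64.b64encode(credentials).decode(),
            "Content-Type": "application/x-www-form-urlencoded",
        },
        # The `all-apis` scope grants the token access to the workspace REST APIs.
        "body": urlencode({"grant_type": "client_credentials", "scope": "all-apis"}),
    }

request = build_token_request("example.cloud.databricks.com", "my-client-id", "my-secret")
```

The returned bearer token is then sent in the `Authorization` header of every workspace API call.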

## Tool list

## `databricksworkspace_cluster_get`

Get details of a specific Databricks cluster by cluster ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `cluster_id` | string | Yes | The unique identifier of the cluster. |

## `databricksworkspace_cluster_start`

Start a terminated Databricks cluster by cluster ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `cluster_id` | string | Yes | The unique identifier of the cluster to start. |

## `databricksworkspace_cluster_terminate`

Terminate a Databricks cluster by cluster ID. The cluster's compute resources are released; its configuration is retained, so the cluster can be started again later.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `cluster_id` | string | Yes | The unique identifier of the cluster to terminate. |

## `databricksworkspace_clusters_list`

List all clusters in the Databricks workspace.

## `databricksworkspace_information_schema_columns`

List columns for a table using INFORMATION_SCHEMA.COLUMNS. Returns column name, data type, nullability, numeric precision/scale, max char length, and comment.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog` | string | Yes | The catalog containing the table. |
| `schema` | string | Yes | The schema containing the table. |
| `table` | string | Yes | The table to list columns for. |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to run the query on. |
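To illustrate what this tool does with its parameters, here is a sketch of the kind of `INFORMATION_SCHEMA.COLUMNS` query it runs on the given warehouse. The exact column list and quoting the tool uses are assumptions; this only shows the shape of the SQL involved.

```python
def columns_query(catalog: str, schema: str, table: str) -> str:
    """Assemble an INFORMATION_SCHEMA.COLUMNS query of the kind this
    tool executes on a SQL warehouse. Identifiers are wrapped in
    backticks; string literals escape single quotes by doubling them."""
    def lit(value: str) -> str:
        return "'" + value.replace("'", "''") + "'"
    return (
        "SELECT column_name, data_type, is_nullable, comment "
        f"FROM `{catalog}`.INFORMATION_SCHEMA.COLUMNS "
        f"WHERE table_schema = {lit(schema)} AND table_name = {lit(table)} "
        "ORDER BY ordinal_position"
    )

query = columns_query("main", "sales", "orders")
```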

## `databricksworkspace_information_schema_schemata`

List all schemas within a catalog using INFORMATION_SCHEMA.SCHEMATA. Used for schema discovery during setup.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog` | string | Yes | The catalog to list schemas from. |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to run the query on. |

## `databricksworkspace_information_schema_table_constraints`

List PRIMARY KEY and FOREIGN KEY constraints for tables in a schema using INFORMATION_SCHEMA.TABLE_CONSTRAINTS. Used to auto-detect join keys.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog` | string | Yes | The catalog containing the schema. |
| `schema` | string | Yes | The schema to list constraints from. |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to run the query on. |

## `databricksworkspace_information_schema_tables`

List tables and views in a schema using INFORMATION_SCHEMA.TABLES. Returns table name, type (MANAGED, EXTERNAL, VIEW, etc.), and comment for schema discovery.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog` | string | Yes | The catalog to query INFORMATION_SCHEMA from. |
| `schema` | string | Yes | The schema to list tables from. |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to run the query on. |

## `databricksworkspace_job_get`

Get details of a specific Databricks job by job ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `job_id` | integer | Yes | The unique identifier of the job. |

## `databricksworkspace_job_run_now`

Trigger an immediate run of a Databricks job by job ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `job_id` | integer | Yes | The unique identifier of the job to run. |

## `databricksworkspace_job_runs_list`

List all job runs in the Databricks workspace, optionally filtered by job ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `job_id` | integer | No | Filter runs by a specific job ID. If omitted, returns runs for all jobs. |
| `limit` | integer | No | The number of runs to return. Defaults to 20. Maximum is 1000. |
| `offset` | integer | No | The offset of the first run to return. |
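The `limit`/`offset` pair supports simple offset pagination. A minimal sketch of a paging loop, with `fetch_page` standing in for a call to `databricksworkspace_job_runs_list` (here backed by a fake in-memory list so the loop is self-contained):

```python
from typing import Callable, Iterator

def iterate_runs(fetch_page: Callable[[int, int], list], limit: int = 20) -> Iterator[dict]:
    """Page through job runs using the tool's offset/limit parameters.
    `fetch_page(offset, limit)` stands in for the tool call; paging
    stops when a page comes back shorter than the requested limit."""
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        yield from page
        if len(page) < limit:  # last page reached
            return
        offset += limit

# Fake backend with 45 runs, to show the paging behaviour.
runs = [{"run_id": i} for i in range(45)]
fake_fetch = lambda offset, limit: runs[offset:offset + limit]
collected = list(iterate_runs(fake_fetch))
```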

## `databricksworkspace_jobs_list`

List all jobs in the Databricks workspace.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `limit` | integer | No | The number of jobs to return. Defaults to 20. Maximum is 100. |
| `offset` | integer | No | The offset of the first job to return. |

## `databricksworkspace_scim_me_get`

Retrieve information about the currently authenticated service principal in the Databricks workspace.

## `databricksworkspace_scim_users_list`

List all users in the Databricks workspace using the SCIM v2 API.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `count` | integer | No | Maximum number of results to return per page. |
| `filter` | string | No | SCIM filter expression to narrow results (e.g. `userName eq "user@example.com"`). |
| `startIndex` | integer | No | 1-based index of the first result to return. Used for pagination. |
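When building the `filter` value programmatically, double quotes inside the value should be backslash-escaped per the SCIM filter grammar (RFC 7644). A small helper, as a sketch:

```python
def user_filter(user_name: str) -> str:
    """Build a SCIM `filter` value matching a single user by userName.
    Embedded double quotes are escaped with a backslash, as the SCIM
    filter grammar requires."""
    escaped = user_name.replace('"', '\\"')
    return f'userName eq "{escaped}"'

expression = user_filter("user@example.com")
```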

## `databricksworkspace_secrets_scopes_list`

List all secret scopes available in the Databricks workspace.

## `databricksworkspace_sql_statement_cancel`

Cancel a running SQL statement by its statement ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `statement_id` | string | Yes | The ID of the SQL statement to cancel. |

## `databricksworkspace_sql_statement_execute`

Execute a SQL statement on a Databricks SQL warehouse and return the results.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog` | string | No | The catalog to use for the statement execution. |
| `schema` | string | No | The schema to use for the statement execution. |
| `statement` | string | Yes | The SQL statement to execute. |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to execute the statement on. |
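Since `catalog` and `schema` are optional, a caller should include them only when set rather than passing empty strings. A sketch of assembling the tool's parameters (the parameter names mirror the table above; how your client actually invokes the tool depends on your agent framework):

```python
from typing import Optional

def statement_params(warehouse_id: str, statement: str,
                     catalog: Optional[str] = None,
                     schema: Optional[str] = None) -> dict:
    """Assemble parameters for the SQL statement execution tool,
    omitting the optional catalog/schema keys when they are unset."""
    params = {"warehouse_id": warehouse_id, "statement": statement}
    if catalog is not None:
        params["catalog"] = catalog
    if schema is not None:
        params["schema"] = schema
    return params

params = statement_params("wh-123", "SELECT 1", catalog="main")
```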

## `databricksworkspace_sql_statement_get`

Get the status and results of a previously executed SQL statement by its statement ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `statement_id` | string | Yes | The ID of the SQL statement to retrieve. |

## `databricksworkspace_sql_statement_result_chunk_get`

Fetch a specific result chunk for a paginated SQL statement result. Use when a statement result has multiple chunks (large result sets).

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `chunk_index` | integer | Yes | The index of the result chunk to fetch (0-based). |
| `statement_id` | string | Yes | The ID of the SQL statement. |
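Large results come back as a sequence of chunks, fetched one at a time by index. A minimal sketch of draining all chunks, where `get_chunk(index)` stands in for a call to this tool and `chunk_count` is assumed to come from the statement's result metadata (here both are faked so the loop is self-contained):

```python
from typing import Callable, Iterator

def iter_chunks(get_chunk: Callable[[int], list], chunk_count: int) -> Iterator[list]:
    """Fetch every result chunk of a statement, in order.
    `get_chunk(chunk_index)` stands in for the tool call."""
    for index in range(chunk_count):
        yield get_chunk(index)

# Fake two-chunk result set to demonstrate reassembly into rows.
fake_chunks = {0: [["a", 1]], 1: [["b", 2]]}
rows = [row for chunk in iter_chunks(fake_chunks.__getitem__, 2) for row in chunk]
```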

## `databricksworkspace_sql_warehouse_get`

Get details of a specific Databricks SQL warehouse by its ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to retrieve. |

## `databricksworkspace_sql_warehouse_start`

Start a stopped Databricks SQL warehouse by its ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to start. |

## `databricksworkspace_sql_warehouse_stop`

Stop a running Databricks SQL warehouse by its ID.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `warehouse_id` | string | Yes | The ID of the SQL warehouse to stop. |

## `databricksworkspace_sql_warehouses_list`

List all SQL warehouses available in the Databricks workspace.

## `databricksworkspace_unity_catalog_catalogs_list`

List all Unity Catalogs accessible to the service principal in the Databricks workspace.

## `databricksworkspace_unity_catalog_schemas_list`

List all schemas within a Unity Catalog in the Databricks workspace.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog_name` | string | Yes | The name of the catalog to list schemas from. |

## `databricksworkspace_unity_catalog_tables_list`

List all tables and views within a schema in a Unity Catalog in the Databricks workspace.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `catalog_name` | string | Yes | The name of the catalog containing the schema. |
| `schema_name` | string | Yes | The name of the schema to list tables from. |

---

## More Scalekit documentation

| Resource | What it contains | When to use it |
|----------|-----------------|----------------|
| [/llms.txt](/llms.txt) | Structured index with routing hints per product area | Start here — find which documentation set covers your topic before loading full content |
| [/llms-full.txt](/llms-full.txt) | Complete documentation for all Scalekit products in one file | Use when you need exhaustive context across multiple products or when the topic spans several areas |
| [sitemap-0.xml](https://docs.scalekit.com/sitemap-0.xml) | Full URL list of every documentation page | Use to discover specific page URLs you can fetch for targeted, page-level answers |
