> **Building with AI coding agents?** If you're using an AI coding agent, install the official Scalekit plugin. It gives your agent full awareness of the Scalekit API — reducing hallucinations and enabling faster, more accurate code generation.
>
> - **Claude Code**: `/plugin marketplace add scalekit-inc/claude-code-authstack` then `/plugin install <auth-type>@scalekit-auth-stack`
> - **GitHub Copilot CLI**: `copilot plugin marketplace add scalekit-inc/github-copilot-authstack` then `copilot plugin install <auth-type>@scalekit-auth-stack`
> - **Codex**: run the bash installer, restart, then open Plugin Directory and enable `<auth-type>`
> - **Skills CLI** (Windsurf, Cline, 40+ agents): `npx skills add scalekit-inc/skills --list` then `--skill <skill-name>`
>
> `<auth-type>` / `<skill-name>`: `agentkit`, `full-stack-auth`, `mcp-auth`, `modular-sso`, `modular-scim` — [Full setup guide](https://docs.scalekit.com/dev-kit/build-with-ai/)

---

# Datadog

Connect to Datadog to monitor metrics, logs, dashboards, monitors, incidents, SLOs, and more across your infrastructure.

**Authentication:** API Key
**Categories:** Monitoring, Observability, DevOps, Developer Tools
> Image: Datadog connector card shown in Scalekit's Create Connection search

## What you can do

Set up this connector to let your agent:

- **Monitor infrastructure** — list, create, update, and delete monitors; mute and unmute alerts; manage downtime schedules
- **Query metrics** — fetch timeseries data, list metric metadata and tags, submit custom metrics
- **Search logs** — search and aggregate log events; list log indexes and pipelines
- **Manage incidents** — create and retrieve incidents for incident response workflows
- **Track SLOs** — create, update, delete, and get history for service level objectives
- **Build dashboards** — create, update, delete, and list dashboards; capture graph snapshots
- **Run Synthetics** — trigger and manage synthetic tests; get test results; manage locations and global variables
- **Manage RUM** — create and list Real User Monitoring applications
- **Manage notebooks** — create, retrieve, and delete collaborative notebooks
- **Manage users and roles** — create users, assign roles, list permissions
- **Monitor hosts and containers** — list hosts, mute/unmute hosts, manage host tags, list containers and processes
- **Post events** — create and retrieve events in the Datadog event stream
- **Run service checks** — submit custom service check results

## Authentication

This connector uses **API Key** authentication. Scalekit stores and injects your Datadog API key and Application key automatically on every request.
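Datadog's API authenticates with the `DD-API-KEY` and `DD-APPLICATION-KEY` request headers; Scalekit's proxy adds these for you on every call. A minimal sketch of the header shape, for reference only — your application never needs to build these:

```python
def datadog_auth_headers(api_key: str, app_key: str) -> dict:
    """Shape of the auth headers Datadog expects; Scalekit injects these for you."""
    return {
        "DD-API-KEY": api_key,
        "DD-APPLICATION-KEY": app_key,
    }

headers = datadog_auth_headers("<api-key>", "<application-key>")
```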

## Set up the connector

Register your Datadog API credentials with Scalekit so it can proxy API requests and inject your keys automatically. Datadog uses API key authentication — no redirect URI or OAuth flow is involved.

1. ### Find your Datadog site

   Datadog hosts accounts on regional sites. You must provide your site when creating a connected account — Scalekit uses it to route API calls to the correct endpoint.

   | Site identifier | Region |
   |----------------|--------|
   | `datadoghq.com` | US1 (default) |
   | `us3.datadoghq.com` | US3 |
   | `us5.datadoghq.com` | US5 |
   | `datadoghq.eu` | EU1 |
   | `ap1.datadoghq.com` | AP1 |
   | `ddog-gov.com` | US1-FED (GovCloud) |

   If you are unsure which site your account uses, check the URL when you sign in to Datadog — for example, `app.datadoghq.eu` means your site is `datadoghq.eu`. See the [Datadog site documentation](https://docs.datadoghq.com/getting_started/site/) for details.
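Scalekit uses this site value to route requests. For intuition, Datadog's regional API endpoints follow the `api.<site>` pattern — a small sketch (illustration only; Scalekit performs this routing for you):

```python
# Site identifiers from the table above, mapped to their regions.
VALID_SITES = {
    "datadoghq.com",      # US1 (default)
    "us3.datadoghq.com",  # US3
    "us5.datadoghq.com",  # US5
    "datadoghq.eu",       # EU1
    "ap1.datadoghq.com",  # AP1
    "ddog-gov.com",       # US1-FED (GovCloud)
}

def api_base_url(site: str) -> str:
    """Return the regional Datadog API base URL for a site identifier."""
    if site not in VALID_SITES:
        raise ValueError(f"Unknown Datadog site: {site!r}")
    return f"https://api.{site}"
```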

2. ### Get your Datadog API key and Application key

   - Sign in to [Datadog](https://app.datadoghq.com) and go to **Organization Settings** → **API Keys**.
   - Copy an existing API key or click **+ New Key** to create one dedicated to this integration.

   > Image: Datadog Organization Settings API Keys page showing existing keys and a New Key button

   - Go to **Organization Settings** → **Application Keys**.
   - Copy an existing Application key or click **+ New Key** to create a dedicated one. Copy the key value immediately — Datadog will not show it again.

   > Image: Datadog New Application Key creation modal showing key name, key value to copy, and Actions API Access enabled

   > note: Both keys are required
>
> Datadog requires both an **API Key** (for authentication) and an **Application Key** (for authorization to specific actions like reading metrics and managing monitors). Keep both keys in a secure secret store.

3. ### Create a connection in Scalekit

   - In [Scalekit dashboard](https://app.scalekit.com), go to **AgentKit** → **Connections** → **Create Connection**. Find **Datadog** and click **Create**.
   - Note the **Connection name** — you will use this as `connection_name` in your code (e.g., `datadog`).
   - Click **Save**.

   > Image: Scalekit connection configuration for Datadog showing connection name and API Key authentication type

4. ### Add a connected account

   Connected accounts link a specific user identifier in your system to a set of Datadog credentials. Add them via the dashboard for testing, or via the Scalekit API in production.

   **Via dashboard (for testing)**

   - Open the connection you created and click the **Connected Accounts** tab → **Add account**.
   - Fill in:
     - **Your User's ID** — a unique identifier for this user in your system (e.g., `user_123`)
     - **API Key** — the Datadog API key from step 2
     - **Application Key** — the Datadog Application key from step 2
     - **Datadog Site** — your site identifier from step 1 (e.g., `datadoghq.com`)
   - Click **Create Account**.

   > Image: Add connected account form for Datadog in Scalekit showing User ID, API Key, Application Key, and Datadog Site fields

   **Via API (for production)**

   
     ### Node.js

```typescript
import { Scalekit } from '@scalekit-sdk/node';

const scalekit = new Scalekit(
  process.env.SCALEKIT_ENV_URL,
  process.env.SCALEKIT_CLIENT_ID,
  process.env.SCALEKIT_CLIENT_SECRET,
);

// Never hard-code credentials — read from secure storage or user input
const datadogApiKey = getUserDatadogApiKey();   // retrieve from your secure store
const datadogAppKey = getUserDatadogAppKey();
const datadogSite = getUserDatadogSite();       // e.g. 'datadoghq.com'

await scalekit.actions.upsertConnectedAccount({
  connectionName: 'datadog',
  identifier: 'user_123',
  credentials: {
    api_key: datadogApiKey,
    app_key: datadogAppKey,
    dd_site: datadogSite,
  },
});
```

     ### Python

```python
import os

from scalekit import ScalekitClient

scalekit_client = ScalekitClient(
    env_url=os.environ["SCALEKIT_ENV_URL"],
    client_id=os.environ["SCALEKIT_CLIENT_ID"],
    client_secret=os.environ["SCALEKIT_CLIENT_SECRET"],
)

# Never hard-code credentials — read from secure storage or user input
datadog_api_key = get_user_datadog_api_key()  # retrieve from your secure store
datadog_app_key = get_user_datadog_app_key()
datadog_site = get_user_datadog_site()        # e.g. 'datadoghq.com'

scalekit_client.actions.upsert_connected_account(
    connection_name="datadog",
    identifier="user_123",
    credentials={
        "api_key": datadog_api_key,
        "app_key": datadog_app_key,
        "dd_site": datadog_site,
    }
)
```

   

   > tip: Production usage tip
>
> In production, call `upsert_connected_account` (Python) / `upsertConnectedAccount` (Node.js) when a user connects their Datadog account — for example, on an integrations settings page in your app.

## Code examples

Once a connected account is set up, make API calls through the Scalekit proxy. Scalekit injects the Datadog API key and Application key automatically — you never handle credentials in your application code.

## Proxy API calls

Make authenticated requests to any Datadog API endpoint through the Scalekit proxy.

  ### Node.js

```typescript
import { Scalekit } from '@scalekit-sdk/node';

const connectionName = 'datadog'; // connection name from your Scalekit dashboard
const identifier = 'user_123';    // your user's unique identifier

const scalekit = new Scalekit(
  process.env.SCALEKIT_ENV_URL,
  process.env.SCALEKIT_CLIENT_ID,
  process.env.SCALEKIT_CLIENT_SECRET,
);
const actions = scalekit.actions;

// Fetch all monitors via Scalekit proxy — no API key needed here
const result = await actions.request({
  connectionName,
  identifier,
  path: '/api/v1/monitor',
  method: 'GET',
});
console.log(result);
```

  ### Python

```python
import os

from dotenv import load_dotenv
from scalekit import ScalekitClient

load_dotenv()

connection_name = "datadog"  # connection name from your Scalekit dashboard
identifier = "user_123"      # your user's unique identifier

scalekit_client = ScalekitClient(
    client_id=os.getenv("SCALEKIT_CLIENT_ID"),
    client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
    env_url=os.getenv("SCALEKIT_ENV_URL"),
)
actions = scalekit_client.actions

# Fetch all monitors via Scalekit proxy — no API key needed here
result = actions.request(
    connection_name=connection_name,
    identifier=identifier,
    path="/api/v1/monitor",
    method="GET"
)
print(result)
```

> tip: No OAuth flow needed
>
> Datadog uses API key auth — unlike OAuth connectors, there is no authorization link or redirect flow. Once you call `upsertConnectedAccount` (Node.js) / `upsert_connected_account` (Python), or add an account via the dashboard, your users can make requests immediately.

## Execute tools

Use `executeTool` (Node.js) or `execute_tool` (Python) to call any Datadog tool by name with typed parameters.

### Create a monitor

  ### Node.js

```typescript
const monitor = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_monitor_create',
  toolInput: {
    name: 'High CPU Usage',
    type: 'metric alert',
    query: 'avg(last_5m):avg:system.cpu.user{*} > 90',
    message: 'CPU usage is high on {{host.name}}. @slack-alerts',
  },
});
console.log('Monitor created:', monitor.id);
```

  ### Python

```python
monitor = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_monitor_create",
    tool_input={
        "name": "High CPU Usage",
        "type": "metric alert",
        "query": "avg(last_5m):avg:system.cpu.user{*} > 90",
        "message": "CPU usage is high on {{host.name}}. @slack-alerts",
    },
)
print("Monitor created:", monitor["id"])
```

### Search logs

  ### Node.js

```typescript
const logs = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_logs_search',
  toolInput: {
    query: 'service:web status:error',
    from: '2024-01-01T00:00:00Z',
    to: '2024-01-02T00:00:00Z',
    limit: 50,
  },
});
console.log('Log count:', logs.data?.length);
```

  ### Python

```python
logs = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_logs_search",
    tool_input={
        "query": "service:web status:error",
        "from": "2024-01-01T00:00:00Z",
        "to": "2024-01-02T00:00:00Z",
        "limit": 50,
    },
)
print("Log count:", len(logs.get("data", [])))
```

### Query metrics

  ### Node.js

```typescript
const metrics = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_metrics_query',
  toolInput: {
    query: 'avg:system.cpu.user{*}',
    from: 1704067200,  // Unix timestamp
    to: 1704153600,
  },
});
console.log('Series:', metrics.series);
```

  ### Python

```python
metrics = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_metrics_query",
    tool_input={
        "query": "avg:system.cpu.user{*}",
        "from": 1704067200,
        "to": 1704153600,
    },
)
print("Series:", metrics.get("series"))
```

### Create an incident

  ### Node.js

```typescript
const incident = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_incident_create',
  toolInput: {
    title: 'Database connection failures',
    customer_impacted: true,
    severity: 'SEV-2',
  },
});
console.log('Incident ID:', incident.data?.id);
```

  ### Python

```python
incident = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_incident_create",
    tool_input={
        "title": "Database connection failures",
        "customer_impacted": True,
        "severity": "SEV-2",
    },
)
print("Incident ID:", incident.get("data", {}).get("id"))
```

### Create a scheduled downtime

The `start` and `end` fields use **ISO 8601 format**, not Unix timestamps.

  ### Node.js

```typescript
const downtime = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_downtime_create',
  toolInput: {
    scope: 'env:production',
    start: '2026-06-01T02:00:00Z',
    end: '2026-06-01T04:00:00Z',
    message: 'Scheduled maintenance window',
  },
});
// Use data.id (UUID), not included[].id (user UUID)
const downtimeId = downtime.data?.id;
console.log('Downtime ID:', downtimeId);
```

  ### Python

```python
downtime = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_downtime_create",
    tool_input={
        "scope": "env:production",
        "start": "2026-06-01T02:00:00Z",
        "end": "2026-06-01T04:00:00Z",
        "message": "Scheduled maintenance window",
    },
)
# Use data["id"] (UUID), not included[0]["id"] (user UUID)
downtime_id = downtime["data"]["id"]
print("Downtime ID:", downtime_id)
```

> caution: Downtime response includes two IDs
>
> The `downtime_create` response contains both `data.id` (the downtime UUID) and `included[].id` (the creator's user UUID). Always use `data.id` for subsequent `downtime_get`, `downtime_update`, and `downtime_cancel` calls.

### Create a metric SLO

The `query` field must be a **JSON string** containing `numerator` and `denominator` metric queries. Pass `thresholds` as a JSON string too.

  ### Node.js

```typescript
const slo = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_slo_create',
  toolInput: {
    name: 'API Success Rate',
    type: 'metric',
    query: JSON.stringify({
      numerator: 'sum:requests.success{*}.as_count()',
      denominator: 'sum:requests.total{*}.as_count()',
    }),
    thresholds: JSON.stringify([{ target: 99.5, timeframe: '30d' }]),
  },
});
const sloId = slo.data?.[0]?.id;
```

  ### Python

```python
import json

slo = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_slo_create",
    tool_input={
        "name": "API Success Rate",
        "type": "metric",
        "query": json.dumps({
            "numerator": "sum:requests.success{*}.as_count()",
            "denominator": "sum:requests.total{*}.as_count()",
        }),
        "thresholds": json.dumps([{"target": 99.5, "timeframe": "30d"}]),
    },
)
slo_id = slo["data"][0]["id"]
```

### Retrieve an event

Datadog event IDs are 64-bit integers that exceed the float64 precision limit. Always use the `id_str` field from `event_create` or `events_list_v2` — not the numeric `id` field — to avoid silent precision loss.

  ### Node.js

```typescript
// Create an event and capture its string ID
const created = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_event_create',
  toolInput: {
    title: 'Deployment completed',
    text: 'v2.3.1 deployed to production',
    date_happened: Math.floor(Date.now() / 1000),
  },
});
const eventId = created.event?.id_str; // use id_str, not id

// Retrieve it
const event = await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_event_get',
  toolInput: { event_id: eventId },
});
console.log(event.event?.title);
```

  ### Python

```python
import time

created = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_event_create",
    tool_input={
        "title": "Deployment completed",
        "text": "v2.3.1 deployed to production",
        "date_happened": int(time.time()),
    },
)
event_id = created["event"]["id_str"]  # use id_str, not id

event = actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_event_get",
    tool_input={"event_id": event_id},
)
print(event["event"]["title"])
```

### Submit custom metrics

`datadog_metrics_submit` takes separate array parameters for timestamps and values — not a serialized `series` object.

  ### Node.js

```typescript
await actions.executeTool({
  connectionName,
  identifier,
  toolName: 'datadog_metrics_submit',
  toolInput: {
    metric_name: 'app.request.duration',
    metric_type: 3,                              // 3 = gauge
    points_timestamps: JSON.stringify([Math.floor(Date.now() / 1000)]),
    points_values: JSON.stringify([142.5]),
    tags: JSON.stringify(['env:production', 'service:api']),
  },
});
```

  ### Python

```python
import json
import time

actions.execute_tool(
    connection_name=connection_name,
    identifier=identifier,
    tool_name="datadog_metrics_submit",
    tool_input={
        "metric_name": "app.request.duration",
        "metric_type": 3,                         # 3 = gauge
        "points_timestamps": json.dumps([int(time.time())]),
        "points_values": json.dumps([142.5]),
        "tags": json.dumps(["env:production", "service:api"]),
    },
)
```

> tip: metric_type values
>
> `0` = unspecified, `1` = count, `2` = rate, `3` = gauge. Use `3` (gauge) for point-in-time measurements.

## Getting resource IDs

Most tools require IDs that must be fetched from the API — never guess or hard-code them.

| Resource | Tool to get ID | Field in response |
|----------|---------------|-------------------|
| Monitor ID | `datadog_monitors_list` | `array[].id` |
| Dashboard ID | `datadog_dashboards_list` | `dashboards[].id` |
| Downtime ID | `datadog_downtime_create` response | `data.id` (UUID — not `included[].id`) |
| Notebook ID | `datadog_notebooks_list` | `data[].id` |
| Incident ID | `datadog_incidents_list` | `data[].id` |
| SLO ID | `datadog_slos_list` | `data[].id` |
| Role ID | `datadog_roles_list` | `data[].id` |
| User ID | `datadog_users_list` | `data[].id` |
| RUM App ID | `datadog_rum_applications_list` | `data[].id` |
| Event ID | `datadog_event_create` response | `event.id_str` (**use `id_str`, not `id`** — see note below) |
| Metric name | `datadog_metrics_list` | `metrics[]` (requires `from` Unix timestamp) |
| Log pipeline ID | `datadog_log_pipelines_list` | `array[].id` |
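A sketch of pulling IDs out of these responses (the dicts below are abbreviated sample shapes for illustration, not full Datadog payloads):

```python
# Abbreviated sample responses, shaped like the fields in the table above.
monitors_response = [{"id": 123456, "name": "High CPU Usage"}]
downtime_response = {
    "data": {"id": "00000000-0000-0000-0000-000000000000"},  # downtime UUID
    "included": [{"id": "creator-user-uuid"}],               # creator's user UUID, not the downtime
}

monitor_id = monitors_response[0]["id"]        # array[].id
downtime_id = downtime_response["data"]["id"]  # data.id — never included[].id
```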

## Why event IDs must come from `id_str`

Datadog event IDs are 64-bit integers (e.g. `8610103547030771722`) that exceed the float64 integer-precision limit (2⁵³ ≈ 9 × 10¹⁵). If the numeric `id` field is parsed as a JSON number, it silently loses precision, the request path is built with the wrong ID, and Datadog returns a 400 "No event matches" error. Always read `event.id_str` from the response and pass it as a string to `datadog_event_get`.
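The precision loss is easy to demonstrate directly (a standalone Python illustration using the example ID above):

```python
event_id = 8610103547030771722   # a 64-bit Datadog event ID
as_float = float(event_id)       # what a JSON parser produces for the numeric `id`

# Round-tripping through float64 corrupts the low digits.
assert int(as_float) != event_id

# Round-tripping through a string preserves the ID exactly — hence id_str.
assert int(str(event_id)) == event_id
```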

## Tool list

### `datadog_api_key_validate`

Validate the current Datadog API key.

### `datadog_current_user_get`

Get the current authenticated Datadog user.

### `datadog_permissions_list`

List all available Datadog permissions.

### `datadog_ip_ranges_list`

Get all IP ranges used by Datadog agents and services.

### `datadog_dashboards_list`

List all Datadog dashboards.

Parameters:

- `count` (`integer`, optional): 50
- `filter_deleted` (`string`, optional): false
- `filter_shared` (`string`, optional): true
- `start` (`integer`, optional): 0

### `datadog_dashboard_get`

Get a specific Datadog dashboard by ID.

Parameters:

- `dashboard_id` (`string`, required): abc-def-ghi

### `datadog_dashboard_create`

Create a new Datadog dashboard.

Parameters:

- `description` (`string`, optional): Overview of my service metrics
- `layout_type` (`string`, required): ordered
- `tags` (`string`, optional): ["team:ops"]
- `template_variables` (`string`, optional): []
- `title` (`string`, required): My Service Dashboard
- `widgets` (`string`, optional): []

### `datadog_dashboard_update`

Update an existing Datadog dashboard.

Parameters:

- `dashboard_id` (`string`, required): abc-def-ghi
- `description` (`string`, optional): Overview of my service metrics
- `layout_type` (`string`, required): ordered
- `title` (`string`, required): My Service Dashboard
- `widgets` (`string`, optional): []

### `datadog_dashboard_delete`

Delete a Datadog dashboard by ID.

Parameters:

- `dashboard_id` (`string`, required): abc-def-ghi

### `datadog_graph_snapshot`

Take a snapshot of a metric graph in Datadog.

Parameters:

- `end` (`integer`, required): 1672617600
- `event_query` (`string`, optional): tags:deploy
- `metric_query` (`string`, required): avg:system.cpu.user{*}
- `start` (`integer`, required): 1672531200
- `title` (`string`, optional): CPU Usage Over Time

### `datadog_monitors_list`

List all Datadog monitors with optional filtering.

Parameters:

- `group_states` (`string`, optional): alert,warn
- `monitor_tags` (`string`, optional): team:backend
- `name` (`string`, optional): CPU monitor
- `page` (`integer`, optional): 0
- `page_size` (`integer`, optional): 100
- `tags` (`string`, optional): env:prod
- `with_downtimes` (`string`, optional): true

### `datadog_monitor_get`

Get a specific Datadog monitor by ID.

Parameters:

- `monitor_id` (`integer`, required): 123456

### `datadog_monitor_create`

Create a new Datadog monitor.

Parameters:

- `message` (`string`, optional): CPU usage is high on {{host.name}}
- `name` (`string`, required): High CPU Usage
- `no_data_timeframe` (`integer`, optional): 10
- `notify_no_data` (`string`, optional): true
- `priority` (`integer`, optional): 3
- `query` (`string`, required): avg(last_5m):avg:system.cpu.user{*} > 90
- `tags` (`string`, optional): ["env:prod"]
- `type` (`string`, required): metric alert

### `datadog_monitor_update`

Update an existing Datadog monitor.

Parameters:

- `message` (`string`, optional): CPU usage is high on {{host.name}}
- `monitor_id` (`integer`, required): 123456
- `name` (`string`, optional): High CPU Usage
- `priority` (`integer`, optional): 3
- `query` (`string`, optional): avg(last_5m):avg:system.cpu.user{*} > 90
- `tags` (`string`, optional): ["env:prod"]

### `datadog_monitor_delete`

Delete a Datadog monitor by ID.

Parameters:

- `monitor_id` (`integer`, required): 123456

### `datadog_monitor_search`

Search Datadog monitors using a query string.

Parameters:

- `page` (`integer`, optional): 0
- `per_page` (`integer`, optional): 30
- `query` (`string`, optional): cpu
- `sort` (`string`, optional): name,asc

### `datadog_monitor_mute`

Mute a Datadog monitor, optionally with a scope and end time.

Parameters:

- `end` (`integer`, optional): 1609545600
- `monitor_id` (`integer`, required): 123456
- `scope` (`string`, optional): role:db

### `datadog_monitor_unmute`

Unmute a Datadog monitor.

Parameters:

- `monitor_id` (`integer`, required): 123456

### `datadog_downtimes_list`

List all Datadog downtimes.

Parameters:

- `filter_monitor_id` (`integer`, optional): 123456
- `page_limit` (`integer`, optional): 25
- `page_offset` (`integer`, optional): 0

### `datadog_downtime_get`

Get a specific Datadog downtime by ID.

Parameters:

- `downtime_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_downtime_create`

Create a new Datadog downtime to suppress alerts.

Parameters:

- `end` (`string`, optional): 2026-04-28T12:00:00+00:00
- `message` (`string`, optional): Scheduled maintenance
- `monitor_id` (`integer`, optional): 123456
- `monitor_tags` (`string`, optional): ["*"]
- `scope` (`string`, required): env:prod
- `start` (`string`, optional): 2026-04-28T10:00:00+00:00
- `timezone` (`string`, optional): UTC

### `datadog_downtime_update`

Update an existing Datadog downtime.

Parameters:

- `downtime_id` (`string`, required): 00000000-0000-0000-0000-000000000000
- `message` (`string`, optional): Extended maintenance window
- `scope` (`string`, optional): env:prod

### `datadog_downtime_cancel`

Cancel a Datadog downtime by ID.

Parameters:

- `downtime_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_incidents_list`

List Datadog incidents with optional filtering.

Parameters:

- `filter` (`string`, optional): service:payment
- `page_offset` (`integer`, optional): 0
- `page_size` (`integer`, optional): 10
- `sort` (`string`, optional): created

### `datadog_incident_get`

Get a specific Datadog incident by ID.

Parameters:

- `incident_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_incident_create`

Create a new Datadog incident.

Parameters:

- `customer_impacted` (`boolean`, required): true
- `severity` (`string`, optional): SEV-2
- `state` (`string`, optional): active
- `title` (`string`, required): Database connection failures

### `datadog_slos_list`

List Service Level Objectives (SLOs) in Datadog.

Parameters:

- `ids` (`string`, optional): id1,id2,id3
- `limit` (`integer`, optional): 25
- `offset` (`integer`, optional): 0
- `query` (`string`, optional): my service
- `tags_query` (`string`, optional): env:prod

### `datadog_slo_get`

Get a specific Datadog Service Level Objective by ID.

Parameters:

- `slo_id` (`string`, required): abc123def456

### `datadog_slo_create`

Create a new Service Level Objective (SLO) in Datadog.

Parameters:

- `description` (`string`, optional): Tracks API availability over 7 days
- `monitor_ids` (`string`, optional): [123456, 789012]
- `name` (`string`, required): API Uptime SLO
- `tags` (`string`, optional): ["env:prod"]
- `thresholds` (`string`, required): [{"timeframe":"7d","target":99.9}]
- `query` (`string`, optional): {"numerator":"sum:requests.success{*}.as_count()","denominator":"sum:requests.total{*}.as_count()"}
- `type` (`string`, required): metric

### `datadog_slo_update`

Update an existing Datadog Service Level Objective.

Parameters:

- `description` (`string`, optional): Updated description
- `name` (`string`, optional): API Uptime SLO
- `slo_id` (`string`, required): abc123def456
- `tags` (`string`, optional): ["env:prod"]
- `thresholds` (`string`, optional): [{"timeframe":"30d","target":99.5}]
- `type` (`string`, required): monitor

### `datadog_slo_delete`

Delete a Datadog Service Level Objective by ID.

Parameters:

- `slo_id` (`string`, required): abc123def456

### `datadog_slo_history`

Get historical data for a specific Datadog SLO.

Parameters:

- `from_ts` (`integer`, required): 1609459200
- `slo_id` (`string`, required): abc123def456
- `target` (`string`, optional): 99.9
- `to_ts` (`integer`, required): 1609545600

### `datadog_metrics_list`

List active metrics reported from a given Unix timestamp.

Parameters:

- `from` (`integer`, required): 1609459200
- `host` (`string`, optional): my-host.example.com
- `tag_filter` (`string`, optional): env:prod

### `datadog_metrics_query`

Query timeseries metric data from Datadog.

Parameters:

- `from` (`integer`, required): 1609459200
- `query` (`string`, required): avg:system.cpu.user{*}
- `to` (`integer`, required): 1609545600

### `datadog_metrics_submit`

Submit metric data points to Datadog.

Parameters:

- `host` (`string`, optional): my-host.example.com
- `metric_name` (`string`, required): my.custom.metric
- `metric_type` (`integer`, required): 3
- `points_timestamps` (`string`, required): [1609459200]
- `points_values` (`string`, required): [42.5]
- `tags` (`string`, optional): ["env:prod"]

### `datadog_metric_metadata_get`

Get metadata for a specific Datadog metric.

Parameters:

- `metric_name` (`string`, required): system.cpu.user

### `datadog_metric_metadata_update`

Update metadata for a specific Datadog metric.

Parameters:

- `description` (`string`, optional): CPU usage percentage
- `metric_name` (`string`, required): system.cpu.user
- `short_name` (`string`, optional): cpu user
- `type` (`string`, optional): gauge
- `unit` (`string`, optional): percent

### `datadog_metric_tags_list`

List all tags for a specific Datadog metric.

Parameters:

- `metric_name` (`string`, required): system.cpu.user

### `datadog_logs_search`

Search and filter Datadog log events.

Parameters:

- `cursor` (`string`, optional): eyJzdGFydEF0IjoiMjAyMS0wMS0wMVQwMDowMDowMFoifQ==
- `from` (`string`, required): 2021-01-01T00:00:00Z
- `limit` (`integer`, optional): 100
- `query` (`string`, optional): service:web status:error
- `sort` (`string`, optional): timestamp
- `to` (`string`, required): 2021-01-02T00:00:00Z

### `datadog_logs_aggregate`

Aggregate Datadog log events with grouping and compute operations.

Parameters:

- `compute` (`string`, required): [{"aggregation":"count","type":"total"}]
- `from` (`string`, required): 2021-01-01T00:00:00Z
- `group_by` (`string`, optional): [{"facet":"service"}]
- `query` (`string`, optional): service:web
- `to` (`string`, required): 2021-01-02T00:00:00Z

### `datadog_log_indexes_list`

List all Datadog log indexes.

### `datadog_log_pipeline_get`

Get a specific Datadog log processing pipeline by ID.

Parameters:

- `pipeline_id` (`string`, required): my-pipeline-id

### `datadog_log_pipelines_list`

List all Datadog log processing pipelines.

### `datadog_audit_logs_search`

Search audit log events in Datadog for a given time window.

Parameters:

- `cursor` (`string`, optional): eyJzdGFydEF0IjoiMjAy...
- `from` (`string`, required): now-1h
- `limit` (`integer`, optional): 25
- `query` (`string`, optional): @action:login
- `sort` (`string`, optional): -timestamp
- `to` (`string`, required): now

### `datadog_events_query`

Query Datadog events within a time range.

Parameters:

- `count` (`integer`, optional): 100
- `end` (`integer`, required): 1609545600
- `page` (`integer`, optional): 0
- `priority` (`string`, optional): normal
- `sources` (`string`, optional): my-app
- `start` (`integer`, required): 1609459200
- `tags` (`string`, optional): env:prod
- `unaggregated` (`string`, optional): false

### `datadog_events_list_v2`

List Datadog events using the v2 API with filtering and pagination.

Parameters:

- `filter_from` (`string`, optional): 2021-01-01T00:00:00Z
- `filter_query` (`string`, optional): source:my-app
- `filter_to` (`string`, optional): 2021-01-02T00:00:00Z
- `page_cursor` (`string`, optional): eyJzdGFydEF0IjoiMjAyMS0wMS0wMVQwMDowMDowMFoifQ==
- `page_limit` (`integer`, optional): 25
- `sort` (`string`, optional): timestamp

### `datadog_event_get`

Get a specific Datadog event by ID.

Parameters:

- `event_id` (`string`, required): 1234567890

### `datadog_event_create`

Create a new event in Datadog.

Parameters:

- `aggregation_key` (`string`, optional): my-deployment
- `alert_type` (`string`, optional): info
- `date_happened` (`integer`, optional): 1609459200
- `host` (`string`, optional): web-01.example.com
- `priority` (`string`, optional): normal
- `tags` (`string`, optional): ["env:prod"]
- `text` (`string`, required): Service v2.1.0 deployed successfully.
- `title` (`string`, required): Deployment completed

### `datadog_hosts_list`

List Datadog hosts with optional filtering and sorting.

Parameters:

- `count` (`integer`, optional): 100
- `filter` (`string`, optional): env:prod
- `include_muted_hosts_data` (`string`, optional): true
- `sort_dir` (`string`, optional): desc
- `sort_field` (`string`, optional): cpu
- `start` (`integer`, optional): 0

### `datadog_hosts_totals`

Get the total number of active and up Datadog hosts.

### `datadog_host_mute`

Mute a Datadog host to suppress alerts.

Parameters:

- `end` (`integer`, optional): 1609545600
- `host_name` (`string`, required): web-01.example.com
- `message` (`string`, optional): Scheduled maintenance
- `override` (`string`, optional): false

### `datadog_host_unmute`

Unmute a Datadog host.

Parameters:

- `host_name` (`string`, required): web-01.example.com

### `datadog_host_tags_get`

Get all tags for a specific host.

Parameters:

- `host_name` (`string`, required): my-host.example.com

### `datadog_host_tags_create`

Add tags to a specific host in Datadog.

Parameters:

- `host_name` (`string`, required): my-host.example.com
- `source` (`string`, optional): users
- `tags` (`string`, required): ["env:prod","role:db"]

### `datadog_host_tags_update`

Replace all tags for a specific host in Datadog.

Parameters:

- `host_name` (`string`, required): my-host.example.com
- `source` (`string`, optional): users
- `tags` (`string`, required): ["env:prod","role:db"]
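
Unlike `datadog_host_tags_create`, which adds tags alongside existing ones, this call replaces the host's full tag set. As elsewhere, `tags` is a JSON-encoded string; a sketch of decoding it and lightly validating the conventional `key:value` shape before building the replacement body:

```python
import json

tags_arg = '["env:prod","role:db"]'  # tags arrive as a JSON string
tags = json.loads(tags_arg)

# Datadog tags are conventionally key:value pairs; a light sanity check
malformed = [t for t in tags if ":" not in t]

body = {"tags": tags}  # the update replaces all existing tags on the host
```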

### `datadog_host_tags_delete`

Remove all tags from a specific host in Datadog.

Parameters:

- `host_name` (`string`, required): my-host.example.com
- `source` (`string`, optional): users

### `datadog_containers_list`

List all containers running on your infrastructure.

Parameters:

- `filter_tags` (`string`, optional): env:prod
- `page_cursor` (`string`, optional): eyJzdGFydEF0IjoiMjAy...
- `page_size` (`integer`, optional): 1000
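
Container results are cursor-paginated: pass the `page_cursor` returned by one call into the next to walk the full list. A sketch of that loop, with a hypothetical `fetch_page` stub standing in for the real tool call:

```python
# Hypothetical stub standing in for the actual tool call; it returns
# (items, next_cursor), where next_cursor is None on the last page.
PAGES = {
    None: (["container-a", "container-b"], "cursor-1"),
    "cursor-1": (["container-c"], None),
}

def fetch_page(cursor):
    return PAGES[cursor]

containers, cursor = [], None
while True:
    items, cursor = fetch_page(cursor)
    containers.extend(items)
    if cursor is None:
        break
```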

### `datadog_processes_list`

List live processes running on your infrastructure.

Parameters:

- `from` (`integer`, optional): 1672531200
- `page_cursor` (`string`, optional): eyJzdGFydEF0IjoiMjAy...
- `page_limit` (`integer`, optional): 25
- `search` (`string`, optional): nginx
- `tags` (`string`, optional): env:prod,host:web-01
- `to` (`integer`, optional): 1672617600
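
Two conventions differ here from other tools in this connector: `tags` is a comma-separated string (not a JSON array string), and `from`/`to` are POSIX-second bounds. A small sketch of both, using the example values above:

```python
# For this tool, tags is comma-separated rather than JSON-encoded
tags_arg = "env:prod,host:web-01"
tag_list = tags_arg.split(",")

# from/to are POSIX-second bounds; the examples span a 24-hour window
window = {"from": 1672531200, "to": 1672617600}
window_hours = (window["to"] - window["from"]) / 3600
```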

### `datadog_synthetics_tests_list`

List all Datadog Synthetics tests.

Parameters:

- `page_number` (`integer`, optional): 0
- `page_size` (`integer`, optional): 25

### `datadog_synthetics_api_test_get`

Get a specific Datadog Synthetics API test by public ID.

Parameters:

- `public_id` (`string`, required): abc-def-ghi

### `datadog_synthetics_browser_test_get`

Get a specific Datadog Synthetics browser test by public ID.

Parameters:

- `public_id` (`string`, required): abc-def-ghi

### `datadog_synthetics_test_results_get`

Get the latest results for a specific Datadog Synthetics test.

Parameters:

- `from_ts` (`integer`, optional): 1609459200
- `public_id` (`string`, required): abc-def-ghi
- `to_ts` (`integer`, optional): 1609545600

### `datadog_synthetics_test_trigger`

Trigger one or more Datadog Synthetics tests to run immediately.

Parameters:

- `tests` (`string`, required): [{"public_id":"abc-def-ghi"}]
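
The `tests` argument is a JSON-encoded array of objects, each carrying a `public_id`. A sketch of building that string from a plain list of IDs (the IDs here are hypothetical placeholders):

```python
import json

public_ids = ["abc-def-ghi", "jkl-mno-pqr"]  # hypothetical test IDs

# tests must be a JSON string: an array of {"public_id": ...} objects
tests_arg = json.dumps([{"public_id": pid} for pid in public_ids])

decoded = json.loads(tests_arg)  # round-trip check of the encoding
```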

### `datadog_synthetics_test_pause_resume`

Pause or resume a Datadog Synthetics test.

Parameters:

- `new_status` (`string`, required): paused (either `live` or `paused`)
- `public_id` (`string`, required): abc-def-ghi

### `datadog_synthetics_test_delete`

Delete one or more Datadog Synthetics tests by public ID.

Parameters:

- `public_ids` (`string`, required): ["abc-def-ghi"]

### `datadog_synthetics_locations_list`

List all Datadog Synthetics locations (public and private).

### `datadog_synthetics_global_variables_list`

List all Datadog Synthetics global variables.

### `datadog_rum_applications_list`

List all Datadog RUM applications.

### `datadog_rum_application_get`

Get a specific RUM application by its ID.

Parameters:

- `id` (`string`, required): abc123

### `datadog_rum_application_create`

Create a new Datadog RUM application.

Parameters:

- `name` (`string`, required): My Web App
- `type` (`string`, required): browser

### `datadog_notebooks_list`

List all notebooks available in your Datadog account.

Parameters:

- `author_handle` (`string`, optional): user@example.com
- `count` (`integer`, optional): 100
- `include_cells` (`string`, optional): false
- `query` (`string`, optional): my notebook
- `start` (`integer`, optional): 0

### `datadog_notebook_get`

Get a specific Datadog notebook by its ID.

Parameters:

- `notebook_id` (`integer`, required): 12345

### `datadog_notebook_create`

Create a new notebook in Datadog.

Parameters:

- `cells` (`string`, optional): [{"type": "notebook_cells", "attributes": {"definition": {"type": "markdown", "text": "# Hello"}}}]
- `name` (`string`, required): My Notebook
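
The `cells` argument is a JSON-encoded list of notebook cell objects, each wrapping a cell `definition`. A sketch of composing one markdown cell and serializing it into the expected string form:

```python
import json

# cells is a JSON-encoded list of notebook cell objects
cells = [
    {
        "type": "notebook_cells",
        "attributes": {
            "definition": {"type": "markdown", "text": "# Deploy notes"}
        },
    }
]
args = {"name": "My Notebook", "cells": json.dumps(cells)}
```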

### `datadog_notebook_delete`

Delete a specific notebook by its ID.

Parameters:

- `notebook_id` (`integer`, required): 12345

### `datadog_users_list`

List Datadog users with optional filtering.

Parameters:

- `filter` (`string`, optional): john@example.com
- `page_number` (`integer`, optional): 0
- `page_size` (`integer`, optional): 10
- `sort` (`string`, optional): name
- `sort_dir` (`string`, optional): asc

### `datadog_user_get`

Get a specific Datadog user by UUID.

Parameters:

- `user_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_user_create`

Create a new Datadog user.

Parameters:

- `email` (`string`, required): user@example.com
- `name` (`string`, optional): John Doe
- `roles` (`string`, optional): ["00000000-0000-0000-0000-000000000000"]
- `title` (`string`, optional): Software Engineer
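
Here `roles` is a JSON-encoded string of role UUIDs. A sketch of decoding it and shaping a Datadog Users-v2-style body — the connector builds the real request, so treat this payload shape as illustrative only:

```python
import json

args = {
    "email": "user@example.com",
    "name": "John Doe",
    "roles": json.dumps(["00000000-0000-0000-0000-000000000000"]),
}

# Illustrative v2-style body: a "users" resource with attributes
payload = {
    "data": {
        "type": "users",
        "attributes": {"email": args["email"], "name": args["name"]},
    }
}
role_ids = json.loads(args["roles"])  # decode the JSON-string role list
```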

### `datadog_user_update`

Update an existing Datadog user.

Parameters:

- `disabled` (`string`, optional): false
- `name` (`string`, optional): John Doe
- `title` (`string`, optional): Senior Engineer
- `user_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_user_disable`

Disable a Datadog user account by UUID.

Parameters:

- `user_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_user_roles_list`

Get all roles assigned to a specific Datadog user.

Parameters:

- `user_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_roles_list`

List all Datadog roles.

Parameters:

- `filter` (`string`, optional): admin
- `page_number` (`integer`, optional): 0
- `page_size` (`integer`, optional): 10
- `sort` (`string`, optional): name

### `datadog_role_get`

Get a specific Datadog role by ID.

Parameters:

- `role_id` (`string`, required): 00000000-0000-0000-0000-000000000000

### `datadog_role_create`

Create a new Datadog role.

Parameters:

- `name` (`string`, required): Custom Admin Role
- `permissions` (`string`, optional): [{"type":"permissions","id":"00000000-0000-0000-0000-000000000000"}]

### `datadog_service_check_submit`

Submit a service check result to Datadog.

Parameters:

- `check` (`string`, required): app.is_ok
- `host_name` (`string`, required): my-host.example.com
- `message` (`string`, optional): Service is running normally.
- `status` (`integer`, required): 0 (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN)
- `tags` (`string`, optional): ["env:prod","role:db"]
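
The `status` field is Datadog's standard service-check code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A sketch of mapping a readable status name to its integer code when building the arguments:

```python
# Datadog service-check status codes
STATUS = {"ok": 0, "warning": 1, "critical": 2, "unknown": 3}

check_args = {
    "check": "app.is_ok",
    "host_name": "my-host.example.com",
    "status": STATUS["ok"],  # 0 = OK
    "message": "Service is running normally.",
}
```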


---

## More Scalekit documentation

| Resource | What it contains | When to use it |
|----------|-----------------|----------------|
| [/llms.txt](/llms.txt) | Structured index with routing hints per product area | Start here — find which documentation set covers your topic before loading full content |
| [/llms-full.txt](/llms-full.txt) | Complete documentation for all Scalekit products in one file | Use when you need exhaustive context across multiple products or when the topic spans several areas |
| [sitemap-0.xml](https://docs.scalekit.com/sitemap-0.xml) | Full URL list of every documentation page | Use to discover specific page URLs you can fetch for targeted, page-level answers |
