Data Visualization & Alerts
Turn check-in output into AI-generated charts and alert rules
Overview
Data Visualization & Alerts turns monitor check-ins into charts and alert rules. Send your job output; Telemetry.host can infer what to visualize and what alerts to suggest.
In simple terms: send your output and let the AI find what matters, such as values to chart, trends to track, and alerts to set.
This works best when a monitor has several recent check-ins with similar shape. The data can be structured JSON, plain text, command output, logs, or a mix of fields and messages.
Sending Data
You can send any text output directly to the ping endpoint:
/usr/local/bin/backup.sh 2>&1 | curl -X POST \
https://telemetry.host/ping/{YOUR_MONITOR_ID} \
-H "Content-Type: text/plain" \
--data-binary @-
That output might look like this:
Backup completed
duration_seconds=143
backup_size_mb=2840
warnings=0
You can also send arbitrary JSON:
curl -X POST https://telemetry.host/ping/{YOUR_MONITOR_ID} \
-H "Content-Type: application/json" \
-d '{
"status": "success",
"job": "nightly-backup",
"duration_seconds": 143,
"backup_size_mb": 2840,
"warnings": 0,
"destination": "s3://prod-backups/db"
}'
The AI-assisted workflow can inspect recent check-ins, extract useful values, generate chart data, and suggest alert rules such as “backup size dropped too low” or “duration increased above normal.”
Reliable Curl Delivery
For production scripts, use retry and timeout options so short network problems do not hide a real job result:
/usr/local/bin/backup.sh 2>&1 | curl -X POST \
--fail-with-body \
--retry 25 \
--retry-all-errors \
--retry-connrefused \
--retry-delay 7 \
--retry-max-time 180 \
--connect-timeout 5 \
--max-time 20 \
https://telemetry.host/ping/{YOUR_MONITOR_ID} \
-H "Content-Type: text/plain" \
--data-binary @-
These options make curl fail clearly on HTTP errors, retry temporary failures, retry refused connections during deploys, and avoid hanging forever.
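If you also want the job's exit code available for later extraction, you can capture it before pinging. A minimal sketch; the `job` function is a stand-in for your real script, and the `exit_code=` field name is an illustrative convention, not part of the Telemetry.host API:

```shell
# Stand-in for the real job; replace with /usr/local/bin/backup.sh.
job() { echo "Backup completed"; echo "duration_seconds=143"; }

OUTPUT=$(job 2>&1)   # capture stdout and stderr
STATUS=$?            # capture the job's exit status

# Append the exit code as a key=value line the DATA PREP DSL can extract.
PAYLOAD=$(printf '%s\nexit_code=%d\n' "$OUTPUT" "$STATUS")
printf '%s\n' "$PAYLOAD"
```

Pipe `$PAYLOAD` into the curl command shown above instead of piping the job's output directly, so a failed job still produces a check-in that records its exit code.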
DSL Syntax
The DATA PREP DSL is JSON that describes how check-in data becomes structured data for charts and alerts. It is not Python, shell, or JavaScript.
A pipeline can:
- Extract fields from JSON paths or text patterns
- Derive values from extracted fields
- Emit time-series points, events, or intervals for visualization
- Define alert rules that trigger when extracted or derived values cross a threshold
Basic Shape
Every DSL document is a JSON object with these top-level sections:
- version: DSL version. Use 1.
- extract: Array of rules that pull raw values from JSON or text into named fields.
- derive: Array of rules that calculate new fields from extracted fields.
- emit: Object that chooses which fields become chart series, events, or intervals.
- alerts: Array of alert rules evaluated against extracted or derived fields.
Example:
{
"version": 1,
"extract": [
{
"name": "duration_seconds",
"type": "number",
"path": "$.duration_seconds"
}
],
"emit": {
"series": [
{
"id": "duration_seconds",
"label": "Duration",
"kind": "line",
"x": "record_timestamp",
"y": "duration_seconds"
}
],
"events": [],
"intervals": []
},
"alerts": [
{
"id": "slow_backup",
"when": {
"field": "duration_seconds",
"op": ">",
"value": 600
},
"severity": "warning",
"message": "Backup took longer than expected"
}
]
}
When you edit the DSL in the dashboard, Telemetry.host validates it and previews the extracted records before the pipeline is applied.
Top-Level Keywords
| Keyword | Type | Description |
|---|---|---|
| version | number | DSL version. Use 1. |
| extract | array | Rules that create fields from JSON, text, constants, or defaults. |
| derive | array | Rules that create fields from other fields. |
| emit | object | Defines chartable series, events, and intervals. |
| alerts | array | Alert rules evaluated for each prepared check-in. |
extract Rules
Each extract rule should set a field name and one extraction method.
| Keyword | Type | Description |
|---|---|---|
| field | string | Required field name written into prepared_data.fields. |
| type | string | Optional coercion: string, number, boolean, or timestamp. |
| path | string | JSON path for structured payloads, for example $.stats.duration_ms. |
| json_path | string | Alias for path. |
| regex | string | Regex used against text payloads, or JSON rendered as text. |
| group | string or number | Regex capture group. Defaults to named group value, first capture group, or the whole match. |
| grep | string | Filters input to matching lines before regex or aggregation. Useful for logs and command output. |
| key_value | string | Extracts a key from key=value style text. Matching is case-insensitive. |
| kv_sep | string | Separator for key_value. Defaults to =; use ": " for key: value text. |
| aggregate | string | Aggregates multiple matching values: count, first, last, sum, avg, min, or max. |
| aggregation | string | Alias for aggregate. |
| value | any | Constant fallback value if no path, regex, or key-value match is found. |
| default | any | Default used when no value is found. Defaults are still type-coerced. |
Common examples:
{
"extract": [
{ "field": "cpu_pct", "path": "$.metrics.cpu_pct", "type": "number" },
{ "field": "backup_size_raw", "regex": "\\((?P<value>[\\d.]+\\s*[KMGTPE]?i?B?)\\)", "type": "string" },
{ "field": "disk_pct", "key_value": "disk_usage", "type": "number" },
{ "field": "error_lines", "grep": "ERROR", "aggregate": "count", "type": "number" }
]
}
Regex safety notes:
- Patterns are case-insensitive and multiline.
- Backreferences, lookarounds, and nested quantified groups are rejected.
- Very long patterns and slow-running patterns are rejected.
Type Coercion
type controls how raw values become prepared values:
| Type | Behavior |
|---|---|
| string | Converts the value to text. |
| number | Accepts numbers or numeric strings such as "1,024" and "45.2". |
| boolean | Accepts booleans, numbers, and strings such as true, false, yes, no, ok, success, error, and failed. |
| timestamp | Converts ISO timestamps, Unix timestamps, millisecond timestamps, or time-of-day strings to ISO timestamp text. |
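For example, the coercions above can be combined in one extract block. A sketch; the keys rows_copied, status, and started_at are hypothetical names for your job's key=value output:

```json
{
  "extract": [
    { "field": "rows", "key_value": "rows_copied", "type": "number" },
    { "field": "ok", "key_value": "status", "type": "boolean" },
    { "field": "started", "key_value": "started_at", "type": "timestamp" }
  ]
}
```

Given `rows_copied=1,024`, `status=success`, and `started_at=23:53:02`, this would yield the number 1024, the boolean true, and ISO timestamp text.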
derive Rules
Derived fields are calculated after extract.
| Keyword | Type | Description |
|---|---|---|
| field | string | Required name of the derived field. |
| op | string | Operation: copy, date, duration, duration_seconds, or parse_size. |
| source | string | Source field for copy, date, and parse_size. |
| start | string | Start timestamp field for duration and duration_seconds. |
| end | string | End timestamp field for duration and duration_seconds. |
| unit | string | Output unit for duration or parse_size. Duration units include seconds, minutes, and hours; size units include kb, mb, gb, tb, and pb. |
| precision | number | Optional decimal places for duration output. |
| rollover | string | For time-of-day durations that cross midnight, use day, 24h, or midnight. |
Operation details:
| op value | Description |
|---|---|
| copy | Copies a field value as-is. |
| date | Extracts the UTC calendar date (YYYY-MM-DD string) from a timestamp field. Useful for day-level comparisons in alerts. |
| duration | Calculates the elapsed time between two timestamp fields. Output unit is set with unit. |
| duration_seconds | Like duration but defaults the unit to seconds. |
| parse_size | Converts a human-readable size string (e.g. 762K, 2.5G) to a numeric value in the given unit. |
Examples:
{
"derive": [
{ "field": "duration_hours", "op": "duration", "start": "started_at", "end": "finished_at", "unit": "hours", "precision": 2 },
{ "field": "backup_size_mb", "op": "parse_size", "source": "backup_size_raw", "unit": "mb" },
{ "field": "job_label", "op": "copy", "source": "job_name" },
{ "field": "insert_day", "op": "date", "source": "insert_datetime" }
]
}
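For time-of-day start and end values from a nightly job that can run past midnight, the rollover keyword described above keeps the duration positive. A sketch, assuming started_at and finished_at were extracted as time-of-day strings:

```json
{
  "derive": [
    {
      "field": "duration_minutes",
      "op": "duration",
      "start": "started_at",
      "end": "finished_at",
      "unit": "minutes",
      "precision": 1,
      "rollover": "midnight"
    }
  ]
}
```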
emit Rules
emit decides what the charting layer can use.
| Keyword | Type | Description |
|---|---|---|
| series | array | Numeric or categorical points over time. |
| events | array | Point-in-time markers, such as deploys, failures, or job names. |
| intervals | array | Start/end spans, such as job runtime windows. |
Series keywords:
| Keyword | Type | Description |
|---|---|---|
| id | string | Stable identifier. The runtime stores the emitted series under the y field name. |
| label | string | Human-readable label. Defaults to the y field. |
| kind | string | Chart hint such as line or bar. Defaults to line. |
| x | string | Timestamp field for the point. If missing or invalid, the check-in timestamp is used. |
| y | string | Field containing the point value. |
Event keywords:
| Keyword | Type | Description |
|---|---|---|
| ts | string | Timestamp field for the event. If missing or invalid, the check-in timestamp is used. |
| label_field | string | Field used as the event label. |
| label | string | Static event label fallback. |
| severity_field | string | Field used as the event severity. |
| severity | string | Static severity fallback. |
Interval keywords:
| Keyword | Type | Description |
|---|---|---|
| start | string | Start timestamp field. |
| end | string | End timestamp field. |
| label_field | string | Field used as the interval label. |
| label | string | Static label fallback. |
| id | string | Fallback label if no label is provided. |
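A sketch combining all three emit types; the field names assume matching extract and derive rules exist elsewhere in the document:

```json
{
  "emit": {
    "series": [
      { "id": "duration_minutes", "label": "Duration (minutes)", "kind": "line", "x": "record_timestamp", "y": "duration_minutes" }
    ],
    "events": [
      { "ts": "record_timestamp", "label_field": "status", "severity": "info" }
    ],
    "intervals": [
      { "start": "started_at", "end": "finished_at", "label": "Backup window" }
    ]
  }
}
```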
alerts Rules
Alerts run against prepared fields for each check-in.
| Keyword | Type | Description |
|---|---|---|
| id | string | Stable alert identifier. |
| when | object | Condition object with field, op, and value. |
| severity | string | Alert severity. Defaults to info; common values are info, warning, and critical. |
| message | string | Human-readable alert message. |
Condition keywords:
| Keyword | Type | Description |
|---|---|---|
| field | string | Extracted or derived field to test. |
| op | string | Operator: >, >=, <, <=, ==, !=, or contains. Defaults to ==. |
| value | any | Comparison value. Numeric operators require a numeric field and numeric value. If value is a string that matches another field name, the two fields are compared directly (field-to-field comparison). |
Example:
{
"alerts": [
{
"id": "disk_almost_full",
"when": { "field": "disk_pct", "op": ">=", "value": 90 },
"severity": "critical",
"message": "Disk usage is above 90%"
}
]
}
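The field-to-field comparison described above can be sketched like this; insert_day and today_day are hypothetical fields assumed to be produced by earlier derive rules (for example with the date op):

```json
{
  "alerts": [
    {
      "id": "stale_data",
      "when": { "field": "insert_day", "op": "!=", "value": "today_day" },
      "severity": "warning",
      "message": "Latest insert is not from today"
    }
  ]
}
```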
Complete DSL Example
This example works well for text output like:
status=success
backup_size=762K
disk_usage=91
started_at=23:53:02
finished_at=00:12:02
DSL:
{
"version": 1,
"extract": [
{ "field": "status", "key_value": "status", "type": "string" },
{ "field": "backup_size_raw", "key_value": "backup_size", "type": "string" },
{ "field": "disk_pct", "key_value": "disk_usage", "type": "number" },
{ "field": "started_at", "key_value": "started_at", "type": "string" },
{ "field": "finished_at", "key_value": "finished_at", "type": "string" }
],
"derive": [
{ "field": "backup_size_mb", "op": "parse_size", "source": "backup_size_raw", "unit": "mb" },
{ "field": "duration_minutes", "op": "duration", "start": "started_at", "end": "finished_at", "unit": "minutes", "precision": 1 }
],
"emit": {
"series": [
{ "id": "backup_size_mb", "label": "Backup Size (MB)", "kind": "bar", "x": "record_timestamp", "y": "backup_size_mb" },
{ "id": "duration_minutes", "label": "Duration (minutes)", "kind": "line", "x": "record_timestamp", "y": "duration_minutes" },
{ "id": "disk_pct", "label": "Disk Usage (%)", "kind": "line", "x": "record_timestamp", "y": "disk_pct" }
],
"events": [
{ "ts": "record_timestamp", "label_field": "status" }
],
"intervals": []
},
"alerts": [
{
"id": "disk_almost_full",
"when": { "field": "disk_pct", "op": ">=", "value": 90 },
"severity": "critical",
"message": "Disk usage is above 90%"
}
]
}
ECharts Configuration
Charts are rendered with Apache ECharts option JSON. The configuration controls chart type, axes, labels, legends, and series.
Telemetry.host uses safe ECharts option objects, not custom JavaScript. Keep the configuration as JSON data: no functions, scripts, or browser code.
For the complete upstream reference, see the official Apache ECharts documentation.
ECharts Option Shape
An ECharts configuration is usually called an option. Telemetry.host stores and edits that option as JSON.
Common top-level keywords:
| Keyword | Type | Description |
|---|---|---|
| title | object | Chart title and subtitle. |
| tooltip | object | Hover tooltip behavior. |
| legend | object | Series legend. |
| grid | object | Plot area spacing for x/y charts. |
| xAxis | object or array | Horizontal axis configuration. |
| yAxis | object or array | Vertical axis configuration. |
| series | array | Data series to draw. This is the most important section. |
| dataset | object or array | Optional tabular data source used by one or more series. |
| dataZoom | array | Optional zoom controls for larger timelines. |
Minimal example:
{
"tooltip": { "trigger": "axis" },
"xAxis": { "type": "time" },
"yAxis": { "type": "value", "name": "Duration (seconds)" },
"series": [
{
"name": "Backup duration",
"type": "line",
"data": []
}
]
}
Axes
Most Data Visualization & Alerts charts use x/y axes.
| Keyword | Type | Description |
|---|---|---|
| type | string | Axis type: usually time, value, or category. |
| name | string | Axis label shown near the axis. |
| data | array | Category labels when type is category. |
| min / max | number or string | Optional axis bounds. |
| axisLabel | object | Label formatting and rotation. |
| axisPointer | object | Crosshair or pointer behavior for hover. |
Typical monitor timeline axes:
{
"xAxis": { "type": "time" },
"yAxis": { "type": "value", "name": "Backup Size (MB)" }
}
Use category when the x values are labels instead of timestamps:
{
"xAxis": {
"type": "category",
"data": ["backup", "verify", "upload"]
},
"yAxis": { "type": "value" }
}
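The axisLabel and min/max keywords from the table above can tidy a category chart; a minimal sketch with rotated labels and fixed bounds:

```json
{
  "xAxis": {
    "type": "category",
    "data": ["backup", "verify", "upload"],
    "axisLabel": { "rotate": 45 }
  },
  "yAxis": { "type": "value", "min": 0, "max": 100, "name": "Duration (seconds)" }
}
```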
Series
series tells ECharts what to draw. Each item is one line, bar set, scatter plot, or other chart layer.
Common series keywords:
| Keyword | Type | Description |
|---|---|---|
| name | string | Human-readable series name. Appears in tooltips and legends. |
| type | string | Chart type, commonly line, bar, or scatter. |
| data | array | Points to draw. For time charts, use [timestamp, value] pairs. |
| smooth | boolean | Smooth line segments for line series. |
| step | boolean or string | Step line style for state-like values. |
| areaStyle | object | Fills the area under a line. Use {} for default fill. |
| yAxisIndex | number | Uses a secondary y-axis when yAxis is an array. |
| encode | object | Maps dataset columns to axes when using dataset. |
Examples:
{
"series": [
{
"name": "Backup duration",
"type": "line",
"smooth": true,
"data": [
["2026-05-01T02:00:00Z", 143],
["2026-05-02T02:00:00Z", 151]
]
},
{
"name": "Warnings",
"type": "bar",
"data": [
["2026-05-01T02:00:00Z", 0],
["2026-05-02T02:00:00Z", 2]
]
}
]
}
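The step and areaStyle keywords from the table above suit state-like or slowly changing values such as disk usage; for example, a filled step line:

```json
{
  "series": [
    {
      "name": "Disk usage",
      "type": "line",
      "step": "end",
      "areaStyle": {},
      "data": [
        ["2026-05-01T02:00:00Z", 88],
        ["2026-05-02T02:00:00Z", 91]
      ]
    }
  ]
}
```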
Tooltips And Legends
Use tooltip.trigger: "axis" for time-series charts so hovering one timestamp shows all nearby values.
{
"tooltip": {
"trigger": "axis"
},
"legend": {
"top": 0
}
}
Dataset And Encode
For more complex charts, ECharts can read tabular data from dataset.source and map columns with encode.
{
"dataset": {
"source": [
["timestamp", "duration_seconds", "backup_size_mb"],
["2026-05-01T02:00:00Z", 143, 762],
["2026-05-02T02:00:00Z", 151, 801]
]
},
"xAxis": { "type": "time" },
"yAxis": { "type": "value" },
"series": [
{
"name": "Duration",
"type": "line",
"encode": { "x": "timestamp", "y": "duration_seconds" }
}
]
}
Useful Monitor Chart Patterns
Line chart for a numeric value over time:
{
"tooltip": { "trigger": "axis" },
"xAxis": { "type": "time" },
"yAxis": { "type": "value", "name": "Duration (seconds)" },
"series": [
{
"name": "Duration",
"type": "line",
"data": []
}
]
}
Bar chart for per-run counts:
{
"tooltip": { "trigger": "axis" },
"xAxis": { "type": "time" },
"yAxis": { "type": "value", "name": "Warnings" },
"series": [
{
"name": "Warnings",
"type": "bar",
"data": []
}
]
}
Two metrics with separate y-axes:
{
"tooltip": { "trigger": "axis" },
"legend": {},
"xAxis": { "type": "time" },
"yAxis": [
{ "type": "value", "name": "Duration (seconds)" },
{ "type": "value", "name": "Size (MB)" }
],
"series": [
{
"name": "Duration",
"type": "line",
"yAxisIndex": 0,
"data": []
},
{
"name": "Backup size",
"type": "bar",
"yAxisIndex": 1,
"data": []
}
]
}
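Zoomable timeline for long check-in histories, using the dataZoom keyword from the top-level table; "inside" enables scroll/drag zoom on the plot and "slider" adds a range control below it:

```json
{
  "tooltip": { "trigger": "axis" },
  "xAxis": { "type": "time" },
  "yAxis": { "type": "value", "name": "Duration (seconds)" },
  "dataZoom": [
    { "type": "inside" },
    { "type": "slider" }
  ],
  "series": [
    { "name": "Duration", "type": "line", "data": [] }
  ]
}
```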
Telemetry.host Safety Rules
When editing ECharts JSON in Telemetry.host:
- Use JSON only. Do not add JavaScript functions, callbacks, or scripts.
- Keep series present and non-empty.
- Prefer line, bar, and scatter for monitor data.
- Prefer xAxis.type: "time" for check-in timelines.
- Keep data points as JSON arrays or dataset rows.
When you edit the ECharts configuration in the dashboard, Telemetry.host validates that it can still render a usable chart.