OpenTelemetry

How Yorker emits standard OTLP signals and correlates with your existing observability stack.

Yorker is OTel-native. Every check emits standard OpenTelemetry signals (metrics, traces, and logs) over the OTLP HTTP JSON protocol. There is no proprietary telemetry format. If your backend speaks OTel, it works with Yorker.

Why OTel-native matters

Most synthetic monitoring tools store results in a proprietary system. When something breaks, you switch between your monitoring dashboard and your observability platform, manually correlating timestamps and URLs.

Yorker eliminates that context switch. Your synthetic check results land in the same backend as your application traces, logs, and metrics. A failing health check and the 500 error it triggered share the same trace ID.

Metrics emitted

Runner-direct OTLP emission (today: browser checks on any location, plus HTTP/MCP checks on private locations that have OTLP_ENDPOINT set) produces these metrics as OTLP gauge data points:

Metric                            | Type            | Description
synthetics.http.response_time     | Gauge (ms)      | Total response time from request start to last byte received.
synthetics.check.success          | Gauge (0 or 1)  | Whether the check passed all assertions.
synthetics.dns.lookup_duration    | Gauge (ms)      | Time spent resolving DNS.
synthetics.tls.handshake_duration | Gauge (ms)      | Time spent on TLS handshake.
synthetics.browser.lcp            | Gauge (ms)      | Largest Contentful Paint (browser checks only).
synthetics.browser.fcp            | Gauge (ms)      | First Contentful Paint (browser checks only).
synthetics.browser.cls            | Gauge (score)   | Cumulative Layout Shift (browser checks only).

These metrics follow OpenTelemetry semantic conventions for synthetic monitoring where they exist, and use the synthetics.* namespace for domain-specific signals.
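
For reference, a runner-direct gauge data point travels in an ordinary OTLP HTTP JSON metrics payload. A minimal sketch in Python, using the standard OTLP/JSON field names (the attribute and measurement values here are illustrative, not real identifiers):

```python
import json

# Sketch of one OTLP HTTP JSON metrics payload carrying a single
# synthetics.http.response_time gauge data point. Values are illustrative.
payload = {
    "resourceMetrics": [{
        "resource": {
            "attributes": [
                {"key": "service.name", "value": {"stringValue": "synthetics"}},
                {"key": "synthetics.check.id", "value": {"stringValue": "chk_abc123"}},
            ]
        },
        "scopeMetrics": [{
            "metrics": [{
                "name": "synthetics.http.response_time",
                "unit": "ms",
                "gauge": {"dataPoints": [{
                    "timeUnixNano": "1700000000000000000",
                    "asDouble": 342.7,
                }]},
            }]
        }],
    }]
}

print(json.dumps(payload, indent=2))
```

Any OTLP-compatible collector accepts this shape on its HTTP metrics path, which is why no Yorker-specific receiver is needed.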

Once you have configured an OTLP endpoint under Settings > Telemetry (OTLP), the control plane outbox path produces matching log events (synthetics.check.completed, synthetics.check.failed) for every check, regardless of type or location. Those log event bodies carry the same response time, status, and timing breakdown as the runner-emitted metrics, so HTTP and MCP checks on Yorker-hosted locations still land observable data in your collector; for those checks you query log events instead of gauges. Until you configure an endpoint, no outbox events are enqueued at all.
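
The log-event path uses the same OTLP HTTP JSON transport, just as a log record instead of a gauge. A hedged sketch: the body field names responseTimeMs and status come from the check.completed event as described in this doc, but the exact record layout below is our assumption for illustration:

```python
# Sketch of an OTLP log record for a synthetics.check.completed event,
# as the control-plane outbox path might ship it. Layout is illustrative.
log_record = {
    "timeUnixNano": "1700000000000000000",
    "severityText": "INFO",
    "attributes": [
        {"key": "event.name",
         "value": {"stringValue": "synthetics.check.completed"}},
    ],
    "body": {"kvlistValue": {"values": [
        {"key": "responseTimeMs", "value": {"doubleValue": 212.4}},
        {"key": "status", "value": {"stringValue": "passed"}},
    ]}},
}

# Reading the body back as a plain dict, as a query layer might:
body = {v["key"]: v["value"] for v in log_record["body"]["kvlistValue"]["values"]}
print(body["status"]["stringValue"])  # passed
```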

Resource attributes

Every metric, trace, and log event includes resource attributes that identify the check, location, and run:

Attribute                | Example               | Description
synthetics.check.id      | chk_abc123            | Unique check identifier.
synthetics.check.name    | Homepage              | Human-readable check name.
synthetics.check.type    | http, browser, or mcp | Check type.
synthetics.location.id   | loc_us_east           | Location identifier.
synthetics.location.name | US East (Ashburn)     | Human-readable location name.
synthetics.location.type | hosted or private     | Whether the location is Yorker-hosted or a private location.
synthetics.run.id        | run_xyz789            | Unique identifier for this specific execution.
url.full                 | https://example.com   | The URL being monitored.
service.name             | synthetics            | Service name used by both runner-direct emissions and control-plane outbox events.

These attributes let you filter, group, and alert on synthetic check data in your observability backend the same way you would with any other OTel-instrumented service.

Labels as resource attributes

Any labels attached to a check are emitted as additional resource attributes on every metric and trace. This lets you slice telemetry by your own dimensions — environment, service, team, criticality — without having to map check IDs back to metadata in your observability backend.

Label format        | Resource attribute
env:production      | yorker.label.env="production"
service:payments    | yorker.label.service="payments"
critical (no colon) | yorker.label.critical="true"
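
The mapping is mechanical, so it is easy to sketch; the helper below is hypothetical (Yorker performs this conversion itself) and simply mirrors the rules above:

```python
def label_to_attribute(label: str) -> tuple:
    """Map a Yorker check label to its resource-attribute form.

    "key:value" becomes yorker.label.<key> with the value as-is;
    a bare label gets the value "true". (Hypothetical helper;
    Yorker does this conversion server-side.)
    """
    if ":" in label:
        key, value = label.split(":", 1)  # split on the first colon only
    else:
        key, value = label, "true"
    return f"yorker.label.{key}", value

print(label_to_attribute("env:production"))  # ('yorker.label.env', 'production')
print(label_to_attribute("critical"))        # ('yorker.label.critical', 'true')
```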

See Create a Monitor → Labels for how to attach labels.

Trace correlation

Yorker injects a W3C traceparent header into outbound requests during check execution. This is how it works:

  1. The runner generates a trace ID for the check execution.
  2. The traceparent header is added to the HTTP request (or injected into the browser's network requests for browser checks).
  3. Your backend application picks up the trace context via its own OTel instrumentation.
  4. The synthetic check span and your backend request span share the same distributed trace.
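
Steps 1 and 2 amount to minting W3C trace-context IDs and formatting the header. A minimal sketch of that format, assuming the default sampled flag (the real runner's ID generation may differ):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C traceparent value: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    span_id = secrets.token_hex(8)    # 8 random bytes -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # version 00, flags 01 = sampled

header = make_traceparent()
print(header)
```

A backend that honors incoming trace context will parent its server span on the span ID in this header, which is what joins the synthetic request and the backend handler into one distributed trace.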

The result: when a check fails, you can click from the Yorker alert directly to the distributed trace in your observability backend. You see the synthetic request, the backend handler, the database query, and the error, all in one view.

Backend compatibility

Yorker works with any OTel-compatible backend. All Yorker telemetry is emitted as OTLP HTTP JSON — the most widely supported OTel transport. Tested backends include:

  • ClickStack (ClickHouse + HyperDX)
  • Grafana Cloud (Tempo + Mimir)
  • Datadog
  • Honeycomb
  • New Relic
  • Jaeger
  • Any OTLP-compatible collector (OpenTelemetry Collector, Alloy, Vector)

Emission model

Yorker has two OTel emitters, and they both target the same otlpEndpoint you configure on your team:

  • Runners emit OTLP metrics, traces, and logs directly to your collector — but only for browser checks today. Hosted HTTP and MCP runners do not emit OTLP from the runner process; private-location operators can enable runner-direct emission by setting OTLP_ENDPOINT/OTLP_API_KEY on their runner container when they start it.
  • The orchestrator drains an emission outbox that the control plane writes to, and ships every OTel log/span event Yorker generates to your collector — including synthetics.check.completed, synthetics.check.failed, synthetics.step.completed, alert state changes, SLO burn warnings, TLS certificate events, monitor/team insights, deployment markers, and maintenance-window events. The control plane enqueues; the orchestrator polls the outbox every ~10 seconds, runs SSRF guards, and POSTs.

The metrics catalogued above (synthetics.http.response_time, synthetics.check.success, and friends) are currently produced by the runner-direct path, which means you will see them for browser checks. For HTTP and MCP checks on hosted locations, the same information reaches your collector via the synthetics.check.completed / check.failed log events — they carry responseTimeMs, status, assertion results, timing breakdown, and the same resource attributes as the metrics, so dashboards and queries can key off either signal.
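
Because both paths carry the same response-time information, a consumer can normalize either shape. A sketch under simplified payload shapes (the helper and the shapes are ours, for illustration):

```python
from typing import Optional

def response_time_ms(signal: dict) -> Optional[float]:
    """Pull response time from either emission path.

    Accepts a simplified gauge data point (runner-direct metrics) or a
    simplified check-event body (control-plane outbox log events).
    """
    if "asDouble" in signal:  # metric data point
        return signal["asDouble"]
    return signal.get("body", {}).get("responseTimeMs")  # log event body

print(response_time_ms({"asDouble": 120.5}))                 # 120.5
print(response_time_ms({"body": {"responseTimeMs": 98.0}}))  # 98.0
```

Dashboards can apply the same idea declaratively: alert on the gauge where it exists, and fall back to the log-event field for hosted HTTP/MCP checks.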

See the Telemetry flow section in Architecture for the full table of which check type and location combinations take which path.

Setup

To configure OTel emission for your team:

  1. Go to Settings > Telemetry (OTLP) in the Yorker dashboard.
  2. Enter your OTLP endpoint URL (e.g., https://otel-collector.example.com:4318).
  3. Add any required authentication headers (API key, bearer token).
  4. Click Test Connection — Yorker's control plane dispatches a test payload and reports success or failure.
  5. Save.

From this point, the control plane starts enqueueing events for the orchestrator to ship (for every check type and every location), and browser-check runners start including the endpoint in each execution payload for runner-direct metric/trace emission. Team-level OTLP credentials are stored on the team, not per-check.