TelemetryQA Platform · Next.js · Active · Intermediate

Quality Telemetry Dashboard

A production-style QA health dashboard with snapshot/live/cloud modes and proof-first evidence

Ongoing
Started Jan 2026
Team of 1
Automation Engineer - build + test + ops receipts

TL;DR (fast skim)

Problem: CI signals get buried; teams need a reliable health + trend view.
Constraints: Public-safe (no secrets), rate-limit-safe, graceful fallbacks.
Built: Next.js dashboard + /api/quality aggregator + snapshot/live/cloud modes + caching.
Results: Evidence-backed telemetry with exports, receipts, and verified routing.
Proof: Live dashboard + system design page + evidence exports + CI.

Recruiter note: this block is designed for remote evaluation — problem, constraints, what shipped, and proof.

Recruiter quick links

Remote-friendly: each link is a short path to proof (design, runtime, evidence).

Proof

Recruiter note: this section is intentionally “evidence-first” (builds, runs, reports).

Quality Gates

This project is presented like a production system: measurable, reproducible, and backed by evidence. (Next step: make these gates fully project-specific and auto-fed into the Quality Dashboard.)

CI pipeline
Test report artifact
API tests
E2E tests
Performance checks
Security checks
Accessibility checks
Run locally
```shell
git clone https://github.com/JasonTeixeira/qa-portfolio
# See the repo README for setup. Typical patterns:
# - npm test / npm run test
# - pytest -q
# - make test
```
UI
Tests
CI gates
Performance

Quality Telemetry Dashboard (Portfolio)

Executive summary

I built a production-style quality telemetry dashboard that answers a recruiter or hiring manager's question fast:

Can this person build automation systems and make them observable?

The dashboard supports:

  • Snapshot mode: loads a committed metrics file (always works)
  • Live mode: queries GitHub Actions and pulls the latest artifact-backed QA metrics (best-effort, rate-limit-safe)
  • Cloud mode: designed to read metrics from AWS S3 via a proxy API (credential-free on Vercel)
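The three modes above degrade in order, with snapshot as the always-works floor. A minimal sketch of how that resolution could look; the `resolveMode` helper and the availability flags are illustrative assumptions, not the project's actual code:

```typescript
// Data sources the dashboard can serve a request from.
type Mode = "snapshot" | "live" | "cloud";

interface Availability {
  liveAvailable: boolean;  // GitHub Actions API reachable and under rate limit
  cloudAvailable: boolean; // S3 proxy reachable
}

// Resolve the requested mode, degrading gracefully: live and cloud fall
// back to the committed snapshot file, which always works.
function resolveMode(requested: Mode, avail: Availability): Mode {
  if (requested === "live" && avail.liveAvailable) return "live";
  if (requested === "cloud" && avail.cloudAvailable) return "cloud";
  return "snapshot";
}
```

For example, requesting live mode while the GitHub API is rate-limited resolves to snapshot instead of failing the page.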

Interview hooks (talk track)

  • Problem: CI results often get lost in logs. Teams need a simple, consistent way to see health + trends.
  • Constraints: Must be safe on a public portfolio site (no secrets leaked), and must degrade gracefully.
  • What I built: Next.js dashboard + /api/quality aggregator + caching + snapshot fallback + AWS proof artifacts.
  • Proof: The dashboard itself + evidence exports in the artifacts library.
  • What I learned: Reliability is product work — fallbacks, caching, and clear failure modes matter.
  • What I’d do next: Add alert routing (email/Slack), per-suite trend slicing, and a “release gate” summary.
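The caching and rate-limit constraints in the talk track above can be sketched as a small TTL cache in front of the live fetch. Names here (`TtlCache`, `cachedFetch`) are hypothetical, not the project's code:

```typescript
// In-memory TTL cache: keeps the last successful live result so repeated
// dashboard loads don't burn GitHub API rate limit.
interface CacheEntry<T> { value: T; expiresAt: number }

class TtlCache<T> {
  private store = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): T | undefined {
    const hit = this.store.get(key);
    if (!hit || hit.expiresAt <= now) return undefined;
    return hit.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Fetch-through helper: serve the cached metrics when fresh, otherwise call
// the (possibly rate-limited) loader; on failure return undefined so the
// caller can degrade to snapshot mode.
async function cachedFetch<T>(
  cache: TtlCache<T>,
  key: string,
  loader: () => Promise<T>,
): Promise<T | undefined> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  try {
    const fresh = await loader();
    cache.set(key, fresh);
    return fresh;
  } catch {
    return undefined;
  }
}
```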

Proof & evidence

  • Dashboard: /dashboard
  • System design: /platform/quality-telemetry
  • API endpoint: /api/quality
  • Evidence library: /artifacts#evidence

AWS receipts (exports committed to the repo):

  • CloudWatch dashboard export: /artifacts/evidence/aws-cloudwatch-dashboard-qa-portfolio-prod-api.json
  • CloudWatch alarms export: /artifacts/evidence/aws-cloudwatch-alarms-qa-portfolio-prod-api.json
  • API Gateway routes export: /artifacts/evidence/aws-apigw-routes-qa-portfolio-prod.json
  • S3 head-object export: /artifacts/evidence/aws-s3-latest-head-object.json
  • IAM role export: /artifacts/evidence/aws-iam-github-oidc-role.json
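One way to make receipts like the S3 head-object export verifiable rather than static is a freshness check. A hedged sketch: the field names follow the JSON shape of the AWS CLI `s3api head-object` output, and the freshness window is an arbitrary assumption:

```typescript
// Relevant fields from an `aws s3api head-object` JSON export.
interface HeadObjectExport {
  LastModified: string;   // ISO 8601 timestamp, e.g. "2026-01-15T12:00:00+00:00"
  ContentLength: number;  // object size in bytes
}

// A receipt counts as fresh if the object is non-empty and was modified
// within maxAgeDays of the time the check runs.
function isFreshReceipt(
  receipt: HeadObjectExport,
  now: Date,
  maxAgeDays: number,
): boolean {
  const ageMs = now.getTime() - new Date(receipt.LastModified).getTime();
  return receipt.ContentLength > 0 && ageMs >= 0 && ageMs <= maxAgeDays * 86_400_000;
}
```

Run against the committed export in CI, a check like this turns "evidence files exist" into "evidence files are recent".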

Technologies Used:

Next.js · TypeScript · Playwright · GitHub Actions · AWS

Impressed by this project?

I'm available for consulting and full-time QA automation roles. Let's build quality together.