Front-Office Roadmap
This roadmap describes the remaining work required to bring the Front-Office from its current state (v0.12.6, ~65–70% spec coverage) to production readiness (v1.0, 100% coverage).
Phases are ordered by dependency chain and impact; tasks are broken down to individual file-level changes.
Derived from the FO Coverage Analysis.
Reference specification: Next-Generation CMS — Technical Architecture V0.1 (October 2025).
Phase overview
Phase 1 — Containerization & Delivery (v0.13) ██████████░░░░░░░░░░ Blocks everything
Phase 2 — Platform Services Integration (v0.14) ░░░░░░░░░░░░░░░░░░░░ Core infra wiring
Phase 3 — Observability & Analytics (v0.15) ░░░░░░░░░░░░░░░░░░░░ Monitoring stack
Phase 4 — Quality Gates & Hardening (v0.16) ░░░░░░░░░░░░░░░░░░░░ CI enforcement
Phase 5 — Production Readiness (v1.0) ░░░░░░░░░░░░░░░░░░░░ Final audit
Phase 1 — Containerization & Delivery (v0.13)
Goal: Make the FO deployable on Kubernetes and served through the CDN.
Priority: P0 — blocks all SLOs and platform alignment.
Estimated effort: 2–3 weeks.
1.1 Dockerfile
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 1.1.1 | Create multi-stage Dockerfile | Dockerfile | Stage 1 — build: FROM node:22-alpine AS build, install pnpm, pnpm install --frozen-lockfile, pnpm generate:tokens, pnpm build. Stage 2 — runtime: FROM node:22-alpine, copy .output/ only, expose 3000, CMD ["node", ".output/server/index.mjs"]. Pin exact Node version matching package.json engines. | 0.5d |
| 1.1.2 | Create .dockerignore | .dockerignore | Exclude node_modules/, .nuxt/, .output/, tests/, __tests__/, .storybook/, *.md, .git/, data/, log/. Keep scripts/ (needed for token generation at build). | 0.25d |
| 1.1.3 | Add build and start scripts verification | package.json | Ensure build script outputs to .output/. Verify node .output/server/index.mjs starts Nitro correctly with HOST=0.0.0.0 and PORT=3000. | 0.25d |
| 1.1.4 | Test image locally | — | docker build -t cms-fo:local . → verify image < 200 MB, docker run -p 3000:3000 -e NUXT_PUBLIC_API_URL=... cms-fo:local → verify homepage renders. | 0.25d |
Acceptance: docker build succeeds, image < 200 MB, curl http://localhost:3000 returns valid HTML.
1.2 Health endpoint
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 1.2.1 | Create readiness probe route | server/routes/healthz.get.ts | Return 200 OK with JSON body: { status: "ok", version: pkg.version, uptime: process.uptime(), timestamp: new Date().toISOString() }. Read version from package.json at startup. | 0.25d |
| 1.2.2 | Create liveness probe route | server/routes/livez.get.ts | Lightweight — return 200 OK with { status: "alive" }. No external calls. | 0.1d |
| 1.2.3 | Create startup probe route | server/routes/startupz.get.ts | Check that Nitro storage is initialized. Return 200 if ready, 503 if not. | 0.1d |
Acceptance: GET /healthz, /livez, /startupz return correct status codes. No auth required.
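The readiness payload from task 1.2.1 can be sketched as a pure builder, which keeps the route handler trivial and the shape testable. The function signature and the injection of `version`/`uptime` as parameters are illustration choices, not part of the spec; in `server/routes/healthz.get.ts` the values would come from `package.json` and `process.uptime()`.

```typescript
// Sketch of the /healthz response body from task 1.2.1.
// version and uptime are injected so the builder stays pure and testable;
// the real route would read them from package.json and process.uptime().
interface HealthzBody {
  status: "ok";
  version: string;
  uptime: number;
  timestamp: string;
}

function buildHealthzBody(version: string, uptimeSeconds: number, now: Date): HealthzBody {
  return {
    status: "ok",
    version,
    uptime: uptimeSeconds,
    timestamp: now.toISOString(),
  };
}
```

In the actual route this would be wrapped in a `defineEventHandler` that calls `buildHealthzBody(pkg.version, process.uptime(), new Date())`.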
1.3 Helm chart
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 1.3.1 | Scaffold Helm chart | helm/cms-fo/Chart.yaml | apiVersion: v2, name: cms-fo, version: 0.13.0, appVersion: 0.13.0. | 0.1d |
| 1.3.2 | Create values.yaml | helm/cms-fo/values.yaml | replicaCount: 2, image.repository, image.tag, resources.limits: { cpu: 500m, memory: 512Mi }, resources.requests: { cpu: 100m, memory: 128Mi }, env map for all NUXT_PUBLIC_* vars, REDIS_*, OTEL_*. | 0.25d |
| 1.3.3 | Create Deployment template | helm/cms-fo/templates/deployment.yaml | Pod spec with: livenessProbe: httpGet /livez (period 30s), readinessProbe: httpGet /healthz (period 10s), startupProbe: httpGet /startupz (failureThreshold 30, period 5s), env from values and Secrets, imagePullPolicy: IfNotPresent. | 0.5d |
| 1.3.4 | Create Service template | helm/cms-fo/templates/service.yaml | ClusterIP service on port 80 → container port 3000. | 0.1d |
| 1.3.5 | Create HPA template | helm/cms-fo/templates/hpa.yaml | minReplicas: 2, maxReplicas: 10, target CPU 70%, target memory 80%. | 0.1d |
| 1.3.6 | Create Ingress template | helm/cms-fo/templates/ingress.yaml | Configurable host, TLS from Secret, annotations for platform Ingress Controller. | 0.25d |
| 1.3.7 | Create ConfigMap template | helm/cms-fo/templates/configmap.yaml | Non-sensitive NUXT_PUBLIC_* variables. | 0.1d |
| 1.3.8 | Create ExternalSecret template | helm/cms-fo/templates/external-secret.yaml | For sensitive vars (Redis password, API keys) — placeholder for Vault/Kubernetes Secrets integration. | 0.1d |
| 1.3.9 | Validate chart | — | helm lint helm/cms-fo/, helm template cms-fo helm/cms-fo/ → verify YAML output. helm install --dry-run on staging. | 0.25d |
Acceptance: helm install cms-fo helm/cms-fo/ --namespace cms-staging deploys pods that pass all probes.
1.4 CDN surrogate keys & purge
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 1.4.1 | Create surrogate-key middleware | server/middleware/surrogate-keys.ts | Intercept every SSR response. Compute Surrogate-Key header value from: route path segments (/articles/my-slug → articles articles/my-slug), tenant_id from runtime config, site_id from config, content type (page, article, category, tag, homepage). Format: space-separated keys. | 0.5d |
| 1.4.2 | Add tenant/site prefixed keys | server/middleware/surrogate-keys.ts | Prepend tenant:{id} and site:{id} to every key set. E.g. tenant:demo site:1 articles articles/my-slug. This enables per-tenant and per-site purges. | 0.25d |
| 1.4.3 | Skip surrogate keys for preview | server/middleware/surrogate-keys.ts | If preview_token query param present, do NOT set Surrogate-Key header (already handled by preview-no-store.ts with Cache-Control: no-store). | 0.1d |
| 1.4.4 | Create purge API route | server/api/cdn-purge.post.ts | Accept POST with { keys: string[] } body. Call CDN vendor purge API (abstract behind utils/cdn-client.ts). Require an internal shared secret (X-Purge-Secret header) — not public. Return { purged: keys.length }. | 0.5d |
| 1.4.5 | Create CDN client abstraction | server/utils/cdn-client.ts | purgeSurrogateKeys(keys: string[]): Promise<void>. Initial implementation: HTTP call to CDN purge endpoint (vendor-specific). Env vars: CDN_PURGE_URL, CDN_PURGE_TOKEN. Stub for dev mode (log only). | 0.5d |
| 1.4.6 | Document purge contract for API team | docs/cdn-purge-contract.md (in cms-doc) | Document the POST /api/cdn-purge endpoint contract so the API team can call it on publish events. Include key naming convention. | 0.25d |
Acceptance: SSR responses include Surrogate-Key header. POST /api/cdn-purge with correct secret purges the CDN. Preview responses have no surrogate keys.
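The key computation described in tasks 1.4.1 and 1.4.2 can be sketched as a pure function. The prefix-per-segment scheme and the `tenant:`/`site:` prefixes follow the task descriptions; the content-type keys (`article`, `page`, etc.) are omitted here for brevity.

```typescript
// Sketch of the Surrogate-Key computation from tasks 1.4.1-1.4.2.
// "/articles/my-slug" with tenant "demo", site "1" yields
// "tenant:demo site:1 articles articles/my-slug".
function buildSurrogateKeys(path: string, tenantId: string, siteId: string): string {
  const segments = path.split("/").filter(Boolean);
  const keys: string[] = [`tenant:${tenantId}`, `site:${siteId}`];
  // Emit one key per path prefix so purging "articles" also
  // invalidates every article detail page.
  for (let i = 0; i < segments.length; i++) {
    keys.push(segments.slice(0, i + 1).join("/"));
  }
  return keys.join(" ");
}
```

The middleware would set this string as the `Surrogate-Key` header on every SSR response, except when a preview token is present (task 1.4.3).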
1.5 Cache-Control audit
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 1.5.1 | Review current route rules | nuxt.config.ts → nitro.routeRules | Current state: homepage s-maxage=1800, articles/tags/categories s-maxage=300, /** catch-all s-maxage=300, assets max-age=31536000 immutable. | — |
| 1.5.2 | Add stale-if-error to all public routes | nuxt.config.ts | Add stale-if-error=86400 (24h) to all Cache-Control headers. Ensures CDN serves stale content if origin is down. Currently missing. | 0.25d |
| 1.5.3 | Separate ISR values from CDN values | nuxt.config.ts | Clarify: ISR isr: 300 controls Nitro revalidation, s-maxage controls CDN TTL. They should be independent. Set CDN s-maxage higher (e.g. 3600) and use surrogate-key purge for instant invalidation instead of relying on short TTLs. | 0.25d |
| 1.5.4 | Add Vary: Accept-Language for i18n | nuxt.config.ts or server/middleware/surrogate-keys.ts | Ensure CDN caches separate versions per locale. Add Vary: Accept-Language, X-Site-Id header to SSR responses. | 0.25d |
| 1.5.5 | Validate with curl tests | — | Write a shell script scripts/test-cache-headers.sh that curls each route type and asserts correct Cache-Control, Surrogate-Key, Vary headers. | 0.25d |
Acceptance: All public routes return Cache-Control with s-maxage, stale-while-revalidate, stale-if-error. Vary header includes Accept-Language. Preview routes return no-store.
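The header shape targeted by tasks 1.5.2 and 1.5.3 can be sketched as a small builder. The `stale-if-error=86400` value is from task 1.5.2; the `stale-while-revalidate` default of 60 seconds here is an assumption for illustration, not a value from the roadmap.

```typescript
// Sketch of the target Cache-Control value from tasks 1.5.2-1.5.3:
// CDN TTL (s-maxage), background revalidation (stale-while-revalidate),
// and a 24h stale-if-error window so the CDN keeps serving when the
// origin is down. The swr default of 60s is an assumed value.
function buildCacheControl(sMaxAge: number, swr = 60, sie = 86400): string {
  return `public, s-maxage=${sMaxAge}, stale-while-revalidate=${swr}, stale-if-error=${sie}`;
}
```

With surrogate-key purge in place, `s-maxage` can be raised (e.g. to 3600) since invalidation no longer depends on TTL expiry.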
Phase 2 — Platform Services Integration (v0.14)
Goal: Connect the FO to all shared One-Platform services.
Priority: P1 — required for functional completeness (spec v0.5–v0.8).
Estimated effort: 3–4 weeks.
2.1 Redis integration
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 2.1.1 | Add Redis driver to Nitro storage | nuxt.config.ts | Change nitro.storage.cache.driver from 'memory' to 'redis'. Use env vars: REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, REDIS_DB. Keep 'memory' fallback when REDIS_HOST is unset (dev mode). | 0.25d |
| 2.1.2 | Add unstorage Redis driver dependency | package.json | pnpm add unstorage ioredis — the Redis driver ships with unstorage and connects through ioredis (or use the Nitro built-in redis driver if available). Verify version compatibility with Nitro. | 0.1d |
| 2.1.3 | Create Redis health check | server/routes/healthz.get.ts | Extend the existing health endpoint: ping Redis storage, include { redis: "connected" } or { redis: "disconnected" } in response. Return 503 if Redis is down and REDIS_HOST is configured. | 0.25d |
| 2.1.4 | Add Redis connection environment vars | helm/cms-fo/values.yaml | Add REDIS_HOST, REDIS_PORT, REDIS_PASSWORD (from Secret), REDIS_DB to the env map. | 0.1d |
Acceptance: GET /healthz reports Redis status. Nitro cache is shared between pod restarts. Memory fallback works in local dev.
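The driver selection in task 2.1.1 can be sketched as a pure resolver. The option names follow Nitro's storage mount configuration, but exact driver option names may differ per unstorage version, so treat this as a shape sketch.

```typescript
// Sketch of the storage driver selection from task 2.1.1: Redis when
// REDIS_HOST is set, in-memory otherwise (local dev fallback).
type StorageConfig =
  | { driver: "memory" }
  | { driver: "redis"; host: string; port: number; password?: string; db: number };

function resolveCacheStorage(env: Record<string, string | undefined>): StorageConfig {
  if (!env.REDIS_HOST) return { driver: "memory" };
  return {
    driver: "redis",
    host: env.REDIS_HOST,
    port: Number(env.REDIS_PORT ?? 6379),
    password: env.REDIS_PASSWORD,
    db: Number(env.REDIS_DB ?? 0),
  };
}
```

The resolved object would be assigned to `nitro.storage.cache` in `nuxt.config.ts`.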
2.2 Preview token validation via Redis
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 2.2.1 | Create preview token validation middleware | server/middleware/preview-token-redis.ts | On X-Preview-Token or ?preview_token= in request: read preview-token:{value} from Redis storage. If key exists and not expired, set event.context.preview = { token, validUntil } and continue. If key missing/expired, return 403 { error: "Invalid or expired preview token" }. Skip if no token present (public request). | 0.5d |
| 2.2.2 | Remove hardcoded preview logic | composables/usePreview.ts | Refactor: keep previewToken extraction from query, but rely on the server middleware for validation. Remove any client-side-only validation. The server already sets Cache-Control: no-store via preview-no-store.ts. | 0.25d |
| 2.2.3 | Add preview token info to SSR context | server/middleware/preview-token-redis.ts | Store validated preview metadata in event.context.preview so downstream handlers (page rendering) can detect preview mode server-side. | 0.1d |
| 2.2.4 | Write integration test | tests/integration/preview-token.spec.ts | Test with valid token → 200. Expired token → 403. Missing token → normal public response. Ensure Cache-Control: no-store is set in preview mode. | 0.5d |
Acceptance: Preview tokens are validated server-side against Redis. Multi-instance FO serves identical preview content. Expired tokens are rejected.
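The decision logic of task 2.2.1 can be sketched independently of Redis and h3. The `validUntil` field comes from the task description; the overall record shape and the three-way result type are assumptions for illustration.

```typescript
// Sketch of the server-side preview decision from task 2.2.1.
// `record` is what would be read from Redis under preview-token:{value};
// null means the key is missing. The record shape is assumed.
interface PreviewTokenRecord {
  validUntil: string; // ISO timestamp
}

type PreviewDecision =
  | { kind: "public" }                 // no token on the request
  | { kind: "preview"; token: string } // valid, unexpired token
  | { kind: "forbidden" };             // missing or expired -> 403

function decidePreview(
  token: string | undefined,
  record: PreviewTokenRecord | null,
  now: Date,
): PreviewDecision {
  if (!token) return { kind: "public" };
  if (!record) return { kind: "forbidden" };
  if (new Date(record.validUntil).getTime() <= now.getTime()) return { kind: "forbidden" };
  return { kind: "preview", token };
}
```

On a `preview` result the middleware would set `event.context.preview` (task 2.2.3); on `forbidden` it would return 403.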
2.3 Remove temporary JSON storage
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 2.3.1 | Create API proxy for homepage blocks | server/api/homepage-blocks.get.ts | Replace current file-read logic with $fetch call to CMS API: GET /public/homepage-blocks?site={siteId}. Forward X-Tenant-Id, Accept-Language, X-Preview-Token headers. Cache response in Nitro storage (Redis) with homepage-blocks:site:{id} key, TTL 5 min. | 0.5d |
| 2.3.2 | Remove PUT route | server/api/homepage-blocks.put.ts | Delete file entirely. Homepage block editing is handled by the BO → API, not by the FO. | 0.1d |
| 2.3.3 | Delete local JSON data files | data/homepage-blocks.json | Remove the temporary JSON file. Ensure no other code references it. | 0.1d |
| 2.3.4 | Update useHomepageBlocks composable | composables/useHomepageBlocks.ts | Update the fetch URL from /api/homepage-blocks?site=... (local) to go through the new proxy route (same URL, but the server-side handler now calls the real API). Verify useAsyncData caching still works. | 0.25d |
| 2.3.5 | Add surrogate key for homepage | server/api/homepage-blocks.get.ts | Set Surrogate-Key: homepage site:{id} tenant:{tenantId} on response so CDN purge covers homepage changes. | 0.1d |
Acceptance: Homepage renders from API data. No JSON files in data/. Homepage invalidation works via CDN purge.
2.4 Search engine integration
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 2.4.1 | Create search proxy route | server/api/search.get.ts | Accept ?q=, ?type=, ?category=, ?tag=, ?locale=, ?page=, ?limit=. Forward to CMS API search endpoint: GET /public/search?q=.... Return JSON with { results: [], total: number, facets: {} }. Cache for 60s in Nitro storage. | 0.5d |
| 2.4.2 | Add search result type definition | types/search.ts | SearchResult { id, title, type: 'article'|'page', url, excerpt, score, highlights, publishedAt, category?, thumbnail? }. SearchResponse { results: SearchResult[], total: number, facets: { types: Record<string, number>, categories: Record<string, number> }, query: string, page: number }. | 0.25d |
| 2.4.3 | Refactor useSearch composable | composables/useSearch.ts | Replace current dual-fetch (articles + pages) with single call to /api/search. Add: facets ref, totalResults ref, currentPage ref, loadMore() method, selectedFacets ref. Keep debounce at 300ms. Remove manual result merging logic. | 0.5d |
| 2.4.4 | Update search results page | pages/search.vue | Add: faceted sidebar (filter by type, category), pagination, result highlighting, “no results” state with suggestions, loading skeleton. Use new useSearch composable. | 0.5d |
| 2.4.5 | Add search suggestions component | components/SearchSuggestions.vue | Dropdown below search input. Show top 5 results with title + type badge. Debounce 200ms. Keyboard navigation (arrow keys + enter). Close on Escape or click-outside. | 0.5d |
| 2.4.6 | Integrate suggestions in Header | components/Header.vue | Replace current search input behavior with <SearchSuggestions /> component. Wire to useSearch.handleSearch() on Enter. | 0.25d |
| 2.4.7 | Write E2E test | tests/e2e/search.spec.ts | Test: type query → suggestions appear → click suggestion → navigate. Full search page with facets. Empty query. No results. | 0.5d |
Acceptance: Search works end-to-end via the search engine. Faceted results display correctly. Suggestions appear in < 300ms. Empty state handled.
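The proxy in task 2.4.1 should forward only the documented query parameters. A minimal sketch of that whitelisting, assuming the upstream path `/public/search` from the task description:

```typescript
// Sketch of the upstream URL construction for the search proxy (2.4.1):
// forward only whitelisted params, dropping empty or unknown values.
function buildSearchUrl(base: string, params: Record<string, string | undefined>): string {
  const allowed = ["q", "type", "category", "tag", "locale", "page", "limit"];
  const qs = new URLSearchParams();
  for (const key of allowed) {
    const value = params[key];
    if (value) qs.set(key, value);
  }
  return `${base}/public/search?${qs.toString()}`;
}
```

The handler would then `$fetch` this URL, forward the tenant/locale headers, and cache the response for 60s in Nitro storage.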
2.5 Matomo analytics + CMP
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 2.5.1 | Create CMP composable | composables/useConsent.ts | Manage user consent state. Expose: hasAnalyticsConsent: Ref<boolean>, hasMarketingConsent: Ref<boolean>, showConsentBanner: Ref<boolean>, acceptAll(), rejectAll(), acceptCategory(category). Persist to cookie cms_consent with SameSite=Lax; Secure. | 0.5d |
| 2.5.2 | Create consent banner component | components/ConsentBanner.vue | GDPR-compliant banner: title, description, “Accept all”, “Reject all”, “Customize” buttons. Customize panel: checkboxes for analytics, marketing. Link to privacy policy. Respects prefers-reduced-motion. Accessible (focus trap, ARIA roles). | 0.5d |
| 2.5.3 | Integrate banner in layout | layouts/default.vue | Render <ConsentBanner /> conditionally based on useConsent().showConsentBanner. | 0.1d |
| 2.5.4 | Create Matomo plugin | plugins/matomo.client.ts | Watch useConsent().hasAnalyticsConsent. When true: inject Matomo tracker script via useHead(). Configure window._paq with: site ID from env NUXT_PUBLIC_MATOMO_SITE_ID, tracker URL from NUXT_PUBLIC_MATOMO_URL, disableCookies if consent not full. When consent revoked: call _paq.push(['optUserOut']) and remove cookies. | 0.5d |
| 2.5.5 | Add custom dimensions | plugins/matomo.client.ts | After tracker init, push custom dimensions: dimension1: tenantId, dimension2: siteId, dimension3: locale, dimension4: template name, dimension5: page type (article/page/category/homepage). | 0.25d |
| 2.5.6 | Track SPA navigation | plugins/matomo.client.ts | Use router.afterEach() hook to push trackPageView with correct title and URL on each client-side navigation. | 0.25d |
| 2.5.7 | Honor Do Not Track | plugins/matomo.client.ts | Check navigator.doNotTrack === '1'. If set, do not init Matomo even with consent (spec 5.3.5). | 0.1d |
| 2.5.8 | Add env vars to config | nuxt.config.ts, helm/cms-fo/values.yaml | Add NUXT_PUBLIC_MATOMO_URL, NUXT_PUBLIC_MATOMO_SITE_ID to runtimeConfig.public and Helm values. | 0.1d |
| 2.5.9 | Write E2E test | tests/e2e/consent-matomo.spec.ts | Test: banner appears on first visit → reject → no Matomo requests. Accept → Matomo requests fired. Revoke → optUserOut called. DNT header → no tracking. | 0.5d |
Acceptance: Matomo loads only after explicit user consent. Zero tracking without consent. DNT respected. Custom dimensions sent. SPA navigations tracked.
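The loading rule spread across tasks 2.5.4 and 2.5.7 reduces to one predicate: analytics consent is necessary but not sufficient, because Do Not Track overrides it. A minimal sketch:

```typescript
// Sketch of the Matomo gating from tasks 2.5.4 and 2.5.7: the tracker
// is injected only with explicit analytics consent, and DNT wins even
// over consent (spec 5.3.5).
function shouldLoadMatomo(hasAnalyticsConsent: boolean, doNotTrack: string | null): boolean {
  if (doNotTrack === "1") return false; // DNT overrides consent
  return hasAnalyticsConsent;
}
```

`plugins/matomo.client.ts` would evaluate this against `useConsent().hasAnalyticsConsent` and `navigator.doNotTrack` before injecting the tracker script.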
Phase 3 — Observability & Analytics (v0.15)
Goal: Instrument the FO for production monitoring.
Priority: P1 — required for production operations (spec 5.5).
Estimated effort: 2 weeks.
3.1 OpenTelemetry server-side instrumentation
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.1.1 | Install OTel dependencies | package.json | pnpm add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/exporter-metrics-otlp-http @opentelemetry/resources @opentelemetry/semantic-conventions. | 0.1d |
| 3.1.2 | Create OTel initialization module | server/utils/otel.ts | Initialize NodeSDK with: resource (service.name: "cms-fo", service.version, deployment.environment), traceExporter (OTLP HTTP to OTEL_EXPORTER_OTLP_ENDPOINT), metricReader (periodic, 30s interval), auto-instrumentations for http, fetch. Only init if OTEL_EXPORTER_OTLP_ENDPOINT env var is set. No-op in dev. | 0.5d |
| 3.1.3 | Register OTel in Nitro plugin | server/plugins/otel.ts | Import and start the SDK from server/utils/otel.ts at Nitro startup. Ensure graceful shutdown on SIGTERM. | 0.25d |
| 3.1.4 | Add tenant/site span attributes | server/middleware/otel-attributes.ts | For every request: add span attributes tenant.id (from runtime config), site.id, http.route (path), http.locale (from Accept-Language). | 0.25d |
| 3.1.5 | Add env vars | nuxt.config.ts, helm/cms-fo/values.yaml | OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_SERVICE_NAME=cms-fo, OTEL_TRACES_SAMPLER=parentbased_traceidratio, OTEL_TRACES_SAMPLER_ARG=0.01 (1% in prod, 1.0 in staging). | 0.1d |
Acceptance: Traces appear in Grafana Tempo with service.name=cms-fo. Spans include tenant/site attributes.
3.2 Trace propagation to API
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.2.1 | Inject traceparent in API calls | utils/api.ts → secureFetch() | In the getDefaultHeaders() function, if running server-side and OTel is active, get current span context and inject traceparent header using W3CTraceContextPropagator. Import from @opentelemetry/core. | 0.5d |
| 3.2.2 | Create custom SSR render span | server/plugins/otel.ts | Wrap SSR rendering in a custom span: tracer.startSpan('ssr.render', { attributes: { 'http.route': url } }). Record duration. End span after response sent. | 0.25d |
| 3.2.3 | Verify end-to-end trace | — | Deploy FO + API on staging. Trigger a page load. Verify in Grafana Tempo that the trace shows: FO ssr.render → FO fetch /public/articles → API controller. | 0.25d |
Acceptance: Distributed traces span FO → API with correct parent-child relationships.
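The header injected in task 3.2.1 follows the W3C Trace Context format. In the real middleware the ids come from the active OTel span context via `W3CTraceContextPropagator`; here they are parameters so the formatting itself is testable:

```typescript
// Sketch of the W3C traceparent header from task 3.2.1:
// version "00" - 32-hex trace id - 16-hex span id - 2-hex flags.
function formatTraceparent(traceId: string, spanId: string, sampled: boolean): string {
  return `00-${traceId}-${spanId}-${sampled ? "01" : "00"}`;
}
```

`secureFetch()` would add this as the `traceparent` request header on server-side API calls so the API's spans attach to the FO trace.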
3.3 Prometheus metrics
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.3.1 | Create metrics endpoint | server/routes/metrics.get.ts | Expose GET /metrics in Prometheus text format. Include: ssr_render_duration_seconds{route, status} (histogram), http_requests_total{route, method, status} (counter), cache_hits_total / cache_misses_total (counters), active_connections (gauge). Use OTel MeterProvider or standalone prom-client. | 0.5d |
| 3.3.2 | Instrument SSR render duration | server/plugins/otel.ts | Record histogram metric for each SSR response with route and status code labels. | 0.25d |
| 3.3.3 | Instrument cache hit/miss | utils/cache.ts | In getCachedResponse(): increment cache_hits_total or cache_misses_total counter. Label by cache_layer (memory, storage, redis). | 0.25d |
| 3.3.4 | Add Kubernetes annotations for scraping | helm/cms-fo/templates/deployment.yaml | Add pod annotations: prometheus.io/scrape: "true", prometheus.io/port: "3000", prometheus.io/path: "/metrics". | 0.1d |
Acceptance: GET /metrics returns valid Prometheus text format. Grafana can scrape and graph the metrics.
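Task 3.3.1's output is the Prometheus text exposition format. A hand-rolled sketch for a single counter, to show the `# HELP`/`# TYPE`/sample structure; in practice prom-client or the OTel Prometheus exporter would produce this:

```typescript
// Sketch of the Prometheus text format served by /metrics (3.3.1)
// for one labeled counter. Real output would come from prom-client
// or the OTel exporter, not hand-rolled rendering.
function renderCounter(
  name: string,
  help: string,
  samples: Array<{ labels: Record<string, string>; value: number }>,
): string {
  const lines = [`# HELP ${name} ${help}`, `# TYPE ${name} counter`];
  for (const { labels, value } of samples) {
    const labelStr = Object.entries(labels)
      .map(([k, v]) => `${k}="${v}"`)
      .join(",");
    lines.push(`${name}{${labelStr}} ${value}`);
  }
  return lines.join("\n") + "\n";
}
```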
3.4 Structured logging
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.4.1 | Create structured logger | server/utils/logger.ts | Wrapper around consola (Nitro default). In production: output JSON to stdout with fields: timestamp, level, message, trace_id (from OTel context), tenant_id, site_id, route, status_code, duration_ms, user_agent. In dev: keep human-readable format. | 0.5d |
| 3.4.2 | Integrate logger in request lifecycle | server/middleware/request-logger.ts | Log every request on completion: { method, url, status, duration_ms, trace_id, cache_status }. Use event.node.res.on('finish', ...). | 0.25d |
| 3.4.3 | Replace console.log/warn/error | All server/**/*.ts | Audit all server-side files. Replace console.* with structured logger calls. Ensure error logs include stack traces as error.stack field. | 0.25d |
| 3.4.4 | Configure log level from env | server/utils/logger.ts | LOG_LEVEL env var: debug, info, warn, error. Default: info in production, debug in dev. | 0.1d |
Acceptance: All server logs are JSON in production. trace_id correlates with OTel traces. Logs are queryable in Loki/ELK.
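The dual-format behavior of task 3.4.1 (JSON lines in production, human-readable in dev) can be sketched as a pure formatter. The field list follows the task description; the dev format shown is an illustration choice:

```typescript
// Sketch of the structured log formatting from task 3.4.1.
// Production: one JSON object per line for Loki/ELK ingestion.
// Dev: short human-readable line. Field names follow the task.
interface LogFields {
  level: "debug" | "info" | "warn" | "error";
  message: string;
  trace_id?: string;
  route?: string;
  status_code?: number;
  duration_ms?: number;
}

function formatLog(fields: LogFields, production: boolean, now: Date): string {
  if (production) {
    return JSON.stringify({ timestamp: now.toISOString(), ...fields });
  }
  return `[${fields.level}] ${fields.message}`;
}
```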
3.5 Web Vitals forwarding
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.5.1 | Create OTel RUM exporter | utils/otel-rum.ts | Client-side module. Send metrics via navigator.sendBeacon to /api/rum endpoint (or directly to OTel Collector via OTLP/HTTP). Batch metrics in 10s intervals. Include: web_vital_lcp, web_vital_fid, web_vital_cls, web_vital_ttfb, web_vital_inp. | 0.5d |
| 3.5.2 | Create server-side RUM ingest route | server/api/rum.post.ts | Accept batched Web Vitals. Forward to OTel Collector as metrics. Validate payload. Rate-limit by IP (max 10 req/min). | 0.25d |
| 3.5.3 | Integrate with existing useWebVitals | composables/useWebVitals.ts | Replace the optional logEndpoint fetch with the new OTel RUM exporter. Keep console logging in dev. Add route and template labels to each metric. | 0.25d |
Acceptance: Web Vitals appear as metrics in Grafana with route-level granularity.
3.6 Grafana dashboard
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 3.6.1 | Create dashboard JSON template | monitoring/grafana-dashboard-fo.json | Panels: Row 1 — Traffic: requests/s, error rate (%), active connections. Row 2 — Performance: SSR render p50/p95/p99, TTFB distribution, cache hit ratio. Row 3 — Web Vitals: LCP, FID, CLS, INP by route. Row 4 — Resources: CPU, memory per pod. Variables: $environment, $tenant_id, $site_id. | 0.5d |
| 3.6.2 | Create alert rules | monitoring/alerts-fo.yaml | Alerts: error rate > 1% for 5 min, SSR p95 > 600ms for 10 min, cache hit ratio < 85% for 15 min, pod restarts > 3 in 10 min. | 0.25d |
Acceptance: Dashboard imported into Grafana. All panels show data. Alerts fire correctly in staging.
Phase 4 — Quality Gates & Hardening (v0.16)
Goal: Enforce performance, accessibility, and security standards in CI.
Priority: P2 — required for v1.0 certification (spec 5.6, 4.8).
Estimated effort: 2–3 weeks.
4.1 Lighthouse CI
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 4.1.1 | Install Lighthouse CI | package.json | pnpm add -D @lhci/cli. | 0.1d |
| 4.1.2 | Create Lighthouse config | lighthouserc.json | ci.collect.url: ['http://localhost:3000/', 'http://localhost:3000/articles', 'http://localhost:3000/articles/test-article', 'http://localhost:3000/categories', 'http://localhost:3000/search?q=test']. ci.collect.numberOfRuns: 3. ci.assert.assertions: categories:performance >= 90, categories:seo >= 90, categories:accessibility >= 90, largest-contentful-paint <= 2500, first-input-delay <= 100, cumulative-layout-shift <= 0.1, total-byte-weight <= 300000. | 0.25d |
| 4.1.3 | Add npm script | package.json | "lighthouse": "lhci autorun", "lighthouse:ci": "lhci autorun --collect.startServerCommand='node .output/server/index.mjs'". | 0.1d |
| 4.1.4 | Create CI pipeline step | .github/workflows/lighthouse.yml or CI config | Step: build → start server → run Lighthouse → upload results. Fail on assertion violations. Upload HTML report as artifact. | 0.5d |
| 4.1.5 | Configure LHCI server (optional) | — | If LHCI server available: configure ci.upload.target: 'lhci' with server URL for historical trend tracking. Otherwise: ci.upload.target: 'filesystem'. | 0.25d |
Acceptance: Lighthouse runs on every merge request. Merge blocked if any score < 90 or budget exceeded.
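The gate in tasks 4.1.2/4.1.4 amounts to comparing measured category scores against the minimums and failing the build on any shortfall. A sketch, assuming scores in Lighthouse's native 0–1 scale (the `>= 90` thresholds above correspond to 0.9):

```typescript
// Sketch of the CI gate behind 4.1.2/4.1.4: return the categories
// whose measured score (0-1 scale) falls below the configured minimum.
// A missing score counts as failing.
function failingCategories(
  scores: Record<string, number>,
  minimums: Record<string, number>,
): string[] {
  return Object.entries(minimums)
    .filter(([category, min]) => (scores[category] ?? 0) < min)
    .map(([category]) => category);
}
```

In practice `lhci autorun` performs this check itself from `lighthouserc.json`; the sketch only makes the pass/fail rule explicit.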
4.2 k6 load testing
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 4.2.1 | Create base k6 config | k6/config.js | Export options: stages (ramp up 30s to 50 VUs, hold 2 min, ramp down 30s). Thresholds: http_req_duration['p(95)'] < 600, http_req_failed < 0.01. Tags: environment, test_name. | 0.25d |
| 4.2.2 | Write homepage scenario | k6/scenarios/homepage.js | GET / → check status 200, check Surrogate-Key header present, check response time < 600ms. Parse HTML, verify title present. | 0.25d |
| 4.2.3 | Write article listing scenario | k6/scenarios/articles.js | GET /articles → check status 200, check response contains article links. Follow one article link → check 200. | 0.25d |
| 4.2.4 | Write search scenario | k6/scenarios/search.js | GET /api/search?q=test → check status 200, check results array present, check response time < 300ms. | 0.25d |
| 4.2.5 | Write preview scenario | k6/scenarios/preview.js | GET /?preview_token=valid_token → check 200, check Cache-Control: no-store. GET /?preview_token=invalid → check 403. | 0.25d |
| 4.2.6 | Create k6 runner script | k6/run.sh | Run all scenarios sequentially. Export results to JSON. If K6_CLOUD_TOKEN set, export to Grafana Cloud k6. | 0.1d |
| 4.2.7 | Add npm script | package.json | "test:load": "k6 run k6/scenarios/homepage.js", "test:load:all": "bash k6/run.sh". | 0.1d |
| 4.2.8 | Create CI pipeline step | .github/workflows/k6.yml or CI config | Nightly job: deploy to staging → run k6 → report to Grafana. On merge-to-main: run lightweight smoke test (10 VUs, 30s). | 0.5d |
Acceptance: k6 runs nightly on staging. SLOs validated: p95 SSR < 600ms, error rate < 1%. Results visible in Grafana.
4.3 Security hardening
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 4.3.1 | Add Trusted Types CSP directive | server/plugins/csp-hash.ts | Append ; require-trusted-types-for 'script' to Content-Security-Policy-Report-Only header. Add ; trusted-types 'none'. Do NOT enforce yet — report-only. | 0.25d |
| 4.3.2 | Create CSP reporting endpoint | server/api/csp-report.post.ts | Accept CSP violation reports (JSON format). Log with structured logger: { type: 'csp-violation', directive, blockedUri, sourceFile, lineNumber }. Rate-limit by IP. | 0.25d |
| 4.3.3 | Audit CSP hashes | server/plugins/csp-hash.ts | Verify no unsafe-inline or unsafe-eval in production build. Test by temporarily enforcing CSP and checking for violations. Remove any fallback unsafe-* directives. | 0.25d |
| 4.3.4 | Add connect-src whitelist | server/plugins/csp-hash.ts | Replace hardcoded https://local.api.cms https://api.example.com with env-driven: NUXT_PUBLIC_API_URL, NUXT_PUBLIC_MEDIA_BASE_URL, NUXT_PUBLIC_MATOMO_URL. | 0.25d |
| 4.3.5 | Add container scan to CI | .github/workflows/security.yml or CI config | Step: docker build → trivy image --severity HIGH,CRITICAL --exit-code 1 cms-fo:$TAG. | 0.25d |
| 4.3.6 | Add dependency audit to CI | .github/workflows/security.yml or CI config | Step: pnpm audit --audit-level high. Fail on high/critical vulnerabilities. | 0.1d |
| 4.3.7 | Create .trivyignore | .trivyignore | Placeholder for accepted vulnerabilities with documented justification per entry. | 0.1d |
Acceptance: Trusted Types violations are reported (not blocking). No unsafe-* CSP directives. Trivy scan passes. pnpm audit clean.
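The extension in task 4.3.1 is a pure string operation on the existing report-only policy. A minimal sketch, using exactly the two directives named in the task:

```typescript
// Sketch of the report-only policy extension from task 4.3.1: append
// the Trusted Types directives without disturbing the existing policy.
function withTrustedTypes(policy: string): string {
  return `${policy}; require-trusted-types-for 'script'; trusted-types 'none'`;
}
```

Applied to the `Content-Security-Policy-Report-Only` header only, so violations are reported to `/api/csp-report` (task 4.3.2) without blocking anything.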
4.4 Accessibility CI gate
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 4.4.1 | Install axe-core for Playwright | package.json | pnpm add -D @axe-core/playwright. | 0.1d |
| 4.4.2 | Create a11y test helper | tests/helpers/a11y.ts | async function checkA11y(page: Page, url: string): navigate to URL, inject axe, run axe.run() with WCAG 2.1 AA ruleset. Collect violations. Assert zero violations. Format violations for readable output. | 0.25d |
| 4.4.3 | Write a11y test suite | tests/e2e/accessibility.spec.ts | Test pages: / (homepage), /articles (listing), /articles/test-article (detail), /categories (listing), /search (search page), /search?q=test (with results). Each: checkA11y(page, url). | 0.5d |
| 4.4.4 | Add a11y to CI pipeline | .github/workflows/a11y.yml or CI config | Run a11y tests on every merge request. Fail if any WCAG 2.1 AA violation found. Upload HTML report. | 0.25d |
| 4.4.5 | Create RGAA manual checklist | docs/rgaa-checklist.md (in cms-doc) | Checklist for manual review: keyboard navigation (Tab/Shift+Tab), focus visibility, skip-to-content link (already in app.vue), screen reader landmarks, color contrast (4.5:1 normal, 3:1 large), form labels, error messages, ARIA attributes. | 0.25d |
| 4.4.6 | Fix skip-link behavior | app.vue | Verify existing sr-only skip link actually moves focus to #main-content. Ensure the target element has tabindex="-1". Test with keyboard. | 0.1d |
Acceptance: axe-core reports zero WCAG 2.1 AA violations on all tested pages. Manual checklist documented and reviewed.
Phase 5 — Production Readiness (v1.0)
Goal: Final validation, documentation, and sign-off.
Priority: P3 — certification milestone.
Estimated effort: 1–2 weeks.
5.1 SLO validation
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 5.1.1 | Run 48-hour soak test | — | Deploy to staging with production-like config. Run k6 at sustained load (20 VUs, mixed scenarios). Collect metrics in Grafana. | 0.5d (setup) + 2d (run) |
| 5.1.2 | Generate SLO report | docs/slo-validation-report.md | Document measured values vs. spec targets: API p95 <= 300ms, TTFB p95 <= 600ms, cache hit >= 85%, error rate <= 1%, Lighthouse >= 90. Include Grafana screenshots. | 0.5d |
| 5.1.3 | Identify and remediate violations | — | If any SLO is breached: profile, fix, re-run. Document root cause and fix. | 1–3d (variable) |
5.2 Accessibility audit
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 5.2.1 | Engage auditor | — | Internal team or external RGAA auditor. Provide staging URL, test accounts, list of page types. | — |
| 5.2.2 | Remediate findings | Various components | Fix any critical/high findings. Document deferred low findings with justification. | 1–3d (variable) |
| 5.2.3 | Generate compliance report | docs/accessibility-audit-report.md | RGAA / WCAG 2.1 AA compliance score, findings, remediations, remaining exceptions. | 0.5d |
5.3 Security sign-off
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 5.3.1 | Run final Trivy scan | — | Scan production image. Zero HIGH/CRITICAL. | 0.1d |
| 5.3.2 | Run final pnpm audit | — | Zero high/critical. Document accepted medium risks. | 0.1d |
| 5.3.3 | Review CSP violation reports | — | Analyze 1 week of CSP + Trusted Types reports from staging. Confirm no legitimate code is blocked. | 0.25d |
| 5.3.4 | Generate security report | docs/security-review.md | CSP policy summary, Trusted Types status, container scan results, dependency audit, accepted risks. | 0.25d |
5.4 Operational documentation
| # | Task | File(s) | Details | Effort |
|---|---|---|---|---|
| 5.4.1 | Write operational runbook | docs/runbook-fo.md (in cms-doc) | Sections: scaling (HPA config, manual override), CDN purge (curl commands, key format), Redis failover (connection retry behavior), log queries (Loki/ELK examples by error type), alert response (per alert rule: severity, impact, remediation steps), deployment (Helm upgrade, rollback). | 0.5d |
| 5.4.2 | Write ADRs for deviations | docs/adr/ (in cms-doc) | One ADR per deviation from spec: CDN vendor choice, search engine choice, Matomo setup, OTel sampling rate, any accepted risks. Format: context, decision, consequences. | 0.5d |
| 5.4.3 | Update FO Coverage Analysis | docs/developer/guides/fo-coverage-analysis.mdx | Update all statuses to Covered. Mark overall score as 100%. Add final notes. | 0.25d |
Dependency map
Phase 1 (Containerization)
│
├──► 1.1 Dockerfile ──► 1.3 Helm ──► 1.4 CDN ──► 1.5 Cache audit
│ │
│ └──► (DevOps: CDN vendor)
│
├──► 1.2 Health endpoints (parallel with 1.1)
│
└──► Phase 2 (Platform Services)
│
├──► 2.1 Redis ──► 2.2 Preview tokens ──► 2.3 Remove JSON
│
├──► 2.4 Search (needs API team search endpoint)
│
├──► 2.5 Matomo + CMP (independent)
│
└──► Phase 3 (Observability)
│
├──► 3.1 OTel SDK ──► 3.2 Trace propagation
│ │
│ └──► 3.3 Metrics ──► 3.6 Dashboard
│
├──► 3.4 Structured logging (parallel with 3.1)
│
├──► 3.5 Web Vitals (parallel with 3.1)
│
└──► Phase 4 (Quality Gates)
│
├──► 4.1 Lighthouse CI (independent)
├──► 4.2 k6 (independent)
├──► 4.3 Security (independent)
├──► 4.4 Accessibility (independent)
│
└──► Phase 5 (v1.0)
External dependency summary
| Team | Task | Phase.Step | Urgency | Blocker? |
|---|---|---|---|---|
| DevOps | Kubernetes namespace provisioning | 1.3 | Immediate | Yes |
| DevOps | CDN vendor selection + DNS | 1.4 | Immediate | Yes |
| DevOps | Redis cluster provisioning | 2.1 | High | Yes |
| DevOps | Search engine provisioning (ES/OpenSearch) | 2.4 | High | Yes |
| DevOps | Matomo instance + per-tenant DB | 2.5 | High | Yes |
| DevOps | OTel Collector endpoint + credentials | 3.1 | Medium | Yes |
| DevOps | Grafana access + dashboard provisioning | 3.6 | Medium | No |
| DevOps | CI runner with Chrome headless | 4.1 | Medium | Yes |
| DevOps | CI pipeline template for k6/Lighthouse stages | 4.1, 4.2 | Medium | No |
| API team | Homepage-blocks REST endpoint | 2.3 | High | Yes |
| API team | CDN purge trigger on publish event | 1.4 | High | Yes |
| API team | Search endpoint (/public/search) | 2.4 | High | Yes |
| Security | CSP / Trusted Types review | 4.3 | Medium | No |
| External | RGAA / accessibility auditor | 5.2 | Low (plan ahead) | No |
Timeline estimate
These estimates assume DevOps provisions shared services on schedule.
Calendar dates depend on team capacity and external dependency resolution.
Effort is expressed in developer-days (1 developer).
| Phase | Tasks | Effort | Target |
|---|---|---|---|
| Phase 1 — Containerization & Delivery | 27 tasks | ~6d | v0.13 |
| Phase 2 — Platform Services | 29 tasks | ~9d | v0.14 |
| Phase 3 — Observability | 21 tasks | ~6d | v0.15 |
| Phase 4 — Quality Gates | 26 tasks | ~6d | v0.16 |
| Phase 5 — Production Readiness | 13 tasks | ~5d (+ variable remediation) | v1.0 |
| Total | 116 tasks | ~32 dev-days + remediation | |
Further reading