Front-Office Roadmap

This roadmap describes the remaining work required to bring the Front-Office from its current state (v0.12.6, ~65–70% spec coverage) to production readiness (v1.0, 100%). Phases are ordered by dependency chain and impact, and tasks are broken down to individual file-level changes.
Derived from the FO Coverage Analysis. Reference specification: Next-Generation CMS — Technical Architecture V0.1 (October 2025).

Phase overview

Phase 1 — Containerization & Delivery    (v0.13)   ██████████░░░░░░░░░░  Blocks everything
Phase 2 — Platform Services Integration  (v0.14)   ░░░░░░░░░░░░░░░░░░░░  Core infra wiring
Phase 3 — Observability & Analytics      (v0.15)   ░░░░░░░░░░░░░░░░░░░░  Monitoring stack
Phase 4 — Quality Gates & Hardening      (v0.16)   ░░░░░░░░░░░░░░░░░░░░  CI enforcement
Phase 5 — Production Readiness           (v1.0)    ░░░░░░░░░░░░░░░░░░░░  Final audit

Phase 1 — Containerization & Delivery (v0.13)

Goal: Make the FO deployable on Kubernetes and served through the CDN. Priority: P0 — blocks all SLOs and platform alignment. Estimated effort: 2–3 weeks.

1.1 Dockerfile

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 1.1.1 | Create multi-stage Dockerfile | Dockerfile | Stage 1 — build: FROM node:22-alpine AS build, install pnpm, pnpm install --frozen-lockfile, pnpm generate:tokens, pnpm build. Stage 2 — runtime: FROM node:22-alpine, copy .output/ only, expose 3000, CMD ["node", ".output/server/index.mjs"]. Pin the exact Node version matching package.json engines. | 0.5d |
| 1.1.2 | Create .dockerignore | .dockerignore | Exclude node_modules/, .nuxt/, .output/, tests/, __tests__/, .storybook/, *.md, .git/, data/, log/. Keep scripts/ (needed for token generation at build). | 0.25d |
| 1.1.3 | Verify build and start scripts | package.json | Ensure the build script outputs to .output/. Verify node .output/server/index.mjs starts Nitro correctly with HOST=0.0.0.0 and PORT=3000. | 0.25d |
| 1.1.4 | Test image locally | — | docker build -t cms-fo:local . → verify image < 200 MB; docker run -p 3000:3000 -e NUXT_PUBLIC_API_URL=... cms-fo:local → verify the homepage renders. | 0.25d |
Acceptance: docker build succeeds, image < 200 MB, curl http://localhost:3000 returns valid HTML.
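The two-stage build from task 1.1.1 can be sketched as below. This is a minimal illustration, not the final Dockerfile: the corepack invocation and the lockfile copy order are assumptions, and the Node tag should be pinned to the exact version declared in package.json engines.

```dockerfile
# Stage 1 — build: install deps, generate design tokens, build the Nitro output
FROM node:22-alpine AS build
WORKDIR /app
RUN corepack enable pnpm
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm generate:tokens && pnpm build

# Stage 2 — runtime: ship only the compiled .output/ directory
FROM node:22-alpine
WORKDIR /app
COPY --from=build /app/.output ./.output
ENV HOST=0.0.0.0 PORT=3000
EXPOSE 3000
CMD ["node", ".output/server/index.mjs"]
```

Copying package.json and the lockfile before the rest of the sources lets Docker cache the pnpm install layer across code-only changes.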

1.2 Health endpoint

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 1.2.1 | Create readiness probe route | server/routes/healthz.get.ts | Return 200 OK with JSON body: { status: "ok", version: pkg.version, uptime: process.uptime(), timestamp: new Date().toISOString() }. Read version from package.json at startup. | 0.25d |
| 1.2.2 | Create liveness probe route | server/routes/livez.get.ts | Lightweight — return 200 OK with { status: "alive" }. No external calls. | 0.1d |
| 1.2.3 | Create startup probe route | server/routes/startupz.get.ts | Check that Nitro storage is initialized. Return 200 if ready, 503 if not. | 0.1d |
Acceptance: GET /healthz, /livez, /startupz return correct status codes. No auth required.
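The readiness payload from task 1.2.1 is small enough to pin down as a type. The helper below is illustrative (the real route handler in server/routes/healthz.get.ts would read `version` from package.json once at startup and return this object with a 200 status); the function name is an assumption.

```typescript
// Shape of the /healthz readiness body described in task 1.2.1.
interface HealthPayload {
  status: "ok";
  version: string;
  uptime: number;       // seconds, from process.uptime()
  timestamp: string;    // ISO 8601
}

// Illustrative helper — the route handler would call this with live values.
function buildHealthPayload(version: string, uptimeSeconds: number, now: Date): HealthPayload {
  return {
    status: "ok",
    version,
    uptime: uptimeSeconds,
    timestamp: now.toISOString(),
  };
}

const payload = buildHealthPayload("0.13.0", 42.5, new Date("2025-10-01T12:00:00Z"));
console.log(JSON.stringify(payload));
```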

1.3 Helm chart

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 1.3.1 | Scaffold Helm chart | helm/cms-fo/Chart.yaml | apiVersion: v2, name: cms-fo, version: 0.13.0, appVersion: 0.13.0. | 0.1d |
| 1.3.2 | Create values.yaml | helm/cms-fo/values.yaml | replicaCount: 2, image.repository, image.tag, resources.limits: { cpu: 500m, memory: 512Mi }, resources.requests: { cpu: 100m, memory: 128Mi }, env map for all NUXT_PUBLIC_* vars, REDIS_*, OTEL_*. | 0.25d |
| 1.3.3 | Create Deployment template | helm/cms-fo/templates/deployment.yaml | Pod spec with: livenessProbe: httpGet /livez (period 30s), readinessProbe: httpGet /healthz (period 10s), startupProbe: httpGet /startupz (failureThreshold 30, period 5s), env from values and Secrets, imagePullPolicy: IfNotPresent. | 0.5d |
| 1.3.4 | Create Service template | helm/cms-fo/templates/service.yaml | ClusterIP service on port 80 → container port 3000. | 0.1d |
| 1.3.5 | Create HPA template | helm/cms-fo/templates/hpa.yaml | minReplicas: 2, maxReplicas: 10, target CPU 70%, target memory 80%. | 0.1d |
| 1.3.6 | Create Ingress template | helm/cms-fo/templates/ingress.yaml | Configurable host, TLS from Secret, annotations for the platform Ingress Controller. | 0.25d |
| 1.3.7 | Create ConfigMap template | helm/cms-fo/templates/configmap.yaml | Non-sensitive NUXT_PUBLIC_* variables. | 0.1d |
| 1.3.8 | Create ExternalSecret template | helm/cms-fo/templates/external-secret.yaml | For sensitive vars (Redis password, API keys) — placeholder for Vault/Kubernetes Secrets integration. | 0.1d |
| 1.3.9 | Validate chart | — | helm lint helm/cms-fo/, helm template cms-fo helm/cms-fo/ → verify YAML output. helm install --dry-run on staging. | 0.25d |
Acceptance: helm install cms-fo helm/cms-fo/ --namespace cms-staging deploys pods that pass all probes.
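The probe wiring from task 1.3.3 maps onto the health routes of section 1.2 as in the excerpt below. Periods and thresholds come from the task table; the container port is assumed to be 3000 as in the Dockerfile.

```yaml
# helm/cms-fo/templates/deployment.yaml (container spec excerpt)
livenessProbe:
  httpGet: { path: /livez, port: 3000 }
  periodSeconds: 30
readinessProbe:
  httpGet: { path: /healthz, port: 3000 }
  periodSeconds: 10
startupProbe:
  httpGet: { path: /startupz, port: 3000 }
  periodSeconds: 5
  failureThreshold: 30   # up to 150s for Nitro storage to initialize
```

The startup probe gates the other two, so a slow cold start does not trigger liveness restarts.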

1.4 CDN integration — Surrogate-Key headers

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 1.4.1 | Create surrogate-key middleware | server/middleware/surrogate-keys.ts | Intercept every SSR response. Compute the Surrogate-Key header value from: route path segments (/articles/my-slug → articles articles/my-slug), tenant_id from runtime config, site_id from config, content type (page, article, category, tag, homepage). Format: space-separated keys. | 0.5d |
| 1.4.2 | Add tenant/site prefixed keys | server/middleware/surrogate-keys.ts | Prepend tenant:{id} and site:{id} to every key set, e.g. tenant:demo site:1 articles articles/my-slug. This enables per-tenant and per-site purges. | 0.25d |
| 1.4.3 | Skip surrogate keys for preview | server/middleware/surrogate-keys.ts | If the preview_token query param is present, do NOT set the Surrogate-Key header (already handled by preview-no-store.ts with Cache-Control: no-store). | 0.1d |
| 1.4.4 | Create purge API route | server/api/cdn-purge.post.ts | Accept POST with { keys: string[] } body. Call the CDN vendor purge API (abstracted behind server/utils/cdn-client.ts). Require an internal shared secret (X-Purge-Secret header) — not public. Return { purged: keys.length }. | 0.5d |
| 1.4.5 | Create CDN client abstraction | server/utils/cdn-client.ts | purgeSurrogateKeys(keys: string[]): Promise<void>. Initial implementation: HTTP call to the CDN purge endpoint (vendor-specific). Env vars: CDN_PURGE_URL, CDN_PURGE_TOKEN. Stub for dev mode (log only). | 0.5d |
| 1.4.6 | Document purge contract for API team | docs/cdn-purge-contract.md (in cms-doc) | Document the POST /api/cdn-purge endpoint contract so the API team can call it on publish events. Include the key naming convention. | 0.25d |
Acceptance: SSR responses include Surrogate-Key header. POST /api/cdn-purge with correct secret purges the CDN. Preview responses have no surrogate keys.
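The key derivation in tasks 1.4.1/1.4.2 can be sketched as a pure function. The naming follows the examples in the table (tenant:demo site:1 articles articles/my-slug); the function name, the cumulative-prefix scheme, and the homepage special case are illustrative assumptions.

```typescript
// Derive the space-separated Surrogate-Key header value for one SSR response.
function surrogateKeys(path: string, tenantId: string, siteId: string): string {
  // tenant/site prefixes first, so per-tenant and per-site purges work (1.4.2)
  const keys = [`tenant:${tenantId}`, `site:${siteId}`];
  const segments = path.split("/").filter(Boolean);
  if (segments.length === 0) {
    keys.push("homepage");
  } else {
    // One key per cumulative prefix: "articles", "articles/my-slug", ...
    for (let i = 0; i < segments.length; i++) {
      keys.push(segments.slice(0, i + 1).join("/"));
    }
  }
  return keys.join(" "); // header value is space-separated
}

console.log(surrogateKeys("/articles/my-slug", "demo", "1"));
// → "tenant:demo site:1 articles articles/my-slug"
```

A purge of the key `articles` then invalidates every article page at once, while `articles/my-slug` targets a single URL.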

1.5 Cache-Control headers audit

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 1.5.1 | Review current route rules | nuxt.config.ts → nitro.routeRules | Current state: homepage s-maxage=1800, articles/tags/categories s-maxage=300, /** catch-all s-maxage=300, assets max-age=31536000 immutable. | — |
| 1.5.2 | Add stale-if-error to all public routes | nuxt.config.ts | Add stale-if-error=86400 (24h) to all Cache-Control headers. Ensures the CDN serves stale content if the origin is down. Currently missing. | 0.25d |
| 1.5.3 | Separate ISR values from CDN values | nuxt.config.ts | Clarify: ISR isr: 300 controls Nitro revalidation; s-maxage controls the CDN TTL. They should be independent. Set CDN s-maxage higher (e.g. 3600) and use surrogate-key purge for instant invalidation instead of relying on short TTLs. | 0.25d |
| 1.5.4 | Add Vary: Accept-Language for i18n | nuxt.config.ts or server/middleware/surrogate-keys.ts | Ensure the CDN caches separate versions per locale. Add a Vary: Accept-Language, X-Site-Id header to SSR responses. | 0.25d |
| 1.5.5 | Validate with curl tests | scripts/test-cache-headers.sh | Write a shell script that curls each route type and asserts correct Cache-Control, Surrogate-Key, and Vary headers. | 0.25d |
Acceptance: All public routes return Cache-Control with s-maxage, stale-while-revalidate, stale-if-error. Vary header includes Accept-Language. Preview routes return no-store.
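Putting 1.5.2 and 1.5.3 together, a public route's Cache-Control value combines a long CDN TTL, background revalidation, and a 24h stale-if-error window. The builder below is an illustration of that policy; the function name and the stale-while-revalidate default are assumptions, not values mandated by the spec.

```typescript
// Build the Cache-Control value for a public SSR route: browsers revalidate
// (max-age=0), the CDN caches for sMaxage seconds, serves stale while
// revalidating, and keeps serving stale for 24h if the origin is down.
function publicCacheControl(sMaxage: number, swr = 60, sie = 86400): string {
  return `public, max-age=0, s-maxage=${sMaxage}, stale-while-revalidate=${swr}, stale-if-error=${sie}`;
}

console.log(publicCacheControl(3600));
// → "public, max-age=0, s-maxage=3600, stale-while-revalidate=60, stale-if-error=86400"
```

With surrogate-key purge in place, a high s-maxage is safe: publishes invalidate instantly, and the TTL only bounds how long an unpurged miss can live.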

Phase 2 — Platform Services Integration (v0.14)

Goal: Connect the FO to all shared One-Platform services. Priority: P1 — required for functional completeness (spec v0.5–v0.8). Estimated effort: 3–4 weeks.

2.1 Redis integration

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 2.1.1 | Add Redis driver to Nitro storage | nuxt.config.ts | Change nitro.storage.cache.driver from 'memory' to 'redis'. Use env vars: REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, REDIS_DB. Keep the 'memory' fallback when REDIS_HOST is unset (dev mode). | 0.25d |
| 2.1.2 | Add Redis driver dependency | package.json | pnpm add ioredis — unstorage ships its redis driver in core and depends on ioredis (or use the Nitro built-in redis driver if available). Verify version compatibility with Nitro. | 0.1d |
| 2.1.3 | Create Redis health check | server/routes/healthz.get.ts | Extend the existing health endpoint: ping Redis storage, include { redis: "connected" } or { redis: "disconnected" } in the response. Return 503 if Redis is down and REDIS_HOST is configured. | 0.25d |
| 2.1.4 | Add Redis connection environment vars | helm/cms-fo/values.yaml | Add REDIS_HOST, REDIS_PORT, REDIS_PASSWORD (from Secret), REDIS_DB to the env map. | 0.1d |
Acceptance: GET /healthz reports Redis status. Nitro cache is shared between pod restarts. Memory fallback works in local dev.
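Task 2.1.1's driver switch with memory fallback might look like the fragment below. This is a sketch, not the final config: the exact option names accepted by the redis driver should be checked against the installed unstorage/Nitro versions.

```ts
// nuxt.config.ts (excerpt) — Redis-backed Nitro cache, in-memory in local dev
nitro: {
  storage: {
    cache: process.env.REDIS_HOST
      ? {
          driver: "redis",
          host: process.env.REDIS_HOST,
          port: Number(process.env.REDIS_PORT ?? 6379),
          password: process.env.REDIS_PASSWORD,
          db: Number(process.env.REDIS_DB ?? 0),
        }
      : { driver: "memory" },
  },
},
```

Gating on REDIS_HOST keeps `pnpm dev` dependency-free while staging and production share one cache across pods.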

2.2 Preview token validation via Redis

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 2.2.1 | Create preview token validation middleware | server/middleware/preview-token-redis.ts | On X-Preview-Token or ?preview_token= in the request: read preview-token:{value} from Redis storage. If the key exists and is not expired, set event.context.preview = { token, validUntil } and continue. If the key is missing/expired, return 403 { error: "Invalid or expired preview token" }. Skip if no token is present (public request). | 0.5d |
| 2.2.2 | Remove hardcoded preview logic | composables/usePreview.ts | Refactor: keep previewToken extraction from the query, but rely on the server middleware for validation. Remove any client-side-only validation. The server already sets Cache-Control: no-store via preview-no-store.ts. | 0.25d |
| 2.2.3 | Add preview token info to SSR context | server/middleware/preview-token-redis.ts | Store validated preview metadata in event.context.preview so downstream handlers (page rendering) can detect preview mode server-side. | 0.1d |
| 2.2.4 | Write integration test | tests/integration/preview-token.spec.ts | Test with valid token → 200. Expired token → 403. Missing token → normal public response. Ensure Cache-Control: no-store is set in preview mode. | 0.5d |
Acceptance: Preview tokens are validated server-side against Redis. Multi-instance FO serves identical preview content. Expired tokens are rejected.
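The decision at the heart of task 2.2.1 reduces to one check against the record read back from Redis. The record shape and function name below are assumptions mirroring the `{ token, validUntil }` context value from the table.

```typescript
// Assumed shape of the value stored under preview-token:{value} in Redis.
interface PreviewRecord {
  token: string;
  validUntil: number; // epoch milliseconds
}

// A missing key (expired in Redis or never issued) and a past validUntil both
// map to the 403 "Invalid or expired preview token" branch of the middleware.
function isPreviewTokenValid(record: PreviewRecord | null, nowMs: number): boolean {
  return record !== null && record.validUntil > nowMs;
}

const rec: PreviewRecord = { token: "abc", validUntil: 2_000 };
console.log(isPreviewTokenValid(rec, 1_000));  // still valid
console.log(isPreviewTokenValid(rec, 3_000));  // expired
console.log(isPreviewTokenValid(null, 1_000)); // unknown token
```

Because the check runs server-side against shared Redis, every FO replica gives the same verdict for the same token, which is what makes multi-instance preview consistent.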

2.3 Remove temporary JSON storage

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 2.3.1 | Create API proxy for homepage blocks | server/api/homepage-blocks.get.ts | Replace the current file-read logic with a $fetch call to the CMS API: GET /public/homepage-blocks?site={siteId}. Forward X-Tenant-Id, Accept-Language, X-Preview-Token headers. Cache the response in Nitro storage (Redis) under a homepage-blocks:site:{id} key, TTL 5 min. | 0.5d |
| 2.3.2 | Remove PUT route | server/api/homepage-blocks.put.ts | Delete the file entirely. Homepage block editing is handled by the BO → API, not by the FO. | 0.1d |
| 2.3.3 | Delete local JSON data files | data/homepage-blocks.json | Remove the temporary JSON file. Ensure no other code references it. | 0.1d |
| 2.3.4 | Update useHomepageBlocks composable | composables/useHomepageBlocks.ts | Update the fetch URL from /api/homepage-blocks?site=... (local) to go through the new proxy route (same URL, but the server-side handler now calls the real API). Verify useAsyncData caching still works. | 0.25d |
| 2.3.5 | Add surrogate key for homepage | server/api/homepage-blocks.get.ts | Set Surrogate-Key: homepage site:{id} tenant:{tenantId} on the response so CDN purge covers homepage changes. | 0.1d |
Acceptance: Homepage renders from API data. No JSON files in data/. Homepage invalidation works via CDN purge.

2.4 Search engine integration

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 2.4.1 | Create search proxy route | server/api/search.get.ts | Accept ?q=, ?type=, ?category=, ?tag=, ?locale=, ?page=, ?limit=. Forward to the CMS API search endpoint: GET /public/search?q=.... Return JSON with { results: [], total: number, facets: {} }. Cache for 60s in Nitro storage. | 0.5d |
| 2.4.2 | Add search result type definition | types/search.ts | SearchResult { id, title, type: 'article'\|'page', url, excerpt, score, highlights, publishedAt, category?, thumbnail? }. SearchResponse { results: SearchResult[], total: number, facets: { types: Record<string, number>, categories: Record<string, number> }, query: string, page: number }. | 0.25d |
| 2.4.3 | Refactor useSearch composable | composables/useSearch.ts | Replace the current dual-fetch (articles + pages) with a single call to /api/search. Add: facets ref, totalResults ref, currentPage ref, loadMore() method, selectedFacets ref. Keep the debounce at 300ms. Remove the manual result-merging logic. | 0.5d |
| 2.4.4 | Update search results page | pages/search.vue | Add: faceted sidebar (filter by type, category), pagination, result highlighting, "no results" state with suggestions, loading skeleton. Use the new useSearch composable. | 0.5d |
| 2.4.5 | Add search suggestions component | components/SearchSuggestions.vue | Dropdown below the search input. Show the top 5 results with title + type badge. Debounce 200ms. Keyboard navigation (arrow keys + enter). Close on Escape or click-outside. | 0.5d |
| 2.4.6 | Integrate suggestions in Header | components/Header.vue | Replace the current search input behavior with the <SearchSuggestions /> component. Wire to useSearch.handleSearch() on Enter. | 0.25d |
| 2.4.7 | Write E2E test | tests/e2e/search.spec.ts | Test: type query → suggestions appear → click suggestion → navigate. Full search page with facets. Empty query. No results. | 0.5d |
Acceptance: Search works end-to-end via the search engine. Faceted results display correctly. Suggestions appear in < 300ms. Empty state handled.

2.5 Matomo analytics + CMP

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 2.5.1 | Create CMP composable | composables/useConsent.ts | Manage user consent state. Expose: hasAnalyticsConsent: Ref<boolean>, hasMarketingConsent: Ref<boolean>, showConsentBanner: Ref<boolean>, acceptAll(), rejectAll(), acceptCategory(category). Persist to a cms_consent cookie with SameSite=Lax; Secure. | 0.5d |
| 2.5.2 | Create consent banner component | components/ConsentBanner.vue | GDPR-compliant banner: title, description, "Accept all", "Reject all", "Customize" buttons. Customize panel: checkboxes for analytics, marketing. Link to the privacy policy. Respects prefers-reduced-motion. Accessible (focus trap, ARIA roles). | 0.5d |
| 2.5.3 | Integrate banner in layout | layouts/default.vue | Render <ConsentBanner /> conditionally based on useConsent().showConsentBanner. | 0.1d |
| 2.5.4 | Create Matomo plugin | plugins/matomo.client.ts | Watch useConsent().hasAnalyticsConsent. When true: inject the Matomo tracker script via useHead(). Configure window._paq with: site ID from env NUXT_PUBLIC_MATOMO_SITE_ID, tracker URL from NUXT_PUBLIC_MATOMO_URL, disableCookies if consent is not full. When consent is revoked: call _paq.push(['optUserOut']) and remove cookies. | 0.5d |
| 2.5.5 | Add custom dimensions | plugins/matomo.client.ts | After tracker init, push custom dimensions: dimension1: tenantId, dimension2: siteId, dimension3: locale, dimension4: template name, dimension5: page type (article/page/category/homepage). | 0.25d |
| 2.5.6 | Track SPA navigation | plugins/matomo.client.ts | Use the router.afterEach() hook to push trackPageView with the correct title and URL on each client-side navigation. | 0.25d |
| 2.5.7 | Honor Do Not Track | plugins/matomo.client.ts | Check navigator.doNotTrack === '1'. If set, do not init Matomo even with consent (spec 5.3.5). | 0.1d |
| 2.5.8 | Add env vars to config | nuxt.config.ts, helm/cms-fo/values.yaml | Add NUXT_PUBLIC_MATOMO_URL, NUXT_PUBLIC_MATOMO_SITE_ID to runtimeConfig.public and Helm values. | 0.1d |
| 2.5.9 | Write E2E test | tests/e2e/consent-matomo.spec.ts | Test: banner appears on first visit → reject → no Matomo requests. Accept → Matomo requests fired. Revoke → optUserOut called. DNT header → no tracking. | 0.5d |
Acceptance: Matomo loads only after explicit user consent. Zero tracking without consent. DNT respected. Custom dimensions sent. SPA navigations tracked.
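Task 2.5.1's cms_consent cookie needs a serializable shape. The payload below (a per-category map plus a decision timestamp) is an illustrative assumption, not a mandated format; what matters is that an absent or unparseable cookie is treated as "undecided" so the banner reappears.

```typescript
// Assumed payload stored in the cms_consent cookie (SameSite=Lax; Secure).
interface ConsentState {
  analytics: boolean;
  marketing: boolean;
  decidedAt: string; // ISO timestamp of the user's choice
}

function serializeConsent(state: ConsentState): string {
  // Cookie values must not contain raw ';' or ',' — encode the JSON payload.
  return encodeURIComponent(JSON.stringify(state));
}

function parseConsent(cookieValue: string | undefined): ConsentState | null {
  if (!cookieValue) return null; // no cookie yet → show the banner
  try {
    return JSON.parse(decodeURIComponent(cookieValue)) as ConsentState;
  } catch {
    return null; // corrupted cookie → treat as undecided
  }
}

const v = serializeConsent({ analytics: true, marketing: false, decidedAt: "2025-10-01T12:00:00Z" });
console.log(parseConsent(v)?.analytics); // true
```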

Phase 3 — Observability & Analytics (v0.15)

Goal: Instrument the FO for production monitoring. Priority: P1 — required for production operations (spec 5.5). Estimated effort: 2 weeks.

3.1 OpenTelemetry server-side instrumentation

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.1.1 | Install OTel dependencies | package.json | pnpm add @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/exporter-metrics-otlp-http @opentelemetry/resources @opentelemetry/semantic-conventions. | 0.1d |
| 3.1.2 | Create OTel initialization module | server/utils/otel.ts | Initialize NodeSDK with: resource (service.name: "cms-fo", service.version, deployment.environment), traceExporter (OTLP HTTP to OTEL_EXPORTER_OTLP_ENDPOINT), metricReader (periodic, 30s interval), auto-instrumentations for http, fetch. Only init if the OTEL_EXPORTER_OTLP_ENDPOINT env var is set. No-op in dev. | 0.5d |
| 3.1.3 | Register OTel in Nitro plugin | server/plugins/otel.ts | Import and start the SDK from server/utils/otel.ts at Nitro startup. Ensure graceful shutdown on SIGTERM. | 0.25d |
| 3.1.4 | Add tenant/site span attributes | server/middleware/otel-attributes.ts | For every request: add span attributes tenant.id (from runtime config), site.id, http.route (path), http.locale (from Accept-Language). | 0.25d |
| 3.1.5 | Add env vars | nuxt.config.ts, helm/cms-fo/values.yaml | OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_SERVICE_NAME=cms-fo, OTEL_TRACES_SAMPLER=parentbased_traceidratio, OTEL_TRACES_SAMPLER_ARG=0.01 (1% in prod, 1.0 in staging). | 0.1d |
Acceptance: Traces appear in Grafana Tempo with service.name=cms-fo. Spans include tenant/site attributes.

3.2 Trace propagation to API

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.2.1 | Inject traceparent in API calls | utils/api.ts → secureFetch() | In the getDefaultHeaders() function, if running server-side and OTel is active, get the current span context and inject a traceparent header using W3CTraceContextPropagator. Import from @opentelemetry/core. | 0.5d |
| 3.2.2 | Create custom SSR render span | server/plugins/otel.ts | Wrap SSR rendering in a custom span: tracer.startSpan('ssr.render', { attributes: { 'http.route': url } }). Record the duration. End the span after the response is sent. | 0.25d |
| 3.2.3 | Verify end-to-end trace | — | Deploy FO + API on staging. Trigger a page load. Verify in Grafana Tempo that the trace shows: FO ssr.render → FO fetch /public/articles → API controller. | 0.25d |
Acceptance: Distributed traces span FO → API with correct parent-child relationships.
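The traceparent header injected in task 3.2.1 follows the W3C Trace Context wire format: version "00", a 32-hex-digit trace id, a 16-hex-digit parent span id, and 2-hex-digit flags. In the real code @opentelemetry/core's W3CTraceContextPropagator produces this; the helper below only illustrates the format so the cross-service link is easy to verify in logs.

```typescript
// W3C Trace Context traceparent: 00-{trace-id}-{parent-id}-{trace-flags}
function formatTraceparent(traceId: string, spanId: string, sampled: boolean): string {
  return `00-${traceId}-${spanId}-${sampled ? "01" : "00"}`;
}

console.log(formatTraceparent("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", true));
// → "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
```

The API side reads this header, continues the same trace id, and sets the FO span as parent — which is exactly what step 3.2.3 verifies in Tempo.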

3.3 Prometheus metrics

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.3.1 | Create metrics endpoint | server/routes/metrics.get.ts | Expose GET /metrics in Prometheus text format. Include: ssr_render_duration_seconds{route, status} (histogram), http_requests_total{route, method, status} (counter), cache_hits_total / cache_misses_total (counters), active_connections (gauge). Use the OTel MeterProvider or standalone prom-client. | 0.5d |
| 3.3.2 | Instrument SSR render duration | server/plugins/otel.ts | Record a histogram metric for each SSR response with route and status code labels. | 0.25d |
| 3.3.3 | Instrument cache hit/miss | utils/cache.ts | In getCachedResponse(): increment the cache_hits_total or cache_misses_total counter. Label by cache_layer (memory, storage, redis). | 0.25d |
| 3.3.4 | Add Kubernetes annotations for scraping | helm/cms-fo/templates/deployment.yaml | Add pod annotations: prometheus.io/scrape: "true", prometheus.io/port: "3000", prometheus.io/path: "/metrics". | 0.1d |
Acceptance: GET /metrics returns valid Prometheus text format. Grafana can scrape and graph the metrics.
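For reference, the Prometheus exposition format that /metrics (task 3.3.1) must emit looks like the output below. The metric and label names come from the task table; the renderer itself is illustrative — the real route would delegate to prom-client or the OTel MeterProvider rather than format text by hand.

```typescript
// Render one counter metric family in the Prometheus text exposition format.
function renderCounter(
  name: string,
  help: string,
  series: Array<{ labels: Record<string, string>; value: number }>,
): string {
  const lines = [`# HELP ${name} ${help}`, `# TYPE ${name} counter`];
  for (const { labels, value } of series) {
    const labelStr = Object.entries(labels)
      .map(([k, v]) => `${k}="${v}"`)
      .join(",");
    lines.push(`${name}{${labelStr}} ${value}`);
  }
  return lines.join("\n") + "\n";
}

const text = renderCounter("http_requests_total", "Total HTTP requests.", [
  { labels: { route: "/articles", method: "GET", status: "200" }, value: 42 },
]);
console.log(text);
```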

3.4 Structured logging

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.4.1 | Create structured logger | server/utils/logger.ts | Wrapper around consola (Nitro default). In production: output JSON to stdout with fields: timestamp, level, message, trace_id (from OTel context), tenant_id, site_id, route, status_code, duration_ms, user_agent. In dev: keep the human-readable format. | 0.5d |
| 3.4.2 | Integrate logger in request lifecycle | server/middleware/request-logger.ts | Log every request on completion: { method, url, status, duration_ms, trace_id, cache_status }. Use event.node.res.on('finish', ...). | 0.25d |
| 3.4.3 | Replace console.log/warn/error | all server/**/*.ts | Audit all server-side files. Replace console.* with structured logger calls. Ensure error logs include stack traces as an error.stack field. | 0.25d |
| 3.4.4 | Configure log level from env | server/utils/logger.ts | LOG_LEVEL env var: debug, info, warn, error. Default: info in production, debug in dev. | 0.1d |
Acceptance: All server logs are JSON in production. trace_id correlates with OTel traces. Logs are queryable in Loki/ELK.
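One production log line from task 3.4.1 would look like the output below. The field names follow the task list; emitting exactly one JSON object per line is what keeps the logs parseable by Loki/ELK without a multiline codec. The helper is a sketch — the real logger wraps consola.

```typescript
// Format one structured log line. Optional fields are omitted when absent,
// so the line stays compact and every present field is queryable.
function formatLogLine(fields: {
  level: "debug" | "info" | "warn" | "error";
  message: string;
  trace_id?: string;
  route?: string;
  status_code?: number;
  duration_ms?: number;
}): string {
  return JSON.stringify({ timestamp: new Date().toISOString(), ...fields });
}

const line = formatLogLine({
  level: "info",
  message: "request completed",
  route: "/articles",
  status_code: 200,
  duration_ms: 12,
});
console.log(line);
```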

3.5 Web Vitals forwarding

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.5.1 | Create OTel RUM exporter | utils/otel-rum.ts | Client-side module. Send metrics via navigator.sendBeacon to the /api/rum endpoint (or directly to the OTel Collector via OTLP/HTTP). Batch metrics in 10s intervals. Include: web_vital_lcp, web_vital_fid, web_vital_cls, web_vital_ttfb, web_vital_inp. | 0.5d |
| 3.5.2 | Create server-side RUM ingest route | server/api/rum.post.ts | Accept batched Web Vitals. Forward to the OTel Collector as metrics. Validate the payload. Rate-limit by IP (max 10 req/min). | 0.25d |
| 3.5.3 | Integrate with existing useWebVitals | composables/useWebVitals.ts | Replace the optional logEndpoint fetch with the new OTel RUM exporter. Keep console logging in dev. Add route and template labels to each metric. | 0.25d |
Acceptance: Web Vitals appear as metrics in Grafana with route-level granularity.

3.6 Grafana dashboard

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 3.6.1 | Create dashboard JSON template | monitoring/grafana-dashboard-fo.json | Panels: Row 1 — Traffic: requests/s, error rate (%), active connections. Row 2 — Performance: SSR render p50/p95/p99, TTFB distribution, cache hit ratio. Row 3 — Web Vitals: LCP, FID, CLS, INP by route. Row 4 — Resources: CPU, memory per pod. Variables: $environment, $tenant_id, $site_id. | 0.5d |
| 3.6.2 | Create alert rules | monitoring/alerts-fo.yaml | Alerts: error rate > 1% for 5 min, SSR p95 > 600ms for 10 min, cache hit ratio < 85% for 15 min, pod restarts > 3 in 10 min. | 0.25d |
Acceptance: Dashboard imported into Grafana. All panels show data. Alerts fire correctly in staging.

Phase 4 — Quality Gates & Hardening (v0.16)

Goal: Enforce performance, accessibility, and security standards in CI. Priority: P2 — required for v1.0 certification (spec 5.6, 4.8). Estimated effort: 2–3 weeks.

4.1 Lighthouse CI

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 4.1.1 | Install Lighthouse CI | package.json | pnpm add -D @lhci/cli. | 0.1d |
| 4.1.2 | Create Lighthouse config | lighthouserc.json | ci.collect.url: ['http://localhost:3000/', 'http://localhost:3000/articles', 'http://localhost:3000/articles/test-article', 'http://localhost:3000/categories', 'http://localhost:3000/search?q=test']. ci.collect.numberOfRuns: 3. ci.assert.assertions: categories:performance >= 90, categories:seo >= 90, categories:accessibility >= 90, largest-contentful-paint <= 2500, first-input-delay <= 100, cumulative-layout-shift <= 0.1, total-byte-weight <= 300000. | 0.25d |
| 4.1.3 | Add npm script | package.json | "lighthouse": "lhci autorun", "lighthouse:ci": "lhci autorun --collect.startServerCommand='node .output/server/index.mjs'". | 0.1d |
| 4.1.4 | Create CI pipeline step | .github/workflows/lighthouse.yml or CI config | Step: build → start server → run Lighthouse → upload results. Fail on assertion violations. Upload the HTML report as an artifact. | 0.5d |
| 4.1.5 | Configure LHCI server (optional) | — | If an LHCI server is available: configure ci.upload.target: 'lhci' with the server URL for historical trend tracking. Otherwise: ci.upload.target: 'filesystem'. | 0.25d |
Acceptance: Lighthouse runs on every merge request. Merge blocked if any score < 90 or budget exceeded.
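Translated into LHCI's assertion syntax, the budgets from task 4.1.2 might look like the abbreviated fragment below (category scores are expressed on a 0–1 scale in LHCI). This is a sketch: the URL list is truncated, and the first-input-delay budget from the table is omitted here because FID does not map directly onto a standard Lighthouse audit id and should be checked against the installed LHCI version.

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/", "http://localhost:3000/articles"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:seo": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-byte-weight": ["error", { "maxNumericValue": 300000 }]
      }
    }
  }
}
```

With level "error" on every assertion, any regression fails `lhci autorun` and therefore blocks the merge request.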

4.2 k6 load testing

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 4.2.1 | Create base k6 config | k6/config.js | Export options: stages (ramp up 30s to 50 VUs, hold 2 min, ramp down 30s). Thresholds: http_req_duration['p(95)'] < 600, http_req_failed < 0.01. Tags: environment, test_name. | 0.25d |
| 4.2.2 | Write homepage scenario | k6/scenarios/homepage.js | GET / → check status 200, check the Surrogate-Key header is present, check response time < 600ms. Parse the HTML, verify the title is present. | 0.25d |
| 4.2.3 | Write article listing scenario | k6/scenarios/articles.js | GET /articles → check status 200, check the response contains article links. Follow one article link → check 200. | 0.25d |
| 4.2.4 | Write search scenario | k6/scenarios/search.js | GET /api/search?q=test → check status 200, check the results array is present, check response time < 300ms. | 0.25d |
| 4.2.5 | Write preview scenario | k6/scenarios/preview.js | GET /?preview_token=valid_token → check 200, check Cache-Control: no-store. GET /?preview_token=invalid → check 403. | 0.25d |
| 4.2.6 | Create k6 runner script | k6/run.sh | Run all scenarios sequentially. Export results to JSON. If K6_CLOUD_TOKEN is set, export to Grafana Cloud k6. | 0.1d |
| 4.2.7 | Add npm script | package.json | "test:load": "k6 run k6/scenarios/homepage.js", "test:load:all": "bash k6/run.sh". | 0.1d |
| 4.2.8 | Create CI pipeline step | .github/workflows/k6.yml or CI config | Nightly job: deploy to staging → run k6 → report to Grafana. On merge-to-main: run a lightweight smoke test (10 VUs, 30s). | 0.5d |
Acceptance: k6 runs nightly on staging. SLOs validated: p95 SSR < 600ms, error rate < 1%. Results visible in Grafana.

4.3 Security hardening

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 4.3.1 | Add Trusted Types CSP directive | server/plugins/csp-hash.ts | Append ; require-trusted-types-for 'script' to the Content-Security-Policy-Report-Only header. Add ; trusted-types 'none'. Do NOT enforce yet — report-only. | 0.25d |
| 4.3.2 | Create CSP reporting endpoint | server/api/csp-report.post.ts | Accept CSP violation reports (JSON format). Log with the structured logger: { type: 'csp-violation', directive, blockedUri, sourceFile, lineNumber }. Rate-limit by IP. | 0.25d |
| 4.3.3 | Audit CSP hashes | server/plugins/csp-hash.ts | Verify no unsafe-inline or unsafe-eval in the production build. Test by temporarily enforcing CSP and checking for violations. Remove any fallback unsafe-* directives. | 0.25d |
| 4.3.4 | Add connect-src whitelist | server/plugins/csp-hash.ts | Replace the hardcoded https://local.api.cms https://api.example.com with env-driven values: NUXT_PUBLIC_API_URL, NUXT_PUBLIC_MEDIA_BASE_URL, NUXT_PUBLIC_MATOMO_URL. | 0.25d |
| 4.3.5 | Add container scan to CI | .github/workflows/security.yml or CI config | Step: docker build → trivy image --severity HIGH,CRITICAL --exit-code 1 cms-fo:$TAG. | 0.25d |
| 4.3.6 | Add dependency audit to CI | .github/workflows/security.yml or CI config | Step: pnpm audit --audit-level high. Fail on high/critical vulnerabilities. | 0.1d |
| 4.3.7 | Create .trivyignore | .trivyignore | Placeholder for accepted vulnerabilities, with a documented justification per entry. | 0.1d |
Acceptance: Trusted Types violations are reported (not blocking). No unsafe-* CSP directives. Trivy scan passes. pnpm audit clean.

4.4 Accessibility CI gate

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 4.4.1 | Install axe-core for Playwright | package.json | pnpm add -D @axe-core/playwright. | 0.1d |
| 4.4.2 | Create a11y test helper | tests/helpers/a11y.ts | async function checkA11y(page: Page, url: string): navigate to the URL, inject axe, run axe.run() with the WCAG 2.1 AA ruleset. Collect violations. Assert zero violations. Format violations for readable output. | 0.25d |
| 4.4.3 | Write a11y test suite | tests/e2e/accessibility.spec.ts | Test pages: / (homepage), /articles (listing), /articles/test-article (detail), /categories (listing), /search (search page), /search?q=test (with results). Each: checkA11y(page, url). | 0.5d |
| 4.4.4 | Add a11y to CI pipeline | .github/workflows/a11y.yml or CI config | Run a11y tests on every merge request. Fail if any WCAG 2.1 AA violation is found. Upload the HTML report. | 0.25d |
| 4.4.5 | Create RGAA manual checklist | docs/rgaa-checklist.md (in cms-doc) | Checklist for manual review: keyboard navigation (Tab/Shift+Tab), focus visibility, skip-to-content link (already in app.vue), screen reader landmarks, color contrast (4.5:1 normal, 3:1 large), form labels, error messages, ARIA attributes. | 0.25d |
| 4.4.6 | Fix skip-link behavior | app.vue | Verify the existing sr-only skip link actually moves focus to #main-content. Ensure the target element has tabindex="-1". Test with keyboard. | 0.1d |
Acceptance: axe-core reports zero WCAG 2.1 AA violations on all tested pages. Manual checklist documented and reviewed.

Phase 5 — Production Readiness (v1.0)

Goal: Final validation, documentation, and sign-off. Priority: P3 — certification milestone. Estimated effort: 1–2 weeks.

5.1 SLO validation

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 5.1.1 | Run 48-hour soak test | — | Deploy to staging with production-like config. Run k6 at sustained load (20 VUs, mixed scenarios). Collect metrics in Grafana. | 0.5d (setup) + 2d (run) |
| 5.1.2 | Generate SLO report | docs/slo-validation-report.md | Document measured values vs. spec targets: API p95 <= 300ms, TTFB p95 <= 600ms, cache hit >= 85%, error rate <= 1%, Lighthouse >= 90. Include Grafana screenshots. | 0.5d |
| 5.1.3 | Identify and remediate violations | — | If any SLO is breached: profile, fix, re-run. Document root cause and fix. | 1–3d (variable) |

5.2 Formal accessibility audit

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 5.2.1 | Engage auditor | — | Internal team or external RGAA auditor. Provide the staging URL, test accounts, and a list of page types. | — |
| 5.2.2 | Remediate findings | various components | Fix any critical/high findings. Document deferred low findings with justification. | 1–3d (variable) |
| 5.2.3 | Generate compliance report | docs/accessibility-audit-report.md | RGAA / WCAG 2.1 AA compliance score, findings, remediations, remaining exceptions. | 0.5d |

5.3 Security sign-off

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 5.3.1 | Run final Trivy scan | — | Scan the production image. Zero HIGH/CRITICAL. | 0.1d |
| 5.3.2 | Run final pnpm audit | — | Zero high/critical. Document accepted medium risks. | 0.1d |
| 5.3.3 | Review CSP violation reports | — | Analyze 1 week of CSP + Trusted Types reports from staging. Confirm no legitimate code is blocked. | 0.25d |
| 5.3.4 | Generate security report | docs/security-review.md | CSP policy summary, Trusted Types status, container scan results, dependency audit, accepted risks. | 0.25d |

5.4 Operational documentation

| # | Task | File(s) | Details | Effort |
|---|------|---------|---------|--------|
| 5.4.1 | Write operational runbook | docs/runbook-fo.md (in cms-doc) | Sections: scaling (HPA config, manual override), CDN purge (curl commands, key format), Redis failover (connection retry behavior), log queries (Loki/ELK examples by error type), alert response (per alert rule: severity, impact, remediation steps), deployment (Helm upgrade, rollback). | 0.5d |
| 5.4.2 | Write ADRs for deviations | docs/adr/ (in cms-doc) | One ADR per deviation from the spec: CDN vendor choice, search engine choice, Matomo setup, OTel sampling rate, any accepted risks. Format: context, decision, consequences. | 0.5d |
| 5.4.3 | Update FO Coverage Analysis | docs/developer/guides/fo-coverage-analysis.mdx | Update all statuses to Covered. Mark the overall score as 100%. Add final notes. | 0.25d |

Dependency map

Phase 1 (Containerization)

    ├──► 1.1 Dockerfile ──► 1.3 Helm ──► 1.4 CDN ──► 1.5 Cache audit
    │                                       │
    │                                       └──► (DevOps: CDN vendor)

    ├──► 1.2 Health endpoints (parallel with 1.1)

    └──► Phase 2 (Platform Services)

             ├──► 2.1 Redis ──► 2.2 Preview tokens ──► 2.3 Remove JSON

             ├──► 2.4 Search (needs API team search endpoint)

             ├──► 2.5 Matomo + CMP (independent)

             └──► Phase 3 (Observability)

                      ├──► 3.1 OTel SDK ──► 3.2 Trace propagation
                      │                          │
                      │                          └──► 3.3 Metrics ──► 3.6 Dashboard

                      ├──► 3.4 Structured logging (parallel with 3.1)

                      ├──► 3.5 Web Vitals (parallel with 3.1)

                      └──► Phase 4 (Quality Gates)

                               ├──► 4.1 Lighthouse CI (independent)
                               ├──► 4.2 k6 (independent)
                               ├──► 4.3 Security (independent)
                               ├──► 4.4 Accessibility (independent)

                               └──► Phase 5 (v1.0)

External dependency summary

| Team | Task | Phase.Step | Urgency | Blocker? |
|------|------|------------|---------|----------|
| DevOps | Kubernetes namespace provisioning | 1.3 | Immediate | Yes |
| DevOps | CDN vendor selection + DNS | 1.4 | Immediate | Yes |
| DevOps | Redis cluster provisioning | 2.1 | High | Yes |
| DevOps | Search engine provisioning (ES/OpenSearch) | 2.4 | High | Yes |
| DevOps | Matomo instance + per-tenant DB | 2.5 | High | Yes |
| DevOps | OTel Collector endpoint + credentials | 3.1 | Medium | Yes |
| DevOps | Grafana access + dashboard provisioning | 3.6 | Medium | No |
| DevOps | CI runner with Chrome headless | 4.1 | Medium | Yes |
| DevOps | CI pipeline template for k6/Lighthouse stages | 4.1, 4.2 | Medium | No |
| API team | Homepage-blocks REST endpoint | 2.3 | High | Yes |
| API team | CDN purge trigger on publish event | 1.4 | High | Yes |
| API team | Search endpoint (/public/search) | 2.4 | High | Yes |
| Security | CSP / Trusted Types review | 4.3 | Medium | No |
| External | RGAA / accessibility auditor | 5.2 | Low (plan ahead) | No |

Timeline estimate

These estimates assume DevOps provisions shared services on schedule. Calendar dates depend on team capacity and external dependency resolution. Effort is expressed in developer-days (1 developer).
| Phase | Tasks | Effort | Target |
|-------|-------|--------|--------|
| Phase 1 — Containerization & Delivery | 27 tasks | ~6d | v0.13 |
| Phase 2 — Platform Services | 29 tasks | ~9d | v0.14 |
| Phase 3 — Observability | 21 tasks | ~6d | v0.15 |
| Phase 4 — Quality Gates | 26 tasks | ~6d | v0.16 |
| Phase 5 — Production Readiness | 13 tasks | ~5d (+ variable remediation) | v1.0 |
| Total | 116 tasks | ~32 dev-days + remediation | — |

Further reading