Ingress info service
This document specifies a small HTTP REST service that runs inside a Kubernetes cluster and exposes a configured subset of Ingress objects as JSON. It contains the API contract, operational requirements, deployment notes, RBAC, metrics, and a development blueprint.
High-level summary
- Purpose: expose a filtered view of cluster Ingresses to internal clients.
- Selection: configured label selector (deploy-time) read from a ConfigMap key `labelSelector`.
- API: single cluster-wide list endpoint `GET /v1/ingresses` (no pagination).
- Output: top-level `hosts` array with host metadata and nested paths (with `pathType`). Each host includes extracted metadata (`servicename`, `user_group`, `description`, `contact_information`) and a `missing_annotations` flag; each path includes a tri-state reachability value.
- Reachability: computed per host+path from Endpoints/EndpointSlice presence (`reachable` | `unreachable` | `unknown`).
- Sync: Kubernetes informers (watch) with periodic resync.
- Auth: static API keys stored in a Kubernetes Secret (map); clients present keys as `Authorization: Bearer <key>`.
- RBAC: ClusterRole allowing get/list/watch for Ingress, Endpoints and EndpointSlice.
- Observability: basic Prometheus metrics at `/metrics`; structured JSON logs.
API contract
GET /v1/ingresses
- Description: return the list of Ingress objects that match the configured label selector.
- Auth: `Authorization: Bearer <key>` required.
- Response: 200 OK with JSON body. Conditional responses supported: `If-None-Match` (ETag) and `If-Modified-Since`.
- Body schema (top-level minimal envelope):
- Notes:
  - `metadata` is minimal (name + namespace).
  - `hosts` includes the host string and, for each host, an array of `paths` with `path` and `pathType`.
  - Each host MUST include the following metadata fields extracted from ingress labels/annotations according to the configured mapping: `servicename`, `user_group`, `description`, `contact_information`.
  - If one or more of the required label/annotation keys are missing for a host, the service sets `missing_annotations: true` for that host.
  - Extraction mapping is configurable via the ConfigMap (see Runtime configuration below) and supports reading from either labels or annotations. The service should prefer annotations over labels when both are present for the same logical field.
  - `reachability` is a tri-state string computed from the presence of Endpoints/EndpointSlice for services referenced by the ingress rules (if computation fails or information is insufficient, use `unknown`).
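To make the envelope concrete, a response for a single matching ingress might look like the sketch below. The `items` wrapper, the host value and the annotation contents are illustrative assumptions; the field names follow the notes above:

```json
{
  "items": [
    {
      "metadata": { "name": "web", "namespace": "default" },
      "hosts": [
        {
          "host": "web.example.internal",
          "servicename": "web",
          "user_group": "platform",
          "description": "internal web frontend",
          "contact_information": "platform-team@example.invalid",
          "missing_annotations": false,
          "paths": [
            { "path": "/", "pathType": "Prefix", "reachability": "reachable" }
          ]
        }
      ]
    }
  ]
}
```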
Error responses:
- 401 Unauthorized — missing/invalid API key
- 403 Forbidden — API key not allowed (if using allowlist in future)
- 500 Internal Server Error — on unexpected errors (informative JSON with `message`)
Reachability behaviour
- Source: Endpoints / EndpointSlice presence only (no active network probes).
- Per-host+path rule: for each host/path rule the service resolves the backend Service referenced by that path (using the ingress rule backend). A path is `reachable` if the referenced Service has at least one ready endpoint (Endpoints or EndpointSlice) in the cluster. If no endpoints are present: `unreachable`. If an API error or partial information occurs: `unknown`.
- Implementation notes: when parsing the ingress backend, treat missing backend service references conservatively (count as no endpoints). The code should tolerate different ingress API versions.
Runtime configuration
- ConfigMap `ingress-info-config` (namespace where the app runs) with key `labelSelector` containing a Kubernetes label-selector string, e.g. `app=external,tier in (frontend)`.
- Secret `ingress-info-keys` containing a map of named API keys: each key is an entry `client1: <base64-or-plain-key>` (mounted or read as env; the service reads the map and accepts any key value).
- ConfigMap `ingress-info-config` also contains an optional mapping block that instructs the service which label or annotation key to read for each host metadata field. Example:
The mapping accepts either `annotation.<key>` or `label.<key>` values. The service should attempt to read the configured annotation key first, then fall back to the configured label key if present. If a mapping entry is missing entirely, the corresponding host field is left empty and `missing_annotations` is set to `true` for that host.
Example ConfigMap snippet:
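A sketch of what such a ConfigMap could look like; the key names `labelSelector` and the mapping field names come from this spec, while the annotation/label keys under `mapping` are placeholder assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-info-config
data:
  labelSelector: "app=external,tier in (frontend)"
  # Optional mapping block: which annotation/label key feeds each host field.
  # "annotation.<key>" is read first, falling back to "label.<key>".
  mapping: |
    servicename: annotation.example.com/servicename
    user_group: label.example.com/user-group
    description: annotation.example.com/description
    contact_information: annotation.example.com/contact
```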
Example Secret snippet (map of keys):
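A possible shape for the keys Secret, assuming plain-text values via `stringData` (the client names and key values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ingress-info-keys
type: Opaque
stringData:
  client1: s3cr3t-key-for-client1   # example value; rotate regularly
  client2: s3cr3t-key-for-client2
```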
Auth header: `Authorization: Bearer <key>` — the service validates the key by exact match against values in the Secret.
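The exact-match validation can be sketched with stdlib-only helpers (function names are assumptions; a constant-time compare is used so timing does not leak key bytes, which goes slightly beyond the literal "exact match" in the spec):

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// parseBearer extracts the key from an Authorization header value.
// It returns "" when the header is absent or not a Bearer scheme.
func parseBearer(header string) string {
	const prefix = "Bearer "
	if !strings.HasPrefix(header, prefix) {
		return ""
	}
	return strings.TrimSpace(strings.TrimPrefix(header, prefix))
}

// validKey checks the presented key against the Secret's values
// (keys map: client name -> key value) and returns the matching
// client name for use in audit logs.
func validKey(keys map[string]string, presented string) (clientID string, ok bool) {
	if presented == "" {
		return "", false
	}
	for name, k := range keys {
		if subtle.ConstantTimeCompare([]byte(k), []byte(presented)) == 1 {
			return name, true
		}
	}
	return "", false
}

func main() {
	keys := map[string]string{"client1": "abc123"}
	id, ok := validKey(keys, parseBearer("Bearer abc123"))
	fmt.Println(id, ok) // client1 true
}
```

Returning the client name also feeds the `client_id` audit field mentioned under Observability.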
RBAC (ClusterRole)
Required verbs and resources (read-only):
Bind via a ServiceAccount and ClusterRoleBinding for the Deployment.
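A minimal ClusterRole covering the three resources (the role name is a placeholder; Ingress lives in `networking.k8s.io`, Endpoints in the core group, EndpointSlice in `discovery.k8s.io`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-info-reader
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
```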
Deployment & manifests (summary)
- Container image: based on the in-house secure-base
- Deployment: ClusterIP service (cluster internal) with a single container.
- Env/config mounts: mount ConfigMap and Secret; the app reads the ConfigMap key on startup and watches it for changes.
- Probes: liveness `/healthz` and readiness `/readyz` (readiness becomes true after the initial informer sync and config load).
Minimal Deployment hints:
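One possible Deployment shape; the image reference is a placeholder (the spec only says "in-house secure-base"), and mounting the ConfigMap as a volume is an assumption consistent with the watch-for-changes requirement:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-info
spec:
  replicas: 1
  selector:
    matchLabels: { app: ingress-info }
  template:
    metadata:
      labels: { app: ingress-info }
    spec:
      serviceAccountName: ingress-info
      containers:
        - name: ingress-info
          image: registry.example.invalid/ingress-info:latest  # placeholder
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef: { name: ingress-info-keys }
          volumeMounts:
            - name: config
              mountPath: /etc/ingress-info
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }
          readinessProbe:
            httpGet: { path: /readyz, port: 8080 }
      volumes:
        - name: config
          configMap: { name: ingress-info-config }
```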
Service:
- ClusterIP service exposes port 8080 to cluster-internal clients.
Observability
- Metrics: `/metrics` exposing:
  - `ingress_info_observed_total` (counter)
  - `ingress_info_reachable_total` (gauge)
  - `informer_sync_errors` (counter)
  - `http_requests_total` (counter) with labels `code`, `method`
  - `http_request_duration_seconds` (histogram)
- Logging: structured JSON, level=info. Include per-request audit fields: request_id (generated), client_id (optional; inferred from key name if provided), request_path, latency_ms, response_status.
Implementation blueprint (developer notes)
Contract / Types:
- `IngressItem { metadata: { name, namespace }, labels: map[string]string, annotations: map[string]string, hosts: [ { host: string, paths: [ { path: string, pathType: string } ] } ], reachability: "reachable"|"unreachable"|"unknown" }`
Runtime flow:
- Startup: read ConfigMap `ingress-info-config` key `labelSelector`. Create informer factories for Ingress, Endpoints, EndpointSlice.
- Wait for initial cache sync. Only then mark readiness true.
- Maintain an in-memory projection filtered by the configured label selector. Update projection on add/update/delete informer events.
- For each ingress, compute reachability by resolving backend Service references and checking Endpoints/EndpointSlice caches for presence of at least one endpoint.
- Serve `GET /v1/ingresses` from the in-memory projection. Support ETag / `If-None-Match` using a hash of the current projection's resourceVersions.
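The ETag derivation can be a pure function over the projection's resourceVersions. A stdlib-only sketch (function name and map shape are assumptions; sorting makes the hash independent of map iteration order):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// computeETag derives a strong ETag from the resourceVersions of the
// ingresses currently in the projection, keyed by "namespace/name".
func computeETag(resourceVersions map[string]string) string {
	keys := make([]string, 0, len(resourceVersions))
	for k := range resourceVersions {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order regardless of map iteration
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, resourceVersions[k])
	}
	sum := sha256.Sum256([]byte(b.String()))
	return fmt.Sprintf("%q", fmt.Sprintf("%x", sum[:8])) // quoted truncated digest
}

func main() {
	fmt.Println(computeETag(map[string]string{"default/web": "12345"}))
}
```

Any add/update/delete that changes a resourceVersion changes the digest, which satisfies the acceptance criterion that the ETag changes when resources change.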
Edge cases:
- If an ingress references Services in other namespaces, resolve using the namespace of the ingress unless an explicit namespace is given (support cross-namespace only if explicitly referenced). If resolution fails, mark reachability `unknown`.
- If the EndpointSlice API is not present, fall back to Endpoints.
Tests and acceptance criteria:
- Unit tests: parser for ingress -> hosts+paths; reachability computation given synthetic Endpoint/EndpointSlice objects.
- Integration test: run against kind/minikube with example ingresses and Services; verify `reachable` flips when endpoints are added/removed.
- Acceptance: GET returns correct fields; readiness only after initial sync; ETag changes when resources change.
Security notes:
- Static API keys are simple but require rotation; recommend storing keys in an ops-secret manager and rotating via CI.
- Keep the service internal (ClusterIP) and restrict which namespaces can mount the Secret.
Tech stack
The following stack has been selected for the implementation and is recorded here for developers and CI:
- Language: Go 1.25
- Kubernetes client: client-go informers (use shared informer factories for Ingress, Endpoints, EndpointSlice)
- HTTP router: huma (lightweight, OpenAPI-friendly)
- Logging: Go's `slog` for structured JSON logs
- Metrics: prometheus/client_golang
- Config: read selector from a ConfigMap via in-cluster client; use environment variables for simple flags
- Auth: validate the API key against a mounted Secret (read into an in-memory map at startup); clients present `Authorization: Bearer <key>`
- Testing: `go test` with table-driven unit tests; integration tests using `kind` to verify informer behaviour and reachability transitions
- Build/CI: GitLab CI pipeline (use your standard SFB pipeline templates) with stages: test, build, image, and deploy-test
- Manifests: example Kubernetes YAML (Deployment, Service, ConfigMap, Secret, ServiceAccount, ClusterRole, ClusterRoleBinding)
CLI (Kong)
This service exposes a small CLI used for local development, debugging and to start the long-running server. We will use Kong (github.com/alecthomas/kong) as the CLI parser to provide a declarative, env-aware and well-documented command-line surface.
Planned commands:
- `serve` — start the HTTP server. Flags (examples): `--listen :8080`, `--metrics-addr :9090`, `--kubeconfig <path>`, `--namespace <ns>`, `--label-selector <selector>`, `--resync-period 5m`, `--log-level info`.
- `version` — print build-time version and commit info injected via `-ldflags`.
- `config reload` (optional) — trigger a manual reload of the in-memory config/keys (useful for debugging).
CLI contract:
- Inputs: flags and env vars (Kong will bind common env names like `INGRESS_LISTEN`, `KUBECONFIG`, etc.).
- Output: start the server or print info; non-zero exit on fatal errors or validation failures.
Notes:
- Validate flags with Kong and prefer sensible defaults to avoid unexpected runtime failures.
- Inject version information at build time with `-ldflags` and wire it into the Kong `version` command.
- Keep the CLI surface minimal: require `serve` to actually start the server so running the binary with no args prints help and exits.
Notes:
- Prefer `client-go` informers over manual watch loops for correctness and efficiency. Use a resync interval of 5m and handle informer reconnects.
- Use `huma` for request routing and simple OpenAPI compatibility; integrate `slog` for JSON output and ensure logs include the configured audit fields.
- The GitLab CI job should run unit tests, build a multi-stage Docker image, and (optionally) run the integration test suite against a `kind` cluster.
Dependency notes:
- Add `github.com/alecthomas/kong` to the module dependencies for CLI parsing.