Devcontainer Cleanup

Date: 2026-02-18
Issue: claw/research#5
Status: Verified findings


Problem Statement

A small team shares a Linux development server using VSCode Remote Development with devcontainers. The containers run indefinitely, so memory usage gradually grows until the OOM killer activates and performance degrades.

Research Question: What options exist for reaping long-running devcontainers while keeping resource usage within bounds?


Findings

Finding 1: DevContainer Shutdown Action Configuration

DevContainer behavior when VSCode closes is controlled by the shutdownAction property in devcontainer.json.

| Value | Behavior | Best For |
|-------|----------|----------|
| none | Container continues running | Persistent dev environments |
| stopContainer | Stops the container on close | Single-container projects |
| stopCompose | Stops Docker Compose services | Multi-service projects |

Current Defaults:

  • Image/Dockerfile-based: defaults to stopContainer
  • Docker Compose-based: defaults to stopCompose

Verification:

  • VSCode documentation confirms these options [1]
  • Multiple GitHub issues confirm the behavior, though some report inconsistent results across VSCode versions [2][3]
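
For the shared-server scenario, setting the value explicitly makes the intent visible even where it matches the default. A minimal sketch (name and image are placeholders):

```json
{
  "name": "Ephemeral Dev Environment",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "shutdownAction": "stopContainer"
}
```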

Finding 2: Resource Limits via runArgs

Memory and CPU constraints can be enforced at container creation time via the runArgs property in devcontainer.json.

Configuration Example:

```json
{
  "name": "Limited Dev Environment",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "runArgs": [
    "--memory=4g",
    "--memory-swap=4g",
    "--cpus=2",
    "--pids-limit=1000"
  ]
}
```

Available Limits:

| Flag | Description | Verified |
|------|-------------|----------|
| --memory | Hard memory limit | ✅ Via Docker cgroups [4] |
| --memory-swap | Total memory + swap limit | ✅ Docker docs [5] |
| --cpus | CPU quota (e.g., 2.0 = 2 cores) | ✅ Stack Overflow [6] |
| --pids-limit | Process ID limit | ✅ Docker docs [5] |
| --shm-size | Shared memory size | ✅ Community usage [7] |

Warning: Docker enforces these limits via Linux cgroups (v1 or v2). Soft limits (e.g., --memory-reservation) may not prevent OOM kills when the kernel is under memory pressure [5].
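
Once a container is running, the applied limit can be cross-checked against `docker inspect --format '{{.HostConfig.Memory}}' <container>`, which reports the hard memory limit in bytes (0 meaning unlimited). A sketch with a small conversion helper for comparing against the "4g"-style value in the config (helper name and values are illustrative):

```shell
#!/bin/sh
# Convert a human-readable limit like "4g" or "512m" to bytes, so it can
# be compared against the byte count reported by `docker inspect`.
to_bytes() {
  case "$1" in
    *g) echo $(( ${1%g} * 1024 * 1024 * 1024 )) ;;
    *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

# Example: the --memory=4g limit from the config above
to_bytes 4g   # prints 4294967296
```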


Finding 3: Docker-Level Automatic Cleanup

Server-side cleanup can be automated using Docker’s built-in prune commands combined with scheduling.

Option A: Periodic Container Prune (Cron)

```shell
# Crontab entry: remove stopped containers older than 48 hours, daily at 03:00
0 3 * * * /usr/bin/docker container prune --filter "until=48h" -f >> /var/log/docker-cleanup.log 2>&1
```

Effect: Removes stopped containers created more than 48 hours ago [8]. Note that the until filter keys on creation time, not the time the container stopped.
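
Container pruning alone leaves unused images behind; a companion crontab entry can prune those on a weekly cadence (schedule and retention window are assumptions):

```shell
# Remove unused images older than a week (Sundays at 03:30)
30 3 * * 0 /usr/bin/docker image prune -af --filter "until=168h" >> /var/log/docker-cleanup.log 2>&1
```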

Option B: Systemd Timer for Cleanup

```ini
# /etc/systemd/system/docker-cleanup.service
[Unit]
Description=Docker Cleanup Service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f --filter "until=24h"
```
```ini
# /etc/systemd/system/docker-cleanup.timer
[Unit]
Description=Run Docker Cleanup Daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Effect: Triggers cleanup daily via systemd [9][10].

Option C: Label-Based Selective Cleanup

```shell
# Remove only stopped containers carrying a specific label, created more than 4 hours ago
docker container prune -f --filter "label=devcontainer=true" --filter "until=4h"
```

Recommendation: Use labels to identify devcontainers and prune only those older than a defined idle threshold.
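
Note that containers are not labelled devcontainer=true by default; the label used in the filter above would have to be attached explicitly, e.g. via runArgs (the label name is this document's convention, not a standard):

```json
{
  "runArgs": ["--label=devcontainer=true"]
}
```

Alternatively, VSCode attaches its own labels to the containers it creates (such as devcontainer.local_folder), which can serve as a ready-made filter.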


Finding 4: Idle Detection and Auto-Stop

DevContainers themselves have no built-in idle timeout, but external tooling can provide this.

Approach: Process Activity Monitoring

A script can monitor container activity and stop idle containers:

```shell
#!/bin/bash
# Stop labelled devcontainers showing <1% CPU usage in a single
# `docker stats` snapshot. This is a point-in-time check, not a true
# 5-minute average; for an average, sample repeatedly or read cgroup
# CPU counters. Requires bc.

for container in $(docker ps --filter "label=devcontainer=true" --format "{{.ID}}"); do
  cpu_usage=$(docker stats --no-stream --format "{{.CPUPerc}}" "$container" | tr -d '%')
  if (( $(echo "$cpu_usage < 1.0" | bc -l) )); then
    echo "Stopping idle container: $container"
    docker stop "$container"
  fi
done
```

Note: This is a custom implementation; no native DevContainer idle timeout exists [11].
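
If adopted, the script above could be installed somewhere on the server's PATH and scheduled via cron (the path and interval here are assumptions):

```shell
# Run the idle check every 30 minutes
*/30 * * * * /usr/local/bin/stop-idle-devcontainers.sh >> /var/log/devcontainer-idle.log 2>&1
```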


Finding 5: Per-User Resource Accounting

For multi-user shared servers, consider Docker user namespaces or cgroups v2 for per-user resource limits.

| Approach | Description |
|----------|-------------|
| Docker user namespaces | Isolate containers per UID range |
| cgroups v2 (Linux 5.2+) | Per-user resource limits |
| systemd slice limits | Apply resource caps per user session |
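
Of the three, systemd slice limits are the quickest to prototype: every login session for UID N runs under user-N.slice, and a prefix drop-in caps all of them at once (values are illustrative):

```ini
# /etc/systemd/system/user-.slice.d/50-limits.conf
# Applies to every user-<UID>.slice
[Slice]
MemoryMax=8G
CPUQuota=200%
```

Caveat: containers launched through the system Docker daemon run under system.slice/docker.service, not the user's slice, so this approach only constrains rootless Docker or processes run directly in the session.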

Recommendations

Immediate Solutions (Low Effort)

  1. Set shutdownAction: stopContainer in devcontainer.json for non-persistent work
  2. Add resource limits via runArgs to cap memory per container
  3. Schedule nightly prune of stopped containers older than 24 hours

Comprehensive Solution (Medium Effort)

  1. Server-side systemd timer that:

    • Stops containers idle for >4 hours
    • Removes stopped containers after 24 hours
    • Prunes unused images weekly
  2. DevContainer template with resource limits pre-configured:

    ```json
    {
      "runArgs": ["--memory=4g", "--cpus=2"],
      "shutdownAction": "stopContainer"
    }
    ```
  3. Monitoring dashboard showing per-container resource usage

Long-term Solution (Higher Effort)

  1. Migrate to Kubernetes-based dev environments (e.g., Gitpod, DevPod) with built-in resource quotas
  2. Implement cgroups v2 with per-user limits on the shared server

References


Verification Summary

| Finding | Source Type | Status |
|---------|-------------|--------|
| shutdownAction options | Official docs | ✅ Verified |
| Resource limits via runArgs | Community + docs | ✅ Verified |
| Docker prune commands | Official docs | ✅ Verified |
| cgroups enforcement | Docker docs | ✅ Verified |
| Idle auto-stop | Not native | ⚠️ Requires custom implementation |

Document Revision: 1.0
Next Steps: Implement recommendations per team preference (immediate vs comprehensive approach)