Building a VPS Metrics Dashboard

A lightweight, self-hosted VPS monitoring solution with Python, Docker, and a web dashboard.

Introduction

Monitoring your VPS shouldn't require complex setups or expensive services. I wanted something simple, lightweight, and self-hosted that would give me real-time visibility into my servers without the overhead of Prometheus, Grafana, or cloud-hosted solutions.

So I built metrics-dashboard - a complete VPS monitoring solution that runs as a pair of lightweight Docker containers (a Python agent plus a Caddy web server) and provides a beautiful web frontend.

What It Does

The metrics-dashboard consists of two components:

  1. Python Metrics Agent - A high-performance system metrics collector that reads directly from /proc for accurate, real-time data
  2. Web Dashboard - A responsive frontend that displays metrics with 2-second auto-refresh
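Reading /proc directly is cheap and dependency-free. As an illustration of the idea (a simplified sketch, not the project's actual agent code), overall CPU usage can be derived from two samples of /proc/stat:

```python
import time

def parse_cpu_line(line):
    """Split a /proc/stat 'cpu' line into (total_ticks, idle_ticks)."""
    fields = [int(x) for x in line.split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait columns
    return sum(fields), idle

def read_cpu_times():
    """Return (total, idle) tick counts from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu "):
                return parse_cpu_line(line)
    raise RuntimeError("no aggregate 'cpu' line in /proc/stat")

def cpu_percent(interval=1.0):
    """Overall CPU usage over `interval` seconds, computed from tick deltas."""
    total1, idle1 = read_cpu_times()
    time.sleep(interval)
    total2, idle2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / max(total2 - total1, 1)
```

The per-core breakdown works the same way, using the cpu0, cpu1, … lines instead of the aggregate one.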

Features

The Python agent collects:

  • CPU - Overall usage and per-core breakdown
  • Memory - RAM and swap usage with buffers/cache breakdown
  • Disk - Filesystem usage and I/O rates per device
  • Network - Interface stats with bandwidth rates
  • Docker - Running container status
  • Processes - Top 10 by CPU and memory
  • Temperatures - CPU and thermal zone readings
  • Load Averages - 1, 5, and 15 minute averages
  • Logs - Recent system events (Docker, Caddy, SSH)

All metrics are exposed via a simple JSON API with optional token authentication.
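As a rough sketch of how such an endpoint can work (an illustration, not the project's actual agent; the token value and sample payload are placeholders), a stdlib-only handler might check the query-string token before returning JSON:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

METRICS_TOKEN = "YOUR_SECRET_TOKEN"  # placeholder; set your own secret

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        token = parse_qs(url.query).get("token", [None])[0]
        if url.path != "/metrics" or token != METRICS_TOKEN:
            self.send_response(403)  # wrong path or missing/bad token
            self.end_headers()
            return
        body = json.dumps({"cpu": {"percent": 15.2}}).encode()  # stand-in payload
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    """Block forever, answering /metrics?token=... requests."""
    HTTPServer(("0.0.0.0", port), MetricsHandler).serve_forever()
```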

Quick Start

The easiest way to get started is with the standalone setup:

# Clone the repository
git clone https://github.com/bilawalriaz/metrics-dashboard.git
cd metrics-dashboard/deployment

# Run the setup script
./setup.sh

That's it. The dashboard will be available at http://localhost:8080.

Architecture

┌─────────────────────────────────────────────────────────┐
│                         Browser                         │
└─────────────────────────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                     Caddy (Docker)                      │
│  - Serves frontend HTML files                           │
│  - Proxies /api/metrics?token=SECRET to metrics-agent   │
└─────────────────────────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│            Metrics Agent (Docker container)             │
│  - Python agent exposing JSON at /metrics               │
│  - Reads from /proc for accurate system metrics         │
│  - Docker socket access for container status            │
└─────────────────────────────────────────────────────────┘

Manual Deployment

If you prefer a manual setup or want to integrate the dashboard into an existing docker-compose stack:

1. Add to docker-compose.yml

services:
  metrics-agent:
    build:
      context: ./agent
      dockerfile: Dockerfile
    container_name: metrics-agent
    restart: unless-stopped
    volumes:
      - /proc:/proc:ro
      - /sys:/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    group_add:
      - "996"  # Docker socket group GID; find yours with: stat -c '%g' /var/run/docker.sock
    networks:
      - caddy_net

  metrics-dashboard:
    image: caddy:latest
    container_name: metrics-dashboard
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./frontend:/srv/frontend:ro
    networks:
      - caddy_net

networks:
  caddy_net:
    driver: bridge

2. Configure Caddy

# Serve the frontend and proxy the metrics API
:8080 {
    root * /srv/frontend
    file_server

    # Token-authenticated metrics API; requests without the
    # correct token fall through to the file server and 404
    @metrics {
        path /api/metrics*
        query token=YOUR_SECRET_TOKEN
    }
    handle @metrics {
        # Map /api/metrics to the agent's /metrics endpoint
        uri strip_prefix /api
        reverse_proxy metrics-agent:8000
    }
}

3. Start and Access

docker-compose up -d

Dashboard: http://localhost:8080
API: http://localhost:8080/api/metrics?token=YOUR_SECRET_TOKEN

API Response

The /metrics endpoint returns comprehensive system data:

{
  "hostname": "vps-name",
  "timestamp": "2024-01-23T17:36:07Z",
  "uptime": {
    "uptime_seconds": 1234567,
    "boot_time": 1704796800
  },
  "cpu": {
    "percent": 15.2,
    "cores": [
      {"id": 0, "percent": 12.5},
      {"id": 1, "percent": 8.3}
    ],
    "context_switches_sec": 12345.6
  },
  "memory": {
    "total": 8589934592,
    "used": 4294967296,
    "percent": 50.0
  },
  "filesystems": [
    {
      "mount": "/",
      "total": 107374182400,
      "used": 53687091200,
      "percent": 50.0
    }
  ],
  "network": [
    {
      "interface": "eth0",
      "rx_bytes_sec": 1024000,
      "tx_bytes_sec": 512000
    }
  ],
  "containers": [
    {"name": "caddy", "status": "running", "image": "caddy"}
  ]
}
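Because the payload is plain JSON, scripting against it needs nothing beyond the standard library. A hypothetical client sketch (the URL and token are placeholders for your own deployment):

```python
import json
import urllib.request

API_URL = "http://localhost:8080/api/metrics?token=YOUR_SECRET_TOKEN"  # placeholder

def fetch_metrics(url=API_URL):
    """Fetch one metrics snapshot as a dict."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def summary(m):
    """One-line summary of a metrics payload."""
    return (f"{m['hostname']}: cpu {m['cpu']['percent']:.1f}%  "
            f"mem {m['memory']['percent']:.1f}%")
```

Something like `print(summary(fetch_metrics()))` then works from a cron job or shell loop.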

Why This Approach

Benefits

  • Zero external dependencies - No databases, no message queues, no complex services
  • Lightweight - Single Python container, minimal resource usage
  • Secure - Token-based API authentication, read-only container access
  • Real-time - 2-second refresh with delta-based rate calculations
  • Beautiful - Clean, modern frontend design
  • Easy deployment - Single docker-compose up command
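The "delta-based rate calculations" deserve a note: /proc exposes monotonically increasing counters (bytes, sectors, context switches), not rates, so a rate has to be computed as (counter_now - counter_before) / elapsed. A minimal sketch of the idea (not the project's actual code):

```python
import time

class RateTracker:
    """Turn a monotonically increasing counter into a per-second rate."""

    def __init__(self):
        self.last_value = None
        self.last_time = None

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        rate = None
        if self.last_value is not None:
            elapsed = now - self.last_time
            if elapsed > 0:
                rate = (value - self.last_value) / elapsed
        self.last_value, self.last_time = value, now
        return rate  # None on the first sample

rx = RateTracker()
rx.update(1_000_000, now=0.0)         # first sample: no rate yet
print(rx.update(3_048_000, now=2.0))  # 1024000.0 bytes/sec over the 2s window
```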

Trade-offs

  • Single-server monitoring (not distributed)
  • No historical data storage (real-time only)
  • Basic alerting (can be added via external tools)

For my use case, these are features, not bugs. I wanted something simple that shows me what's happening now, not a complex time-series database.

Customization

The frontend is just HTML, CSS, and JavaScript. Edit frontend/index.html to customize:

  • Refresh interval
  • Color schemes
  • Layout and components
  • Additional charts or visualizations

Production Tips

If deploying to production:

  1. Change the default API token in the Caddyfile
  2. Use HTTPS with Caddy's automatic Let's Encrypt
  3. Restrict network access - don't expose the API publicly
  4. Monitor the agent - add health checks and restart policies
  5. Secure the Docker socket - the agent only needs read-only access
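For tip 4, a health check can live directly in the compose file. A sketch against the metrics-agent service above (the internal port and /metrics path follow the Caddy config earlier; since the agent image is Python-based, this probes with python rather than assuming curl is installed):

```yaml
  metrics-agent:
    # ...existing configuration...
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/metrics')"]
      interval: 30s
      timeout: 5s
      retries: 3
```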

Live Demo

See it in action at agent.hyperflash.uk.

Conclusion

Sometimes you don't need a complex monitoring stack. A well-designed single-purpose tool can give you better visibility with less overhead.

The metrics-dashboard is intentionally simple - it does one thing well: show you what's happening on your VPS right now.