Project 15: TUI System Monitor

Create a live terminal UI for CPU, memory, and goroutine stats using Bubble Tea + gopsutil.

ticker -> collect metrics -> update model -> render TUI frame

Setup

go mod init example.com/gotop
go get github.com/charmbracelet/bubbletea@latest
go get github.com/charmbracelet/lipgloss@latest
go get github.com/shirou/gopsutil/v4/cpu@latest
go get github.com/shirou/gopsutil/v4/mem@latest

Full main.go

package main

import (
    "fmt"
    "runtime"
    "time"

    tea "github.com/charmbracelet/bubbletea"
    "github.com/charmbracelet/lipgloss"
    "github.com/shirou/gopsutil/v4/cpu"
    "github.com/shirou/gopsutil/v4/mem"
)

// tickMsg signals that one refresh interval has elapsed.
type tickMsg struct{}

type sample struct {
    cpuPct   float64
    memPct   float64
    gorCount int
    ts       time.Time
}

type model struct {
    last sample
    err  error
}

// tickCmd asks Bubble Tea to deliver a tickMsg after one second,
// driving the refresh loop.
func tickCmd() tea.Cmd {
    return tea.Tick(1*time.Second, func(time.Time) tea.Msg { return tickMsg{} })
}

func (m model) Init() tea.Cmd { return tickCmd() }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
    switch msg := msg.(type) {
    case tea.KeyMsg:
        if msg.String() == "q" || msg.String() == "ctrl+c" {
            return m, tea.Quit
        }
    case tickMsg:
        // Interval 0 reports usage since the previous call, so the
        // first tick may read 0% until a second sample exists.
        cp, err := cpu.Percent(0, false)
        if err != nil || len(cp) == 0 {
            m.err = err
            return m, tickCmd()
        }
        vm, err := mem.VirtualMemory()
        if err != nil {
            m.err = err
            return m, tickCmd()
        }
        m.last = sample{
            cpuPct:   cp[0],
            memPct:   vm.UsedPercent,
            gorCount: runtime.NumGoroutine(), // goroutines in this process, not system-wide
            ts:       time.Now(),
        }
        m.err = nil
        return m, tickCmd()
    }
    return m, nil
}

func (m model) View() string {
    title := lipgloss.NewStyle().Bold(true).Foreground(lipgloss.Color("86")).Render("GoTop - System Monitor")
    if m.err != nil {
        return fmt.Sprintf("%s\n\nerror: %v\n\npress q to quit\n", title, m.err)
    }
    return fmt.Sprintf(
        "%s\n\nCPU:        %6.2f%%\nMemory:     %6.2f%%\nGoroutines: %d\nUpdated:    %s\n\npress q to quit\n",
        title,
        m.last.cpuPct,
        m.last.memPct,
        m.last.gorCount,
        m.last.ts.Format(time.TimeOnly),
    )
}

func main() {
    p := tea.NewProgram(model{})
    if _, err := p.Run(); err != nil {
        panic(err)
    }
}

Run

go run .
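gopsutil reads system-wide stats. As a quick sanity check, or when the dependency is unavailable, the standard library can report this process's own heap usage and goroutine count. A minimal stdlib-only sketch (memSelf is a hypothetical helper, not part of the listing above):

```go
package main

import (
	"fmt"
	"runtime"
)

// memSelf reports this process's heap usage and goroutine count via the
// standard library, a dependency-free stand-in for gopsutil's system-wide numbers.
func memSelf() (heapMiB float64, goroutines int) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	return float64(ms.HeapAlloc) / (1 << 20), runtime.NumGoroutine()
}

func main() {
	heap, n := memSelf()
	fmt.Printf("heap: %.2f MiB, goroutines: %d\n", heap, n)
}
```

Note the scope difference: runtime.MemStats covers only this Go process, whereas mem.VirtualMemory covers the whole host.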

Step-by-Step Explanation

  1. Init returns tickCmd, scheduling the first tickMsg one second in.
  2. On each tickMsg, Update samples CPU, memory, and goroutine counts.
  3. The fresh sample (or any error) is stored in the model, and another tick is scheduled.
  4. View renders the latest sample with lipgloss styling.
  5. Pressing q or ctrl+c sends a KeyMsg, and Update returns tea.Quit.
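View prints raw percentages; a common next step is to render them as gauges. A dependency-free sketch of such a helper (bar is hypothetical, not part of the listing above):

```go
package main

import (
	"fmt"
	"strings"
)

// bar renders pct (0-100) as a fixed-width ASCII gauge, e.g. [#####-----].
// Out-of-range values are clamped so the gauge never over- or underflows.
func bar(pct float64, width int) string {
	if pct < 0 {
		pct = 0
	}
	if pct > 100 {
		pct = 100
	}
	filled := int(pct / 100 * float64(width))
	return "[" + strings.Repeat("#", filled) + strings.Repeat("-", width-filled) + "]"
}

func main() {
	fmt.Println(bar(42.5, 20)) // prints [########------------]
}
```

Inside View, something like `bar(m.last.cpuPct, 30)` could replace the bare number, and lipgloss styling applies to the returned string as usual.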

Code Anatomy

  • model holds the most recent sample plus any collection error.
  • tickCmd and tickMsg form the one-second refresh loop.
  • Update collects metrics; View formats them for the terminal.
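The model keeps only the latest sample; to show trends (say, a CPU sparkline), it could keep a short history instead. A stdlib-only sketch of such a buffer (the history type is hypothetical, not in the listing above):

```go
package main

import "fmt"

// history keeps the most recent max readings, dropping the oldest,
// so View could later render them as a sparkline.
type history struct {
	max  int
	data []float64
}

func (h *history) push(v float64) {
	h.data = append(h.data, v)
	if len(h.data) > h.max {
		h.data = h.data[len(h.data)-h.max:]
	}
}

func main() {
	h := &history{max: 3}
	for _, v := range []float64{10, 20, 30, 40} {
		h.push(v)
	}
	fmt.Println(h.data) // prints [20 30 40]
}
```

Wiring it in would mean adding a history field to model and calling push from the tickMsg branch of Update.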

Learning Goals

  • Structure a TUI around the Elm architecture: model, Update, View.
  • Drive periodic work with tea.Tick instead of raw goroutines.
  • Read host metrics portably with gopsutil and style output with lipgloss.