Worker Pools: Controlling Chaos

Go makes it easy to spawn 100,000 goroutines. The OS makes it easy to crash when you open 100,000 file descriptors.

Unbounded concurrency is a bug. Bound your parallelism to match your actual resource limits: CPU cores, memory, and network or database connections.

Pattern 1: The Semaphore (Simple Limiter)

The easiest way to limit concurrency is a buffered channel (semaphore).

func ProcessItems(items []string) {
    sem := make(chan struct{}, 10) // Limit to 10 concurrent
    var wg sync.WaitGroup

    for _, item := range items {
        wg.Add(1)
        go func(val string) {
            defer wg.Done()

            sem <- struct{}{}        // Acquire token (blocks if full)
            defer func() { <-sem }() // Release token

            DoWork(val)
        }(item)
    }
    wg.Wait()
}

Pros: Trivial to implement. Cons: Still spawns one goroutine per item (memory cost); the semaphore only blocks them from starting the heavy work.

Pattern 2: The Worker Pool (Fixed Goroutines)

Instead of spawning a goroutine per item, spawn a fixed number of workers (e.g., runtime.NumCPU()) and feed them work.

func WorkerPool(jobs <-chan Job, results chan<- Result, workers int) {
    var wg sync.WaitGroup

    // Spawn fixed workers
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobs {
                results <- process(job)
            }
        }(i)
    }

    wg.Wait()
    close(results) // Close results once all workers have exited
}

func main() {
    jobs := make(chan Job, 100)
    results := make(chan Result, 100)

    go func() {
        // Enqueue jobs
        jobs <- Job{...}
        close(jobs) // Important: Tell workers no more jobs coming
    }()

    // Start the pool in a goroutine so main can drain results below;
    // calling it synchronously can deadlock once the results buffer fills.
    go WorkerPool(jobs, results, 5)

    // Consume results
    for res := range results {
        fmt.Println(res)
    }
}

Pros: Fixed memory footprint; no per-item goroutine overhead. Cons: Slightly more code.

The Modern Way: errgroup with Limits

The golang.org/x/sync/errgroup package handles error propagation and limits.

g := new(errgroup.Group)
g.SetLimit(10) // SetLimit was added in golang.org/x/sync v0.1.0

for _, item := range items {
    item := item // capture loop variable; redundant on Go 1.22+
    g.Go(func() error {
        return process(item)
    })
}

if err := g.Wait(); err != nil {
    return err
}

This is the preferred modern approach. It handles the wait group, error propagation (when created with errgroup.WithContext, the first error cancels the shared context), and the concurrency limit in one standard package.

Summary

  • Never use go func() inside a loop without a limit mechanism.
  • Use errgroup.SetLimit for 90% of cases.
  • Use manual Worker Patterns for complex, long-lived background processing queues.