# Garbage Collection

## Overview

Go uses automatic garbage collection (GC) to manage memory. Understanding how it works helps you write GC-friendly code.
## Modern GC Architecture

### Concurrent Mark and Sweep

Go’s GC is concurrent and non-moving. At a high level:

1. **Mark Setup** (STW): Prepare for marking.
2. **Marking** (concurrent): Mark reachable objects.
3. **Mark Termination** (STW): Finalize marking.
4. **Sweeping** (concurrent): Reclaim memory.
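You can watch these phases in a running program: setting `GODEBUG=gctrace=1` makes the runtime print a one-line summary per cycle (heap sizes, pause times) to stderr. `./myapp` stands in for your binary, as in the examples below:

```shell
# Prints one summary line per GC cycle to stderr
GODEBUG=gctrace=1 ./myapp
```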
GC cycle (simplified):

```
Roots ---> [mark queue] ---> reachable objects marked
  |                               |
  +---- stack/globals             +-- write barrier keeps invariants

unmarked heap objects ---> sweep ---> free spans for reuse
```
### Go 1.26: Green Tea GC

Go 1.26 switched the default collector to Green Tea GC (GTGC) for lower pause times and better locality in pointer-heavy workloads:

- Better cache locality from span-aware scanning.
- Lower overhead for high-allocation services.

If needed for debugging or regression checks, you can opt out with:

```shell
GOEXPERIMENT=nogreenteagc go test ./...
```

## GOGC and Memory Limits
Think of GOGC and GOMEMLIMIT as two controls:

- `GOGC` controls when a cycle triggers, based on heap growth.
- `GOMEMLIMIT` puts a soft cap on total Go-managed memory.
| Heap growth trigger | Memory budget trigger |
|---|---|
| live heap * (1 + GOGC/100) | total Go memory near GOMEMLIMIT |
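The heap-growth trigger is simple arithmetic; a sketch of the formula (`heapGoal` is a name introduced here for illustration, not a runtime API):

```go
package main

import "fmt"

// heapGoal mirrors the trigger formula above: the next cycle starts
// when the heap reaches live * (1 + GOGC/100).
func heapGoal(liveBytes, gogc uint64) uint64 {
	return liveBytes + liveBytes*gogc/100
}

func main() {
	live := uint64(512 << 20) // 512 MiB live after the last cycle
	fmt.Printf("GOGC=100: next GC near %d MiB\n", heapGoal(live, 100)>>20) // 1024 MiB
	fmt.Printf("GOGC=50:  next GC near %d MiB\n", heapGoal(live, 50)>>20)  // 768 MiB
}
```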
### Tuning with GOGC

```shell
# Default: GC when the heap doubles (100% growth)
GOGC=100 ./myapp

# More aggressive: GC at 50% growth (uses less memory, more CPU)
GOGC=50 ./myapp

# Less frequent: GC at 200% growth (less CPU, more memory)
GOGC=200 ./myapp

# Disable GC entirely (not recommended)
GOGC=off ./myapp
```

### Tuning with GOMEMLIMIT
```shell
# Keep the process around 2 GiB of Go-managed memory
GOMEMLIMIT=2GiB ./myapp
```

Programmatic control:
import "runtime/debug"
func init() {
debug.SetMemoryLimit(2 << 30) // 2 GiB
}Memory Stats
```go
var m runtime.MemStats
runtime.ReadMemStats(&m) // briefly stops the world; avoid calling in hot paths
fmt.Printf("Alloc: %d MB\n", m.Alloc/1024/1024)
fmt.Printf("Total Alloc: %d MB\n", m.TotalAlloc/1024/1024)
fmt.Printf("Heap Objects: %d\n", m.HeapObjects)
fmt.Printf("GC Cycles: %d\n", m.NumGC)
```

A lower-overhead option for production telemetry is the `runtime/metrics` package:
import "runtime/metrics"
samples := []metrics.Sample{
{Name: "/gc/heap/live:bytes"},
{Name: "/gc/heap/goal:bytes"},
}
metrics.Read(samples)Reducing GC Pressure
### 1. Reduce Allocations

```go
// Bad: allocates a fresh buffer on each call
func getBuffer() []byte {
	return make([]byte, 1024)
}
```

```go
// Good: reuse buffers with sync.Pool
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 1024) },
}

func getBuffer() []byte {
	return bufPool.Get().([]byte) // contents may be stale; overwrite before reading
}

func putBuffer(b []byte) {
	bufPool.Put(b)
}
```

### 2. Preallocate
```go
result := make([]int, 0, expectedSize) // single allocation; append won't regrow
```

### 3. Use Value Types
```go
// More allocations: every element is a separate heap object
type Points []*Point
```

```go
// Fewer allocations: one contiguous backing array
type Points []Point
```

### 4. Avoid String Concatenation in Loops
```go
// Bad: s += part copies the whole string on every iteration
s := ""
for _, part := range parts {
	s += part
}
```

```go
// Good: strings.Builder amortizes growth across iterations
var b strings.Builder
for _, part := range parts {
	b.WriteString(part)
}
s := b.String()
```

## Profiling
```shell
# CPU profile
go test -cpuprofile=cpu.out
go tool pprof cpu.out

# Memory profile
go test -memprofile=mem.out
go tool pprof mem.out

# View cumulative allocations rather than live heap
go tool pprof -alloc_space mem.out
```

## Summary
| Optimization | Technique |
|---|---|
| Reuse memory | `sync.Pool` |
| Preallocate | `make([]T, 0, cap)` |
| Values vs. pointers | Use values for small types |
| String building | `strings.Builder` |
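Several of these techniques compose. A small sketch that combines preallocation and `strings.Builder` via `Grow` (the `join` helper is introduced here for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// join concatenates parts using a builder preallocated to the exact
// final size, combining the "preallocate" and "string building" rows.
func join(parts []string) string {
	total := 0
	for _, p := range parts {
		total += len(p)
	}
	var b strings.Builder
	b.Grow(total) // one upfront allocation sized for the final string
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	fmt.Println(join([]string{"go", " ", "gc"})) // prints "go gc"
}
```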