Chapter 3: Basic ContainerLab Operations

Learning Objectives

By the end of this chapter, you will be able to:

  • Understand ContainerLab topology file structure and syntax
  • Create and deploy basic network topologies
  • Use essential ContainerLab CLI commands
  • Manage lab lifecycle effectively
  • Navigate and interact with deployed containers

Understanding Topology Files

YAML Basics for ContainerLab

ContainerLab uses YAML (YAML Ain’t Markup Language) for topology definitions. YAML is human-readable and uses indentation to represent data structure.

Key YAML Concepts

# Comments start with hash
name: my-lab                    # String value
version: 1.0                    # Number value
enabled: true                   # Boolean value

# Lists (arrays)
items:
  - item1
  - item2
  - item3

# Dictionaries (objects)
node:
  name: router1
  type: cisco
  image: cisco/iosxe:latest
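If you ever script lab generation, it helps to see that this YAML maps one-to-one onto ordinary nested data structures. A quick illustration in Python (structure only; nothing here touches ContainerLab itself):

```python
# The YAML above corresponds to this nested structure of
# scalars, lists, and dictionaries (shown here in Python).
lab = {
    "name": "my-lab",        # string
    "version": 1.0,          # number
    "enabled": True,         # boolean
    "items": ["item1", "item2", "item3"],   # list
    "node": {                               # dictionary
        "name": "router1",
        "type": "cisco",
        "image": "cisco/iosxe:latest",
    },
}

print(lab["node"]["name"])   # router1
print(len(lab["items"]))     # 3
```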

Basic Topology Structure

Every ContainerLab topology file contains these main sections. Note that images for commercial network operating systems are not available on public registries; you build or import them locally, so the image tags used throughout this chapter are examples:

name: lab-name                  # Lab identifier
prefix: custom                  # Optional prefix for container names
mgmt:                          # Management network configuration
  network: custom-mgmt         # Custom management network
  ipv4-subnet: 172.20.20.0/24  # Management subnet

topology:
  nodes:                       # Network devices definition
    node1:
      kind: cisco_csr1000v
      image: cisco/iosxe:latest
    node2:
      kind: arista_ceos
      image: arista/ceos:latest

  links:                       # Connections between nodes
    - endpoints: ["node1:eth1", "node2:eth1"]
    - endpoints: ["node1:eth2", "node2:eth2"]
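Because the file is just structured text, small labs can be generated from a script instead of written by hand. A minimal sketch in Python using plain string formatting (the `two_node_topology` helper and its node names are illustrative, not part of ContainerLab):

```python
# Emit a minimal two-node topology file; plain string formatting,
# so no YAML library is needed.
def two_node_topology(name: str, kind1: str, img1: str,
                      kind2: str, img2: str) -> str:
    return (
        f"name: {name}\n"
        "topology:\n"
        "  nodes:\n"
        f"    node1:\n      kind: {kind1}\n      image: {img1}\n"
        f"    node2:\n      kind: {kind2}\n      image: {img2}\n"
        "  links:\n"
        '    - endpoints: ["node1:eth1", "node2:eth1"]\n'
    )

print(two_node_topology("demo", "arista_ceos", "arista/ceos:latest",
                        "linux", "alpine:latest"))
```

Redirect the output to a `.yml` file and deploy it with `containerlab deploy` as usual.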

Node Configuration

Basic Node Properties

topology:
  nodes:
    router1:
      kind: cisco_csr1000v       # Device type (must be a kind ContainerLab supports)
      image: cisco/iosxe:latest  # Container image
      mgmt-ipv4: 172.20.20.10   # Management IP
      ports:                     # Port mappings (host:container)
        - "2222:22"              # SSH access via host port 2222
        - "8080:80"              # HTTP access via host port 8080
      env:                       # Environment variables
        HOSTNAME: router1
      labels:                    # Metadata labels
        role: edge-router
        location: site-a

Advanced Node Configuration

topology:
  nodes:
    core-switch:
      kind: arista_ceos
      image: arista/ceos:4.28.3M
      mgmt-ipv4: 172.20.20.20
      startup-config: configs/core-switch.cfg
      binds:                     # Volume mounts
        - /host/path:/container/path
      sysctls:                   # Kernel parameters
        net.ipv4.ip_forward: 1
      cpu: 2                     # CPU limit
      memory: 4GB               # Memory limit

Essential CLI Commands

Lab Deployment Commands

Deploy a Lab

# Deploy lab from topology file
containerlab deploy -t topology.yml

# Deploy, overriding the lab name in the file
containerlab deploy -t topology.yml --name custom-lab

# Redeploy a running lab from scratch (destroys it first)
containerlab deploy -t topology.yml --reconfigure

# Deploy with verbose logging for troubleshooting
containerlab deploy -t topology.yml --log-level debug

Destroy a Lab

# Destroy lab
containerlab destroy -t topology.yml

# Destroy by name
containerlab destroy --name lab-name

# Destroy lab and also remove the lab directory with its artifacts
containerlab destroy -t topology.yml --cleanup

# Destroy all labs
containerlab destroy --all

Lab Inspection Commands

Inspect Lab Status

# Inspect specific lab
containerlab inspect -t topology.yml

# Inspect by name
containerlab inspect --name lab-name

# Show all running labs
containerlab inspect --all

# Output in different formats
containerlab inspect -t topology.yml --format table
containerlab inspect -t topology.yml --format json

Graph Generation

# Serve an interactive topology graph over HTTP (default port 50080)
containerlab graph -t topology.yml

# Generate a Graphviz DOT file instead of serving the web UI
containerlab graph -t topology.yml --dot

Configuration Management

Save Configurations

# Save running configurations of all nodes; files are written
# into the lab directory (clab-<lab-name>/)
containerlab save -t topology.yml

Load Configurations

# Startup configurations referenced via startup-config are applied at
# deploy time; to re-apply them to a running lab, redeploy it
containerlab deploy -t topology.yml --reconfigure

Creating Your First Lab

Simple Two-Router Topology

Let’s create a basic lab with two Cisco routers:

# File: basic-routers.yml
name: basic-routers
prefix: lab

topology:
  nodes:
    r1:
      kind: cisco_csr1000v
      image: cisco/iosxe:latest    # locally built/imported image
      mgmt-ipv4: 172.20.20.10

    r2:
      kind: cisco_csr1000v
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.11

  links:
    - endpoints: ["r1:eth1", "r2:eth1"]   # eth1 maps to the first data interface

Deploy and test:

# Deploy the lab
containerlab deploy -t basic-routers.yml

# Check lab status
containerlab inspect -t basic-routers.yml

# Connect to router 1 (name = prefix + lab name + node name)
ssh admin@lab-basic-routers-r1

# From the router CLI, check interface status
show ip interface brief

# Exit the router CLI
exit

Multi-Vendor Topology

Create a lab with different vendor devices:

# File: multi-vendor.yml
name: multi-vendor
prefix: mv

topology:
  nodes:
    cisco-router:
      kind: cisco_csr1000v
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.10

    arista-switch:
      kind: arista_ceos
      image: arista/ceos:latest
      mgmt-ipv4: 172.20.20.11

    linux-host:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.12

  links:
    - endpoints: ["cisco-router:eth1", "arista-switch:eth1"]
    - endpoints: ["arista-switch:eth2", "linux-host:eth1"]

Campus Network Topology

A more complex example representing a small campus:

# File: campus-network.yml
name: campus-network
prefix: campus

mgmt:
  network: campus-mgmt
  ipv4-subnet: 192.168.100.0/24

topology:
  nodes:
    # Core Layer
    core-sw1:
      kind: arista_ceos
      image: arista/ceos:latest
      mgmt-ipv4: 192.168.100.10
      labels:
        layer: core

    core-sw2:
      kind: arista_ceos
      image: arista/ceos:latest
      mgmt-ipv4: 192.168.100.11
      labels:
        layer: core

    # Distribution Layer
    dist-sw1:
      kind: cisco_cat9kv
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.20
      labels:
        layer: distribution

    dist-sw2:
      kind: cisco_cat9kv
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.21
      labels:
        layer: distribution

    # Access Layer
    access-sw1:
      kind: cisco_cat9kv
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.30
      labels:
        layer: access

    access-sw2:
      kind: cisco_cat9kv
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.31
      labels:
        layer: access

    # End Devices
    pc1:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.100

    pc2:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.101

  links:
    # Core interconnection
    - endpoints: ["core-sw1:eth1", "core-sw2:eth1"]
    - endpoints: ["core-sw1:eth2", "core-sw2:eth2"]

    # Core to Distribution
    - endpoints: ["core-sw1:eth3", "dist-sw1:eth1"]
    - endpoints: ["core-sw1:eth4", "dist-sw2:eth1"]
    - endpoints: ["core-sw2:eth3", "dist-sw1:eth2"]
    - endpoints: ["core-sw2:eth4", "dist-sw2:eth2"]

    # Distribution to Access
    - endpoints: ["dist-sw1:eth3", "access-sw1:eth1"]
    - endpoints: ["dist-sw2:eth3", "access-sw2:eth1"]

    # Access to End Devices
    - endpoints: ["access-sw1:eth2", "pc1:eth1"]
    - endpoints: ["access-sw2:eth2", "pc2:eth1"]
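The core-to-distribution links above form a full mesh, and for larger topologies it is easier to generate such link lists than to type them. A Python sketch that reproduces those four links (the interface-numbering scheme is an assumption for illustration):

```python
# Generate the full-mesh core-to-distribution link list.
from itertools import product

core = ["core-sw1", "core-sw2"]
dist = ["dist-sw1", "dist-sw2"]

links = []
core_iface = {sw: 3 for sw in core}   # next free port on each core switch
dist_iface = {sw: 1 for sw in dist}   # next free port on each dist switch
for c, d in product(core, dist):
    links.append({"endpoints": [f"{c}:eth{core_iface[c]}",
                                f"{d}:eth{dist_iface[d]}"]})
    core_iface[c] += 1
    dist_iface[d] += 1

for link in links:
    print(f'- endpoints: ["{link["endpoints"][0]}", "{link["endpoints"][1]}"]')
```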

Lab Lifecycle Management

Deployment Process

  1. Validation Phase
    • Topology file syntax check
    • Image availability verification
    • Resource requirement assessment
  2. Preparation Phase
    • Container image pulling
    • Network creation
    • Volume preparation
  3. Deployment Phase
    • Container creation and startup
    • Network interface attachment
    • Initial configuration application
  4. Verification Phase
    • Container health checks
    • Network connectivity validation
    • Service availability confirmation
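The validation phase can be mimicked in a few lines: confirm the required sections exist and that every link endpoint points at a defined node. This toy Python checker is illustrative only; ContainerLab's real validation is far more thorough:

```python
# Toy topology validator: required sections present, every link
# endpoint references a defined node and names an interface.
def validate(topology: dict) -> list[str]:
    errors = []
    if "name" not in topology:
        errors.append("missing lab name")
    nodes = topology.get("topology", {}).get("nodes", {})
    if not nodes:
        errors.append("no nodes defined")
    for link in topology.get("topology", {}).get("links", []):
        for endpoint in link["endpoints"]:
            node, _, iface = endpoint.partition(":")
            if node not in nodes:
                errors.append(f"link references undefined node {node!r}")
            if not iface:
                errors.append(f"endpoint {endpoint!r} has no interface")
    return errors

lab = {
    "name": "basic-routers",
    "topology": {
        "nodes": {"r1": {}, "r2": {}},
        "links": [{"endpoints": ["r1:eth1", "r3:eth1"]}],
    },
}
print(validate(lab))   # flags the undefined node r3
```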

Monitoring Lab Status

# Check overall lab health
containerlab inspect -t topology.yml

# Monitor container resources
docker stats $(docker ps --filter "label=containerlab" --format "{{.Names}}")

# Check container logs
docker logs clab-lab-router1

# Monitor network interfaces
docker exec clab-lab-router1 ip addr show

Lab Maintenance

Updating Configurations

# Copy a new configuration into a running node
docker cp new-config.cfg clab-lab-router1:/tmp/
# Attach to the device CLI (method varies by NOS) and merge it in,
# e.g. on IOS-like systems:
#   copy /tmp/new-config.cfg running-config

Scaling Labs

# Add nodes to existing topology
# Edit topology file to add new nodes
containerlab deploy -t updated-topology.yml --reconfigure

# Remove nodes
# Edit topology file to remove nodes
containerlab deploy -t updated-topology.yml --reconfigure

Container Interaction

Accessing Containers

Direct Shell Access

# Access container shell
docker exec -it clab-lab-router1 bash

# Access with specific user
docker exec -it --user root clab-lab-router1 bash

# Run a single shell command
docker exec clab-lab-router1 ip addr show

Network Device CLI Access

# Arista cEOS
docker exec -it clab-lab-arista-switch Cli

# Nokia SR Linux
docker exec -it clab-lab-nokia-router sr_cli

# VM-based nodes (such as Cisco CSR) are reached over SSH instead
ssh admin@clab-lab-cisco-router
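All of these commands need the container name, which ContainerLab derives from the prefix, the lab name, and the node name joined by dashes (the default prefix is `clab`). A tiny helper makes the pattern explicit:

```python
# Build the container name used with docker exec / ssh:
# <prefix>-<lab-name>-<node-name>, default prefix "clab".
def container_name(lab: str, node: str, prefix: str = "clab") -> str:
    return f"{prefix}-{lab}-{node}"

print(container_name("basic-routers", "r1"))           # clab-basic-routers-r1
print(container_name("basic-routers", "r1", "lab"))    # lab-basic-routers-r1
```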

File Transfer

Copy Files to/from Containers

# Copy file to container
docker cp local-file.cfg clab-lab-router1:/tmp/

# Copy file from container
docker cp clab-lab-router1:/etc/config.cfg ./backup-config.cfg

# Copy directory
docker cp ./configs/ clab-lab-router1:/tmp/configs/

Using Volume Mounts

topology:
  nodes:
    router1:
      kind: cisco_csr1000v
      image: cisco/iosxe:latest
      binds:
        - ./configs:/tmp/configs:ro    # Read-only mount
        - ./logs:/var/log:rw          # Read-write mount

Network Connectivity Testing

Basic Connectivity Tests

# From Linux containers
docker exec clab-lab-pc1 ping 192.168.1.1
docker exec clab-lab-pc1 traceroute 192.168.1.1
docker exec clab-lab-pc1 nslookup google.com

# From network devices (syntax varies by vendor; Arista cEOS shown)
docker exec clab-lab-switch1 Cli -c "ping 192.168.1.1"

Advanced Network Testing

# Install network tools in Linux containers
docker exec clab-lab-pc1 apk add --no-cache iperf3 tcpdump nmap

# Performance testing
docker exec -d clab-lab-pc1 iperf3 -s
docker exec clab-lab-pc2 iperf3 -c pc1-ip

# Packet capture
docker exec -d clab-lab-pc1 tcpdump -i eth1 -w /tmp/capture.pcap

Best Practices

Topology Design

  1. Use meaningful names for nodes and labs
  2. Organize by function (core, distribution, access)
  3. Include metadata using labels
  4. Plan IP addressing systematically
  5. Document topology with comments
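Point 4 above ("plan IP addressing systematically") is easy to script. A sketch with Python's standard `ipaddress` module that carves per-layer management addresses out of the campus subnet (the block boundaries are illustrative):

```python
# Carve per-layer management address blocks out of the mgmt subnet.
import ipaddress

mgmt = ipaddress.ip_network("192.168.100.0/24")
hosts = list(mgmt.hosts())          # .1 through .254

plan = {
    "core": hosts[9:12],            # .10 - .12
    "distribution": hosts[19:22],   # .20 - .22
    "access": hosts[29:32],         # .30 - .32
}
print(plan["core"][0])              # 192.168.100.10
```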

Resource Management

  1. Monitor system resources during deployment
  2. Use appropriate image versions for your needs
  3. Clean up unused labs regularly
  4. Optimize container resource limits

Configuration Management

  1. Use startup configurations for consistent deployments
  2. Version control topology files
  3. Backup configurations regularly
  4. Test changes in isolated environments

Troubleshooting Common Issues

Deployment Failures

Image Pull Issues

# Verify the image is available locally (commercial NOS images are
# typically imported or built, not pulled from Docker Hub)
docker images

# Pull from a private registry if that is where your images live
docker pull registry.example.com/cisco/iosxe:latest

# Watch image-related Docker events while troubleshooting
docker system events --filter type=image

Resource Constraints

# Check system resources
free -h
df -h

# Monitor during deployment
watch -n 1 'docker stats --no-stream'

# Adjust container limits in the topology file:
topology:
  nodes:
    router1:
      cpu: 1
      memory: 2GB

Connectivity Issues

Container Network Problems

# Check container networks
docker network ls
docker network inspect clab   # the default management network is named "clab"

# Verify interface configuration
docker exec clab-lab-router1 ip addr show

# Check routing
docker exec clab-lab-router1 ip route show

Inter-Container Communication

# Test basic connectivity
docker exec clab-lab-pc1 ping clab-lab-pc2

# Check bridge configuration (brctl is deprecated; use iproute2)
ip link show type bridge
bridge link

# Verify iptables rules
sudo iptables -L -n

Summary

This chapter covered the fundamental operations needed to work with ContainerLab effectively. You learned how to structure topology files, use essential CLI commands, and manage lab lifecycles. These skills form the foundation for all subsequent networking labs and experiments.

Key takeaways:

  • YAML topology files define your entire lab infrastructure
  • ContainerLab CLI provides comprehensive lab management capabilities
  • Proper planning and organization improve lab maintainability
  • Understanding container interaction is crucial for effective troubleshooting

In the next chapter, we’ll dive deeper into network topologies and design patterns for more complex scenarios.

Review Questions

  1. What are the main sections of a ContainerLab topology file?
  2. How do you deploy and destroy a lab using ContainerLab CLI?
  3. What’s the difference between kind and image in node configuration?
  4. How can you access the CLI of different network operating systems?
  5. What are best practices for lab resource management?

Hands-on Exercises

Exercise 1: Basic Lab Creation

  1. Create a simple two-router topology
  2. Deploy the lab and verify connectivity
  3. Access both routers and check interface status
  4. Save the router configurations
  5. Destroy the lab

Exercise 2: Multi-Vendor Lab

  1. Create a topology with Cisco, Arista, and Linux nodes
  2. Configure basic IP addressing
  3. Test connectivity between all nodes
  4. Generate a topology graph
  5. Document the lab setup

Exercise 3: Campus Network Simulation

  1. Implement the campus network topology from this chapter
  2. Deploy and verify all nodes are running
  3. Plan and document IP addressing scheme
  4. Test management connectivity to all devices
  5. Practice lab lifecycle management operations

Additional Resources