Chapter 4: Network Topologies and Design

Learning Objectives

By the end of this chapter, you will be able to:

  • Design and implement various network topologies in ContainerLab
  • Understand topology file structure and advanced syntax
  • Create scalable and maintainable network designs
  • Implement hierarchical network models
  • Apply network design best practices

Network Topology Fundamentals

Physical vs. Logical Topologies

In ContainerLab, we work primarily with logical topologies that represent how network devices are interconnected. Understanding both physical and logical aspects helps in creating realistic simulations.

Physical Topology Considerations

  • Cable types and limitations: Simulated through link properties
  • Distance constraints: Represented by latency and bandwidth settings
  • Hardware limitations: Modeled through container resource constraints
  • Redundancy requirements: Implemented through multiple links

Logical Topology Elements

  • Network segments: Created through ContainerLab links
  • Broadcast domains: Defined by switch configurations
  • Routing domains: Established through routing protocol configurations
  • Security zones: Implemented through firewall and ACL configurations

Common Network Topologies

Star Topology

# Star topology with central switch
name: star-topology
topology:
  nodes:
    central-switch:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.10

    pc1:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.11

    pc2:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.12

    pc3:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.13

    pc4:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.14

  links:
    - endpoints: ["central-switch:eth1", "pc1:eth1"]
    - endpoints: ["central-switch:eth2", "pc2:eth1"]
    - endpoints: ["central-switch:eth3", "pc3:eth1"]
    - endpoints: ["central-switch:eth4", "pc4:eth1"]

Advantages:

  • Simple to implement and troubleshoot
  • Centralized management
  • Easy to add new devices

Disadvantages:

  • Single point of failure
  • Limited scalability
  • Bandwidth bottleneck at the center

Ring Topology

# Ring topology for redundancy
name: ring-topology
topology:
  nodes:
    switch1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.10

    switch2:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.11

    switch3:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.12

    switch4:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.13

  links:
    # Ring connections
    - endpoints: ["switch1:eth1", "switch2:eth1"]
    - endpoints: ["switch2:eth2", "switch3:eth1"]
    - endpoints: ["switch3:eth2", "switch4:eth1"]
    - endpoints: ["switch4:eth2", "switch1:eth2"]  # Completes the ring

Advantages:

  • Built-in redundancy
  • No single point of failure
  • Predictable performance

Disadvantages:

  • Complex troubleshooting
  • Potential for loops without proper protocols
  • Limited bandwidth sharing

Mesh Topology

# Full mesh topology for maximum redundancy
name: mesh-topology
topology:
  nodes:
    router1:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.10

    router2:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.11

    router3:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.12

    router4:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.13

  links:
    # Full mesh - every router connected to every other router
    - endpoints: ["router1:eth1", "router2:eth1"]
    - endpoints: ["router1:eth2", "router3:eth1"]
    - endpoints: ["router1:eth3", "router4:eth1"]
    - endpoints: ["router2:eth2", "router3:eth2"]
    - endpoints: ["router2:eth3", "router4:eth2"]
    - endpoints: ["router3:eth3", "router4:eth3"]

Advantages:

  • Maximum redundancy
  • Optimal path selection
  • High fault tolerance

Disadvantages:

  • Expensive to implement
  • Complex configuration
  • Scalability challenges
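A full mesh of n nodes needs n(n-1)/2 links (six for the four routers above), and hand-writing the entries becomes error-prone as n grows. The sketch below generates them; the router names and eth-numbering convention are taken from the example above, so adjust both for your own lab:

```shell
#!/bin/sh
# mesh_links N: print containerlab link entries for a full mesh of
# N routers named router1..routerN. For the pair (i, j) with i < j,
# router_i uses eth(j-1) and router_j uses eth(i), which reproduces
# the interface numbering in the example above.
mesh_links() {
    n=$1
    i=1
    while [ "$i" -lt "$n" ]; do
        j=$((i + 1))
        while [ "$j" -le "$n" ]; do
            printf '    - endpoints: ["router%s:eth%s", "router%s:eth%s"]\n' \
                "$i" "$((j - 1))" "$j" "$i"
            j=$((j + 1))
        done
        i=$((i + 1))
    done
}

mesh_links 4
```

Running `mesh_links 4` emits the same six link entries as the topology above; `mesh_links 5` would emit ten.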

Hierarchical Network Design

Three-Tier Architecture

The three-tier hierarchical model is the foundation of most enterprise networks:

Core Layer

  • High-speed packet switching
  • Redundancy and fault tolerance
  • Minimal processing overhead

Distribution Layer

  • Policy enforcement
  • Routing between VLANs
  • Access control and security

Access Layer

  • End-device connectivity
  • Port security
  • VLAN assignment

Implementing Three-Tier Design

# Three-tier campus network
name: three-tier-campus
prefix: campus

mgmt:
  network: campus-mgmt
  ipv4-subnet: 192.168.100.0/24

topology:
  nodes:
    # Core Layer - High-performance switches
    core-sw1:
      kind: arista_eos
      image: arista/ceos:latest
      mgmt-ipv4: 192.168.100.10
      labels:
        tier: core
        role: core-switch
      cpu: 2
      memory: 4GB

    core-sw2:
      kind: arista_eos
      image: arista/ceos:latest
      mgmt-ipv4: 192.168.100.11
      labels:
        tier: core
        role: core-switch
      cpu: 2
      memory: 4GB

    # Distribution Layer - Policy and routing
    dist-sw1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.20
      labels:
        tier: distribution
        role: distribution-switch
        building: building-a
      cpu: 1
      memory: 2GB

    dist-sw2:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.21
      labels:
        tier: distribution
        role: distribution-switch
        building: building-a
      cpu: 1
      memory: 2GB

    dist-sw3:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.22
      labels:
        tier: distribution
        role: distribution-switch
        building: building-b
      cpu: 1
      memory: 2GB

    dist-sw4:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.23
      labels:
        tier: distribution
        role: distribution-switch
        building: building-b
      cpu: 1
      memory: 2GB

    # Access Layer - End device connectivity
    access-sw1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.30
      labels:
        tier: access
        role: access-switch
        building: building-a
        floor: floor-1

    access-sw2:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.31
      labels:
        tier: access
        role: access-switch
        building: building-a
        floor: floor-2

    access-sw3:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.32
      labels:
        tier: access
        role: access-switch
        building: building-b
        floor: floor-1

    access-sw4:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 192.168.100.33
      labels:
        tier: access
        role: access-switch
        building: building-b
        floor: floor-2

    # End Devices
    pc1:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.100
      labels:
        role: end-device
        location: building-a-floor-1

    pc2:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.101
      labels:
        role: end-device
        location: building-a-floor-2

    pc3:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.102
      labels:
        role: end-device
        location: building-b-floor-1

    pc4:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 192.168.100.103
      labels:
        role: end-device
        location: building-b-floor-2

    # Servers
    server1:
      kind: linux
      image: ubuntu:20.04
      mgmt-ipv4: 192.168.100.200
      labels:
        role: server
        service: web-server

    server2:
      kind: linux
      image: ubuntu:20.04
      mgmt-ipv4: 192.168.100.201
      labels:
        role: server
        service: database-server

  links:
    # Core Layer Interconnection (redundant)
    - endpoints: ["core-sw1:eth1", "core-sw2:eth1"]
    - endpoints: ["core-sw1:eth2", "core-sw2:eth2"]

    # Core to Distribution (dual-homed)
    - endpoints: ["core-sw1:eth3", "dist-sw1:eth1"]
    - endpoints: ["core-sw1:eth4", "dist-sw2:eth1"]
    - endpoints: ["core-sw1:eth5", "dist-sw3:eth1"]
    - endpoints: ["core-sw1:eth6", "dist-sw4:eth1"]

    - endpoints: ["core-sw2:eth3", "dist-sw1:eth2"]
    - endpoints: ["core-sw2:eth4", "dist-sw2:eth2"]
    - endpoints: ["core-sw2:eth5", "dist-sw3:eth2"]
    - endpoints: ["core-sw2:eth6", "dist-sw4:eth2"]

    # Distribution Layer Interconnection (within building)
    - endpoints: ["dist-sw1:eth3", "dist-sw2:eth3"]
    - endpoints: ["dist-sw3:eth3", "dist-sw4:eth3"]

    # Distribution to Access
    - endpoints: ["dist-sw1:eth4", "access-sw1:eth1"]
    - endpoints: ["dist-sw1:eth5", "access-sw2:eth1"]
    - endpoints: ["dist-sw2:eth4", "access-sw1:eth2"]
    - endpoints: ["dist-sw2:eth5", "access-sw2:eth2"]

    - endpoints: ["dist-sw3:eth4", "access-sw3:eth1"]
    - endpoints: ["dist-sw3:eth5", "access-sw4:eth1"]
    - endpoints: ["dist-sw4:eth4", "access-sw3:eth2"]
    - endpoints: ["dist-sw4:eth5", "access-sw4:eth2"]

    # Access to End Devices
    - endpoints: ["access-sw1:eth3", "pc1:eth1"]
    - endpoints: ["access-sw2:eth3", "pc2:eth1"]
    - endpoints: ["access-sw3:eth3", "pc3:eth1"]
    - endpoints: ["access-sw4:eth3", "pc4:eth1"]

    # Servers connected to core (high bandwidth)
    - endpoints: ["core-sw1:eth7", "server1:eth1"]
    - endpoints: ["core-sw2:eth7", "server2:eth1"]

Two-Tier (Collapsed Core) Design

For smaller networks, the core and distribution layers can be combined:

# Two-tier collapsed core design
name: two-tier-network
topology:
  nodes:
    # Collapsed Core/Distribution
    core-dist-sw1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.10
      labels:
        tier: core-distribution

    core-dist-sw2:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.11
      labels:
        tier: core-distribution

    # Access Layer
    access-sw1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.20

    access-sw2:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.21

  links:
    # Core-Distribution interconnection
    - endpoints: ["core-dist-sw1:eth1", "core-dist-sw2:eth1"]
    - endpoints: ["core-dist-sw1:eth2", "core-dist-sw2:eth2"]

    # Dual-homed access switches
    - endpoints: ["core-dist-sw1:eth3", "access-sw1:eth1"]
    - endpoints: ["core-dist-sw2:eth3", "access-sw1:eth2"]
    - endpoints: ["core-dist-sw1:eth4", "access-sw2:eth1"]
    - endpoints: ["core-dist-sw2:eth4", "access-sw2:eth2"]

Advanced Topology Features

Network Segmentation

VLAN-based Segmentation

topology:
  nodes:
    switch1:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      startup-config: |
        vlan 10
         name USERS
        vlan 20
         name SERVERS
        vlan 30
         name MANAGEMENT
        !
        interface GigabitEthernet1/0/1
         switchport mode access
         switchport access vlan 10
        !
        interface GigabitEthernet1/0/2
         switchport mode access
         switchport access vlan 20
        !
        interface GigabitEthernet1/0/3
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30

  links:
    - endpoints: ["switch1:eth1", "user-pc:eth1"]
      vars:
        vlan: 10
    - endpoints: ["switch1:eth2", "server:eth1"]
      vars:
        vlan: 20
    - endpoints: ["switch1:eth3", "router:eth1"]
      vars:
        trunk: [10, 20, 30]

Subnet-based Segmentation

topology:
  nodes:
    router1:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      startup-config: |
        interface GigabitEthernet0/0/1
         ip address 192.168.10.1 255.255.255.0
         description USER_NETWORK
        !
        interface GigabitEthernet0/0/2
         ip address 192.168.20.1 255.255.255.0
         description SERVER_NETWORK
        !
        interface GigabitEthernet0/0/3
         ip address 192.168.30.1 255.255.255.0
         description MANAGEMENT_NETWORK

Multi-Site Topologies

Hub-and-Spoke WAN Design

# Hub-and-spoke WAN topology
name: hub-spoke-wan
topology:
  nodes:
    # Hub site
    hub-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.10
      labels:
        site: headquarters
        role: hub-router

    hub-switch:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.11
      labels:
        site: headquarters
        role: lan-switch

    # Spoke sites
    spoke1-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.20
      labels:
        site: branch-office-1
        role: spoke-router

    spoke1-switch:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.21
      labels:
        site: branch-office-1
        role: lan-switch

    spoke2-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      mgmt-ipv4: 172.20.20.30
      labels:
        site: branch-office-2
        role: spoke-router

    spoke2-switch:
      kind: cisco_iosxe
      image: cisco/catalyst:latest
      mgmt-ipv4: 172.20.20.31
      labels:
        site: branch-office-2
        role: lan-switch

    # Internet/WAN simulation
    wan-cloud:
      kind: linux
      image: alpine:latest
      mgmt-ipv4: 172.20.20.100
      labels:
        role: wan-simulation

  links:
    # Hub LAN
    - endpoints: ["hub-router:eth1", "hub-switch:eth1"]

    # Spoke LANs
    - endpoints: ["spoke1-router:eth1", "spoke1-switch:eth1"]
    - endpoints: ["spoke2-router:eth1", "spoke2-switch:eth1"]

    # WAN connections (hub-and-spoke)
    - endpoints: ["hub-router:eth2", "wan-cloud:eth1"]
      vars:
        bandwidth: 1Gbps
        latency: 10ms

    - endpoints: ["spoke1-router:eth2", "wan-cloud:eth2"]
      vars:
        bandwidth: 100Mbps
        latency: 30ms

    - endpoints: ["spoke2-router:eth2", "wan-cloud:eth3"]
      vars:
        bandwidth: 100Mbps
        latency: 40ms
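The bandwidth and latency vars above are link annotations; containerlab does not shape traffic from them automatically. To actually impose the listed impairments you can run tc netem inside each router (or use containerlab's netem tooling if your version provides it). The sketch below only prints the commands so the mapping from vars to tc is visible; the clab-hub-spoke-wan-* container names assume containerlab's default clab-<lab>-<node> naming:

```shell
#!/bin/sh
# emit_netem CONTAINER IFACE DELAY RATE: print the docker/tc command
# that would apply the given delay and rate limit to one WAN interface.
# Printed rather than executed so the mapping is easy to review first.
emit_netem() {
    printf 'docker exec %s tc qdisc add dev %s root netem delay %s rate %s\n' \
        "$1" "$2" "$3" "$4"
}

# Values taken from the hub-and-spoke link vars above.
emit_netem clab-hub-spoke-wan-hub-router    eth2 10ms 1gbit
emit_netem clab-hub-spoke-wan-spoke1-router eth2 30ms 100mbit
emit_netem clab-hub-spoke-wan-spoke2-router eth2 40ms 100mbit
```

Piping the output to `sh` would apply the impairments once the lab is deployed; removing them is a matter of replacing `add` with `del`.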

Full Mesh WAN Design

# Full mesh WAN for critical sites
name: mesh-wan
topology:
  nodes:
    site1-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      labels:
        site: site-1

    site2-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      labels:
        site: site-2

    site3-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      labels:
        site: site-3

    site4-router:
      kind: cisco_iosxe
      image: cisco/iosxe:latest
      labels:
        site: site-4

  links:
    # Full mesh WAN connections
    - endpoints: ["site1-router:eth1", "site2-router:eth1"]
      vars: {bandwidth: "1Gbps", latency: "20ms"}
    - endpoints: ["site1-router:eth2", "site3-router:eth1"]
      vars: {bandwidth: "1Gbps", latency: "25ms"}
    - endpoints: ["site1-router:eth3", "site4-router:eth1"]
      vars: {bandwidth: "1Gbps", latency: "30ms"}
    - endpoints: ["site2-router:eth2", "site3-router:eth2"]
      vars: {bandwidth: "1Gbps", latency: "15ms"}
    - endpoints: ["site2-router:eth3", "site4-router:eth2"]
      vars: {bandwidth: "1Gbps", latency: "35ms"}
    - endpoints: ["site3-router:eth3", "site4-router:eth3"]
      vars: {bandwidth: "1Gbps", latency: "20ms"}

Design Best Practices

Scalability Considerations

  1. Modular Design
    • Use consistent naming conventions
    • Group related components
    • Plan for future expansion
  2. Resource Planning
    • Estimate container resource requirements
    • Plan for peak usage scenarios
    • Consider host system limitations
  3. Network Addressing
    • Use hierarchical IP addressing
    • Reserve address space for growth
    • Document addressing schemes
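Hierarchical addressing encodes location in the address itself. Under a hypothetical 10.<building>.<floor>.0/24 plan, each building summarizes to a single 10.<building>.0.0/16 route, and the full plan can be generated and documented rather than maintained by hand:

```shell
#!/bin/sh
# plan_subnets BUILDINGS FLOORS: print one /24 per floor under a
# hypothetical 10.<building>.<floor>.0/24 scheme. Because the building
# number sits in the second octet, each building aggregates to
# 10.<building>.0.0/16.
plan_subnets() {
    buildings=$1
    floors=$2
    b=1
    while [ "$b" -le "$buildings" ]; do
        f=1
        while [ "$f" -le "$floors" ]; do
            printf '10.%s.%s.0/24  building-%s floor-%s\n' "$b" "$f" "$b" "$f"
            f=$((f + 1))
        done
        b=$((b + 1))
    done
}

plan_subnets 2 2
```

For the two-building, two-floor campus in this chapter, `plan_subnets 2 2` yields four user subnets, with room in each /16 to add floors without renumbering.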

Redundancy and High Availability

  1. Link Redundancy
    • Implement dual-homed connections
    • Use different physical paths
    • Configure appropriate spanning tree
  2. Device Redundancy
    • Deploy redundant core devices
    • Use HSRP/VRRP for gateway redundancy
    • Implement load balancing
  3. Service Redundancy
    • Distribute critical services
    • Implement clustering where appropriate
    • Plan for disaster recovery

Performance Optimization

  1. Bandwidth Planning
    • Size links appropriately
    • Consider oversubscription ratios
    • Plan for traffic growth
  2. Latency Minimization
    • Optimize routing paths
    • Minimize hop counts
    • Use appropriate QoS policies
  3. Resource Allocation
    • Right-size container resources
    • Monitor resource utilization
    • Implement resource limits
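An oversubscription ratio is aggregate downstream capacity divided by uplink capacity. For example, a 48-port gigabit access switch with two 10 Gbps uplinks is oversubscribed 48:20, i.e. 2.4:1, comfortably inside the often-quoted 20:1 guideline for access uplinks. The port counts below are illustrative:

```shell
#!/bin/sh
# oversub PORTS PORT_GBPS UPLINKS UPLINK_GBPS: print the
# oversubscription ratio (total downstream / total uplink capacity).
oversub() {
    awk -v p="$1" -v pg="$2" -v u="$3" -v ug="$4" \
        'BEGIN { printf "%.1f:1\n", (p * pg) / (u * ug) }'
}

# 48 x 1 Gbps access ports, 2 x 10 Gbps uplinks
oversub 48 1 2 10
```

The same arithmetic applies tier by tier: distribution-to-core links are typically sized for a much lower ratio than access-to-distribution links.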

Topology Validation and Testing

Pre-deployment Validation

# Validate topology syntax
containerlab validate -t topology.yml

# Check resource requirements
containerlab inspect -t topology.yml --dry-run

# Verify image availability
containerlab images -t topology.yml

Post-deployment Testing

# Deploy and test connectivity
containerlab deploy -t topology.yml

# Generate topology graph
containerlab graph -t topology.yml

# Test basic connectivity
for node in $(containerlab inspect -t topology.yml --format json | jq -r '.containers[].name'); do
    echo "Testing $node..."
    docker exec "$node" ping -c 1 8.8.8.8
done

# Performance testing
docker exec clab-lab-pc1 iperf3 -c clab-lab-pc2

Automated Testing

#!/bin/bash
# automated-topology-test.sh

TOPOLOGY_FILE="$1"
TEST_RESULTS="test-results-$(date +%Y%m%d-%H%M%S).log"

if [ -z "$TOPOLOGY_FILE" ]; then
    echo "Usage: $0 <topology-file>"
    exit 1
fi

echo "Starting topology test for $TOPOLOGY_FILE" | tee "$TEST_RESULTS"

# Deploy topology
echo "Deploying topology..." | tee -a "$TEST_RESULTS"
if containerlab deploy -t "$TOPOLOGY_FILE"; then
    echo "Deployment successful" | tee -a "$TEST_RESULTS"
else
    echo "Deployment failed" | tee -a "$TEST_RESULTS"
    exit 1
fi

# Wait for containers to stabilize
sleep 30

# Test connectivity
echo "Testing connectivity..." | tee -a "$TEST_RESULTS"
CONTAINERS=$(containerlab inspect -t "$TOPOLOGY_FILE" --format json | jq -r '.containers[].name')

for container in $CONTAINERS; do
    echo "Testing $container..." | tee -a "$TEST_RESULTS"
    if docker exec "$container" ping -c 3 -W 5 8.8.8.8 > /dev/null 2>&1; then
        echo "$container: Connectivity OK" | tee -a "$TEST_RESULTS"
    else
        echo "$container: Connectivity FAILED" | tee -a "$TEST_RESULTS"
    fi
done

# Cleanup
echo "Cleaning up..." | tee -a "$TEST_RESULTS"
containerlab destroy -t "$TOPOLOGY_FILE"

echo "Test completed. Results in $TEST_RESULTS"

Summary

Network topology design is fundamental to creating effective learning environments and realistic simulations. This chapter covered various topology types, hierarchical design principles, and advanced features available in ContainerLab. Understanding these concepts enables you to create scalable, maintainable, and realistic network simulations.

Key takeaways:

  • Choose appropriate topology types based on requirements
  • Implement hierarchical designs for scalability
  • Use advanced features for realistic simulations
  • Follow best practices for maintainable designs
  • Validate and test topologies thoroughly

In the next chapter, we’ll explore the various network operating systems supported by ContainerLab and their specific configurations.

Review Questions

  1. What are the advantages and disadvantages of different topology types?
  2. How does the three-tier hierarchical model improve network design?
  3. What are the key considerations for multi-site topology design?
  4. How can you implement redundancy in ContainerLab topologies?
  5. What are best practices for topology validation and testing?

Hands-on Exercises

Exercise 1: Topology Comparison

  1. Implement star, ring, and mesh topologies with 4 nodes each
  2. Compare deployment time and resource usage
  3. Test connectivity and performance characteristics
  4. Document advantages and disadvantages of each

Exercise 2: Three-Tier Design

  1. Implement the three-tier campus network from this chapter
  2. Configure appropriate VLANs and IP addressing
  3. Test connectivity between all tiers
  4. Implement and test redundancy features

Exercise 3: Multi-Site WAN

  1. Create a hub-and-spoke WAN topology with 3 spoke sites
  2. Configure routing between sites
  3. Simulate WAN link failures and test failover
  4. Compare with a full mesh design

Exercise 4: Custom Topology Design

  1. Design a topology for a specific scenario (e.g., small business, data center)
  2. Implement the design in ContainerLab
  3. Create automated testing scripts
  4. Document the design decisions and trade-offs

Additional Resources