Fabric Topology Variables¶
The fabric topology variables define the connectivity between the various node types, as well as overrides to the default switch properties.
As per the diagram above, the topology hierarchy is the following:
fabric_name > dc_name > pod_name
```yaml
# Fabric Name, required to match Ansible Group name covering all devices in the Fabric | Required and must be an inventory group name.
fabric_name: < Fabric_Name >

# DC Name, required to match Ansible Group name covering all devices in the DC | Required for 5-stage CLOS (Super-spines)
dc_name: < DC_Name >

# POD Name, only used in Fabric Documentation | Optional, fallback to dc_name and then to fabric_name
# Recommended to be common between Spines, Leafs within a POD (One l3ls topology)
pod_name: < POD_Name >
```
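For example, a fabric with a single DC and a single POD could set these as follows; the group names are hypothetical:

```yaml
fabric_name: DC1_FABRIC   # Ansible inventory group covering all fabric devices
dc_name: DC1              # Ansible inventory group covering all DC1 devices
pod_name: DC1_POD1        # Only used in the fabric documentation
```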
- Connectivity is defined from the child's device perspective.
  - Source uplink interfaces and parent interfaces are defined on the child.
- A static unique identifier (id) is assigned to each device.
  - This is leveraged to derive the IP address assignment from each summary defined in the Fabric Underlay and Overlay Topology Variables.
- Within the l3_leaf and l2_leaf dictionary variables, defaults can be defined.
  - This reduces user input requirements, limiting errors.
  - The default variables can be overridden when defined under the node groups (see the sketch after this list).
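A minimal sketch, assuming hypothetical hostnames and an IPv4 pool: the per-node id drives address derivation, while defaults apply to every node unless overridden.

```yaml
l3leaf:
  defaults:
    loopback_ipv4_pool: 192.168.255.0/24   # inherited by all l3leaf nodes
  nodes:
    dc1-leaf1a:
      id: 1    # used to derive this node's addresses from the defined pools
    dc1-leaf1b:
      id: 2
```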
eos_designs supports multiple flavors of L3LS-EVPN topology, such as 3-stage CLOS and 5-stage CLOS. The sections below highlight these two topologies, but you can extend eos_designs to support your own topology by using node_type_keys to create your own node types.
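As a rough sketch, a custom node type could be declared as below; the key name and attribute values are hypothetical, and the available attributes should be checked against the node types definition for your AVD release:

```yaml
node_type_keys:
  # Hypothetical custom node type for border leafs
  - key: borderleaf
    type: borderleaf          # matched against each device's "type" variable
    connected_endpoints: true
    default_evpn_role: client
    mlag_support: true
    network_services:
      l2: true
      l3: true
    underlay_router: true
    vtep: true
    uplink_type: p2p
```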
3-stage CLOS Topology Support (Leaf & Spine)¶
- The eos_designs role supports various deployments with Layer 3 leaf and spine (3-stage CLOS) and, optionally, dedicated overlay controllers.
- A 3-stage CLOS fabric is composed of spines, L3 leafs, and L2 leafs, and is also referred to as a "POD".
5-stage CLOS Topology Support (Super Spine)¶
- The eos_designs role supports larger deployments with super-spines (5-stage CLOS) and, optionally, dedicated overlay controllers.
- A 5-stage CLOS fabric can be represented as multiple leaf-spine structures (called PODs - Points of Delivery) interconnected by super-spines.
- The logic to deploy every leaf-spine POD fabric remains unchanged.
- Super-spines can be deployed as a single plane (typically chassis switches) or multiple planes.
5-stage CLOS limitations
- The current AVD release supports single-plane deployments only.
- Only an eBGP underlay is supported for super-spine deployments.
- Spines in each POD must use a unique AS per POD (see the sketch below).
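One way to express per-POD spine AS numbers is shown below; the group layout, hostnames and AS values are hypothetical:

```yaml
# Hypothetical group_vars file for the POD1 spines
spine:
  defaults:
    bgp_as: 65001            # AS shared by all spines in POD1
  nodes:
    pod1-spine1: { id: 1 }
    pod1-spine2: { id: 2 }

# The POD2 spine group would define a different AS, e.g. bgp_as: 65002
```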
Node Type Variables¶
The following table provides information on the default node types that are pre-defined in
eos_designs/defaults/main/defaults-node-type-keys.yml. To customize or create new node types, refer to the node types definition.
| Node Type Key | Underlay Router | Uplink Type | Default EVPN Role | L2 Network Services | L3 Network Services | VTEP | MLAG Support | Connected Endpoints |
| ------------- | --------------- | ----------- | ----------------- | ------------------- | ------------------- | ---- | ------------ | ------------------- |
- The variables should be applied to all devices in the fabric.
- The type: variable needs to be defined for each device in the fabric.
  - This is leveraged to load the appropriate template to generate the configuration.
Variables and Options¶
As explained above, you can define your own device types; out of the box, only the default node types listed above are provided.
```yaml
# define the layer type
type: < spine | l3leaf | l2leaf | super-spine | overlay-controller >
```

```yaml
# Defined in SPINE.yml file
# Can also be set directly in your inventory file under spine group
type: spine

# Defined in L3LEAFS.yml
# Can also be set directly in your inventory file under l3leaf group
type: l3leaf

# Defined in L2LEAFS.yml
# Can also be set directly in your inventory file under l2leaf group
type: l2leaf

# Defined in SUPER-SPINES.yml
# Can also be set directly in your inventory file under super-spine group
type: super-spine

# Defined in ROUTE-SERVERS.yml
# Can also be set directly in your inventory file under route-server group
type: overlay-controller
```
All node types share the same structure (defaults, node_groups, nodes), and all variables can be defined in any section. Inheritance applies in the following order, where values defined further to the right take precedence:

defaults <- node_group <- node_group.node <- node
Node type structure¶
```yaml
---
<node_type_key>:
  defaults:
    # Define vars for all nodes of this type
  node_groups:
    <node group name>:
      # Vars related to all nodes part of this group
      nodes:
        <node inventory hostname>:
          # Vars defined per node
  nodes:
    <node inventory hostname>:
      # Vars defined per node

      # Unique identifier | Required.
      id: < integer >

      # Node management IP address | Optional.
      mgmt_ip: < IPv4_address/Mask >
```
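To make the structure and inheritance concrete, here is a minimal sketch with hypothetical hostnames and values; the node-level platform overrides the value set under defaults:

```yaml
l3leaf:
  defaults:
    platform: 7050X3               # inherited by every l3leaf unless overridden
  node_groups:
    DC1_LEAF1:
      bgp_as: 65101                # shared by both members of this node group
      nodes:
        dc1-leaf1a:
          id: 1
          mgmt_ip: 192.168.200.11/24
        dc1-leaf1b:
          id: 2
          mgmt_ip: 192.168.200.12/24
          platform: 7280R3         # node-level override of the defaults value
```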
Node Variables details¶
Generic configuration management¶
```yaml
< node_type_key >:
  defaults:
    # Arista platform family | Required.
    platform: < Arista Platform Family >

    # Rack that the switch is located in (only used in snmp_settings location) | Optional
    rack: < rack_name >

    # EOS CLI rendered directly on the root level of the final EOS configuration | Optional
    raw_eos_cli: |
      < multiline eos cli >

    # Custom structured config for eos_cli_config_gen | Optional
    structured_config: < dictionary >
```
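A filled-in sketch could look like the following; all values are hypothetical, and structured_config must follow the eos_cli_config_gen data model:

```yaml
l3leaf:
  defaults:
    platform: 7050X3
    rack: rack-101                  # only consumed by snmp_settings location
    raw_eos_cli: |
      interface Loopback123
        description raw_cli_example
    structured_config:
      dns_domain: mydomain.example
```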
```yaml
< node_type_key >:
  defaults:
    # IPv4 subnet to use to connect to uplink switches.
    uplink_ipv4_pool: < IPv4_address/Mask >

    # Local uplink interfaces (list). | Required.
    uplink_interfaces: [ < ethernet_interface_1 >, < ethernet_interface_2 > ]

    # Uplink switches (list). | Required.
    uplink_switches: [ < uplink_switch_inventory_hostname 01 >, < uplink_switch_inventory_hostname 02 > ]

    # Number of interfaces towards uplink switches | Optional
    max_uplink_switches: < integer >

    # Enable PTP on uplink links | Optional
    uplink_ptp:
      enable: < boolean >

    # Point-to-Point interface speed - will apply to uplinks on both ends | Optional.
    uplink_interface_speed: < interface_speed | forced interface_speed | auto interface_speed >

  # When nodes are part of node group
  < node-group-name >:
    nodes:
      # Uplink switches interfaces (list), interface located on uplink switch. | Required.
      uplink_switch_interfaces: [ < ethernet_interface_1 >, < ethernet_interface_2 > ]

  # When nodes are not in node_group
  nodes:
    <node inventory hostname>:
      # Uplink switches interfaces (list), interface located on uplink switch. | Required.
      uplink_switch_interfaces: [ < ethernet_interface_1 >, < ethernet_interface_2 > ]
```
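For example, a leaf with two uplinks towards two spines could be defined as below; hostnames, interfaces and the pool are hypothetical:

```yaml
l3leaf:
  defaults:
    uplink_ipv4_pool: 172.31.255.0/24
    uplink_interfaces: [ Ethernet1, Ethernet2 ]
    uplink_switches: [ dc1-spine1, dc1-spine2 ]
  nodes:
    dc1-leaf1a:
      id: 1
      # Interfaces on dc1-spine1 and dc1-spine2 facing this leaf
      uplink_switch_interfaces: [ Ethernet1, Ethernet1 ]
```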
ISIS underlay protocol management¶
```yaml
< node_type_key >:
  defaults:
    # isis system-id prefix (4.4 hexadecimal)
    isis_system_id_prefix: < hhhh.hhhh >

    # Number of paths to configure in ECMP for ISIS
    isis_maximum_paths: < integer >
```
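For example (hypothetical values):

```yaml
l3leaf:
  defaults:
    isis_system_id_prefix: '0001.0001'   # 4.4 hexadecimal prefix
    isis_maximum_paths: 4
```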
Loopback and VTEP management¶
```yaml
< node_type_key >:
  defaults:
    # IPv4 subnet for Loopback0 allocation
    loopback_ipv4_pool: < IPv4_address/Mask >

    # IPv4 subnet for VTEP/Loopback1 allocation.
    vtep_loopback_ipv4_pool: < IPv4_address/Mask >

    # Offset all assigned loopback IP addresses.
    loopback_ipv4_offset: 2

    # Set VXLAN source interface. Loopback1 is default
    vtep_loopback: < Loopback_interface_1 >
```
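For example, with hypothetical pools:

```yaml
l3leaf:
  defaults:
    loopback_ipv4_pool: 192.168.255.0/24       # Loopback0 addresses derived from the node id
    vtep_loopback_ipv4_pool: 192.168.254.0/24  # VTEP source addresses (Loopback1 by default)
    loopback_ipv4_offset: 2                    # offset to avoid overlaps when sharing pools across node types
```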
BGP & EVPN Control plane¶
```yaml
< node_type_key >:
  defaults:
    # node BGP AS | Required.
    bgp_as: < bgp_as >

    # List of EOS commands to apply to the BGP daemon | Optional
    bgp_defaults: [ < List of EOS commands > ]

    # Acting role in EVPN control plane.
    # Override role definition from node_type_keys
    # Can be set per node
    evpn_role: < client | server | none | Default -> refer to node type variable table >

    # List of inventory hostnames acting as EVPN route-servers.
    evpn_route_servers: [ '< inventory_hostname_of_evpn_server >' ]
```
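A minimal sketch, assuming hypothetical AS numbers and spine hostnames acting as EVPN route-servers:

```yaml
l3leaf:
  defaults:
    bgp_as: 65101
    bgp_defaults: [ 'no bgp default ipv4-unicast', 'distance bgp 20 200 200' ]
    evpn_role: client
    evpn_route_servers: [ dc1-spine1, dc1-spine2 ]
```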
EVPN services management¶
```yaml
< node_type_key >:
  defaults:
    # Possibility to prevent configuration of Tenant VRFs and SVIs
    # Override role definition from node_type_keys
    # This allows support for centralized routing.
    evpn_services_l2_only: < false | true >

    # Filter L3 and L2 network services based on tenant and tags (and operation filter) | Optional
    # If filter is not defined, it will default to all
    filter:
      tenants: [ < tenant_1 >, < tenant_2 > | default all ]
      tags: [ < tag_1 >, < tag_2 > | default -> all ]

    # Force VRFs in a tenant to be configured even if VLANs are not included in tags | Optional
    # Useful for "border" leaf.
    always_include_vrfs_in_tenants: [ < tenant_1 >, < tenant_2 >, "all" ]

    # Activate or deactivate IGMP snooping | Optional, default is true
    igmp_snooping_enabled: < true | false >
```
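For example, a hypothetical border leaf carrying only two tenants could use a filter like this (tenant and tag names are placeholders):

```yaml
l3leaf:
  defaults:
    filter:
      tenants: [ Tenant_A, Tenant_B ]
      tags: [ dmz ]
    always_include_vrfs_in_tenants: [ Tenant_A ]
    igmp_snooping_enabled: false
```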
MLAG configuration management¶
```yaml
< node_type_key >:
  defaults:
    # Enable / Disable auto MLAG, when two nodes are defined in node group.
    mlag: < true | false -> default true >

    # Enable / Disable MLAG dual primary detection
    mlag_dual_primary_detection: < true | false -> default false >

    # MLAG interfaces (list) | Required when MLAG leafs present in topology.
    mlag_interfaces: [ < ethernet_interface_3 >, < ethernet_interface_4 > ]

    # Underlay L3 peering SVI interface id
    # If set to false or the same vlan as mlag_peer_vlan, the mlag_peer_vlan will be used for L3 peering.
    mlag_peer_l3_vlan: < 0-4094 | false | default -> 4093 >

    # IP address pool used for MLAG underlay L3 peering | *Required when MLAG leafs present in topology.
    # IP is derived from the node id.
    mlag_peer_l3_ipv4_pool: < IPv4_network/Mask >

    # MLAG Peer Link (control link) SVI interface id
    mlag_peer_vlan: < 0-4094 | default -> 4094 >

    # IP address pool used for MLAG Peer Link (control link) | *Required when MLAG leafs present in topology.
    # IP is derived from the node id.
    mlag_peer_ipv4_pool: < IPv4_network/Mask >

    # Spanning tree mode | Required.
    spanning_tree_mode: < mstp | rstp | rapid-pvst | none >

    # Spanning tree priority.
    spanning_tree_priority: < spanning-tree priority -> default 32768 >

    # Spanning tree root super.
    spanning_tree_root_super: < true | false >

    # Virtual router mac address for anycast gateway | Required.
    virtual_router_mac_address: < mac address >
```
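A minimal MLAG sketch with hypothetical interfaces, pools and hostnames; defining two nodes in a node group enables MLAG automatically:

```yaml
l3leaf:
  defaults:
    mlag_interfaces: [ Ethernet3, Ethernet4 ]
    mlag_peer_ipv4_pool: 10.255.252.0/24
    mlag_peer_l3_ipv4_pool: 10.255.251.0/24
    spanning_tree_mode: mstp
    spanning_tree_priority: 4096
    virtual_router_mac_address: '00:1c:73:00:dc:01'
  node_groups:
    DC1_LEAF1:
      nodes:
        dc1-leaf1a: { id: 1 }
        dc1-leaf1b: { id: 2 }
```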
Inband management VLAN¶
```yaml
< node_type_key >:
  defaults:
    # Optional IP subnet assigned to Inband Management SVI on l2leafs in default VRF.
    # Parent l3leafs will have SVI with "ip virtual-router" and host-route injection based on ARP.
    # This allows all l3leafs to reuse the same subnet.
    # SVI IP address will be assigned as follows:
    # virtual-router: <subnet> + 1
    # l3leaf A      : <subnet> + 2 (same IP on all l3leaf A)
    # l3leaf B      : <subnet> + 3 (same IP on all l3leaf B)
    # l2leafs       : <subnet> + 3 + <l2leaf id>
    # GW on l2leafs : <subnet> + 1
    # Assign range larger than total l2leafs + 5
    inband_management_subnet: < IPv4subnet/mask >

    # VLAN number assigned to Inband Management SVI on l2leafs in default VRF.
    inband_management_vlan: < vlan-id | Default -> 4092 >
```
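For example, with a hypothetical /24 assigned to a POD's l2leafs:

```yaml
l2leaf:
  defaults:
    inband_management_subnet: 10.10.10.0/24   # .1 = gateway/virtual-router, .2/.3 = parent l3leafs, .4 onwards = l2leafs
    inband_management_vlan: 4092
```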