Automation – Part 1 configurator

In the past year we have seen automation take on a new meaning in the networking industry. What used to be feared by most network engineers is now being embraced by many. But for a network engineer to move away from the static mindset of configuring device by device through the CLI to a modular approach of building code is not easy. To make this transition easier, and to remove the fear some network engineers have of automating themselves out of a job, one must think of automation as allowing a machine to do the mundane tasks, just like we trust the ATM to handle our bank transactions.

One task I have seen be an easy win is device configuration. Automating a task like this means you can enforce your gold standards and know that every device in your network has been configured to them. What I saw for so many years working on large networks, where this task was not automated, was the copy & paste effect: go to the last device installed, copy that configuration and paste it to the new device being installed. This might have been OK for the first few devices, but then someone added a “fat finger” configuration and now we have the snowflake effect. One configuration I used to see a lot was with OSPF, where someone in the past, not fully understanding OSPF, put a no passive-interface on the loopback interfaces, and that was then copied over and over again! There are plenty of other examples I could give, but you have probably hit these and many more on your own networks.

So how does automating this simple task help? As I have already mentioned, you have a single place that stores your gold configuration standards. This also allows for version control as new features get added to your network or older configurations get removed. How can we automate it? Today there are many ways, and depending on your company's line of business you have multiple tools to choose from, whether you build it yourself or buy off the shelf. For this post I am going to focus on Ansible. Ansible is an automation platform which was used primarily by server admins in the past but has now seen a big uptake in the networking community as a platform of choice for network automation. I will not go into the finer details of how Ansible works, as there are many other blog posts out there that do, and the Ansible website is a great resource for documentation.

http://docs.ansible.com/ansible/intro.html
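
If you want to follow along, Ansible is available on CentOS from the EPEL repository (it can also be installed via pip); the commands below assume EPEL is an option on your host;

[ansible]# yum install epel-release
[ansible]# yum install ansible
[ansible]# ansible --version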

To build a device configurator with Ansible there are really only a few components you need to focus on;

  1. Jinja2 template file
  2. YAML file for your configuration variables
  3. YAML file for your configuration Playbook
  4. Inventory file for your devices

As you can see, we will be using two languages, Jinja2 & YAML. Again, without going into the details of these languages, and taking quotes from their respective websites – “Jinja2 is a templating engine for Python” & “YAML is a human-readable data serialization language”.
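
To make that pairing concrete before we dive in, here is a minimal, made-up example of a YAML variable and the Jinja2 line that consumes it (ntp_server is a hypothetical variable, not one used later in this post);

#YAML variable file
ntp_server: 10.0.0.1

#Jinja2 template line
ntp server {{ ntp_server }}

#Rendered result
ntp server 10.0.0.1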

Now that I have covered the “technical” aspect of using Ansible as a configurator, I will run through an example of building a configuration for two network switches; these switches will be the Dell Networking S4048-ON. To get started I have already created the below directories and inventory file on my CentOS Linux host, which is running Ansible 2.1.1.0;

[ansible]# tree
.
├── configs
├── group_vars
├── host_vars
├── inventory
└── templates

These directory’s & file will be used to house the following information;

  • configs – this will be the location for my final device configuration files
  • group_vars – this is where I store, as YAML files, variables that might change over time (SNMP string, local username & password, etc…)
  • host_vars – this is the location for the host-specific variables, stored as YAML files
  • inventory – this is my device inventory file
  • templates – this is the location for the Jinja2 template configuration files
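
One optional extra, not shown in the tree above, is an ansible.cfg file in this directory. Pointing it at the inventory file saves typing -i on every run; a minimal sketch would be;

ansible.cfg;

[defaults]
inventory = ./inventory

I will keep passing -i explicitly throughout this post so each command stands on its own.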

Let's start by building my Jinja2 template file, which will contain the baseline configuration found on all switch devices in the network. Within this file I will call on the different variables that will be pulled in from both the group_vars & host_vars YAML files. The nice thing about Jinja2 formatting is that it is easy to read, and if you are not familiar with other templating languages this is an easy one to start with.

baseline.j2;

 hostname {{ inventory_hostname }}
 !
 enable secret {{ secret }}
 !
 username {{ username }} password {{ password }} privilege 15
 !
 protocol spanning-tree rstp
 no disable
 forward-delay 4
 hello-time 1
 bridge-priority {{ switch_stp_pri }}
 !
 interface range TenGigabitEthernet 1/1 - 48
 description Server Access
 !
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 shutdown
 !
 interface range fortyGigE 1/49 - 54
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 shutdown
 !
 interface ManagementEthernet 0/0
 ip address {{ mgmt_ip }}
 no shutdown
 !
 !
 snmp-server location {{ site_city_country }}
 !
 ip access-list standard SNMP
 remark 1 SNMP Standard
 seq 5 permit 10.10.10.0/24
 !
 management route 0.0.0.0/0 {{ mgmt_gw }}
 !
 ip domain-name upintheether.com
 ip name-server {{ name_svr1 }}
 ip name-server {{ name_svr2 }}
 !
 logging source-interface management
 logging {{ logging_svr }}
 !
 banner motd ^C
 ***********************************************************
 upintheether.com
 ***********************************************************
 GET OFF MY DEVICE! THE INTERNET POLICE HAVE BEEN CALLED!

 ***********************************************************
 upintheether.com
 ***********************************************************
 ^C
 !
 snmp-server community upintheether ro SNMP
 snmp-server community upintheether rw SNMP
 !
 !
 tacacs-server key upintheether
 tacacs-server host {{ tacacs_server1 }}
 tacacs-server host {{ tacacs_server2 }}
 !
 aaa authentication enable default tacacs+ enable
 aaa authentication enable tacuser tacacs+ enable
 aaa authentication login localmethod local
 aaa authentication login tacuser tacacs+ local
 !
 clock timezone GMT 0
 !
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 !
 line console 0
 password {{ password }}
 line vty 0 9
 password {{ password }}
 login authentication tacuser
 exec-timeout 120 0
 logging synchronous level 2 limit 20
 !
 reload-type
 boot-type normal-reload
 config-scr-download enable
 !
 end

You will see throughout the above configuration template words that are enclosed by {{ }}. These are the variables within our configuration that we will import from both the group_vars & host_vars YAML files.
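
Jinja2 can also do more than straight substitution. As a quick hypothetical sketch (the uplinks variable and its fields are made up for illustration and are not part of baseline.j2), a for-loop can render a block of configuration per entry in a YAML list;

#host_vars entry (hypothetical)
uplinks:
  - { port: "fortyGigE 1/49", desc: "Uplink to SPINE_SW1" }
  - { port: "fortyGigE 1/50", desc: "Uplink to SPINE_SW2" }

{# Jinja2 loop in the template #}
{% for uplink in uplinks %}
interface {{ uplink.port }}
 description {{ uplink.desc }}
 no shutdown
!
{% endfor %}

So let's take a look at one of these files.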

As I have stated above, a YAML file within group_vars can contain configuration that might change on a regular basis, like the local username & password or the SNMP string. Keeping these located within their own directory, and not as part of the host YAML file (which we will cover shortly), allows for version control on these files when, for example, the username & password is updated. We can then, in a controlled manner, update all devices by calling on just this newly updated file to make the necessary changes to that subset of configuration. We can also run the YAML files through a Continuous Integration (CI) tool like Jenkins before deploying into production; I hope to cover Jenkins in an upcoming post.
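
As a small taste of the kind of check such a CI job might run, and assuming the yamllint tool is installed on the host, a lint pass over the variable files is a one-liner (yamllint accepts a file or a directory);

[ansible]# yamllint group_vars/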

For this example I have created two group_vars YAML files, one called login.yml and the other called svr.yml. (One thing to watch: Ansible matches files in group_vars to inventory group names, so for variables that should apply to every device you may need to place these files under a group_vars/all/ directory.)

login.yml;

---

#This is the local username
username: lord

#This is the local username password
password: MyHouse

#This is the local secret password
secret: GuessWho

#This is my first TACACS server
tacacs_server1: 10.1.0.10

#This is my second TACACS server
tacacs_server2: 10.2.0.10

svr.yml;

---

#This is my logging server
logging_svr: 10.0.1.100

#This is my first DNS server
name_svr1: 10.100.100.150

#This is my second DNS server
name_svr2: 10.200.200.150

As you can see from the above YAML file examples, this is an easy language to read, and the naming convention is entirely up to you, designed around how you want to call on these variables within the Jinja2 template file. Breaking these variables up into multiple files allows for greater control when updates to them are called for in the future.

We will now move on to the host_vars YAML files. These files contain variables that are specific to a single device, usually things like the management IP or the STP priority. For this example I have two switches that will act as a ToR pair within my network.

ToR_SW1.yml;

---

#switch management IP
mgmt_ip: 10.100.100.10/24

#management gateway
mgmt_gw: 10.100.100.1

#Switch STP priority. Primary should be 8192, secondary should be 16384
switch_stp_pri: 8192

#SNMP location string
site_city_country: DUB,DC1,R1R1

ToR_SW2.yml;

---

#switch management IP
mgmt_ip: 10.100.100.11/24

#management gateway
mgmt_gw: 10.100.100.1

#Switch STP priority. Primary should be 8192, secondary should be 16384
switch_stp_pri: 16384

#SNMP location string
site_city_country: DUB,DC1,R1R1

Again, as you can see, the YAML file is easy to read and the naming convention reflects how you want to call on these variables within the Jinja2 template file.

The final piece we need to create is our inventory file. This is the file we will call on when we run our Playbook. Here again the naming convention is left up to us: how we want to name our devices and the location/role they play within our network. (Not to confuse my wording of role with the Roles feature, which can be run within a Playbook and which I hope to cover in a later post.) Below is the inventory file I will be calling on. As you can see, my group naming is [spines] & [leaves];

[spines]
SPINE_SW1
SPINE_SW2

[leaves]
ToR_SW1
ToR_SW2

It is important to have your device names within the inventory file match those of the host_vars YAML files. When the Playbook is run, Ansible will look to match these up. If the naming is off in any way you will end up with an incorrect configuration file, that is, if Ansible does not simply fail when you run the Playbook.
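
A quick way to double-check what a group name resolves to, without touching any device, is Ansible's --list-hosts option, which just prints the hosts matching a pattern;

[ansible]# ansible leaves -i inventory --list-hosts

This makes it easy to confirm the inventory names and the host_vars file names line up before building anything.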

Now that we have the configuration template and the variable files in place, along with the correct device names within my inventory file, we can build our Playbook. The Playbook, again, is written in YAML;

config_build.yml;

---

- name: Build Configuration Files
  hosts: leaves
  connection: local
  gather_facts: no

  tasks:
    - name: BUILD CONFIGS
      template: src=templates/baseline.j2 dest=configs/{{ inventory_hostname }}.txt

From the above, the parts we need to focus on are hosts & tasks. The hosts section is where we call on the devices within the inventory file we want to run this Playbook against. In this Playbook I want to configure both ToR_SW1 & ToR_SW2, which are located under the group name [leaves] in my inventory file. The second part of this Playbook that is important to us is the tasks section. This is where we tell Ansible what we want to do, pulling it all together: take baseline.j2 as the source and write the rendered result out to configs/{{ inventory_hostname }}.txt for each device.
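
Before running the Playbook for real it is worth letting Ansible parse it first; the built-in --syntax-check option catches the YAML indentation mistakes that are so easy to make in a Playbook;

[ansible]# ansible-playbook -i inventory config_build.yml --syntax-check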

If I take a look at my Ansible directory I can now see all the files we have created in their respective directories;

[ansible]# tree
.
├── config_build.yml
├── configs
├── group_vars
│   ├── svr.yml
│   └── login.yml
├── host_vars
│   ├── ToR_SW1.yml
│   └── ToR_SW2.yml
├── inventory
└── templates
    └── baseline.j2

Now that everything looks to be in place, it's time to run our Playbook and start automating!

[ansible]# ansible-playbook -i inventory config_build.yml

PLAY [Build Configuration Files] ***********************************************

TASK [BUILD CONFIGS] ***********************************************************
changed: [ToR_SW1]
changed: [ToR_SW2]

PLAY RECAP *********************************************************************
ToR_SW1 : ok=1 changed=1 unreachable=0 failed=0 
ToR_SW2 : ok=1 changed=1 unreachable=0 failed=0

Success! The Playbook has run without any errors and we now have two new configuration files for our two switches;

[ansible]# tree
.
├── config_build.yml
├── configs
│   ├── ToR_SW1.txt
│   └── ToR_SW2.txt
├── group_vars
│   ├── login.yml
│   └── svr.yml
├── host_vars
│   ├── ToR_SW1.yml
│   └── ToR_SW2.yml
├── inventory
└── templates
    └── baseline.j2

ToR_SW1.txt;

 hostname ToR_SW1
 !
 enable secret GuessWho
 !
 username lord password MyHouse privilege 15
 !
 protocol spanning-tree rstp
 no disable
 forward-delay 4
 hello-time 1
 bridge-priority 8192
 !
 interface range TenGigabitEthernet 1/1 - 48
 description Server Access
 !
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 shutdown
 !
 interface range fortyGigE 1/49 - 54
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 shutdown
 !
 interface ManagementEthernet 0/0
 ip address 10.100.100.10/24
 no shutdown
 !
 !
 snmp-server location DUB,DC1,R1R1
 !
 ip access-list standard SNMP
 remark 1 SNMP Standard
 seq 5 permit 10.10.10.0/24
 !
 management route 0.0.0.0/0 10.100.100.1
 !
 ip domain-name upintheether.com
 ip name-server 10.100.100.150
 ip name-server 10.200.200.150
 !
 logging source-interface management
 logging 10.0.1.100
 !
 banner motd ^C
 ***********************************************************
 upintheether.com
 ***********************************************************
 GET OFF MY DEVICE! THE INTERNET POLICE HAVE BEEN CALLED!

 ***********************************************************
 upintheether.com
 ***********************************************************
 ^C
 !
 snmp-server community upintheether ro SNMP
 snmp-server community upintheether rw SNMP
 !
 !
 tacacs-server key upintheether
 tacacs-server host 10.1.0.10
 tacacs-server host 10.2.0.10
 !
 aaa authentication enable default tacacs+ enable
 aaa authentication enable tacuser tacacs+ enable
 aaa authentication login localmethod local
 aaa authentication login tacuser tacacs+ local
 !
 clock timezone GMT 0
 !
 protocol lldp
 advertise management-tlv system-capabilities system-description system-name
 advertise interface-port-desc
 !
 line console 0
 password MyHouse
 line vty 0 9
 password MyHouse
 login authentication tacuser
 exec-timeout 120 0
 logging synchronous level 2 limit 20
 !
 reload-type
 boot-type normal-reload
 config-scr-download enable
 !
 end
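
One nice property worth knowing: the template task is idempotent, so re-running the Playbook without changing any variables will report ok rather than changed in the recap. And when you do edit a variable, check mode together with diff output lets you preview the resulting file changes before anything is written;

[ansible]# ansible-playbook -i inventory config_build.yml --check --diff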

I know at first this looks like a lot of effort to get up and running, but once you spend that small bit of time to get set up you will reap the rewards further down the road when you have to stand up your next rack, pod or data center.
