Switch configurations are important, so you should back them up someplace you can get to them when the switches are down. As a side benefit, you’ll be able to track changes to your switch configs through git.
First, a few warnings
Secrets & ways to back up switch configurations
You should create two git repositories. Lock their permissions down as hard as possible, because both will contain secrets. Do not leave these repositories open to anyone other than the people who need to read these files in an emergency. Also consider hosting at least the configuration files for your devices off-prem, since you won’t be guaranteed to reach them if they’re behind a failed piece of network equipment.
The first repository is for the scripts to create ssh sessions and have your devices push their configurations to your tftp server. If you can’t use ssh key authentication, or if your device requires you to send a password to enable rights to run a command, you may be forced to have plaintext passwords in your scripts.
The second is for the switch configs themselves. Most of the devices I have experience with won’t show everything in a `show run` command - they will blank out things like passwords and keys. This is fine for casual browsing through the CLI looking at configurations, but it won’t bring your switch back from a factory reset, or set up an RMA replacement device. A config file sent to another device over tftp is usually a complete, valid copy of the device config.
For more complex devices or configurations, the configuration may be tens of thousands of lines long, and it’s not practical to try to capture the configuration over serial. In some cases (angrily staring at you, Palo Alto), you can capture the configuration over a serial connection (in at least three different formats - something that looks like JSON, XML and
set commands), but there is absolutely NO usable way to put it back over serial. You have to transfer the config in or out by tftp or scp. Even if the device has a USB port, you may not be able to use it. Some vendors do allow you to use the USB port to transfer files in or out, but only if you use their own branded USB drives. You remember those right? The ones you got years ago when you bought the switches? Don’t worry, I’m sure you haven’t lost all of them.
Windows’ built-in tftp client
Don’t use the Windows built-in tftp client. It’s trash. It never actually sends any data to the tftp server after sending a request and receiving an acknowledgement.
Instead, I’d suggest the all-in-one tftp client/server TFTPD64; it behaves properly.
You will want to restrict the ability to write to this tftp server using the host firewall, which is outside the scope of this document. If you don’t, anything that’s allowed to connect to this tftpd server can write to it, or overwrite a file you needed, or fill up the drive with whatever they want. They’ll be restricted to the tftp server root folder, but that won’t be much comfort if someone crashes your server or breaks something.
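As a sketch of that host-firewall restriction, here is what it might look like with `ufw` - both the tool and the 192.168.0.0/24 management subnet are assumptions, so adjust to your network. Note that ufw evaluates rules in the order they were added, so the allow rule must come first:

```shell
# Allow tftp (UDP/69) only from the assumed management subnet,
# and drop it from everyone else. Rule order matters in ufw.
ufw allow from 192.168.0.0/24 to any port 69 proto udp
ufw deny 69/udp
```

Depending on your kernel and firewall setup, you may also need the `nf_conntrack_tftp` helper loaded for the data transfer itself to pass a stateful firewall, since tftp replies come from an ephemeral port.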
Prerequisites for your backup target / tftp server
- An Ubuntu 20.04 VM or server
- tftp server configured to allow writing
- a bunch of switches and/or other devices that listen for ssh
Configure your tftpd server
```
apt update && apt install -y tftpd-hpa expect
```
Set up the tftp folder
```
chown tftp /srv/tftp
```
```
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure --permissive --create -v"
```
Here we’ve only changed the `TFTP_OPTIONS` line, to allow file creation and make basic Linux permissions the only security barrier. All files will be read and written as the user `tftp`. The `-v` also makes tftpd-hpa pretty chatty in the system log.
To get an idea of what all of these options mean and what other options there are, run `man tftpd-hpa`. Then restart the service to pick up the new configuration:

```
systemctl restart tftpd-hpa
```
Firstly, we need a couple of folders.
```
mkdir -p /opt/script-backup-switch-configs /opt/switch-configs
cd /opt/script-backup-switch-configs
git init
cd /opt/switch-configs
git init
```
If you haven’t already, create your two repositories. They need to be empty, without the readme file some git web apps will create for you if you don’t un-select that check box. Gitlab and Github both provide instructions for adding a remote within each local repository so that `git push` will work. Again, these repositories should NOT be public.
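For example, in each local repository - the remote URL here is a placeholder, so use whatever your git host gives you:

```shell
cd /opt/switch-configs
# git@git.example.com:network/switch-configs.git is a made-up URL
git remote add origin git@git.example.com:network/switch-configs.git
# Ubuntu 20.04's git defaults to a branch named "master"
git push -u origin master
```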
Now we need a wrapper to run the actual individual switch backups:
```
#!/bin/bash
# Echo a datestamp to standard out
date -u
# Find all files ending in ".exp" and run them
for file in /opt/script-backup-switch-configs/*.exp
do
    "$file"
done
# Move the tftp files we've received to the git repo where we're tracking changes
mv /srv/tftp/*.conf /opt/switch-configs/
cd /opt/switch-configs
git add *
sleep 1
git stage *
sleep 1
git commit -m "scheduled commit"
sleep 1
git push
# Echo a datestamp to standard out
date -u
```
So much for the easy part. Now you get to deal with `expect`. Expect takes a list of tasks and looks for a result that you define before it will continue with the rest of the script. It’s super neat, but it can be frustrating at times.
Each of your scripts will connect to a device over ssh and request that the device copy the configuration file it’s using to your new tftp server.
Here is an example that assumes that your tftp server is located at 192.168.0.100, and that you have a Ruckus ICX switch that you’d like to back up that lives at 192.168.0.200.
```
#!/usr/bin/expect -f
set timeout 60
spawn ssh admin@192.168.0.200
expect ">"
send -- "en\r"
expect "User Name:"
send -- "admin\r"
expect "Password:"
send -- "P@ssword1\r"
expect "#"
send -- "copy running-config tftp 192.168.0.100 switch1.conf\r"
expect "#"
send -- "exit\r"
expect ">"
send -- "exit\r"
expect eof
```
In each case, we’re sending input, and telling
expect what to look for in the response before continuing. If you’re not sure what your .exp file should contain, you can use
autoexpect to record an interaction - but be warned that if any of that interaction changes - say someone adds an MOTD to your switch config - the expect script will fail and eventually time out. You should try to make the expectation as simple a match as you can. Configurations created by autoexpect are very brittle.
Each device will have a different set of commands to use to send the configuration file to the tftp server. You will have to look at the documentation for your device. You should also use ssh keys wherever possible. You really don’t want to leave passwords around in these files if you can avoid it.
You should also attempt to connect to the target devices over ssh yourself manually before running your script - this allows you to do things like accept the host key from the device and practice the command set you’ll be sending.
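For example, against the switch at the article’s example address (the `admin` username is an assumption; use whatever account your device expects):

```shell
# Connect once by hand to accept the host key and rehearse the command set
ssh admin@192.168.0.200
# ...or pre-seed known_hosts non-interactively instead
ssh-keyscan 192.168.0.200 >> ~/.ssh/known_hosts
```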
All expect scripts should include the `#!/usr/bin/expect -f` line as their first line, and be marked as executable.
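Assuming the script directory created earlier, marking them all executable is one command:

```shell
# Make every expect script in the backup-scripts directory executable
chmod +x /opt/script-backup-switch-configs/*.exp
```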
Handling odd devices
If you have devices that do not allow you to set the name of the file, you can create an expect script with a name that won’t get picked up by your wrapper’s `*.exp` glob, and add a separate stanza for each one of those. Palo Alto firewalls, for example, don’t allow you to specify a file name on the tftp server (unless I’m missing something).
```
#!/bin/bash
date -u
for file in /opt/script-backup-switch-configs/*.exp
do
    "$file"
done
### Add our odd devices
./palo01
mv /srv/tftp/running-config.xml /srv/tftp/palo01.xml.conf
./palo02
mv /srv/tftp/running-config.xml /srv/tftp/palo02.xml.conf
###
mv /srv/tftp/*.conf /opt/switch-configs/
cd /opt/switch-configs
git add *
sleep 1
git stage *
sleep 1
git commit -m "scheduled commit"
sleep 1
git push
```
You might also handle renaming the odd files inside your expect script if you want to figure that out and keep the wrapper simple.
Add your scripts to git
`do-backups` semi-automates the process of committing each version of your device configs, but you’ll have to manually commit changes to `do-backups` itself and to your expect scripts so that you don’t lose your hard work if this server or VM dies. Don’t forget to add a git remote, commit your changes, and push them to your git server.
Automating the script run
It’s as simple as creating a symlink in `/etc/cron.daily` that links to your `do-backups` script.
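Assuming the wrapper lives at the path used earlier in this article:

```shell
# Link the wrapper into cron.daily so it runs once a day
ln -s /opt/script-backup-switch-configs/do-backups /etc/cron.daily/do-backups
```

Note that `run-parts`, which executes the scripts in `/etc/cron.daily` on Debian and Ubuntu, skips filenames containing dots, so don’t give the link an extension.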
You could also create a wrapper script that does things like direct the output of all of those scripts into a log file, touches a file to update its time stamp, or email today’s log to yourself.
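A minimal sketch of such a wrapper, logging each run and touching a marker file on success so monitoring can check the backup’s age. The `/tmp` paths and the `true` stand-in are placeholders so the sketch is safe to run anywhere; in practice you’d point these at something like `/var/log/switch-backups` and the real `do-backups` path:

```shell
#!/bin/bash
# Hypothetical cron wrapper: log each run and track the last success.
# LOGDIR and BACKUP_SCRIPT are placeholders - adjust to your layout.
LOGDIR=/tmp/switch-backup-logs
BACKUP_SCRIPT=${BACKUP_SCRIPT:-true}  # stand-in for /opt/script-backup-switch-configs/do-backups

mkdir -p "$LOGDIR"
LOG="$LOGDIR/$(date -u +%Y-%m-%d).log"

# Run the backup, capture all of its output, and record success
if "$BACKUP_SCRIPT" >>"$LOG" 2>&1; then
    touch "$LOGDIR/last-success"
fi
```

From there it’s a short step to mailing the day’s log to yourself from the same wrapper.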
Well, it’s complicated and messy, but this procedure has saved me large amounts of stress and countless hours of work: tracking down which configuration change broke a switch, finding useful old port configs, and bringing newly RMA’d devices online from scratch.
Hopefully it’ll save you some time too, or inspire you to create your own solution. I looked around for a real product that would do this, but all I could find were vendor-specific solutions that were worse than this.
Appendix A - Device Types
| Manufacturer | Device | Pubkey Support | Other notes |
| --- | --- | --- | --- |
| Ruckus | ICX (FastIron OS) | Yes, in some versions | You can enable a full-privilege read-only user to prevent embedding more important credentials in a script |
| Palo Alto | PAN-OS devices | Yes | You can save the current running config to a specific named XML file, but not if you’re using read-only credentials, which I would encourage. |
Appendix B - script snippets
- Depending on how old your ICX is and whether it has modern enough firmware, you may have trouble connecting by SSH. Older firmware doesn’t support the key exchange algorithms modern ssh clients require by default. You can modify the spawn ssh line as follows:

```
spawn ssh admin@192.168.0.200 -oKexAlgorithms=+diffie-hellman-group1-sha1
```

You’ll replace `diffie-hellman-group1-sha1` with whatever your switch supports. You’ll know what it supports because your SSH client will tell you when you try to connect.
- You can also enable a full-privilege but read-only user with a line like this in your switch config:

```
enable read-only-password my-read-only-password
```