The MikroTik RouterBOARD 1100 AHx2 (RB1100AHx2) is a 1U rackmount gigabit Ethernet router with a dual-core CPU. It can reach up to a million packets per second and supports hardware encryption. This post describes how to access the MikroTik 1100 AHx2 using FTP (File Transfer Protocol). FTP access is desirable in order to update the RouterOS software or perform other activities. The FileZilla FTP client will be used to describe the necessary steps, but you can use any FTP client.
The versions of software discussed in this post are as follows:
Windows 10 Pro
RouterOS 6.42.10 (long-term)
FileZilla Client 3.39
Let’s get started…
Start by downloading and installing the FileZilla client. Once installed, open FileZilla and navigate to File -> Site Manager, select “New Site” and create a name for the entry (e.g., “MikroTik”). Set the Protocol to “SFTP – SSH File Transfer Protocol” and the Logon Type to “Normal”. Enter the user name and password. If you have not changed these on the MikroTik 1100 AHx2, the user name will be admin and the password field should be left blank (see Figure 1).
Figure 1
Select “Connect” and you should be connected.
That’s it. A couple of minutes with the venerable FileZilla FTP client and you have FTP access to the MikroTik 1100 AHx2.
In a previous post I described how to install and configure Tobi Oetiker’s MRTG (Multi Router Traffic Grapher) on an Ubuntu server. In this post I will describe how to install and configure it on FreeBSD. Once configured, you’ll be able to use MRTG to monitor the traffic in and out of your network using the SNMP capability in your network’s gateway/router. MRTG generates static HTML pages containing PNG images which provide a visual representation of this traffic. MRTG typically produces daily, weekly, monthly, and yearly graphs. MRTG is written in Perl and works on Unix/Linux as well as Windows. MRTG is free software licensed under the GNU GPL.
Software versions used in this post were as follows:
apache24 2.4.23
FreeBSD 11.0-RELEASE
mrtg-2.17.4
The steps discussed assume that the FreeBSD Ports Collection is installed. If not, you can install it using the following command:
portsnap fetch extract
If the Ports Collection is already installed, make sure to update it:
portsnap fetch update
Okay, let’s get started. All commands are issued as user root. When building the various ports you should accept the default configuration options.
Install an http server
MRTG requires an http server to be installed and operating correctly. In our example, we’ll install and use the Apache http server. Navigate to the Apache port and build it:
cd /usr/ports/www/apache24
make config-recursive install distclean
Once Apache has been successfully installed, use the sysrc command to add the following line to /etc/rc.conf so that the Apache server will start automatically at system boot:
sysrc apache24_enable="YES"
Now let’s start Apache to make sure it works:
service apache24 start
Point your web browser to the host name or IP address of the FreeBSD host you’ve installed Apache on and you should see the venerable “It works!”
Install and configure MRTG
Now that we have an http server up and running let’s install MRTG:
cd /usr/ports/net-mgmt/mrtg
make install clean
What does the MRTG port install and where is that stuff located?
# find / -name 'mrtg'
/var/run/mrtg
/var/mail/mrtg
/usr/ports/net-mgmt/mrtg
/usr/local/share/doc/mrtg
/usr/local/share/examples/mrtg
/usr/local/bin/mrtg
/usr/local/etc/mrtg
/usr/local/bin/cfgmaker
/usr/local/bin/indexmaker
MRTG provides the example configuration file /usr/local/etc/mrtg/mrtg.cfg.sample that describes global configuration parameters as well as various configuration options for the SNMP targets you want to monitor. If you already have some experience with MRTG and SNMP you can simply copy or move this file to /usr/local/etc/mrtg/mrtg.cfg then modify it to meet your requirements. In our example, however, we’re going to create the requisite mrtg.cfg file from scratch.
MRTG includes the script cfgmaker that will create and populate a basic mrtg.cfg file with information obtained from your gateway/router. So, before running /usr/local/bin/cfgmaker, you should activate and configure the SNMP service in your gateway/router. This typically involves logging into the device and enabling SNMP. The default SNMP community name is typically “public.” If you change the SNMP community name to something else, make note of it. Now, let’s run cfgmaker, substituting your SNMP community name if you’ve changed it, and adding the IP address of your gateway/router:
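As a sketch, assuming the default community name “public” and a gateway at 192.168.1.1 (substitute your own community name and address), the invocation might look something like this:
cfgmaker --output /usr/local/etc/mrtg/mrtg.cfg public@192.168.1.1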
If you would like to add more than one device to mrtg.cfg simply append the additional URL(s) to the same mrtg.cfg file. Then, when you build the web page using the indexmaker command described below, graphs associated with each device will be displayed on the same HTML page:
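For instance, a second (hypothetical) device at 192.168.1.2 could be appended to the existing configuration along these lines:
cfgmaker public@192.168.1.2 >> /usr/local/etc/mrtg/mrtg.cfg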
Next, open /usr/local/etc/mrtg/mrtg.cfg and, under Global Config Options, uncomment the line WorkDir: /home/http/mrtg and change it to WorkDir: /usr/local/www/apache24/data/mrtg. This is the directory from which the Apache http server will serve the MRTG HTML pages. If you’re using something other than Apache as your http server then you’ll need to change this path.
Next, uncomment the line Options[_]: growright, bits. By default MRTG graphs grow to the left, so the growright option flips the direction of the traffic shown in MRTG’s graphs, placing the current time at the right edge of the graph and the historical values to the left. The bits option specifies that the monitored traffic values obtained from your device are multiplied by 8 and displayed as bits per second instead of bytes per second.
MRTG includes the script indexmaker. This is what we’ll use to create the pages used to display the MRTG graphs. First, let’s create the directory from which Apache http server will serve up the pages:
mkdir /usr/local/www/apache24/data/mrtg
Then use indexmaker combined with our mrtg.cfg file to create and populate an index.html file in that directory:
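Assuming the paths used above, the command might look something like this:
indexmaker --output=/usr/local/www/apache24/data/mrtg/index.html /usr/local/etc/mrtg/mrtg.cfg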
Now we need to add an Alias and a Directory directive to Apache’s configuration file to support MRTG. Open /usr/local/etc/apache24/httpd.conf and add the following lines in the section containing similar Directory directives, or simply append them to the bottom of the file:
Alias /mrtg "/usr/local/www/apache24/data/mrtg"
<Directory "/usr/local/www/apache24/data/mrtg">
Options None
AllowOverride None
Require all granted
</Directory>
And change the user and group for the following directories to mrtg:
chown -R mrtg:mrtg /usr/local/etc/mrtg
chown -R mrtg:mrtg /usr/local/www/apache24/data/mrtg
Finally, let’s restart the http server:
service apache24 restart
Starting MRTG
Okay, now that MRTG has been installed and configured let’s start it up and see what it displays. Use the sysrc command to add the following line to /etc/rc.conf:
sysrc mrtg_daemon_enable="YES"
Then start the MRTG daemon:
service mrtg_daemon start
The MRTG daemon will now run automatically each time FreeBSD starts.
Now point your browser to http://your-http-server-address/mrtg and you should see a page that resembles Figure 1. You may have more or fewer graphs depending on the number of interfaces reported by your device(s).
Figure 1
You’ll see the graphs start to “grow” to the right as traffic is monitored over time, with the Y axis displayed in bits per second. If you click on any one of these graphs you’ll be taken to a different page showing individual graphs for 30-minute, two-hour, and daily averages, along with the maximum, average, and current bit rate in and out of that particular interface. By default, these graphs will update every 5 minutes.
Only interested in displaying one particular interface? Want to graph other SNMP data? Now that you have a basic mrtg.cfg file created, you can modify it or incorporate some of the global and target parameter examples contained in the file /usr/local/etc/mrtg/mrtg.cfg.sample to further customize your configuration. Just remember to run indexmaker again to update the MRTG index.html file.
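As a purely illustrative sketch, a hand-written target for a single interface (referenced here by a hypothetical ifIndex of 2, with made-up names throughout) might look something like this:
Target[gw_wan]: 2:public@192.168.1.1:
MaxBytes[gw_wan]: 125000000
Title[gw_wan]: Gateway WAN interface
PageTop[gw_wan]: <h1>Gateway WAN interface</h1>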
Conclusion
This concludes the post on how to install and configure MRTG on FreeBSD. As you can see, MRTG isn’t terribly complicated and proves to be a really nice port for monitoring and graphing traffic in and out of your gateway/router. For a full list of all the configuration options and other information I encourage you to visit the MRTG web site.
In a recent post I described how I improved the reliability of my file system backups by using the data replication capabilities inherent in the FreeBSD Zettabyte File System (ZFS). In this post I will explore Tarsnap, another tool I recently started to use to perform secure offsite backups of my most important files.
The versions for the software used in this post are as follows:
FreeBSD 11.0-RELEASE
tarsnap 1.0.37
The steps discussed assume that the FreeBSD Ports Collection is installed. If not, you can install it using the following command:
portsnap fetch extract
If the Ports Collection is already installed, make sure to update it:
portsnap fetch update
Okay, let’s get started. All commands are issued as the user root. While building the various ports you should accept all default configuration options unless otherwise instructed.
Create a Tarsnap account
Before installing Tarsnap I visited the Tarsnap registration page and created an account. Tarsnap operates on a prepaid basis, so you have to add some money to your account before you can start using it. The minimum amount is $5.00. Money is deducted from your prepaid balance based on the actual number of bytes stored and bandwidth used (after compression and data deduplication). Tarsnap prices are currently $0.25 per gigabyte per month for storage and $0.25 per gigabyte for bandwidth.
Install Tarsnap
After creating an account it was time to install Tarsnap. First I made sure the Ports Collection was up to date:
portsnap fetch update
Then proceeded with the install, accepting all default configuration options:
cd /usr/ports/sysutils/tarsnap
make config-recursive install distclean
Next, I ran tarsnap-keygen, a utility which registers my machine with the Tarsnap server and generates a key that is used to encrypt and sign the archives that I create. I needed to have the e-mail address and password I used to create my Tarsnap account handy when running this command. In the following example I’ve registered a machine with the host name tarsnap-test:
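The command I used looked roughly like the following (the e-mail address here is a placeholder; the key file location is discussed below):
tarsnap-keygen --keyfile /root/tarsnap-test.key --user user@example.com --machine tarsnap-test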
Note that if I had multiple machines containing files I wished to back up to Tarsnap, I would want to create a separate key file for each machine.
By default tarsnap-keygen will create the key file /root/tarsnap.key. This can be changed by adding the --keyfile option to specify a different location and/or key name. In the example above I’ve changed the name of my key file to tarsnap-test.key to help disambiguate keys in case I add additional machines to my Tarsnap account in the future.
Tarsnap creates the file /usr/local/etc/tarsnap.conf when installed. This config file is read by the tarsnap utility and specifies a number of default options, all of which will be ignored if the options in question are specified at the command line. Since I changed the name of the default key file, I revised the value of the keyfile option in /usr/local/etc/tarsnap.conf:
keyfile /root/tarsnap-test.key
Note that you should store a copy of this key someplace safe. If you lose your Tarsnap key file(s), you will not be able to create new archives or access your archived data.
Using Tarsnap
After installing Tarsnap I was ready to create and backup my first archive. Tarsnap commands follow a syntax similar to the venerable tar utility. The -c option creates a new archive containing the specified files. The -f option specifies which file to write the archive to:
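For example, a first archive might be created like this, where backup-20150729 is simply the archive name I chose and the path is specific to my setup:
tarsnap -c -f backup-20150729 /pool_0/dataset_0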
Performing subsequent backups of these files will go faster since Tarsnap’s deduplication feature will avoid sending data which was previously stored.
If I want to list all archives stored with Tarsnap I can use the following command:
tarsnap --list-archives | sort
Adding one or more instances of the -v option to this command will make the output more verbose. For example, if -v is specified one or more times, the creation time of each archive is printed; if it is specified two or more times, the command line with which Tarsnap was invoked to create each archive is also printed.
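For example, to include the creation time of each archive in the listing:
tarsnap -v --list-archives | sort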
If I want to list the files contained within a single archive I can use the following command:
tarsnap -tvf backup-20150729 | more
The -t option is used to print to stdout the files stored in the specified archive; the -v option of course makes the output a little more verbose.
If I want to delete one or more archives I can use the -d option:
tarsnap -df backup-20150729
When the time comes to restore one or more files from Tarsnap I have a couple of options. For example, I can recover all files contained in a particular archive using the following command. In this example, I’ve extracted all files contained in the archive backup-20150729 to /tmp, where I can recover one or more files:
cd /tmp
tarsnap -xf backup-20150729
Or I can extract just one of the directories in this archive:
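A sketch, using the hypothetical directory discussed in the note below:
cd /tmp
tarsnap -xf backup-20150729 pool_0/dataset_0/some-directory/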
Note here that you must exclude the leading / from the directory you’re restoring. So in this case, instead of /pool_0/dataset_0/some-directory/, it should be pool_0/dataset_0/some-directory/.
Or regress even further into the archive to recover a single file if desired:
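For example, to recover a single (hypothetical) file:
tarsnap -xf backup-20150729 pool_0/dataset_0/some-directory/some-file.txt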
Finally, if for whatever reason I no longer wish to use Tarsnap on this machine I can invoke the nuke option, which will delete all of the archives stored:
tarsnap --nuke
To make sure you’re really serious, Tarsnap will ask you to type the text “No Tomorrow” when using this command.
Okay, after getting comfortable with the Tarsnap commands and backing up files manually for a couple of days, I created this ugly little script that creates a daily archive of a specified directory; looks for any archives older than 30 days and deletes them; and logs its output to the file /home/iceflatline/cronlog:
#!/bin/sh
### BEGIN INFO
# PROVIDE:
# REQUIRE:
# KEYWORD:
# Description:
# This script is used to perform backups using the tarsnap utility. The number of backups to retain is defined in the variable retention.
# Author: iceflatline <iceflatline@gmail.com>
#
# OPTIONS:
# -c: Create an archive containing the specified files
# -d: Delete the specified archive
# -f: Archive name
### END INFO
### START OF SCRIPT
backup_prefix=backup
retention=30
# Full paths to the following utilities are needed when running the script from cron.
echo"Could not find any backups to destroy.">>$log
fi
# Mark the end of the script with a delimiter.
echo"**********">>$log
# END OF SCRIPT
I wrote the script to /home/iceflatline/bin/tarsnap.sh, where I maintain some other scripts, and made it executable:
chmod +x /home/iceflatline/bin/tarsnap.sh
Then added the following cron job to the crontab under user root. The script runs every day at 0800 local time:
# Run tarsnap backup script every day at 0800
0 8 * * * /home/iceflatline/bin/tarsnap.sh
Conclusion
Well, that’s it. A short post describing my experiences using Tarsnap, an easy, secure and inexpensive solution for performing offsite backups of my most important files.
Recently I decided to improve the reliability of my file system backups by using the data replication capabilities inherent in the FreeBSD Zettabyte File System (ZFS). ZFS provides a built-in serialization feature that can send a stream representation of a ZFS file system (which ZFS refers to as a “dataset”) to standard output. Using this technique, it is possible not only to store the dataset(s) on another ZFS storage pool (zpool) connected to the local system, but also to send it over a network to another FreeBSD system. ZFS dataset snapshots serve as the basis for this replication, and the essential ZFS commands used for replicating the data are zfs send and zfs receive.
This post describes how I used this ZFS feature to perform replication of ZFS dataset snapshots from my home FreeBSD server to another FreeBSD machine located offsite. I’ll also discuss how I manage the quantity of snapshots stored locally and offsite, as well as a couple of options for recovering my files should it become necessary.
For purposes of example, I’ll refer to the FreeBSD system hosting the snapshots I want to send as “server”, and the offsite FreeBSD system that I will send snapshots to as “backup”. Unless otherwise noted, all steps were performed as the user root. However, a non-root user, “iceflatline”, was created on both machines and is used for many of the commands. The versions for the software used in this post were as follows:
FreeBSD 11.0-RELEASE
Configure server
On server I had created a simple mirror vdev for my zpool consisting of two 2 TB disks. The mirror and the zpool were created using the following commands:
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -a 1m ada1
gpart add -t freebsd-zfs -a 1m ada2
zpool create pool_0 mirror /dev/ada1p1 /dev/ada2p1
As you can see, I created one large ZFS partition (-t freebsd-zfs) on each disk. With the -a option specified, the gpart utility tries to align the start offset and size of the partition to a multiple of the alignment value; I chose 1 MiB. The advantage of this is that 1 MiB is a multiple of 4096 bytes (helpful for larger, 4 KiB-sector drives), and it leaves a leftover fraction of a megabyte at the end of the drive. In the future, if I have to replace a failed drive containing a slightly different number of sectors, I’ll have some wiggle room in case the replacement drive is slightly smaller in size. After partitioning each drive I created the zpool using these partitions. I elected to use the name “pool_0” for this zpool.
To improve overall performance and usability of any datasets that I create in this zpool, I performed the following configuration changes:
zfs set atime=off pool_0
zfs set compression=lz4 pool_0
zfs set snapdir=visible pool_0
The ZFS atime property controls whether the access time for files is updated when the files are read. Setting this property to off avoids producing write traffic when reading files, which can result in a gain in file system performance. The compression property controls the compression algorithm used for the datasets; lz4 is a high-performance replacement for the older Lempel-Ziv-Jeff-Bonwick (lzjb) algorithm, featuring faster compression and decompression, as well as a generally higher compression ratio. The snapdir property controls whether the directory containing my snapshots (pool_0/dataset_0/.zfs) is hidden or visible. I prefer the directory to be visible so I have another way to verify the existence of snapshots. These configuration changes were made at the zpool level so that any datasets I create in this zpool will inherit these settings; however, I could configure each dataset differently if desired.
The dataset on server that I back up offsite is called “dataset_0”, and was created using the following command:
zfs create pool_0/dataset_0
To ensure I still have some headroom if/when the zpool starts to get full, I set the size quota for this dataset to 80% of the zpool size (1819 GiB), or 1455 GiB:
zfs set quota=1455G pool_0/dataset_0
Since ZFS can send a stream representation of a dataset to standard output, it can be piped through secure shell (“SSH”) to securely send it over a network connection. By default, root user privileges are required to send and receive these streams, which requires logging into the receiving system as user root. However, logging in as the user root over SSH is disabled by default on FreeBSD systems for security reasons. Fortunately, the necessary ZFS commands can be delegated to a non-root user on each system. The minimum delegated ZFS permissions I needed for user iceflatline to successfully send snapshots from server were as follows:
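As user root on server, a delegation along these lines should be sufficient (the exact permission set shown here is my assumption; adjust as needed):
zfs allow -u iceflatline send,snapshot,hold pool_0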
In this case I delegated the permissions at the zpool level, so any datasets I create in pool_0 will inherit them. Alternatively I could have delegated permissions at the dataset level or a combination of both if desired. There’s a lot of flexibility.
I’m able to verify which permissions were delegated anytime using the following command as either user root or iceflatline:
zfs allow pool_0
Finally, to avoid having to enter a password each time a backup is performed, I generated a SSH key pair as user iceflatline on server and copied the public key to /usr/home/iceflatline/.ssh/authorized_keys on backup.
Configure backup
I configured backup similar to server: a simple mirror vdev, and a zpool named pool_0 with the same configuration as the one in server. I did not create a dataset on this zpool because I will be replicating pool_0/dataset_0 on server directly to pool_0 on backup.
The minimum delegated ZFS permissions I needed for user iceflatline on backup to successfully receive these snapshots were as follows:
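On backup, as user root, the delegation might look like this (again, the exact permission set is an assumption):
zfs allow -u iceflatline compression,create,mount,mountpoint,receive pool_0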
After configuring both machines it was time to test. First, I created a full snapshot of pool_0/dataset_0 on server using the following command as user iceflatline:
zfs snapshot -r pool_0/dataset_0@snap-test-0
While not strictly needed in this case, the -r option will recursively create snapshots of any child datasets that I may have created under pool_0/dataset_0.
Now I can send this newly created snapshot to backup, which was assigned the IP address 192.168.20.6. The following command is performed as user iceflatline:
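A sketch of the initial full send, using the options described below and the backup address noted above:
zfs send pool_0/dataset_0@snap-test-0 | ssh iceflatline@192.168.20.6 zfs receive -duvF pool_0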
The zfs send command creates a data stream representation of the snapshot and writes it to standard output. The standard output is then piped through SSH to securely send the snapshot to backup. The -v option will print information about the size of the stream and the time required to perform the receive operation. The -u option prevents the file system associated with the received data stream (pool_0/dataset_0 in this case) from being mounted. This was desirable as I’m using backup simply to store the dataset_0 snapshots offsite; I don’t need to mount them on that machine. The -d option is used so that all but the pool name (pool_0) of the sent snapshot is appended to pool_0 on backup. Finally, the -F option is useful for destroying snapshots on backup that do not exist on server.
zfs send can also determine the difference between two snapshots and send only the differences between the two. This saves on disk space as well as network transfer time. For example, if I perform the following command as user iceflatline:
zfs snapshot pool_0/dataset_0@snap-test-1
A second snapshot, pool_0/dataset_0@snap-test-1, is created. This second snapshot contains only the file system changes that occurred in pool_0/dataset_0 between the time I created this snapshot and the previous snapshot, pool_0/dataset_0@snap-test-0. Now, as user iceflatline, I can use zfs send with the -i option and indicate the pair of snapshots to generate an incremental stream containing only the data that has changed:
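A sketch of the incremental send (including the -R option discussed below):
zfs send -R -i pool_0/dataset_0@snap-test-0 pool_0/dataset_0@snap-test-1 | ssh iceflatline@192.168.20.6 zfs receive -duvF pool_0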
Note that sending an incremental stream will only succeed if an initial full snapshot already exists on the receiving side. I’ve also included the -R option with the zfs send command this time. This option will preserve the ZFS properties of any descendant datasets, snapshots, and clones in the stream. If the -F option is specified when this stream is received, any snapshots that exist on the receiving side that do not exist on the sending side are destroyed.
By the way, I can list all snapshots created of pool_0/dataset_0 using the following command as either user root or iceflatline:
zfs list -t snapshot
After testing to make sure that snapshots could be successfully sent to backup, I created an ugly little script that creates a daily snapshot of pool_0/dataset_0 on server; looks for yesterday’s snapshot and, if found, sends an incremental stream containing only the file system data that has changed to backup; looks for any snapshots older than 30 days and deletes them on both server and backup; and finally, logs its output to the file /home/iceflatline/cronlog:
#!/bin/sh
### BEGIN INFO
# PROVIDE:
# REQUIRE:
# KEYWORD:
# Description:
# This script is used to replicate incremental zfs snapshots daily from one pool/dataset(s) to another using ZFS send and receive.
# The number of snapshots to retain is defined in the variable retention.
# Note that an initial full snapshot must be created and sent to destination before this script can be successfully used.
# Author: iceflatline <iceflatline@gmail.com>
#
# OPTIONS:
# -R: Generate replication stream recursively
# -i: Generate incremental stream
# -v: Be verbose
# -u: Do not mount the received stream
# -d: Use the full sent snapshot path without the first element (without pool name) to determine the name of the new snapshot
# -F: Destroy snapshots and file systems that do not exist on the sending side.
### END INFO
### START OF SCRIPT
# These variables are named first because they are nested in other variables.
snap_prefix=snap
retention=30
# Full paths to these utilities are needed when running the script from cron.
echo"Could not find any snapshots to destroy.">>$log
fi
# Mark the end of the script with a delimiter.
echo"**********">>$log
# END OF SCRIPT
To use the script, I saved it to /home/iceflatline/bin with the name zfsrep.sh and, as user iceflatline, made it executable:
chmod +x /home/iceflatline/bin/zfsrep.sh
Then added the following cron job to the crontab under the user iceflatline account. The script runs every day at 2300 local time:
# Run backup script every day at 2300
0 23 * * * /home/iceflatline/bin/zfsrep.sh
The script is working pretty well for me, but I soon discovered that if it missed a daily snapshot, or could not successfully send a daily snapshot to backup (say because either server or backup was offline, or the connection between the two was down), then an error would occur the following day when the script attempts to send a new incremental snapshot. This is because backup is missing the previous day’s snapshot and so the script cannot send an incremental stream. To recover from this error I needed to manually send the missing snapshots. Say, for example, the most recent snapshot successfully replicated to backup was pool_0/dataset_0@snap-20150622.
Now say that the script was not able to create pool_0/dataset_0@snap-20150623 on server because it was offline for some reason. Consequently, it was not able to replicate this snapshot to backup. The next day, when server is back online, the script will successfully create another daily snapshot, pool_0/dataset_0@snap-20150624, but will not be able to send it to backup because pool_0/dataset_0@snap-20150623 is missing. To recover from this problem I’ll need to manually perform an incremental zfs send using pool_0/dataset_0@snap-20150622 and pool_0/dataset_0@snap-20150624:
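The manual catch-up send would look roughly like this:
zfs send -R -i pool_0/dataset_0@snap-20150622 pool_0/dataset_0@snap-20150624 | ssh iceflatline@192.168.20.6 zfs receive -duvF pool_0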
Now both server and backup have the same snapshots and the script will function normally again.
File recovery
Now that I have a way to reliably replicate the file system offsite on a daily basis, what happens if I need to recover some files? Fortunately, there are a couple of options available to me. First, because I chose to make snapshots visible on server, I can easily navigate to /pool_0/dataset_0/.zfs/snapshot and copy any files up to 30 days in the past (given the current retention value in the script). I could also mount pool_0/dataset_0 on backup and copy these same files from there using a utility like scp if desired.
I could also send snapshot(s) from backup back to server. To do this I would create a new dataset on pool_0 on server. In this example, the new dataset is named receive:
zfs create pool_0/receive
Why is creating a new dataset necessary? Because the dataset pool_0/dataset_0 already exists on server, so if I tried to send pool_0/dataset_0@some-snapshot from backup back to server there would be a conflict. I could have avoided this step if I had created a dataset on pool_0 on backup and replicated snapshots of pool_0/dataset_0 to that dataset instead of directly to pool_0.
Okay, now, as user iceflatline I can send the snapshot(s) I want from backup to server:
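A sketch of that command, run on backup, assuming server is reachable at a hypothetical address of 192.168.20.5 and that the necessary send and receive permissions have been delegated on each side:
zfs send pool_0/dataset_0@snap-20150620 | ssh iceflatline@192.168.20.5 zfs receive -duv pool_0/receive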
After the stream is fully received I switch to user root and mount the dataset:
zfs mount pool_0/receive/dataset_0
This will result in pool_0/dataset_0@snap-20150620, sent from backup, being mounted read-only at pool_0/receive/dataset_0 on server. Now I can navigate to /pool_0/receive/dataset_0 and copy the files I need to recover, or I can clone, or clone and promote, pool_0/receive/dataset_0@snap-20150620 if desired.
Conclusion
Well, that’s it. A long and rambling post on how I’m using the replication features in FreeBSD’s ZFS to improve the reliability and resiliency of my file system backups. So far, it’s working rather well for me, and it’s been a great learning experience. Is it the best or only way? Likely not. Are there better (or at least more professional) utilities or scripts to use? Most assuredly. But for now I’ve met my most important requirement: reliably backing up my data offsite.
I’ve grown tired of connecting to each host individually in my network to examine their log files. In addition to logging events locally, I would like these hosts to send their logs to a designated host in my network, resulting in a single location where I can examine and analyze all logs.
This post describes how to setup and configure a machine running FreeBSD to be a system log or “syslog” server, receiving incoming log events from other hosts in the network. A second machine, also running FreeBSD, will be configured to send its log events to the syslog server.
For purposes of example, we’ll use the hostname “server” for the machine hosting our syslog server, and “client” for the other machine, the one sending its log events to the syslog server. All steps involved assume that FreeBSD is installed and operating correctly on both machines. All commands are issued as the root user.
The versions for the software used in this post were as follows:
FreeBSD 11.0-RELEASE
Let’s get started…
Configure the syslog server
First, we need a file in server’s /var/log directory to host the log events coming from client. For our example, we’ll make this file name the same as client’s hostname. While you don’t need to use the .log extension, I find it helpful as it clearly indicates the purpose of the file:
touch /var/log/client.log
Next we need to add a couple of options to syslogd, the FreeBSD utility that reads and logs messages to the system console and log files. Use sysrc to add the following line to /etc/rc.conf, substituting the IP network and network mask for your own:
sysrc syslogd_flags="-4 -a 192.168.1.0/24 -vv"
The -4 (IPv4) option forces syslogd to listen for IPv4 addresses only.
The -a (allowed_peer) option specifies which clients are allowed to log to this syslog server. This option can take the form of IP address/mask:service, such as “-a 192.168.10.1/24:*” (the ‘*’ character permits packets sent from any UDP port), or hostname.domain, such as “-a client.home”, or “-a *.home” (assuming that the hostname can be successfully resolved to the correct IP address in the network). Multiple -a options may also be specified. In this example, allowed_peer takes the form of an entire IP network, in this case 192.168.1.0/24, so any host on that network may log to this server.
Finally, the -v option indicates verbose logging. If -v is specified once, the event’s numeric facility and priority are added to the log. If specified more than once, the names of the event’s facility and priority (e.g., “user.notice”) are also added to the log.
Now we need to add some lines to server’s /etc/syslog.conf file, the configuration file for syslogd. First, server’s hostname, preceded by a + character, must be added to the top of the file, before any existing syslog options (i.e., right before *.err; …, etc.), so that those existing options will be applied only to log events generated locally by server. If we did not add this line then all of those options would also be applied to the log events that arrive from client. In other words, any options after a +(some_hostname) in this file apply until the next +(some_hostname) is parsed:
+server
Then add the following lines to the bottom of /etc/syslog.conf after the last !*, substituting the .home domain for your own:
+client.home
*.*    /var/log/client.log
The first line specifies that remote log events will be arriving from client. client can be specified using either its hostname or its IP address. Note that when using a hostname the domain name must be appended to it. In either case, the hostname.domain or IP address is preceded by a + character.
The second line contains parameters to control the handling of incoming log events from client, specifically a selector field followed by an action field. The syntax of the selector field is facility.level. The facility portion describes which subsystem generated the message, such as the kernel or a daemon, while the level portion describes the severity of the event that occurred. Multiple selector fields can be used for the same action and should be separated using a semicolon (;). In our example we’ll use the * character in both portions of the selector field to match any log events received from client.
The action field denotes where to send the log message. In our case, log events will be sent to the log file we created previously. Note that spaces are valid field separators in FreeBSD’s /etc/syslog.conf file. However, other Unix-like systems still insist on tabs as field separators. If you are sharing this file between systems, you may want to use only tabs as field separators.
Managing the log files
The file /var/log/client.log will grow over time, making it difficult to locate useful event information as well as taking up disk space. FreeBSD mitigates this problem using newsyslog, a built-in utility that, among other things, periodically rotates and compresses log files. newsyslog is scheduled to run periodically by the system crontab (/etc/crontab). In its default configuration, it runs every hour.
newsyslog reads its configuration file, /etc/newsyslog.conf, to determine which actions to take. This file contains one line for each log file that newsyslog manages. Each line comprises various fields which control the log file’s owner and group, permissions, and when the log file should be rotated. In addition, there are several optional fields for controlling log file compression and programs that should be signaled when the log file is rotated. Each field is separated with whitespace.
In order to have newsyslog recognize client’s log file, we’ll place the following line at the bottom of /etc/newsyslog.conf:
/var/log/client.log    640  5    100  *     JC
In this example, the file permission for /var/log/client.log is set to 640. newsyslog will retain up to five archive files, and rotate the file when its size reaches 100 kB. The * character in the when column instructs newsyslog to ignore a time interval, a specific time, or both, and instead consider only the size of the file when determining whether or not to rotate it. The J flag tells newsyslog to compress the rotated log file using bzip2, and the C flag tells newsyslog to create the log file if it does not already exist.
Finally, let’s restart syslogd and newsyslog on server:
service -v syslogd restart
service -v newsyslog restart
Configure the client
Let’s move on now and configure client so that it will send its event logs to server. Open client’s /etc/syslog.conf file and add the following line after the last !*, to instruct client to send log events of any facility and level to server:
*.*    @server
server can be specified using either its hostname, hostname.domain, or its IP address, preceded by an @ character.
Now let’s restart syslogd on client:
service -v syslogd restart
Finally, let’s make sure client is sending its log events to server using the logger utility. Log on to client and issue the following command:
logger This is a test message from client
Now log in to server and check client’s log file:
tail /var/log/client.log
You should see the message you sent using the logger utility:
Aug  2 12:54:14 <user.notice> test.home iceflatline: This is a test message from client
Conclusion
That’s it. In addition to logging events locally, the client host will send its logs to our syslog server, resulting in a single location where log events can be examined and analyzed.