Just Enough Nagios on a Raspberry Pi
Table of Contents
- An Introduction to Nagios
- Raspberry Pi Hardware
- Operating System: Raspbian - Jessie
- Nagios Installation
- Linux Commands
Farooq Mohammed Ahmed
First and especially I would like to express my thanks to Farooq Mohammed Ahmed. In many ways he is the reason that this book has made it to a published state. For quite a while I wanted to experiment with and learn about Nagios and especially how to use it on a Raspberry Pi. Sadly I struggled and wasn’t able to find a description of the installation process that was detailed enough or which worked for the particulars of the Raspberry Pi. Then I came across Farooq’s work and it was exactly what I was looking for. In half an hour I was up and running and then I was experimenting. His site has a wealth of information on monitoring, Linux and the Raspberry Pi (amongst other things) and I thoroughly recommend it to others. He was also kind enough to offer advice and give permission for his efforts to be included in the book. You can also find him here on LinkedIn.
The Nagios Community
Big thanks go out to the Nagios community. Whether providing advice on Google Groups or Stack Overflow, contributing feedback on Reddit or just giving back in the form of time and effort to similar work. Well done all.
Lastly, I want to pay homage to Leanpub who have made the publishing of this document possible. They offer an outstanding service for self-publishing and have made the task of providing and distributing content achievable.
Make sure you get the most up to date copy of Just Enough Nagios on a Raspberry Pi
If you’ve received a copy of this book from any location other than Leanpub then it’s possible that you haven’t got the latest version. Go to https://leanpub.com/jenagios and download the most recent version. After all, it won’t cost you anything :-). If you find some value in the work, please consider contributing when you download it so that Leanpub get something for hosting the book (and I’ll think of you fondly while I contribute more content :-).
If you haven’t guessed already, I’m hoping that this will be a journey of discovery for both of us. Experimenting with computers and using them to make our lives better in some way is an inherently fun thing to do if approached in the right way. I hope that this book can assist you in enjoying the process of learning and provide a useful outcome at the same time. I know that there is plenty of information about how to approach this sort of effort on the Internet, but I want to go a little farther and provide some background and rationale behind the process of setting up an infrastructure monitoring system. Hopefully this will demonstrate that the knowledge required isn’t rocket science and that there is a world of computing opportunities available for discerning users :-).
Ambitious? Perhaps :-). But I’d like to think that if you’re reading this, perhaps I managed to make some headway. I dare say that like other books I have written (or am in the process of writing), this one will remain a work in progress. It is a living document, open to feedback, comment, expansion, change and improvement. Please feel free to provide your thoughts on ways that I can improve things. Your input would be much appreciated.
You will find that I have typically eschewed a simple “do this” approach for more of a storytelling exercise. This means that some explanations are longer and more flowery than might be to everyone’s liking, but there you go, try to be brave :-)
I’m sure most authors try to be as accessible as possible. I’d like to do the same, but be warned… There’s a good chance that if you ask me a technical question I may not know the answer. So please be gentle with your emails :-).
What are we trying to do?
Put simply, we are going to work through the process of setting up a Raspberry Pi with Nagios to support an infrastructure monitoring service.
The intention of what I describe in this book is not to duplicate the functionality of using a Nagios based service in a setting like a data centre, but to use the service on an internal network similar to a home or shared residence. Providing a service to a large data centre is a non-trivial exercise and one which would need to be considered in a far more thorough way than will be presented here. While in theory the instructions here would allow you to attempt it, I really don’t recommend it because of the scale and security implications.
Who is this book for?
Just by virtue of taking an interest and getting hold of a copy of this book you have demonstrated a desire to learn, to explore and to challenge yourself. That’s the most important criterion when trying something new. Your experience level comes a distant second to a desire to learn.
Having said that, it may be useful to be comfortable using the Windows operating system (I’ll be using Windows 7 for the set-up of the devices, since that would probably classify as (currently) the world’s most ubiquitous operating system). You should be aware of Linux as an alternative operating system, but you needn’t have tried it before. If you’ve ever set up a computer before, you’re already a good way down the track for setting up a Raspberry Pi, but we’ll break anything tricky down into bite sized chunks. In short, we’ll make it easy and spell out what’s going on as it comes up. The best thing to remember is that before you learn anything new, it pretty much always appears indistinguishable from magic, but once you start experimenting, the mystery quickly falls away.
Other Useful Resources
This is a great site run by Farooq Mohammed Ahmed. It was the trigger point to get this book underway and includes information on monitoring, Linux and the Raspberry Pi (amongst other things).
The main Nagios site should be the default starting place to ensure that you are as up to date as possible on the most recent developments and options available in the Nagios World.
The Linux Information Project (linfo.org)
The Raspberry Pi ‘mothership’ of raspberrypi.org has tons of good information on the Pi, including guides to using Linux with everyone’s favourite small computer.
What stuff will we need?
To accomplish our goals we are going to use a fairly small amount of hardware and software. They will all be low cost or free. The idea is that the price of the equipment should not be a major impediment to learning how all this goes together. Hopefully this book should also shorten the selection process for working out which parts we will need.
In the words of the totally awesome Raspberry Pi foundation;
The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing, and to learn how to program in languages like Scratch and Python. It’s capable of doing everything you’d expect a desktop computer to do, from browsing the internet and playing high-definition video, to making spreadsheets, word-processing, and playing games.
It really is an extraordinary device that is all the more extraordinary for the altruistic effort that brought it into being.
There are (at the time of writing) seven different models on the market: the A, B, A+, B+, B2, B3 and Zero. I recommend that we use the B+, B2 or B3 for no reason other than they offer a good range of USB ports (4), 512 or 1024 MB of RAM, an HDMI video connection, an Ethernet connection and a 40-pin General Purpose Input / Output (GPIO) header. For all intents and purposes the B+, B2 or B3 can be used interchangeably for the project so long as the latest version of the Raspbian operating system (the ‘Jessie’ edition) is used.
Nagios is a widely used, free, open source monitoring system that enables identification and resolution of IT infrastructure problems.
By using Nagios, you can:
- Monitor network services (SMTP, POP3, HTTP, NNTP, ICMP, SNMP, FTP, SSH).
- Monitor host resources (processor load, disk usage, system logs).
- Monitor things like probes (temperature, alarms, etc.) which can send collected data via a network to specifically written plugins.
- Provide contact notifications when service or host problems occur and get resolved (via e-mail, pager or SMS).
- Provide a web-interface for viewing current network status, notifications, problem history, log files, etc.
- Provide a historical record of outages, events, notifications, and alert response for later review.
For a home installation it’s relatively easy to set up, it doesn’t require a high powered computer to operate (we’re going to use a $35 Raspberry Pi) and it provides us with an incredible capacity to monitor any connected infrastructure.
An Introduction to Nagios
Nagios, or more specifically Nagios Core is an Open Source system and network monitoring application. Since it was first launched in 1999, Nagios has grown to include thousands of projects developed by the Nagios community. Nagios is officially sponsored by Nagios Enterprises, which supports the community in a number of different ways through sales of its commercial products and services.
Some of the many features of Nagios Core include:
- Monitoring of network services (SMTP, POP3, HTTP, NNTP, PING, etc.)
- Monitoring of host resources (processor load, disk usage, etc.)
- Simple plugin design that allows users to easily develop their own service checks
- Parallelized service checks
- Ability to define network host hierarchy using “parent” hosts, allowing detection of and distinction between hosts that are down and those that are unreachable
- Contact notifications when service or host problems occur and get resolved (via email, pager, or an alternative user-defined method)
- Ability to define event handlers to be run during service or host events for proactive problem resolution
- Automatic log file rotation
- Support for implementing redundant monitoring hosts
- Optional web interface for viewing current network status, notification and problem history, log files, etc.
Nagios can monitor an IT infrastructure to ensure systems, applications, services, and business processes are functioning properly. In the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes, end-users, or customers. In short, it watches hosts and services that you specify, alerting you when things go bad and when they get better.
Nagios is one of, if not the, industry-leading infrastructure monitoring platforms, and the following book provides a useful guide to getting up and running with your own installation on a Raspberry Pi.
Is this book associated with, supported by or in any way representing the good folks at Nagios? Nope. Don’t get me wrong. Great product, approach and I’m a big fan, but ultimately the book is a reflection of me learning how to do something and writing it down. No part of it is the responsibility of Nagios. Any and all errors will be mine. Feel free to follow my trail, but ultimately I won’t be the one typing the commands on your computer ;-).
Raspberry Pi Hardware
To make a start using the Raspberry Pi we will need some additional hardware to allow us to configure it and, for many people, to use it like a normal computer.
The Raspberry Pi needs to store the Operating System and working files on a MicroSD card (actually a full size SD card for the A or B model, but since the guide isn’t written with these in mind, let’s stay with the MicroSD card).
The MicroSD card receptacle is on the rear of the board and is of a ‘push-push’ type which means that you push the card in to insert it and then to remove it, give it a small push and it will spring out.
This is the equivalent of a hard drive for a regular computer, but we’re going for a minimal effect. We will want to use a minimum of an 8GB card (smaller is possible, but 8 is recommended). Also try to select a higher speed card if possible (class 10 or similar) as it is anticipated that this should speed things up a bit.
Keyboard / Mouse
While we will be making the effort to access our system via a remote computer, it’s useful to have a keyboard and a mouse for the initial set-up. Because the B+, B2 and B3 models of the Pi have 4 x USB ports, there is plenty of space for us to connect wired USB devices.
A wireless keyboard / mouse combination would most likely be recognised without any problem and would only take up a single USB port, but since we will build towards a remote capacity for using the Pi (using it headless, without a keyboard / mouse / display), the nicety of a wireless connection is not strictly required.
The Raspberry Pi comes with an HDMI port ready to go which means that any monitor or TV with an HDMI connection should be able to connect easily.
Because this is kind of a hobby thing you might want to consider utilising an older computer monitor with a DVI or 15 pin D connector. If you want to go this way you will need an adapter to convert the connection.
Remember that at the end of this process we don’t intend to have a monitor connected permanently at all. We will run the system ‘headless’ and access it remotely via a web interface, or via ssh / PuTTY for setting up.
The B+, B2 and B3 models of the Raspberry Pi have a standard RJ45 network connector on the board ready to go. In a domestic installation this is most likely easiest to connect into a home ADSL modem or router. The B3 has wireless built into the board, but we’re going to work on the assumption that our Pi has a physical network connection.
The Pi can be powered up in a few ways. The simplest is to use the micro USB port to connect from a standard USB charging cable. You probably have a few around the house already for phones or tablets.
It is worth knowing that, depending on what use we wish to put our Raspberry Pi to, we might want to pay attention to the amount of current that our power supply can provide. The B+, B2 and B3 models will function adequately with a 700mA supply, but if we want to look towards using multiple wireless devices or supplying sensors that demand power from the Pi, we should consider a supply that is capable of an output up to 2A, or 2.5A for the B3.
We should get ourselves a simple case to sit the Pi out of the dust and detritus that’s floating about. There are a wide range of options to select from. These range from cheap but effective to more costly than the Pi itself (not hard) and looking fancy.
You could use a simple plastic case that can be bought for a few dollars;
At the high end of the market is a high quality aviation grade anodized aluminium case from ebay seller sauliakasas. This will cost you more than the Pi itself, but it is a beautiful case;
Or nylon stand-offs to create a simple but flexible stack o’ Pi;
You could look at the stylish Flirc Raspberry Pi Case which is very popular with media centre distributions;
For a sense of style, a very practical design and a warm glow from knowing that you’re supporting a worthy cause, you could go no further than the official Raspberry Pi case that includes removable side-plates and loads of different types of access. All for the paltry sum of about $9.
Operating System: Raspbian - Jessie
An operating system is software that manages computer hardware and software resources for computer applications. For example Microsoft Windows could be the operating system that will allow the browser application Firefox to run on our desktop computer.
Variations on the Linux operating system are the most popular on our Raspberry Pi. We will be using the ‘Raspbian’ Linux distribution which is based on Debian Linux. There are two editions of Raspbian available, ‘Wheezy’ and ‘Jessie’. Jessie is the more modern stable version and can therefore be expected to support the widest range of hardware and software. This guide is written demonstrating the use of Jessie.
Linux is a computer operating system that can be distributed as free and open-source software. The defining component of Linux is the Linux kernel, an operating system kernel first released on 5 October 1991 by Linus Torvalds.
Linux was originally developed as a free operating system for Intel x86-based personal computers. It has since been made available to a huge range of computer hardware platforms and is a leading operating system on servers, mainframe computers and supercomputers. Linux also runs on embedded systems, which are devices whose operating system is typically built into the firmware and is highly tailored to the system; this includes mobile phones, tablet computers, network routers, facility automation controls, televisions and video game consoles. Android, the most widely used operating system for tablets and smart-phones, is built on top of the Linux kernel. In our case we will be using a version of Linux that is assembled to run on the ARM CPU architecture used in the Raspberry Pi.
The development of Linux is one of the most prominent examples of free and open-source software collaboration. Typically, Linux is packaged in a form known as a Linux distribution, for both desktop and server use. Popular mainstream Linux distributions include Debian, Ubuntu and the commercial Red Hat Enterprise Linux. Linux distributions include the Linux kernel, supporting utilities and libraries and usually a large amount of application software to carry out the distribution’s intended use.
A distribution intended to run as a server may omit all graphical desktop environments from the standard install, and instead include other software to set up and operate a solution stack such as LAMP (Linux, Apache, MySQL and PHP). Because Linux is freely re-distributable, anyone may create a distribution for any intended use.
Sourcing and Setting Up
On our Windows desktop machine we are going to download the image (*.img) files for each distribution and write it onto a MicroSD card. This will then be installed into the Raspberry Pi.
We should always try to download our image files from the authoritative source and we can normally do so in a couple of different ways. We can download via bit torrent or directly as a zip file, but whatever method is used we should eventually be left with an ‘img’ file for our distribution.
To ensure that the projects we work on can be used with the full range of Raspberry Pi models (especially the B2 and B3) we need to make sure that the versions of the image files we download are from 2015-01-13 or later. Earlier downloads will not support the more modern CPUs.
Writing the Operating System image to the SD Card
Once we have an image file we need to get it onto our SD card.
We will work through an example using Windows 7, but for guidance on other options (Linux or Mac OS) raspberrypi.org has some great descriptions of the processes here.
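For reference, on Linux or Mac OS the same write can be done with the dd utility from a terminal. The image filename and device name below are placeholders, not values from this book — double-check the device name before running the command, as dd will overwrite whatever disk it is pointed at without asking;

```
# Identify the card's device name first (e.g. with lsblk or diskutil list);
# /dev/sdX and the image filename below are placeholders.
sudo dd if=raspbian-jessie-lite.img of=/dev/sdX bs=4M
sync
```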
We will use the Open Source utility Win32DiskImager which is available from sourceforge. This program allows us to install our disk image onto our SD card. Download and install Win32DiskImager.
You will need an SD card reader capable of accepting your MicroSD card (you may require an adapter or have a reader built into your desktop or laptop). Place the card in the reader and you should see a drive letter appear in Windows Explorer that corresponds with the SD card.
Start the Win32 Disk Imager program.
Select the correct drive letter for your SD card (make sure it’s the right one) and the disk image file that you downloaded. Then select ‘Write’ and the disk imager will write the image to the SD card. It can vary a little, but it should only take about 3-4 minutes with a class 10 SD card.
Once the process is finished exit the disk imager and eject the card from the computer and we’re done.
Welcome to Raspbian (Debian Jessie)
The Raspbian Linux distribution is based on Debian Linux. There are two editions published. ‘Wheezy’ and ‘Jessie’. You might well be asking if that detail matters a great deal. Well, it kind of does since Debian is such a widely used distribution that it allows Raspbian users to leverage a huge quantity of community based experience in using and configuring the software. The Wheezy edition is the earlier of the two and has been the stock edition from the inception of the Raspberry Pi till the end of 2015. Jessie is the latest stable version and is therefore the preferred option (and the one that we will use).
The best place to source the latest version of the Raspbian Operating System is to go to the raspberrypi.org page; http://www.raspberrypi.org/downloads/.
For the Jessie version of Raspbian there are actually two alternatives available for download. There is a full desktop option that will provide us with a GUI and lots of fanciness or a ‘Lite’ version that has a minimal amount of software loaded. We will be using the ‘Lite’ version since our installation has no requirement to use a GUI and the extraneous software would just be using up space on the storage or processing power if it was operating.
You can download via bit torrent or directly as a zip file, but whatever the method you should eventually be left with an ‘img’ file for Raspbian.
To ensure that the projects we work on can be used with either the B+, B2 or B3 models we need to make sure that the version of Raspbian we download is from 2015-01-13 or later. Earlier downloads will not support the more modern CPUs of the B2 or B3.
Make sure that you’ve completed the previous section on downloading and loading the image file and have a Raspbian disk image written to a MicroSD card. Insert the card into the slot on the Raspberry Pi and turn on the power.
You will see a range of information scrolling up the screen before eventually being presented with a textual prompt.
Congratulations, you have a working Raspberry Pi and are ready to start getting into the thick of things!
You are currently looking at the ‘Command Line’ or the ‘CLI’ (Command Line Interface). This is an environment that a great number of Linux users feel comfortable in and from here they are able to operate the computer in ways that can sometimes look like magic. Brace yourself… We are going to work on the command line for quite a bit while working on the Raspberry Pi. This may well be unfamiliar territory for a lot of people, but rest assured, once you get familiar with it you will realise its usefulness.
The default username is ‘pi’ and the default password is ‘raspberry’. We are going to use this user as our default through the installation process. From here you can log in by entering the username (‘pi’) and password (‘raspberry’).
Once logged in we’ll do a bit of house keeping.
It is a good idea to run the Raspberry Pi Software Configuration tool once we’re logged in as it enables full use of the storage on the SD card and changes in the locale and keyboard configuration.
Type in the following line which will start the configuration tool;
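Assuming the stock Raspbian tooling, the configuration tool is started with;

```
sudo raspi-config
```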
Using this tool you can first ensure that all of the SD card storage is available to the Operating System. Using the tab key, select the ‘Expand Filesystem’ option which will be quickly completed and we will be informed that it will become operative upon the next reboot. You may want to change the ‘Internationalisation Options’ depending on your location. This can be done using the arrow keys to move to this option and then using the tab key again to select. Follow the prompts to set up the Pi appropriately.
Once this has been completed select finish. This will allow you reboot the Pi and take advantage of the full capacity of the SD card.
Once the reboot is complete you will be presented with the console prompt to login again.
Now we need to make sure that we have the latest software for our system. This is a useful thing to do as it brings in any enhancements to the software you will be using and any improvements to the security of the operating system. This is probably a good time to mention that you will need to have an Internet connection available.
To do this make sure you are logged in and type in the following line which will find the latest lists of available software;
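Raspbian is a Debian derivative, so assuming the standard apt tooling, the package lists are refreshed with;

```
sudo apt-get update
```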
You should see a list of text scroll up while the Pi is downloading the latest information.
Then we want to upgrade our software to latest versions from those lists using;
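Again assuming the standard apt tooling, the upgrade command would be;

```
sudo apt-get upgrade
```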
The Pi should tell you the lists of packages that it has identified as suitable for an upgrade and along with the amount of data that will be downloaded and the space that will be used on the system. It will then ask you to confirm that you want to go ahead. Tell it ‘Y’ and we will see another list of details as it heads off downloading software and installing it.
At this point we have Raspbian installed, updated and ready to go.
Static IP Address
To make our use of Nagios on a Raspberry Pi truly flexible we will want to untether it from the keyboard / mouse and screen. This means that we will be accessing the Pi remotely from another computer.
Enabling remote access is a really useful thing. To do so we will want to assign our Raspberry Pi a static IP address.
An Internet Protocol address (IP address) is a numerical label assigned to each device (e.g., computer, printer) participating in a computer network that uses the Internet Protocol for communication.
There is a strong likelihood that our Raspberry Pi already has an IP address and it should appear in conjunction with the ‘login’ prompt when you first boot up;
The My IP address... part may appear just above, or around 15 lines above, the login line, depending on what board or version of ‘Debian Jessie’ we are using. In this example the IP address 10.1.1.29 belongs to the Raspberry Pi.
This address will probably be a ‘dynamic’ IP address and could change each time the Pi is booted. For the purposes of using the Raspberry Pi as a web platform, a database or with remote access, we need to set a fixed IP address.
This description of setting up a static IP address makes the assumption that we have a device running on the network that is assigning IP addresses as required. This sounds like kind of a big deal, but in fact it is a very common service to be running on even a small home network and it will be running on the ADSL modem or similar. This function is run as a service called DHCP (Dynamic Host Configuration Protocol). You will need to have access to this device for the purposes of knowing what the allowable ranges are for a static IP address. The most likely place to find a DHCP service running in a normal domestic situation would be an ADSL modem or router.
A common feature of home modems and routers that run DHCP is to allow the user to set up the range of allowable network addresses that can exist on the network. At a higher level you should be able to set a ‘netmask’ which will do the job for you. A netmask looks similar to an IP address, but it specifies the range of addresses for ‘hosts’ (in our case computers) that can be connected to the network.
A very common netmask is 255.255.255.0 which means that the network in question can have any one of the combinations where the final number in the IP address varies. In other words with a netmask of 255.255.255.0 the IP addresses available for devices on the network 10.1.1.x range from 10.1.1.0 to 10.1.1.255 or in other words any one of 256 unique addresses.
An alternative to specifying a netmask in the format of ‘255.255.255.0’ is to use a system called Classless Inter-Domain Routing, or CIDR. The concept is that you can add a specification in the IP address itself that indicates the number of significant bits that make up the netmask.
For example, we could designate the IP address 10.1.1.17 as associated with the netmask 255.255.255.0 by using the CIDR notation of 10.1.1.17/24. This means that the first 24 bits of the IP address given are considered significant for the network routing.
Using CIDR notation allows us to do some very clever things to organise our network, but at the same time it can have the effect of freaking people out by introducing a pretty complex topic when all they want to do is get their network going :-). So for the sake of this explanation we can assume that if we wanted to specify an IP address and a netmask, it could be accomplished by either specifying each separately (IP address = 10.1.1.17 and netmask = 255.255.255.0) or in CIDR format (10.1.1.17/24).
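As a quick sanity check on the arithmetic, the number of addresses covered by a CIDR prefix is 2 to the power of (32 minus the prefix length). Shell arithmetic can confirm this (the bit shift below computes the same power of two);

```shell
# Addresses in a CIDR block = 2^(32 - prefix length), computed as a bit shift.
prefix=24
echo "$((1 << (32 - prefix)))"   # a /24 (netmask 255.255.255.0) covers 256 addresses
```

Changing prefix to 16 (netmask 255.255.0.0) gives 65536 addresses, which is why a /24 is the usual choice for a small home network.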
Distinguish Dynamic from Static
The other service that our DHCP server will allow is the setting of a range of addresses that can be assigned dynamically. In other words we will be able to declare that the range from 10.1.1.20 to 10.1.1.255 can be dynamically assigned which leaves 10.1.1.0 to 10.1.1.19 which can be set as static addresses.
You might also be able to reserve an IP address on your modem / router. To do this you will need to know what the MAC (or hardware address) of the Raspberry Pi is. To find the hardware address on the Raspberry Pi type;
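Assuming the stock Raspbian Jessie networking tools, and that the wired interface is named eth0, the command would be;

```
ifconfig eth0
```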
This will produce an output which will look a little like the following;
00:08:C7:1B:8C:02 is the hardware, or MAC, address.
Because there is a huge range of different DHCP servers being run on different home networks, I will have to leave you with those descriptions and the advice to consult your device’s manual to help you find an IP address that can be assigned as a static address. Make sure that the assigned number has not already been taken by another device. In a perfect world we would hold a list of any devices which have static addresses so that our Pi’s address does not clash with any other device.
For the sake of the upcoming projects we will assume that the address 10.1.1.230 is available.
Before we start configuring we will need to find out what the default gateway is for our network. A default gateway is an IP address that a device (typically a router) will use when it is asked to go to an address that it doesn’t immediately recognise. This would most commonly occur when a computer on a home network wants to contact a computer on the Internet. The default gateway is therefore typically the address of the modem / router on your home network.
We can check to find out what our default gateway is from Windows by going to the command prompt (Start > Accessories > Command Prompt) and typing;
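The Windows command, entered in the Command Prompt window, is;

```
ipconfig
```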
This should present a range of information including a section that looks a little like the following;
The default gateway is therefore ‘10.1.1.1’.
On the Raspberry Pi at the command line we are going to start up a text editor and edit the file that holds the configuration details for the network connections.
The file is /etc/dhcpcd.conf. That is to say it’s the dhcpcd.conf file, which is in the etc directory, which is in turn in the root (/) of the file system.
To edit this file we are going to type in the following command;
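Assuming the nano editor that ships with Raspbian, the command would be;

```
sudo nano /etc/dhcpcd.conf
```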
The nano file editor will start and show the contents of the dhcpcd.conf file, which should look a little like the following;

We are going to add the information that tells the eth0 network interface to use the static address that we decided on earlier (10.1.1.230), along with the netmask to use (in CIDR format) and the default gateway of our router. To do this we will add the following lines to the end of the dhcpcd.conf file.
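Based on the example addresses used in this chapter, and assuming that the router at 10.1.1.1 also answers DNS queries (substitute your own addresses as appropriate), the added lines would look something like;

```
interface eth0
static ip_address=10.1.1.230/24
static routers=10.1.1.1
static domain_name_servers=10.1.1.1
```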
Here we can see the IP address and netmask (static ip_address=10.1.1.230/24), the gateway address for our router (static routers=10.1.1.1) and the address where the computer can also find DNS information (the static domain_name_servers line).
Once you have finished press ctrl-x to tell nano you’re finished and it will prompt you to confirm saving the file. Check your changes over and then press ‘y’ to save the file (if it’s correct). It will then prompt you for the file-name to save the file as. Press return to accept the default of the current name and you’re done!
To allow the changes to become operative we can type in;
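Assuming the standard Raspbian shutdown tooling, the reboot command would be;

```
sudo reboot
```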
This will reboot the Raspberry Pi and we should see the (by now familiar) scroll of text and when it finishes rebooting and we log in you should see;
Which tells us that the changes have been successful (bearing in mind that the IP address above should be the one you have chosen, not necessarily the one we have been using as an example).
To allow us to work on our Raspberry Pi from our normal desktop we can give ourselves the ability to connect to the Pi from another computer. This will mean that we don’t need to have the keyboard / mouse or video connected to the Raspberry Pi and we can physically place it somewhere else and still work on it without problem. This process is called ‘remotely accessing’ our computer.
To do this we need to install an application on our windows desktop which will act as a ‘client’ in the process and software on our Raspberry Pi to act as the ‘server’. The way that we are going to accomplish this task is via what’s called SSH access.
Remote access via SSH
Secure Shell (SSH) is a network protocol that allows secure data communication, remote command-line login, remote command execution, and other secure network services between two networked computers. It connects, via a secure channel over an insecure network, a server and a client running SSH server and SSH client programs, respectively (there’s the client-server model again).
In our case the SSH server program running on the Raspberry Pi is sshd, and on the Windows machine we will use a client program called ‘PuTTY’.
Setting up the Server (Raspberry Pi)
This is definitely one of the easiest set-up steps since SSH is already installed on Raspbian.
To check that it is there and working type the following from the command line;
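On Raspbian Jessie the check looks like this (systemctl status ssh works equally well on systemd-based releases);

```
sudo service ssh status
```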
The Pi should respond with the message that the program
sshd is running.
Installing SSH on the Raspberry Pi.
If for some reason SSH is not installed on your Pi, you can easily install with the command;
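A likely form of that command, using the standard Raspbian package name;

```
sudo apt-get install ssh
```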
Once this has been done SSH will start automatically when the Raspberry Pi boots up.
Setting up the Client (Windows)
On the PuTTY download page there are a range of options available for use. The best option for us is most likely under the ‘For Windows on Intel x86’ heading and we should just download the ‘putty.exe’ program.
Save the file somewhere logical as it is a stand-alone program that will run when you double click on it (you can make life easier by placing a short-cut on the desktop).
Once we have the file saved, run the program by double clicking on it and it will start without problem.
The first thing we will set-up for our connection is the way that the program recognises how the mouse works. In the ‘Window’ Category on the left of the PuTTY Configuration box, click on the ‘Selection’ option. On this page we want to change the ‘Action of mouse’ option from the default of ‘Compromise (Middle extends, Right paste)’ to ‘Windows (Middle extends, Right brings up menu)’. This keeps the standard Windows mouse actions the same when you use PuTTY.
Now select the ‘Session’ Category on the left hand menu. Here we want to enter our static IP address that we set up earlier (10.1.1.230 in the example that we have been following, but use your one) and because we would like to access this connection on a frequent basis we can enter a name for it as a saved session (In the screen-shot below it is imaginatively called ‘Nagios Pi’). Then click on ‘Save’.
Now we can select our Raspberry Pi Session (per the screen-shot above) and click on the ‘Open’ button.
The first thing you will be greeted with is a window asking if you trust the host that you’re trying to connect to.
In this case it is a pretty safe bet to click on the ‘Yes’ button to confirm that we know and trust the connection.
Once this is done, a new terminal window will be shown with a prompt to
login as: . Here we can enter our user name (‘pi’) and then our password (if it’s still the default it is ‘raspberry’).
There you have it. A command line connection via SSH. Well done.
As I mentioned at the end of the section on remotely accessing the Raspberry Pi’s GUI, if this is the first time that you’ve done something like this it can be a very liberating feeling.
Historically, installation of Nagios on a Raspberry Pi has been somewhat ‘problematic’. The reason being that while Nagios has been compiled to work with a wide range of different computer types, installation onto the ARM architecture used by the Pi has not been common. There has been good work done to pre-compile and distribute images for use, but in many cases these do not support the later versions of Raspbian and they take some of the ‘mystery’ and learning away from the process.
First we need to set ourselves as the root user for all the following commands;
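One common way to do this is sudo -i, which opens a shell as the root user;

```
sudo -i
```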
We should be able to see the immediate difference as the command prompt changes to indicate that we are now operating as root;
Get the required supporting packages
We need to download and install the required packages via apt-get;
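A representative package list for Raspbian Jessie is sketched below; the exact package names (particularly the PHP ones) vary between releases, so treat this as an assumption to adapt rather than a definitive list. As we are operating as root, no sudo prefix is needed;

```
apt-get update
apt-get install build-essential apache2 apache2-utils php5 libapache2-mod-php5 libgd2-xpm-dev libssl-dev unzip
```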
We will be informed of the additional packages that will also need to be installed and asked to agree to the process. Once we agree it will start and work its way through the various applications.
Now we create a group ‘nagcmd’ (using groupadd) to facilitate external commands via the Web User Interface and then add both the Nagios (‘nagios’) and Apache (‘www-data’) users to it with the usermod command;
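A sketch of those commands, run as root; the useradd line is only needed if the nagios user doesn’t already exist on your system;

```
useradd nagios
groupadd nagcmd
usermod -a -G nagcmd nagios
usermod -a -G nagcmd www-data
```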
Now we can download Nagios Core and associated plugins using
wget to the
/tmp directory. The latest current versions are Nagios 4.1.1 and Nagios Plugins 2.1.1. We’ll do this in the
/tmp directory to keep out of the way of the rest of the system during the installation (hence we change into /tmp first);
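The download URLs below were correct for these releases at the time of writing, but Nagios mirrors do move around; check nagios.org if they fail;

```
cd /tmp
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.1.1.tar.gz
wget https://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
```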
Once these have finished downloading we need to un-bundle (
tar) and decompress the files;
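The tar command handles both the un-bundling and the decompression in one step (the z flag gunzips, x extracts, f names the file);

```
cd /tmp
tar xzf nagios-4.1.1.tar.gz
tar xzf nagios-plugins-2.1.1.tar.gz
```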
Once this is complete we will have two folders with the names nagios-4.1.1 and nagios-plugins-2.1.1.
Compile and configure Nagios Core
First we will start with Nagios Core install by changing into the appropriate directory;
Now we’re going to compile the source and install it. All the files from this process will go into /usr/local/nagios;
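A typical configure invocation for Nagios Core, pointing it at the nagcmd group we created earlier (other options exist; this is the minimal form most installation guides use);

```
cd /tmp/nagios-4.1.1
./configure --with-command-group=nagcmd
```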
This process will whistle through a considerable amount of processing and then display a configuration summary.
As the final portion of the output states, assuming that the configuration looks OK we can then go ahead and start the
make process. This will carry out the process of building the executable programs and libraries from the available source code.
This is a complex process that will take some time to complete (especially on a ‘Pi’) so prepare to be patient while it works through the required computation.
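The command itself is simply;

```
make all
```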
Once successfully complete we will continue with installing Nagios as follows;
make install: This installs the main program, CGIs, and HTML files
make install-init: This installs the init script in /etc/init.d
make install-config: This installs sample config files in /usr/local/nagios/etc
make install-commandmode: This installs and configures permissions on the directory for holding the external command file
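Run in order (still as root, from the nagios-4.1.1 directory), the four steps above are;

```
make install
make install-init
make install-config
make install-commandmode
```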
Now we need to install and configure web access to Nagios via our web server apache. First copy the appropriate files and adjust permissions;
Then we make the directory for the httpd.conf file and run make install-webconf to install the Apache config file for the Nagios web interface;
Now we can create a user and a password to use to access the Nagios Web User Interface via the Apache HTTP server. The name ‘nagiosadmin’ used below can be substituted for an alternative if desired.
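The standard tool for this is Apache’s htpasswd; the -c flag creates the password file (only use -c the first time, as it overwrites an existing file);

```
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
```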
We will be prompted (twice) for a new password for the ‘nagiosadmin’ user. Note it down somewhere appropriate.
Once that is done we need to restart the Apache service
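On Jessie this can be done with;

```
service apache2 restart
```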
Compile and configure Nagios plugins
Now we need to install the plugins for Nagios by first changing to the /tmp/nagios-plugins-2.1.1 directory;
Then we compile and install the plugins in a similar way to the process we used earlier on Nagios Core. Firstly we compile the appropriate executables from the source;
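A sketch of the plugin build, mirroring the user and group options most installation guides use;

```
cd /tmp/nagios-plugins-2.1.1
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
```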
As with the previous effort this will proceed through a considerable amount of processing. Then we can carry out the
make process on the installation;
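```
make install
```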
Once complete we need to ensure that the Nagios service starts up when the system boots by creating a link to nagios in the init.d directory;
At this point we should have successfully installed Nagios. We should verify this by checking for errors;
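Nagios ships with a verification mode for exactly this; assuming the default installation paths used above;

```
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```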
An output should be produced that looks a little like the following;
Per the final line, it looks as if everything has gone smoothly.
Configure Nagios as a service
Now we can create a
nagios.service file with the file editor ‘nano’ (or the editor of your choice) with the following content. This will allow us to start and stop Nagios in a standardised way.
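A minimal unit file of the kind commonly used for a source-built Nagios on Jessie, saved as /etc/systemd/system/nagios.service; treat the paths and options as assumptions to check against your own install;

```
[Unit]
Description=Nagios
BindTo=network.target

[Install]
WantedBy=multi-user.target

[Service]
User=nagios
Group=nagios
Type=simple
ExecStart=/usr/local/nagios/bin/nagios /usr/local/nagios/etc/nagios.cfg
```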
To enable the cgi links from the Nagios Web UI we move the
cgi.load file as follows;
Then we can restart the Apache web service;
If no errors are reported then we can start the Nagios service.
At this point Nagios should be running, but it’s worth checking to see if there are any reported problems;
If everything has gone smoothly we should see a report similar to the following;
Access the Nagios Web Interface
We are now ready to access the Nagios Web User Interface. We can do this by using the static IP address that we set up for our Pi at the following URL http://10.1.1.230/nagios. This can be accessed from a machine (our Windows desktop for example) by simply typing it into the browser.
When we first enter this in our browser we will be asked to authenticate ourselves;
We need to enter the username and password that we set up earlier to access the Web interface. The one we used in the example was ‘nagiosadmin’. Once entered we should see the front page of our Nagios page looking a little like the following;
We’ve managed to successfully install Nagios on a Raspberry Pi! Now all we need to do is to use it :-).
The apt-get command is a program that is used with Debian based Linux distributions to install, remove or upgrade software packages. It’s a vital tool for installing and managing software and should be used on a regular basis to ensure that software is up to date and security patching requirements are met.
There are a plethora of uses for
apt-get, but we will consider the basics that will allow us to get by. These will include;
- Updating the database of available applications (apt-get update)
- Upgrading the applications on the system (apt-get upgrade)
- Installing an application (
apt-get install *package-name*)
- Un-installing an application (
apt-get remove *package-name*)
The apt part of apt-get stands for ‘advanced packaging tool’. The program manages software packages installed on Linux machines, or more specifically Debian based Linux machines (those based on ‘Red Hat’ typically use the rpm (originally ‘Red Hat package management’, latterly the recursively named ‘RPM Package Manager’) system). As Raspbian is based on Debian, the examples we will be using are based on apt-get.
APT simplifies the process of managing software on Unix-like computer systems by automating the retrieval, configuration and installation of software packages. This was historically a process best described as ‘dependency hell’ where the requirements for different packages could mean a manual installation of a simple software application could lead a user into a sink-hole of despair.
For our apt-get usage we will be prefixing the command with sudo to give ourselves the appropriate permissions;
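So the update command becomes;

```
sudo apt-get update
```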
This will resynchronize our local list of packages, updating information about new and recently changed packages. If an
apt-get upgrade (see below) is planned, an
apt-get update should always be performed first.
Once the command is executed, the computer will delve into the internet to source the lists of current packages and download them so that we will see a list of software sources similar to the following appear;
The apt-get upgrade command will install the newest versions of all packages currently installed on the system. If a package is currently installed and a new version is available, it will be retrieved and upgraded. Any new versions of current packages that cannot be upgraded without changing the install status of another package will be left as they are.
As mentioned above, an
apt-get update should always be performed first so that
apt-get upgrade knows which new versions of packages are available.
Once the command is executed, the computer will consider its installed applications against the database’s list of the most up to date packages and it will prompt us with a message that will let us know how many packages are available for upgrade, how much data will need to be downloaded and what impact this will have on our local storage. At this point we get to decide whether or not we want to continue;
Once we say yes (‘Y’) the upgrade kicks off and we will see a list of the packages as they are downloaded, unpacked and installed (what follows is an edited example);
There can often be alerts as the process identifies different issues that it thinks the system might strike (different aliases, runtime levels or missing fully qualified domain names). This is not necessarily a sign of problems so much as an indication that the process had to take certain configurations into account when upgrading and these are worth noting. Whenever there is any doubt about what has occurred, Google will be your friend :-).
The apt-get install command installs or upgrades one (or more) packages. All additional (dependency) packages required will also be retrieved and installed.
If we want to install multiple packages we can simply list each package separated by a space after the command as follows;
The apt-get remove command removes one (or more) packages.
- How could you install a range of packages with similar names (for example all packages starting with ‘mysql’)?
- When removing a package, is the configuration for that package retained or deleted (Google will be your friend)?
- How could you remove multiple packages with a single command?
The cd command is used to move around in the directory structure of the file system (change directory). It is one of the fundamental commands for navigating the Linux directory structure.
cd [options] directory : Used to change the current directory.
For example, when we first log into the Raspberry Pi as the ‘pi’ user we will find ourselves in the
/home/pi directory. If we wanted to change into the
/home directory (go up a level) we could use the command;
Take some time to get familiar with the concept of moving around the directory structure from the command line as it is an important skill to establish early in Linux.
The cd command will be one of the first commands that someone starting with Linux will use. It is used to move around in the directory structure of the file system (hence
cd = change directory). It only has two options and these are seldom used. The arguments consist of pointing to the directory that we want to go to and these can be absolute or relative paths.
The cd command can be used without options or arguments. In this case it returns us to our home directory as specified in the HOME environment variable.
If we cd into any random directory (try
cd /var) we can then run cd by itself;
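For example;

```shell
cd /var   # move somewhere else first
cd        # cd with no arguments returns to the home directory
pwd       # confirm where we are now
```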
… and in the case of a vanilla installation of Raspbian, we will change to the /home/pi directory.
In the example above, we changed to
/var and then ran the
cd command by itself and then we ran the
pwd command which showed us that the present working directory is
/home/pi. This is the Raspbian default home directory for the pi user.
As mentioned, there are only two options available to use with the
cd command. These are
-P which instructs
cd to use the physical directory structure instead of following symbolic links and the
-L option which forces symbolic links to be followed.
For those beginning Linux, there is little likelihood of using either of these two options in the immediate future and I suggest that you use your valuable memory to remember other Linux stuff.
As mentioned earlier, the default argument (if none is included) is to return to the user’s home directory as specified in the HOME environment variable.
When specifying a directory we can do this by absolute or relative addressing. So if we started in the
/home/pi directory, we could go to the
/home directory by executing;
… or, using relative addressing, we can use the
.. symbols to designate the parent directory;
Once in the
/home directory, we can change into the
/home/pi/Desktop directory using relative addressing as follows;
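Putting the absolute and relative forms together (this assumes a desktop Raspbian image where /home/pi/Desktop exists);

```shell
cd /home          # absolute: the full path from the root
cd pi/Desktop     # relative: down through pi into Desktop
cd ../..          # relative: back up two levels to /home
```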
We can also use the
- argument to navigate to the previous directory we were in.
- Change into the root (/) directory.
- Having just changed from the /home/pi directory to the /home directory, what are the five variations of using the cd command that will take the pi user back to their home directory?
- Starting in the /home/pi directory and using only relative addressing, use cd to change into the Desktop directory.
The groupadd command is used by a superuser (typically via
sudo) to add a group account. It’s a fundamental command in the sense that Linux as a complex multi user system required a mechanism for adding groups. Not that this command will get used every day, but it’s important to know to enable us to administer users on the computer.
groupadd [options] group : add a group account
The following example is the simplest application of the command;
This creates the group ‘pigroup’ using the default values from the system.
The groupadd command is not utilised on a regular basis but it is useful to recognise its role. It adds a group account using the default values from the system. The new group will be entered into the system files as needed. It can only be used by the root user, so it is often prefixed by the
sudo command. Group names may only be up to 32 characters long and if the group name already exists in an external group database such as NIS or LDAP,
groupadd will deny the group creation request.
When groupadd is executed it does so utilising the following files;
/etc/group: Group account information.
/etc/gshadow: Secure group account information.
/etc/login.defs: Shadow password suite configuration.
- What other command that we have looked at utilises the files above?
The ifconfig command can be used to view the configuration of, or to configure, a network interface. Networking is a fundamental function of modern computers and ifconfig allows us to configure the network interfaces that make those connections possible.
ifconfig [arguments] interface [options]
Used with no ‘interface’ declared
ifconfig will display information about all the operational network interfaces. For example running;
… produces something similar to the following on a simple Raspberry Pi.
The output above is broken into three sections; eth0, lo and wlan0.
eth0 is the first Ethernet interface and in our case represents the RJ45 network port on the Raspberry Pi (in this specific case on a B+ model). If we had more than one Ethernet interface, they would be named eth1, eth2 and so on.
lo is the loopback interface. This is a special network interface that the system uses to communicate with itself. You can notice that it has the IP address 127.0.0.1 assigned to it. This is described as designating the ‘localhost’.
wlan0 is the name of the first wireless network interface on the computer. This reflects a wireless interface (if installed). Any additional wireless interfaces would be named wlan1, wlan2 and so on.
The ifconfig command is used to read and manage a server’s network interface configuration (hence ifconfig = interface configuration).
We can use the
ifconfig command to display the current network configuration information, set up an IP address, netmask or broadcast address on a network interface, create an alias for a network interface, set up hardware addresses and enable or disable network interfaces.
To view the details of a specific interface we can specify that interface as an argument;
Which will produce something similar to the following;
The configuration details being displayed above can be interpreted as follows;
Link encap:Ethernet - This tells us that the interface is an Ethernet related device.
HWaddr b8:27:eb:2c:bc:62 - This is the hardware address or Media Access Control (MAC) address which is unique to each Ethernet card. Kind of like a serial number.
inet addr:10.1.1.8 - Indicates the interface’s IP address.
Bcast:10.1.1.255 - Denotes the interface’s broadcast address.
Mask:255.255.255.0 - Is the network mask for that interface.
UP - Indicates that the kernel modules for the Ethernet interface have been loaded.
BROADCAST - Tells us that the Ethernet device supports broadcasting (used to obtain an IP address via DHCP).
RUNNING - Lets us know that the interface is ready to accept data.
MULTICAST - Indicates that the Ethernet interface supports multicasting.
MTU:1500 - Short for Maximum Transmission Unit, this is the size of each packet received by the Ethernet card.
Metric:1 - The value for the metric of an interface decides the priority of the device (to designate which of more than one devices should be used for routing packets).
RX packets:119833 errors:0 dropped:0 overruns:0 frame:0 and TX packets:8279 errors:0 dropped:0 overruns:0 carrier:0 - Show the total number of packets received and transmitted, with their respective errors, number of dropped packets and overruns.
collisions:0 - Shows the number of packets which are colliding while traversing the network.
txqueuelen:1000 - Tells us the length of the transmit queue of the device.
RX bytes:8895891 (8.4 MiB) and TX bytes:879127 (858.5 KiB) - Indicate the total amount of data that has passed through the Ethernet interface in receive and transmit.
The main option that would be used with ifconfig is -a, which will display all of the interfaces available (both those that are ‘up’ (active) and those that are ‘down’ (shut down)). The default use of the ifconfig command without any arguments or options will display only the active interfaces.
We can disable an interface (turn it down) by specifying the interface name and using the suffix ‘down’ as follows;
Or we can make it active (bring it up) by specifying the interface name and using the suffix ‘up’ as follows;
To assign an IP address to a specific interface we can specify the interface name and use the IP address as the suffix;
To add a netmask to a specific interface we can specify the interface name and use the
netmask argument followed by the netmask value;
To assign an IP address and a netmask at the same time we can combine the arguments into the same command;
- List all the network interfaces on your server.
- Why might it be a bad idea to turn down a network interface while working on a server remotely?
- Display the information about a specific interface, turn it down, display the information about it again then turn it up. What differences do you see?
The ln command is used to make links between files. This allows us to have a single file or directory referred to by different names.
ln [options] originalfile linkfile : Create links between files or directories
ln [options] originalfile : Create a link to the original file in the current directory
The ln command will create a hard link by default, and a soft link (symlink) when the -s option is used.
For example to create a hard link in the folder
/home/pi/foobar/ to the file
foo.txt which is in
/home/pi/ we could use;
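A self-contained version of that example using /tmp instead of /home/pi, so it can be tried anywhere;

```shell
mkdir -p /tmp/demo/foobar
echo "hello" > /tmp/demo/foo.txt
ln /tmp/demo/foo.txt /tmp/demo/foobar/   # hard link: same name, new directory
cat /tmp/demo/foobar/foo.txt             # same data as the original file
```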
The target directory for the new link must exist for the command to be successful.
Once the link is created if we were to edit the file from either location it will be the same file that is being changed.
The ln command is used to make links between files (hence ln = link). By default the links will be ‘hard’, meaning that the links point to the same inode and therefore to the same data on the hard drive. By using the -s option a soft link (also known as a symbolic link or a symlink) can be created. A soft link has its own inode and can span partitions.
This allows us to have a single file or directory referred to by different names.
Hard links
- Will only link to a file (no directories)
- Will not link to a file on a different hard drive / partition
- Will link to a file even when it is moved
- Links to an inode and a physical location on the disk
Soft links (or symbolic links or symlinks)
- Will link to directories or files
- Will link to a file or directory on a different hard drive / partition
- Links will remain if the original file is deleted
- Links will not connect to the file if it is moved
- Links connect via abstract (hence symbolic) conventions, not physical locations on the disk. They have their own inode
There are several different options, but the main one that will be used the most is the
-s option to create a soft link.
If we repeat our example from earlier where we wanted to create a link in the folder
/home/pi/foobar/ to the file
foo.txt which is in
/home/pi/, by including the
-s option we can make the link soft instead;
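The same self-contained demonstration as before, with -s added to make the link soft;

```shell
mkdir -p /tmp/demo2/foobar
echo "hello" > /tmp/demo2/foo.txt
ln -s /tmp/demo2/foo.txt /tmp/demo2/foobar/   # soft (symbolic) link
ls -l /tmp/demo2/foobar                       # shows the l prefix and -> arrow
```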
If we then list the contents of the
foobar directory we will see the following;
The read/write/execute descriptors for the permissions are prefixed by an
l (for link) and there is a stylised arrow (
->) linking the files.
- Can the ‘root’ user create a hard link for a directory?
- What command would be used to create links for multiple ‘txt’ files at the same time?
The mv command is used to rename and move files or directories. It is one of the basic Linux commands that allow for management of files from the command line.
mv [options] source destination : Move and/or rename files and directories
For example: to rename the file
foo.txt and to call it
foo-2.txt we would enter the following;
This makes the assumption that we are in the same directory as the file
foo.txt, but even if we weren’t we could explicitly name the file with the directory structure and thereby not just rename the file, but move it somewhere different;
To move the file without renaming it we would simply omit the new name at the destination as so;
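The three forms above can be tried end to end in a scratch directory;

```shell
mkdir -p /tmp/mvdemo/foobar && cd /tmp/mvdemo
touch foo.txt

mv foo.txt foo-2.txt          # rename in place
mv foo-2.txt foobar/foo.txt   # move and rename at the same time
mv foobar/foo.txt .           # move without renaming (destination is a directory)
```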
The mv command is used to move or rename files and directories (
mv is an abbreviated form of the word move). This is a similar command to the
cp (copy) command but it does not create a duplicate of the files it is acting on.
If we want to move multiple files, we can put them on the same line separated by spaces.
While there are a few options available for the mv command, the ones most commonly used would be;
-u : This updates moved files by only moving the file when the source file is newer than the destination file or when the destination file does not exist.
-i : This initiates an interactive mode where we are prompted for confirmation whenever the move would overwrite an existing target.
To move all the files from directory1 into directory2 (directory2 must initially exist);
To rename a directory from directory1 to directory2 (directory2 must not already exist);
To move the files foo.txt and bar.txt to the directory backup;
To move all the ‘txt’ files from the users home directory to a directory called
backup but to only do so if the file being moved is newer than the destination file;
- How can we move a file to a new location when that act might overwrite an already existing file?
- What characters cannot be used when naming directories or files?
The passwd command is a vital system administrator tool to manage the passwords for user accounts. We can change our own account, or as the root user we can change other accounts.
passwd [options] username : change the user password
To change the current user’s password all we need to do is execute the passwd command;
To allow our password to be changed, first we need to enter the password we’re going to change;
Then we enter the new password;
Then we enter the new password again to confirm that we typed it right the first time;
Then we’re told that the update has been successful;
This is one of the simplest tasks using the
passwd command, but there are others that engage a greater degree of complexity for user and password management.
The passwd command allows us to change passwords for user accounts. As a normal user we can only change the password for our own account, while as the superuser (‘root’) we can change the password for any account. The passwd command can also change the properties of an account’s password via options such as;
-n: Set the minimum number of days between password changes
-x: Set the maximum number of days a password remains valid
-w: Set the number of days of warning before a password change is required
To view the properties for our password we can use the
chage command with the
-l option to list the details of the user ‘newpi’s’ password;
This will show us details similar to the following;
We can set the minimum number of days between password changes (via
-n), the maximum number of days a password remains valid (via
-x) or the number of days of warning before a password change is required (via
-w) as follows;
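For example, for a hypothetical user ‘newpi’ the three options can be combined in one command (the numbers are purely illustrative);

```
sudo passwd -n 7 -x 90 -w 5 newpi
```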
The other administrative functions revolve around administration of other users passwords via the following options;
-l : locking a user’s password
sudo passwd -l newpi
-u : unlocking a user’s password
sudo passwd -u newpi (the password used previous to locking is available again)
-e : expiring a user’s password
sudo passwd -e newpi (forces them to change their password at next login)
- Can you change your own number of days of warning before a password change if you aren’t on the sudo-ers list?
- What is the maximum number of days between a password change?
The ssh command is used to securely access a remote machine so that a user can log on and execute commands. In spite of its significance, the
ssh command is simple to use and is a vital tool for managing computers that are connected on a network. In situations where a computer is running ‘headless’ (without a screen or keyboard) it is especially useful.
ssh[options] [login name] server : securely access a remote computer.
For example, operating as the user ‘pi’ we want to login to the computer at the network address 10.1.1.33
We initiate the process by running the command;
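Using the example address from above, that command is (with no user name given, ssh assumes the current user);

```
ssh 10.1.1.33
```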
If we have never tried to securely login to that computer before, we are presented with a warning asking if we’re sure and is this actually the computer we’re intending to login to;
Assuming that this is correct we can respond with ‘y’ and the process will continue and will ask for the password for the ‘pi’ user (since we executed the command as ‘pi’, the default action is to assume that we are trying to login to the remote computer as the user ‘pi’);
Since we have confirmed this as the correct host it gets added to a master list of known hosts so that we don’t get asked if we know about it in the future. Then we are asked for the ‘pi’ user’s password on the remote host. Once entered we are logged in and presented with some details of the machine we are connecting to;
At this point we are logged into the computer at 10.1.1.33 and can execute commands as the user ‘pi’ on that machine.
To close this connection we can simply press the key combination CTRL-d and the connection will close with the following message;
The next time we login to the same machine we won’t be asked if we’re sure about the connection and we will be directed straight to the password request.
The ssh command is designed to provide a user with secure access to a shell on a remote computer. There are a number of different ways that users can interact with networked computers, but one of the key functions that needs to be enabled is the ability to execute commands as if you were sitting at a terminal with a keyboard directly attached to the machine.
Often this isn’t possible because the computer is located in another room or even another country. Sometimes the machine has no keyboard or monitor connected and is sitting in a rack or on a shelf. Sometimes the server will be a virtual machine with no ‘real’ hardware at all. For these instances we still want to be able to connect to the machine in some way and execute commands.
There are different commands and programs that will provide this remote access, but one of the increasingly prevailing themes is the need to provide a secure connection between the remote host and the user. This is because control of those remote machines needs to be limited to those who are authorised to carry out work on the machines for the safe operation of the functions they are carrying out (think of remotely controlling an electricity supply or traffic control computers). Without going into the mechanics of it (this would be a book in itself) the
ssh command provides a connection between the user and a remote host that has a high degree of security against someone intercepting the information being transmitted.
The example above demonstrated the connection to a remote server as the currently logged in user. However, if we wanted to log in as a different user (let’s call the user ‘newpi’) we would execute the command as follows;
The two examples thus far have used IP addresses to designate the host-name, however, if the remote host had a designated name ‘pinode’ the same command could be entered as follows;
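Assuming the name ‘pinode’ resolves to the Raspberry Pi’s address, the command becomes;

```shell
# log in as 'pi' using the host name instead of the IP address
ssh pi@pinode
```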
There are a huge number of uses which the ssh command makes possible, but for a new user, the takeaway should be that access between two servers can be carried out easily and the information transferred can be secured using the ssh command.
- What are the two main functions of the ssh command?
The sudo command allows a user to execute a command as the ‘superuser’ (or as another user). It is a vital tool for system administration and management.
sudo [options] [command] : Execute a command as the superuser
For example, if we want to update and upgrade our software packages, we will need to do so as the superuser. All we need to do is prefix the command with sudo as follows;
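On Raspbian the package update and upgrade are done with apt-get, so the prefixed commands look like this;

```shell
# refresh the list of available packages as the superuser
sudo apt-get update
# then upgrade the installed packages
sudo apt-get upgrade
```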
The sudo command is shorthand for ‘superuser do’.
When we use sudo, whether a user is authorised to do so is determined by the contents of the file /etc/sudoers.
As an example of usage we should check out the file /etc/sudoers. If we use the cat command to list the file like so;
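The command is simply;

```shell
# attempt to list the sudoers file as the ordinary 'pi' user
cat /etc/sudoers
```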
We get the following response;
That’s correct, the ‘pi’ user does not have permission to view the file /etc/sudoers.
Let’s confirm that by checking the file’s permissions;
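A quick sketch of the check (the exact size and date shown will differ on your system);

```shell
# show the ownership and permissions on the sudoers file
ls -l /etc/sudoers
```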
Which will result in the following;
It would appear that only the root user can read the file!
So let’s use sudo to cat the file as follows;
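This time we prefix the command with sudo;

```shell
# read the sudoers file with superuser privileges
sudo cat /etc/sudoers
```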
That will result in the following output;
There’s a lot of information in the file, but there, right at the bottom is the line that determines the privileges for the ‘pi’ user;
We can break down what each section means;
pi ALL=(ALL) NOPASSWD: ALL

The pi portion is the user that this particular rule will apply to.
The first ALL portion tells us that the rule applies to all hosts.
The (ALL) portion tells us that the user ‘pi’ can run commands as all users and all groups.
The NOPASSWD portion tells us that the user ‘pi’ won’t be asked for their password when executing a command with sudo.
The final ALL tells us that the rule on the line applies to all commands.
Under normal circumstances the use of sudo would require a user to be authorised and then enter their password. By default the Raspbian operating system has the ‘pi’ user configured in the /etc/sudoers file to avoid entering the password every time.
If you’re curious about what privileges (if any) a user has, we can execute sudo with the -l option to list them;
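For the current user this is simply;

```shell
# list the sudo privileges of the current user
sudo -l
```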
This will result in output that looks similar to the following;
The ‘sudoers’ file
As mentioned above, the file that determines permissions for users is /etc/sudoers. DO NOT EDIT THIS BY HAND. Use the visudo command to edit it. Of course you will be required to run the command using sudo;
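The edit command looks like this;

```shell
# safely edit /etc/sudoers (visudo checks the syntax before saving)
sudo visudo
```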
There is a degree of confusion about the roles of the sudo command vs the su command. While both can be used to gain root privileges, the su command actually switches the user to another user, while sudo only runs the specified command with different privileges. While there will be a degree of debate about their use, it is widely agreed that for simple one-off elevation, sudo is ideal.
- Write an entry for the sudoers file that provides sudo privileges to a user for only a single specified command.
- Under what circumstances can you edit the sudoers file with a standard text editor?
The tar command is designed to facilitate the creation of an archive of files by combining a range of files and directories into a single file and providing the ability to extract those files again. While tar does not include compression as part of its base function, it is available via an option. tar is a useful program for archiving data and as such forms an important command for good maintenance of files and systems.
tar [options] archivename [file(s)] : archive or extract files
tar is renowned as a command that has a plethora of options and flexibility. So much so that it can appear slightly arcane and (dare I say it) ‘over-flexible’. This has been well illustrated in the excellent cartoon work of the xkcd comic strip (Buy his stuff, it’s awesome!).
However, just because it has a lot of options does not mean that it needs to be difficult to use for a standard set of tasks, and its most basic use is the creation of an archive of files as follows;
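A minimal, self-contained sketch (file1.txt and file2.txt are hypothetical names standing in for whatever files you want to archive, so they are created first);

```shell
# (create two sample files so the sketch is self-contained)
touch file1.txt file2.txt
# c: create, v: verbose, f: use the following archive file name
tar -cvf foobar.tar file1.txt file2.txt
```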
Here we are creating an archive in a file called foobar.tar of the specified files.
The options used allow us to;
c: create a new archive
v: verbosely list files which are processed.
f: specify the following name as the archive file name
The output of the command is the echoing of the files that are placed in the archive;
The additional result is the creation of the file containing our archive, foobar.tar.
To carry the example through to its logical conclusion we would want to extract the files from the archive as follows;
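A self-contained sketch of the extraction (the first few lines simply recreate the archive so the example can stand alone; in practice you would only run the final command);

```shell
# (setup for the sketch: create an archive and remove the originals)
touch file1.txt file2.txt
tar -cf foobar.tar file1.txt file2.txt
rm file1.txt file2.txt
# x: extract, v: verbose, f: use the following archive file name
tar -xvf foobar.tar
```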
The options used allow us to;
x: extract an archive
v: verbosely list files which are processed.
f: specify the following name as the archive file name
The output of the command is the echoing of the files that are extracted from the archive;
Tape archive, or tar for short, is a command for converting files and directories into a single data file. While originally written for reading and writing from sequential devices such as tape drives, it is nowadays used more commonly as a file archiving tool. In this sense it can be considered similar to archiving tools such as WinZip or 7zip. The file created by the tar command is commonly called a ‘tarball’.
tar does not provide any compression itself, so in order to reduce the size of a tarball we need to use an external compression utility such as gzip. While this is the most common, other compression types can be used. The relevant switches are the equivalent of piping the created archive through the compression program.
One advantage of using tar over other archiving tools such as zip is that tar is designed to preserve Unix filesystem features such as user and group permissions, access and modification dates, and directory structures.
Another advantage of using tar for compressed archives is that any compression is applied to the tarball in its entirety, rather than to individual files within the archive as is the case with zip files. This allows for more efficient compression as the process can take advantage of data redundancy across multiple files.
c: Create a new tar archive.
x: Extract a tar archive.
f: Work on a file.
z: Use Gzip compression or decompression when creating or extracting.
t: List the contents of a tar archive.
The tar program does not compress the files itself, but it can incorporate gzip compression via the -z option as follows;
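A self-contained sketch (again using hypothetical sample files created for the purpose);

```shell
# (sample files for the sketch)
touch file1.txt file2.txt
# z: compress the created archive with gzip
tar -cvzf foobar.tar.gz file1.txt file2.txt
```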
If we want to list the contents of a tarball without un-archiving it we can use the -t option as follows;
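A self-contained sketch (an archive is created first so there is something to list; in practice you would only run the final command);

```shell
# (setup for the sketch: create an archive to inspect)
touch file1.txt file2.txt
tar -cf foobar.tar file1.txt file2.txt
# t: list the archive contents without extracting anything
tar -tvf foobar.tar
```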
When using tar to distribute files to others it is considered good etiquette to have the tarball extract into a directory of the same name as the tar file itself. This saves the recipient from having a mess of files on their hands when they extract the archive. Think of it the same way as giving your friends a set of books. They would much rather you hand them a box than dump a pile of loose books on the floor.
For example, if we wish to distribute the files in the foodir directory then we would create a tarball from the directory containing those files, rather than from the files themselves:
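A self-contained sketch (the directory and its contents are hypothetical and created first);

```shell
# (sample directory with a couple of files for the sketch)
mkdir -p foodir
touch foodir/file1.txt foodir/file2.txt
# archive the directory itself so it extracts back into 'foodir'
tar -cvzf foodir.tar.gz foodir
```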
tar operates recursively by default, so we don’t need to specify all of the files below this directory ourselves.
- Do you need to include the z option when decompressing a gzip compressed tarball?
- Enter a valid tar command on the first try. No Googling. You have 10 seconds.
The useradd command is used by a superuser (typically via sudo) to add a user account to the system. It’s a fundamental command in the sense that Linux, as a multi-user system, requires a mechanism for adding users. Not that this command will get used every day, but it’s important to know about to enable us to administer users on the computer.
useradd [options] username : add a user account
The following example is the simplest application of the command;
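For example, to create the ‘newpi’ account used later in this section;

```shell
# create a new (locked) account called 'newpi'
sudo useradd newpi
```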
This creates the user, but in doing so the account is set in a ‘locked’ state because it doesn’t have a password associated with it. This can be set using the passwd command;
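A sketch of setting the password;

```shell
# set a password for 'newpi' (you will be prompted to enter it twice)
sudo passwd newpi
```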
The useradd command is not utilised on a regular basis; in fact, the man page for the command recommends using adduser, which is a friendlier front end to this low level tool. useradd adds a user to the list of entities able to access the computer system, although used without any options, the command leaves the new account locked until a password is set with passwd. It can only be used by the root user, so it is often prefixed by the sudo command.
There are several useful options that can be used with useradd, but the commonly used ones we will examine are;
-m: Add the user’s home directory if it doesn’t exist
-s: Specify the user’s login shell
To add the home directory for the ‘newpi’ user and set their login shell to bash we would use the following command;
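Putting the two options together;

```shell
# m: create the home directory, s: set the login shell to bash
sudo useradd -m -s /bin/bash newpi
```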
- What is the more user friendly alternative to useradd?
The usermod command is used by a superuser (typically via sudo) to change a user’s system account settings. It’s not a command that will get used every day, but it’s important to know that it’s there to enable us to administer user details on the computer.
usermod [options] username : Modify a user’s account configuration
The following example is one used in the early stages of setting up the Raspberry Pi where we want to make our user ‘pi’ a member of the group ‘www-data’ which will allow (in conjunction with other steps) the user to edit the files in a web server;
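The command looks like this;

```shell
# a: append, G: the supplementary group to add the user to
sudo usermod -a -G www-data pi
```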
The -a option adds the user to an additional ‘supplementary’ group (-a must only be used in a situation where it is followed by the use of a -G option). The -G option lists the supplementary group to which the user will be added; in this case the group is www-data. Finally, the user being added to the supplementary group is pi.
The usermod command is not utilised on a regular basis, but when required to administer a user’s account details it is useful to know what it is able to do and how to configure it. It modifies a user’s settings (hence usermod = user modify) and can be used for such tasks as changing a user’s home directory, password expiry date, default shell and the groups that the user belongs to. It can only be used by the root user, so it is often prefixed by the sudo command.
When usermod changes a user’s settings it does so utilising the following files;
/etc/group: Group account information.
/etc/gshadow: Secure group account information.
/etc/login.defs: Shadow password suite configuration.
/etc/passwd: User account information.
/etc/shadow: Secure user account information.
There are about 15 options that can be used with usermod, but the commonly used ones we will examine are;
-d: Change the user’s home directory
-g: Change the user’s primary group
-a -G group : Add the user to a supplemental group
-s: Change the user’s default shell
-L: Lock the user’s account by disabling the password
-U: Unlock the user’s account by re-enabling the previous password
To change the home directory for the ‘pi’ user to the directory /home/newpi we would use the following command;
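A sketch of the command;

```shell
# d: set the new home directory for the account
sudo usermod -d /home/newpi pi
```

Note that adding the -m option as well would also move the contents of the old home directory to the new location.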
To make ‘pigroup’ the primary group for the user ‘pi’;
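Assuming the group ‘pigroup’ already exists;

```shell
# g: set the primary group for the user
sudo usermod -g pigroup pi
```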
Different users will be comfortable in different shells. To change a user’s default shell we can use the -s option. The following example changes the default shell for the ‘pi’ user to the ‘Korn’ shell;
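A sketch, assuming the Korn shell is installed at /bin/ksh;

```shell
# s: set the default login shell for the account
sudo usermod -s /bin/ksh pi
```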
We can lock a user’s account using the -L option, which modifies the /etc/shadow file by putting an exclamation mark in front of the encrypted password. The following example does this for the ‘pi’ user;
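The command is;

```shell
# L: lock the account by disabling its password
sudo usermod -L pi
```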
Then we will want to unlock the locked account with the -U option as follows;
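Which is simply;

```shell
# U: unlock the account by re-enabling the previous password
sudo usermod -U pi
```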
- What might we need to do before changing the primary group of a user to a new group?
- How might we manually unlock a user’s account without using the -U option?
The wget command (or perhaps a better description is ‘utility’) is a program designed to make downloading files from the Internet easy using the command line. It supports the HTTP, HTTPS and FTP protocols and is designed to be robust enough to accomplish its job even on a network connection that is slow or unstable. It is similar in function to curl for retrieving files, but there are some key differences between the two, the main one for wget being that it is capable of downloading files recursively (where resources are linked from web pages).
wget [options] [URL] : download or upload files from the web non-interactively.
In its simplest example of use it is only necessary to provide the URL of the file that is required and the download will begin;
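A sketch using a hypothetical URL;

```shell
# download a file into the current working directory
wget https://example.com/file.zip
```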
The program will then connect to the remote server, confirm the file details and start downloading the file;
As the downloading process proceeds a simple text animation advises of the progress with an indication of the amount downloaded and the rate of transfer;
Once complete the successful download will be reported accompanied by some statistics of the transfer;
The file is downloaded into the current working directory.
wget is a utility that sits slightly outside the scope of a pure command in the sense that it is an Open Source program that has been compiled to work on a range of operating systems. The name is a derivation of ‘web get’, where the function of the program is to ‘get’ files from the world wide web.
It does this via support for the HTTP, HTTPS and FTP protocols such that if you could paste a URL in a browser and have it subsequently download a file, the same file could be downloaded from the command line using wget.
wget is not the only file downloading utility that is commonly used in Linux; curl is also widely used for similar functions. However, both programs have different strengths, and in the case of wget that strength is its support for recursive downloading, where an entire web site can be downloaded while maintaining its directory structure and links. There are other differences as well, but this would be the major one.
There is a large range of options that can be used to ensure that downloads are configured correctly. We will examine a few of the more basic examples below and after that we will check out the recursive function of wget.
--limit-rate: limit the download speed / download rate.
-O: download and store with a different file name
-b: download in the background
-i: download multiple files / URLs
--ftp-password: FTP download using wget with username and password authentication
Rate limit the bandwidth
There will be times when we are somewhere that bandwidth is limited or we want to prioritise the bandwidth in some way. We can restrict the download speed with the --limit-rate option as follows;
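A sketch with a hypothetical URL;

```shell
# limit the download speed to 20 kilobytes per second
wget --limit-rate=20k https://example.com/file.zip
```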
Here we’ve limited the download speed to 20 kilobytes per second. The amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix.
Rename the downloaded file
If we try to download a file with the same name as one already in the working directory it will be saved with an incrementing numerical suffix (i.e. .1, .2 etc). However, we can give the file a different name when downloaded using the -O option (that’s a capital ‘o’ by the way). For example, to save the file with the name alpha.zip we would do the following;
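A sketch with a hypothetical URL;

```shell
# O: save the downloaded file as 'alpha.zip'
wget -O alpha.zip https://example.com/file.zip
```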
Download in the background
Because it may take considerable time to download a file we can tell the process to run in the background, which will release the terminal so we can carry on working. This is accomplished with the -b option as follows;
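A sketch with a hypothetical URL;

```shell
# b: continue the download in the background
wget -b https://example.com/file.zip
```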
While the download continues, the progress that would normally be echoed to the screen is instead written to the file wget-log in the working directory. We can check this file to determine progress as necessary.
Download multiple files
We can download multiple files by simply including the URLs one after the other in the command as follows;
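A sketch with hypothetical URLs;

```shell
# download two files, one after the other
wget https://example.com/file1.zip https://example.com/file2.zip
```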
While that is good, it can start to get a little confusing if a large number of URLs are included. To make things easier, we can create a text file with the URLs of the files we want to download and then specify that file with the -i option.
For example, if we have a file named files.txt in the current working directory that has the following contents;
Then we can run the command…
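```shell
# i: read the list of URLs to download from files.txt
wget -i files.txt
```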
… and it will work through each file and download it.
Download files that require a username and password
The examples shown thus far have been able to be downloaded without providing any form of authentication (no user / password). However this will be a requirement for some downloads. To include a username and password in the command we include the --ftp-user and --ftp-password options. For example, if we needed to use the username ‘adam’ and the password ‘1234’ we would form our command as follows;
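A sketch with a hypothetical FTP URL;

```shell
# supply the FTP username and password on the command line
wget --ftp-user=adam --ftp-password=1234 ftp://example.com/file.zip
```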
You may be thinking to yourself “Is this secure?”. To which the answer should probably be “No”. It is one step above anonymous access, but not a lot more. This is not a method by which things that should remain private should be secured, but it does provide a method of restricting anonymous access.
Download files recursively
One of the main features of wget is its ability to download a large complex directory structure that exists on many levels. The best example of this would be the structure of files and directories that make up a web site. While there is a wide range of options that can be passed to make the process work properly in a wide variety of situations, it is still possible to use a fairly generic set to get us going.
For example, to download the contents of the web site at dnoob.runkite.com we can execute the following command;
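Putting the options described below together, the command looks like this;

```shell
# recursively mirror the site into a directory named after the host
wget -e robots=off -r -np -c -nc http://dnoob.runkite.com
```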
The options used here do the following;
-e robots=off: the execute option allows us to run a separate command, in this case robots=off, which tells wget to ignore the robots exclusion rules that the web site would otherwise use to block automated downloading, allowing us to download the files.
-r: the recursive option enables recursive downloading.
-np: the no-parent option makes sure that a recursive retrieval only works on pages that are below the specified directory.
-c: the continue option ensures that any partially downloaded file continues from the place it left off.
-nc: the no-clobber option ensures that duplicate files are not overwritten
Once entered, the program will display a running listing of the progress and, at the end, a summary telling us how many files were downloaded and the time taken. The end result is a directory called dnoob.runkite.com in the working directory that contains the entire website, including all the linked pages and files. If we examine the directory structure it will look a little like the following;
Using wget for recursive downloading should be done appropriately. It would be considered poor manners to pillage a web site for anything other than good reason. When in doubt, contact the person responsible for a site or a repository just to make sure there isn’t a simpler way to accomplish your task if it’s something ‘weird’.
- Craft a wget command that downloads a file to a different name, limiting the download rate to 10 kilobytes per second, and which operates in the background.
- Once question 1 above is carried out, where do we find the output of the download’s progress?