Channel: Monitoring – LinOxide

8 Options to Trace/Debug Programs using Linux strace Command


strace is a tool that helps debug issues by tracing the system calls a program executes. It is handy when you want to see how a program interacts with the operating system: which system calls are executed, and in what order.

This simple yet very powerful tool is available for almost all Linux-based operating systems and can be used to debug a large number of programs.

1. Command Usage

Let's see how we can use strace command to trace the execution of a program.

In its simplest form, strace can be followed by any command. It will list a whole lot of system calls. Not all of the output will make sense at first, but if you are looking for something in particular, you should be able to pick it out.
Let's see the system call trace for the simple ls command.

raghu@raghu-Linoxide ~ $ strace ls

Stracing ls command

This output shows the first few lines for strace command. The rest of the output is truncated.

Strace write system call (ls)

The above part of the output shows the write system call, where it outputs the current directory's listing to STDOUT. The following image shows the listing of the directory by the ls command (without strace).

raghu@raghu-Linoxide ~ $ ls

ls command output

1.1 Find configuration file read by program

One use of strace (besides debugging a problem) is finding out which configuration files a program reads. For example,

raghu@raghu-Linoxide ~ $ strace php 2>&1 | grep php.ini

Strace config file read by program
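The same idea generalizes to other programs. Note that on newer systems the C library usually opens files with the openat system call rather than open, so it is worth filtering on both. A small sketch (the traced command and the grep pattern are only examples, and it skips gracefully when strace is not installed):

```shell
# List files a program opens by tracing open/openat.
# strace writes its trace to stderr, so we swap the streams:
# the trace goes to the pipe, the program's own output to /dev/null.
if command -v strace >/dev/null 2>&1; then
    strace -e trace=open,openat ls /etc 2>&1 >/dev/null | grep '/etc' || true
else
    echo "strace not installed; skipping"
fi
```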

1.2 Trace specific system call

The -e option to strace can be used to display only certain system calls (for example, open, write, etc.).

Let's trace only the 'open' system call for the cat command.

raghu@raghu-Linoxide ~ $ strace -e open cat dead.letter

Stracing specific system call (open here)

1.3 Stracing a process

The strace command can be used not only on commands but also on running processes, with the -p option.

raghu@raghu-Linoxide ~ $ sudo strace -p 1846

Strace a process
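To avoid looking up the PID by hand, strace -p can be combined with pgrep. A sketch ("cron" is just an example target; attaching requires privileges matching the process owner, and strace -p keeps tracing until interrupted with Ctrl+C, so here we only print the command we would run):

```shell
# Find the oldest PID matching a name and build the strace command for it.
pid=$(pgrep -o cron || true)
if [ -n "$pid" ]; then
    echo "sudo strace -p $pid"
else
    echo "cron is not running"
fi
```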

1.4 Statistical summary of strace

The summary of system calls, execution times, errors, etc. can be displayed neatly with the -c option:

raghu@raghu-Linoxide ~ $ strace -c ls

Strace summary display

1.5 Saving output

The output of the strace command can be saved into a file with the -o option.

raghu@raghu-Linoxide ~ $ sudo strace -o process_strace -p 3229

Strace a process

The above command is run with sudo, as strace will display an error if your user ID does not match that of the process owner.

1.6 Displaying timestamp

A timestamp can be displayed before each output line with the -t option.

raghu@raghu-Linoxide ~ $ strace -t ls

Timestamp before each output line

1.7 Finer timestamps

The -tt option displays the timestamp including microseconds.

raghu@raghu-Linoxide ~ $ strace -tt ls

Time - Microseconds

The -ttt option displays microseconds like above, but instead of printing the current time, it displays the number of seconds since the epoch.

raghu@raghu-Linoxide ~ $ strace -ttt ls

Seconds since epoch

1.8 Relative Time

The -r option displays the relative timestamp between the system calls.

raghu@raghu-Linoxide ~ $ strace -r ls

Relative Timestamp

The post 8 Options to Trace/Debug Programs using Linux strace Command appeared first on LinOxide.


GoAccess - A Real Time Apache Web Access Log Analyzer


GoAccess is a free real-time web log analyzer and interactive viewer that runs in a terminal on Linux or BSD distributions. It provides fast and valuable HTTP statistics for system administrators who require a visual server report on the fly. It parses the specified web log file and outputs the data to the terminal. You can find more information on the GoAccess website.

Install GoAccess

First you will have to install the needed dependencies based on your Linux distribution:

For Debian/Ubuntu Linux distribution you will have to run the following commands:

# apt-get install libncursesw5-dev libglib2.0-dev libgeoip-dev libtokyocabinet-dev

For Fedora/RedHat/CentOS Linux distribution you can install them like this:

# yum install ncurses-devel glib2-devel geoip-devel tokyocabinet-devel

Next, go to the GoAccess download page to get the latest version and fetch it using wget. After that you just need to decompress it and install it with the usual ./configure, make and make install, like this:

# wget http://tar.goaccess.io/goaccess-0.8.3.tar.gz
# tar zxvf goaccess-0.8.3.tar.gz
# cd goaccess-0.8.3/
# ./configure --enable-geoip --enable-utf8
# make
# make install

goaccess install

How to Use GoAccess

To use GoAccess you will have to run the command with the -f option pointing to the log file, like this:

# goaccess -f /var/log/apache/access.log

It will open a window asking you to select the format of the log file. Move with the arrow keys to select a log format, press space to select it, and press enter to start processing the file.

goaccess open logo

Next it will display the interactive interface, where you can use the following keys to navigate the reports:

q - Quit the program, current window or collapse active module
ENTER - Expand selected module or open window
0-9 and Shift + 0 - Set selected module to active
j - Scroll down within expanded module
k - Scroll up within expanded module
TAB - Iterate modules
/ - Search across all modules (regex allowed)
F1 - help

goaccess interactive

Each report is pretty self-explanatory: you have the unique visitor count, requested pages, 404 not found errors, hosts, the visitors' OSes, browsers and locations, and referrals from other sites or search engines. The main idea behind GoAccess is being able to quickly analyze and view web server statistics in real time, so it provides a fast way to look at those different statistics.

You can also generate an HTML report if you wish, using the following command:

# goaccess -f access.log -a > report.html
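For regular reporting, you could pair the HTML output with cron. A sketch (the log and output paths are examples to adjust for your setup; the block skips if goaccess is missing or the log is unreadable):

```shell
# Generate an HTML report from an Apache access log (example paths).
LOG=/var/log/apache2/access.log
OUT=/tmp/report.html
if command -v goaccess >/dev/null 2>&1 && [ -r "$LOG" ]; then
    goaccess -f "$LOG" -a > "$OUT" && echo "report written to $OUT"
else
    echo "goaccess not installed or $LOG not readable; skipping"
fi
```

Scheduled from cron, this keeps a periodically refreshed report on the web server.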

GoAccess is a nice utility to have around for when you need a fast view of what is currently happening. It does not provide as much detail as AWStats, but it is fast and easy to set up and use.


Install Iperf and Test Network Throughput, Speed and Other Statistics


Iperf is a network testing tool that can create TCP and UDP data connections and measure the throughput of a network that is carrying them. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, loss, and other parameters.

The current version, sometimes referred to as iperf3, is a redesign of an original version developed at NLANR/DAST. iperf3 is a new implementation from scratch, with the goal of a smaller, simpler code base, and a library version of the functionality that can be used in other programs. It is mainly developed on CentOS Linux, FreeBSD and MacOS X, but works well on other Linux distributions as well.

Install Iperf

You can get the latest version of iperf3 from http://downloads.es.net/pub/iperf using wget and extract it with tar.

# wget http://downloads.es.net/pub/iperf/iperf-3.0.6.tar.gz
# tar zxvf iperf-3.0.6.tar.gz

Then you just need to configure it and compile it like this:

# cd iperf-3.0.6
# ./configure
# make
# make install

Now you should have iperf3 installed on your system.

How to Use iperf3

To test the performance of a network with iperf you will need two computers, one acting as a server and one as a client; this lets you test the network segment between the two hosts.

In the simplest form, you can run iperf3 as root with the -s flag on the computer that will act as the server; it will open a port and wait for connections from a client. Please check your firewall or iptables rules and make sure the port the iperf3 server opens is not blocked in any way. The output should look like this:

iperf server run

Then, on a second computer connected to the same network as the server, we can perform a basic test by running iperf3 with the -c switch followed by the IP address of the server. The output will look like this:

iperf client

From this output we can see that we have an 80 Mbits/sec throughput over TCP.

Using the client you can pass different flags to test various network scenarios; for example, you can use the -P flag to test several parallel connections to the server, like this:

# iperf3 -c 192.168.1.1 -P 5

and the output will look like this:

iperf parallel

This shows you what happens when more applications on the client connect to the server.

You can test the performance of the UDP protocol using the -u flag like this:

# iperf3 -c 192.168.1.1 -u

and the output should look like this:

iperf udp

As you can see, two new fields have appeared in the output: Jitter, which shows the latency variation of the packets that have been sent, and Lost/Total Datagrams, which shows the number of packets lost out of the total number of packets sent.

Other useful flags:

-b, --bandwidth n[KM] - set target bandwidth to n bits/sec (default 1 Mbit/sec for UDP, unlimited for TCP).
-t, --time n - time in seconds to transmit for (default 10 secs)
-n, --bytes n[KM] - number of bytes to transmit (instead of -t)
-k, --blockcount n[KM] - number of blocks (packets) to transmit (instead of -t or -n)
-l, --length n[KM] - length of buffer to read or write (default 128 KB for TCP, 8KB for UDP)
-R, --reverse - run in reverse mode (server sends, client receives)

These flags are mainly useful when you want to test a particular scenario between the client and the server.
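Putting a few of these flags together, here is a sketch of a typical UDP test (192.168.1.1 is a placeholder for your server's address; the block only prints the command when iperf3 is absent, and gives up after a short timeout if the server is unreachable):

```shell
# A 5-second UDP test at 10 Mbit/s in reverse mode (server sends).
SERVER=192.168.1.1   # placeholder: replace with your iperf3 server's IP
if command -v iperf3 >/dev/null 2>&1; then
    timeout 10 iperf3 -c "$SERVER" -u -b 10M -t 5 -R || echo "could not reach $SERVER"
else
    echo "iperf3 not installed; would run: iperf3 -c $SERVER -u -b 10M -t 5 -R"
fi
```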

iperf3 is a small but useful tool that can be quickly installed on the client and the server to test various network aspects and applications.


Monit - Monitor Linux Daemon, Filesystem, CPU, Files and Network


Monit is a small Linux utility designed to manage and monitor processes, programs, filesystems, directories and files. It can run automatic maintenance and repair, executing meaningful causal actions in error situations. You can use Monit to monitor files, directories and filesystems for changes, such as timestamp, checksum or size changes. Monit logs to syslog or to its own log file and notifies you about error conditions via customizable alert messages. It can also perform various TCP/IP network checks and protocol checks, and can utilize SSL for such checks.

Monit can be used via a web interface that you can access via your favorite web browser.

How to Install Monit

To install monit on Debian / Ubuntu distribution you can use apt-get like so:

# apt-get install monit

On Fedora you can use yum to install it from the repository:

# yum install monit

To install it on CentOS / RHEL you will have to use the Dag / RPMforge repository and then install it with the same yum command.

Configuration File

Monit is configured and controlled via a control file called monitrc. The default location for this file is ~/.monitrc; if that is unavailable, /etc/monit/monitrc is used. The execution script in /etc/init.d/monit will also use /etc/monit/monitrc. To protect the security of your control file and passwords, the control file must have permissions no more permissive than 0700; Monit will complain and exit otherwise.

Currently, eight types of check statements are supported:

CHECK PROCESS <unique name> <PIDFILE <path> | MATCHING <regex>>
<path> is the absolute path to the program's pidfile.

CHECK FILE <unique name> PATH <path>
<path> is the absolute path to the file.

CHECK FIFO <unique name> PATH <path>
<path> is the absolute path to the fifo.

CHECK FILESYSTEM <unique name> PATH <path>
<path> is the path to the filesystem block special device, mount point, file or a directory which is part of a filesystem.

CHECK DIRECTORY <unique name> PATH <path>
<path> is the absolute path to the directory.

CHECK HOST <unique name> ADDRESS <host address>
The host address can be specified as a hostname string or as an ip-address string on a dotted decimal format.

CHECK SYSTEM <unique name>
The system name is usually hostname, but any descriptive name can be used. This test allows one to check general system resources such as CPU usage (percent of time spent in user, system and wait), total memory usage or load average.

CHECK PROGRAM <unique name> PATH <executable file> [TIMEOUT <number> SECONDS]
<path> is the absolute path to the executable program or script. The status test allows one to check the program's exit status.

Using Monit web interface

Monit comes with an easy-to-use web interface you can access in your browser. To enable it, add the following lines to your monitrc file:

set httpd port 2812
allow myuser:mypassword

Then you can access it using the server's IP address; it should look like this:

monit

Examples: Monitor Daemon, Filesystem, CPU, Files and Network

1. To monitor a daemon you can add the following lines to your monitrc file:

check process apache with pidfile /var/run/apache2/apache2.pid
start program = "/etc/init.d/apache2 start" with timeout 60 seconds
stop program = "/etc/init.d/apache2 stop"

2. To send an alert in case of high CPU usage you can use this in your monitrc file:

check process apache with pidfile /var/run/apache2/apache2.pid
start program = "/etc/init.d/apache2 start" with timeout 60 seconds
stop program = "/etc/init.d/apache2 stop"
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart

3. Restart in case of high memory usage:

check process apache with pidfile /var/run/apache2/apache2.pid
start program = "/etc/init.d/apache2 start" with timeout 60 seconds
stop program = "/etc/init.d/apache2 stop"
if totalmem > 200.0 MB for 5 cycles then restart

4. To check a filesystem:

check filesystem datafs with path /dev/sda1
start program = "/bin/mount /data"
stop program = "/bin/umount /data"

5. To check a directory:

check directory bin with path /bin
if failed permission 755 then alert

6. To check a host on the network

check host server2 with address 192.168.1.2
if failed icmp type echo count 3 with timeout 3 seconds then alert
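The `alert` actions in the examples above need a delivery route; a minimal mail setup in monitrc might look like this (the mail server hostname and recipient address are placeholders to replace with your own):

```
set mailserver mail.example.com
set alert sysadmin@example.com
```

With this in place, every `then alert` action sends a message to the configured address.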

All the services that Monit monitors will be included on the web interface and it will look like this:

monit services

Also if you click a service name you will get even more details about it:

monit details


Collectl Examples - An Awesome Performance Analysis Tool in Linux


Collectl is a light-weight performance monitoring tool capable of reporting interactively as well as logging to disk. It reports statistics on CPU, disk, infiniband, lustre, memory, network, nfs, processes, quadrics, slabs and more in an easy-to-read format. Unlike most monitoring tools, which focus on a small set of statistics, format their output in only one way, or run either interactively or as a daemon but not both, collectl can monitor different parameters at the same time and report them in a suitable manner.

This guide will show you how to install and use collectl on CentOS.

How to Install Collectl

You can always download the latest version from the Collectl webpage or using wget.

# wget http://sourceforge.net/projects/collectl/files/collectl/collectl-3.7.3/collectl-3.7.3.src.tar.gz

Next you will have to untar the file and install using the INSTALL script provided:

# tar -xvzf collectl-3.7.3.src.tar.gz
# cd collectl-3.7.3
# ./INSTALL

The collectl service can be managed using the /etc/init.d/collectl script.

The different types of system resources that can be measured are called subsystems, such as cpu, memory, network bandwidth and so on. If you just run the command without any parameters, it will show the cpu, disk and network subsystems in batch-mode output.

Using the tool without any option will give the following output:

collectl command

These are the brief categories that can be measured using the -s flag:

collectl all category

Monitor CPU usage

You can use the c option to get a summary of CPU usage like this:

# collectl -sc

collectl cpu usage command

You can monitor each cpu individually using the C option like this:

# collectl -sC

collectl each cpu details

Monitor Memory usage

The m option will give you the summary for memory usage:

# collectl -sm

collectl memory usage

Using the M option you will get even more details like memory node data, which is also known as numa data:

# collectl -sM

collectl memory details command

Monitor Disk usage

To see disk usage we will use the d option:

# collectl -sd

collectl disk command

The D option will show you even more details about the Disk usage.

# collectl -sD

linux collectl disk usage

More Examples

You can also monitor all these resources together and get a mixed report like this:

# collectl -scmd

collectl monitor all options

To display the time on each line along with the measurements, use the T option. Output options like this are passed via the -o switch:

# collectl -scmd -oT

collectl summary with time

To use collectl as a “top”-like command:

# collectl --top

collectl top command
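Beyond interactive use, collectl can also record samples to raw files with -f and play them back later with -p. A sketch (the directory is an example, and the block skips if collectl is absent):

```shell
# Record three cpu/memory/disk samples to raw files, then play them back.
if command -v collectl >/dev/null 2>&1; then
    mkdir -p /tmp/collectl-demo
    collectl -scmd -c 3 -f /tmp/collectl-demo   # -c 3: stop after 3 samples
    collectl -p /tmp/collectl-demo/*.raw.gz -scmd
else
    echo "collectl not installed; skipping"
fi
```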

Collectl Utilities

Collectl's functionality can be extended with the collectl-utils package. You can find it on the collectl-utils webpage. Part of these utilities is Colplot, a simple and easy tool for presenting collectl's output in a web page.

You can use the noarch RPM (collectl-utils-4.7.1-1.noarch.rpm) from the collectl-utils download page, or install it easily from source. Before that, make sure you have Apache installed, as well as gnuplot (both can be installed using yum). The Colplot RPM installation will put everything in the right place. After the installation, restart the apache service. If the installation completed successfully, you will see a default page like this when accessing http://localhost/colplot (or a valid IP address of your server).

collectl Colplot browser

How to generate plots

First of all, make sure the collectl service is running by using “/etc/init.d/collectl status”, then create plots using the “collectl -P -f /usr/share/collectl/plotfiles/” command. This will create plots, in zip format, in /usr/share/collectl/plotfiles. To view the plots in a web page, enter that directory at the top of the Colplot page, select any one of the plots (or all of them), and click the Generate plot button. This will give you a detailed graphical view of resource usage.

colplot graph

Colplot graph details


Linux ss Tool to Identify Sockets / Network Connections with Examples


ss is part of the iproute2 package (utilities for controlling TCP/IP networking and traffic). iproute2 is intended to replace an entire suite of standard Unix networking tools (often called "net-tools") that were previously used to configure network interfaces and routing tables and to manage the ARP table. The ss utility is used to dump socket statistics; it shows information similar to netstat and is able to display more TCP and state information. It should also be faster, as it gets its information directly from kernel space. The options used with ss are very similar to netstat's, making it an easy replacement.

Usage and common options

ss is very similar to netstat; by default it will show you a list of open non-listening TCP sockets with established connections, and you can shape the output with the following options:

-n - Do not try to resolve service names.
-r - Try to resolve numeric address/ports.
-a - Display all sockets.
-l - Display listening sockets.
-p - Show process using socket.
-s - Print summary statistics.
-t - Display only TCP sockets.
-u - Display only UDP sockets.
-d - Display only DCCP sockets.
-w - Display only RAW sockets.
-x - Display only Unix domain sockets.
-f FAMILY - Display sockets of type FAMILY. Currently the following families are supported: unix, inet, inet6, link, netlink.
-A QUERY - List of socket tables to dump, separated by commas. The following identifiers are understood: all, inet, tcp, udp, raw, unix, packet, netlink, unix_dgram, unix_stream, packet_raw, packet_dgram.

ss command examples

1. Display all open TCP ports and the process that uses them:

# ss -tnap

ss tnap

2. You can use -4 flag to display the IPv4 connections and the -6 flag to display IPv6 connections, for example:

# ss -tnap6

ss tnap6

3. In the same manner, to show all open UDP ports you just have to replace t with u.

# ss -unap

ss unap

4. To print various useful statistics you can use the -s flag:

# ss -s

ss stats

5. To check connections in a particular state you can use the state filter; for example, to display all the established connections (with the -o flag adding timer information):

# ss -tn -o state established -p

ss est
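Since ss output is plain text, it combines well with standard tools. For instance, to count TCP sockets grouped by state (a sketch assuming iproute2's ss is present, as it is on most modern Linux systems):

```shell
# Count TCP sockets per state (LISTEN, ESTAB, TIME-WAIT, ...).
# Column 1 of `ss -tan` is the state; skip the header line, then tally.
if command -v ss >/dev/null 2>&1; then
    ss -tan | tail -n +2 | awk '{print $1}' | sort | uniq -c | sort -rn
else
    echo "ss not installed; skipping"
fi
```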


pidstat - Monitor and Find Statistics for Linux Processes


The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task managed by the Linux kernel. The pidstat command can also be used for monitoring the child processes of selected tasks. The interval parameter specifies the amount of time in seconds between each report. A value of 0 (or no parameters at all) indicates that task statistics are to be reported for the time since system startup (boot).

How to Install pidstat

pidstat is part of the sysstat suite, which contains various system performance tools for Linux; it's available in the repositories of most Linux distributions.

To install it on Debian / Ubuntu Linux systems you can use the following command:

# apt-get install sysstat

If you are using CentOS / Fedora / RHEL Linux you can install the packages like this:

# yum install sysstat

Using pidstat

Running pidstat without any arguments is equivalent to specifying -p ALL, but only active tasks (tasks with non-zero statistics values) will appear in the report.

# pidstat

pidstat

In the output you can see:
PID - The identification number of the task being monitored.
%usr - Percentage of CPU used by the task while executing at the user level (application), with or without nice priority. Note that this field does NOT include time spent running a virtual processor.
%system - Percentage of CPU used by the task while executing at the system level.
%guest - Percentage of CPU spent by the task in virtual machine (running a virtual processor).
%CPU - Total percentage of CPU time used by the task. In an SMP environment, the task's CPU usage will be divided by the total number of CPUs if option -I has been entered on the command line.
CPU - Processor number to which the task is attached.
Command - The command name of the task.

I/O Statistics

We can use pidstat to get I/O statistics about a process using the -d flag. For example:

# pidstat -d -p 8472

pidstat io

The IO output will display a few new columns:
kB_rd/s - Number of kilobytes the task has caused to be read from disk per second.
kB_wr/s - Number of kilobytes the task has caused, or shall cause to be written to disk per second.
kB_ccwr/s - Number of kilobytes whose writing to disk has been cancelled by the task.

Page faults and memory usage

Using the -r flag you can get information about memory usage and page faults.

pidstat pf mem

Important columns:

minflt/s - Total number of minor faults the task has made per second, those which have not required loading a memory page from disk.
majflt/s - Total number of major faults the task has made per second, those which have required loading a memory page from disk.
VSZ - Virtual Size: The virtual memory usage of entire task in kilobytes.
RSS - Resident Set Size: The non-swapped physical memory used by the task in kilobytes.

Examples

1. You can use pidstat to find a memory leak using the following command:

# pidstat -r 2 5

This will give you 5 reports, one every 2 seconds, of the current page-fault statistics; it should be easy to spot the problem process.
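If sysstat is not available, you can approximate the same check with plain ps. A sketch that samples a process's resident set size a few times (the shell's own PID is used here only as a stand-in for the process you suspect):

```shell
# Sample the RSS (in kB) of a process a few times; steady growth across
# samples suggests a leak. $$ (this shell) is only a stand-in PID.
pid=$$
for i in 1 2 3; do
    ps -o rss= -p "$pid"
    sleep 1
done
```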

2. To show all children of the mysql server you can use the following command:

# pidstat -T CHILD -C mysql

3. To combine all statistics in a single report you can use:

# pidstat -urd -h


How to Easily Install Nagios using the FAN Tool


FAN stands for Fully Automated Nagios, and it is designed as a quick and easy installation of Nagios along with the most used tools in the Nagios community. It is designed as a stand-alone installation based on CentOS 5.9. FAN is available as an iso image and is easy to burn to a CD or DVD. A large number of tools are also distributed with it, which makes implementing an efficient monitoring platform much easier.

Installing Fully Automated Nagios

You can install FAN either on a server you designate for monitoring or on a virtual machine that is part of a more complex setup. Either way, first you will have to download FAN and burn it to a CD / DVD; you can download the iso image from the download section of the FAN website. After that you can boot from it to start the installation.

The installation is straightforward, the first screen you will see will look like this:

FAN boot menu

After picking an install option, the installer will start, ask you to choose your language and keyboard layout, and proceed to the partition configuration:

FAN partition

Next it will ask for your time zone and root password and start the installation process:

FAN installing

The installation process won't take very long, as it's just a minimal CentOS 5.9 packed with the appropriate packages:

FAN finish

Using FAN

After you finish the installation, the new system will boot up and you can access it remotely with a web browser, using the IP of the server or VM you installed it on.

Note: the default username and password for all Nagios tools is nagiosadmin / nagiosadmin.

The first page will look like this:

FAN-page

From here you should start with Centreon, where you can configure every aspect of Nagios in a simple manner from the web interface; it is very intuitive and easy to use. You can use the Configuration menu to add and configure hosts, services, notifications, commands and most other aspects of Nagios.

FAN centreon

Afterwards you can monitor everything from the Nagios interface:

FAN nagios

The real benefit of having such an installation is that everything is tightly configured and made to work together; you just have to install it and then configure it from the intuitive web interface.



Install Munin - A Network Resource Monitoring Tool in Ubuntu 14.04


Munin is a free and open-source networked resource monitoring tool. It offers monitoring and alerting services for servers, switches, applications, and services. It alerts users when things go wrong and alerts them a second time when the problem has been resolved. It is written in Perl and uses RRDtool to create graphs, which are accessible over a web interface. Its emphasis is on plug-and-play capabilities.

The team that develops Munin aims for a very "plug and play" experience, providing a lot of graphs and features out of the box, as well as over 500 plugins to easily enhance the product.

How to Install Munin

Munin is a very popular application and is present in the repositories of most Linux distributions. It requires Perl and rrdtool to run, but you shouldn't worry about those dependencies since they will be handled by the package manager. If you would like to access the graphs with your web browser, you should also install a web server such as apache or nginx.

You can use the apt-get tool in Ubuntu or other Debian based distribution to install Munin:

$ sudo apt-get install munin

Using Munin

As advertised, Munin works well out of the box. After installation, you can go to http://localhost/munin with your favorite browser and access the web interface:

munin

And as you can see by clicking on the hostname you will have a nice number of graphs right out of the box:

munin graphs

You can also easily add more plugins by installing the user-contributed plugins package in Ubuntu like this:

$ sudo apt-get install munin-plugins-extra

If you wish to tweak various aspects, like the number of graphs to show, the location the html files are saved to, or the name of the host, you can do so in the main configuration file located at "/etc/munin/munin.conf".

Important note: At the time we are writing this article there is a bug in the default installation of munin on Ubuntu 14.04. The distribution comes with Apache 2.4 but the configuration file for munin is for the old Apache 2.2 server (You can see bug 1258026 here). To fix this open the configuration file located in "/etc/apache2/conf-enabled/munin.conf" and apply the following changes:

Replace the following text:

Order allow,deny
Allow from localhost 127.0.0.0/8 ::1

With this valid format for Apache 2.4:

Require host localhost
Require ip 127.0.0.0/8 ::1

Or if you wish to provide access from remote computers:

Require all granted


Amazing! 25 Linux Performance Monitoring Tools


Over time, our website has shown you how to configure various performance tools for Linux and Unix-like operating systems. In this article we have made a list of the most used and most useful tools for monitoring the performance of your box. We provide a link for each of them and split them into two categories: command-line tools and those that offer a graphical interface.

Command line performance monitoring tools

1. dstat - Versatile resource statistics tool

A versatile combination of vmstat, iostat and ifstat. It adds new features and functionality, allowing you to view all the different resources instantly and to compare and combine the different resource usage. It uses colors and blocks to help you see the information clearly and easily. It also allows you to export the data in CSV format to review in a spreadsheet application or import into a database. You can use this application to monitor cpu, memory and eth0 activity over time.

dstat
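The CSV export mentioned above works via the --output flag. A sketch (the output path is an example, and the block skips if dstat is absent):

```shell
# Write five one-second samples to a CSV file alongside the live display.
if command -v dstat >/dev/null 2>&1; then
    dstat --output /tmp/dstat-report.csv 1 5
else
    echo "dstat not installed; skipping"
fi
```

The resulting /tmp/dstat-report.csv can then be opened in any spreadsheet application.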

 

2. atop - Improved top with ASCII

A command-line tool that uses ASCII to display a performance monitor capable of reporting the activity of all processes. It keeps daily logs of system and process activity for long-term analysis and highlights overloaded system resources using colors. It includes metrics related to CPU, memory, swap, disk and network layers. All the functions of atop can be accessed by simply running:

# atop

And you will be able to use the interactive interface to display and order data.

atop

3. Nmon - performance monitor for Unix-like systems

Nmon stands for Nigel's Monitor and it's a system monitor tool originally developed for AIX. It features an Online Mode that uses curses for efficient screen handling, updating the terminal frequently for real-time monitoring, and a Capture Mode where the data is saved to a file in CSV format for later processing and graphing.

nmon

More info in our nmon performance tracking article.

4. slabtop - information on kernel slab cache

This application shows how the slab cache memory allocator in the Linux kernel manages caches of various types of objects. The command is top-like but is focused on showing real-time kernel slab cache information. It displays a listing of the top caches sorted by one of the listed sort criteria, along with a statistics header filled with slab layer information. Here are a few examples:

# slabtop --sort=a
# slabtop -s b
# slabtop -s c
# slabtop -s l
# slabtop -s v
# slabtop -s n
# slabtop -s o

More info is available in our kernel slab cache article

5. sar - performance monitoring and bottlenecks check

The sar command writes to standard output the contents of selected cumulative activity counters in the operating system. Based on the values of the count and interval parameters, it writes information the specified number of times, spaced at the specified interval in seconds. If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was started. Useful commands:

# sar -u 2 3
# sar -u -f /var/log/sa/sa05
# sar -P ALL 1 1
# sar -r 1 3
# sar -W 1 3
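
As a rough illustration (not sar itself), the per-interval CPU figures that `sar -u` reports come from sampling cumulative kernel counters; a minimal sketch of the same idea, reading `/proc/stat` twice one second apart:

```shell
#!/bin/sh
# Minimal sketch of the idea behind "sar -u 1 1": sample the cumulative
# CPU counters in /proc/stat twice, one interval apart, and report the
# busy share of the delta. Illustrative only -- sar reads richer data.
read_cpu() { awk '/^cpu /{print $2+$3+$4, $5+$6}' /proc/stat; }  # busy, idle+iowait
set -- $(read_cpu); busy1=$1; idle1=$2
sleep 1
set -- $(read_cpu); busy2=$1; idle2=$2
total=$(( (busy2 - busy1) + (idle2 - idle1) ))
[ "$total" -gt 0 ] && echo "CPU busy: $(( 100 * (busy2 - busy1) / total ))%"
```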

6. Saidar - simple stats monitor

Saidar is a simple and lightweight tool for system information. It doesn't have major performance reports but it does show the most useful system metrics in a short and nice way. You can easily see the up-time, average load, CPU, memory, processes, disk and network interfaces stats.

Usage: saidar [-d delay] [-c] [-v] [-h]

-d Sets the update time in seconds
-c Enables coloured output
-v Prints version number
-h Displays this help information.

saidar

7. top - The classical Linux task manager

top is one of the best known Linux utilities; it's a task manager found on most Unix-like operating systems. It shows the current list of running processes, which the user can order using different criteria. It mainly shows how much CPU and memory is used by the system processes. top is a quick place to go to check what process or processes are hogging your system. You can also find here a list of examples of top usage. You can access it by running the top command and entering the interactive mode:

Quick cheat sheet for interactive mode:

  • GLOBAL_Commands: <Ret/Sp> ?, =, A, B, d, G, h, I, k, q, r, s, W, Z
  • SUMMARY_Area_Commands: l, m, t, 1
  • TASK_Area_Commands Appearance: b, x, y, z Content: c, f, H, o, S, u Size: #, i, n Sorting: <, >, F, O, R
  • COLOR_Mapping: <Ret>, a, B, b, H, M, q, S, T, w, z, 0 - 7
  • COMMANDS_for_Windows:  -, _, =, +, A, a, G, g, w

top

8. Sysdig - Advanced view of system processes

Sysdig is a tool that gives admins and developers unprecedented visibility into the behavior of their systems. The team that develops it wants to improve the way system-level monitoring and troubleshooting is done by offering a unified, coherent, and granular visibility into the storage, processing, network, and memory subsystems making it possible to create trace files for system activity so you can easily analyze it at any time.

Quick examples:

# sysdig proc.name=vim
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig evt.type=chdir and user.name=root
# sysdig -l
# sysdig -L
# sysdig -c topprocs_net
# sysdig -c fdcount_by fd.sport "evt.type=accept"
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig -c topprocs_file
# sysdig -c fdcount_by proc.name "fd.type=file"
# sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open
# sysdig -c topprocs_cpu
# sysdig -c topprocs_cpu evt.cpu=0
# sysdig -p"%evt.arg.path" "evt.type=chdir and user.name=root"
# sysdig evt.type=open and fd.name contains /etc

sysdig

More info is available in our article on how to use sysdig for improved system-level monitoring and troubleshooting

9. netstat - Shows open ports and connections

netstat is the tool Linux administrators use to show various network information, such as what ports are open, what network connections are established and which process runs each connection. It also shows various information about the Unix sockets that are open between various programs. It is part of most Linux distributions. A lot of the commands are explained in the article on netstat and its various outputs. The most used commands are:

$ netstat | head -20
$ netstat -r
$ netstat -rC
$ netstat -i
$ netstat -ie
$ netstat -s
$ netstat -g
$ netstat -tapn

10. tcpdump - insight on network packets

tcpdump can be used to see the content of the packets on a network connection. It shows various information about the packet content that pass. To make the output useful, it allows you to use various filters to only get the information you wish. A few examples on how you can use it:

# tcpdump -i eth0 not port 22
# tcpdump -c 10 -i eth0
# tcpdump -ni eth0 -c 10 not port 22
# tcpdump -w aloft.cap -s 0
# tcpdump -r aloft.cap
# tcpdump -i eth0 dst port 80

You can find them described in detail in our article on tcpdump and capturing packets

11. vmstat - virtual memory statistics

vmstat stands for virtual memory statistics and it's a memory monitoring tool that collects and displays summary information about memory, processes, interrupts, paging and block I/O. It is an open source program available on most Linux distributions, Solaris and FreeBSD. It is used to diagnose most memory performance problems and much more.

vmstat

More info in our article on vmstat commands.

12. free - memory statistics

Another command line tool that prints a few stats about memory and swap usage to standard output. Because it's a simple tool, it can be used either to find quick information about memory usage or in different scripts and applications. This small application has a lot of uses, and almost all system admins use it daily :-)

free
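
The numbers free prints come straight from the kernel's /proc/meminfo counters; a quick sketch of pulling the same fields by hand:

```shell
# Where "free" gets its numbers: the /proc/meminfo counters (values
# in kB). "free" merely formats and combines these fields.
awk '/^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):/ {printf "%-10s %10d kB\n", $1, $2}' /proc/meminfo
```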

13. Htop - friendlier top

Htop is basically an improved version of top, showing more stats in a more colorful way and allowing you to sort them in different ways, as you can see in our article. It provides a much more user-friendly interface.

htop

You can find more info in our comparison of htop and top

14. ss - the modern net-tools replacement

ss is part of the iproute2 package. iproute2 is intended to replace an entire suite of standard Unix networking tools that were previously used for configuring network interfaces, routing tables, and managing the ARP table. The ss utility is used to dump socket statistics; it shows information similar to netstat and is able to display more TCP and state information. A few examples:

# ss -tnap
# ss -tnap6
# ss -s
# ss -tn -o state established -p
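
For a sense of where these numbers come from: ss queries the kernel over netlink, but the legacy text view of the same IPv4 TCP socket table lives in /proc/net/tcp. A small sketch that counts its entries:

```shell
# Count IPv4 TCP sockets straight from the kernel's legacy table.
# (ss itself uses netlink, but the underlying data is the same.)
tcp_count=$(( $(wc -l < /proc/net/tcp) - 1 ))   # minus the header line
echo "IPv4 TCP sockets: $tcp_count"
```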

15. lsof - list open files

lsof is a command meaning "list open files", which is used in many Unix-like systems to report a list of all open files and the processes that opened them. System administrators on most Linux distributions and other Unix-like operating systems use it to check what files are open by various processes.

# lsof +p process_id
# lsof | less
# lsof -u username
# lsof /etc/passwd
# lsof -i TCP:ftp
# lsof -i TCP:80

You can find more examples in the lsof article
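
Under the hood, the per-process part of lsof's output comes from /proc: each entry in /proc/&lt;pid&gt;/fd is a symlink to the open file behind that descriptor. A tiny sketch listing the current shell's descriptors:

```shell
# List the current shell's open file descriptors, lsof-style,
# by reading the fd symlinks the kernel exposes under /proc.
pid=$$
for fd in /proc/"$pid"/fd/*; do
    printf 'fd %s -> %s\n' "${fd##*/}" "$(readlink "$fd")"
done
```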

16. iftop - top for your network connections

iftop is yet another top-like application, this one based on networking information. It shows the current network connections sorted by bandwidth usage or by the amount of data uploaded or downloaded, and it provides estimates of how long transfers will take to complete.

iftop

For more info see article on network traffic with iftop

17. iperf - network performance tool

iperf is a network testing tool that can create TCP and UDP data connections and measure the performance of a network that is carrying them. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, loss, and other parameters.

iperf

If you wish to use the tool check out our article on how to install and use iperf

18. Smem - advanced memory reporting

Smem is one of the most advanced tools for Linux command line, it offers information about the actual memory that is used and shared in the system, attempting to provide a more realistic image of the actual memory being used.

$ smem -m
$ smem -m -p | grep firefox
$ smem -u -p
$ smem -w -p

Check out our article on Smem for more examples

GUI or Web based performance tools

19. Icinga - community fork of Nagios

Icinga is free and open source system and network monitoring application. It’s a fork of Nagios retaining most of the existing features of its predecessor and building on them to add many long awaited patches and features requested by the user community.

Icinga

More info about installing and configuring can be found in our Icinga article.

20. Nagios - the most popular monitoring tool.

The most used and popular monitoring solution found on Linux. It has a daemon that collects information about various process and has the ability to collect information from remote hosts. All the information is then provided via a nice and powerful web interface.

nagios

You can find information on how to install Nagios in our article

21. Linux process explorer - procexp for Linux

Linux process explorer is a graphical process explorer for Linux. It shows various process information such as the process tree, TCP/IP connections and performance figures for each process. It's a replica of procexp, found on Windows and developed by Sysinternals, and aims to be more user-friendly than top and ps.

Check our linux process explorer article for more info.

22. Collectl - performance monitoring tool

This is a performance monitoring tool that you can use either in an interactive mode or you can have it write reports to disk and access them with a web server. It reports statistics on CPU, disk, memory, network, nfs, process, slabs and more in easy to read and manage format.

collectl

More info in our Collectl article

23. MRTG - the classic graph tool

This is a network traffic monitor that will provide you graphs using the rrdtool. It is one of the oldest tools that provides graphics and is one of the most used on Unix-like operating systems. Check our article on how to use MRTG for information on the installation and configuration process

mrtg

 

24. Monit - simple and easy to use monitor tool

Monit is an open source small Linux utility designed to monitor processes, system load, filesystems, directories and files. You can have it run automatic maintenance and repair and can execute actions in error situations or send email reports to alert the system administrator. If you wish to use this tool you can check out our how to use Monit article.

monit

25. Munin - monitoring and alerting services for servers

Munin is a networked resource monitoring tool that can help analyze resource trends, spot weak points and see what caused performance issues. The team that develops it wants it to be very easy to use and user-friendly. The application is written in Perl and uses rrdtool to generate graphs, which are served via the web interface. The developers advertise the application's "plug and play" capabilities, with about 500 monitoring plugins currently available.

More info can be found in our article on Munin

The post Amazing ! 25 Linux Performance Monitoring Tools appeared first on LinOxide.

How to Monitor Network Usage with nload in Linux


nload is a free linux utility that can help the linux user or sysadmin monitor network traffic and bandwidth usage in real time by providing two simple graphs: one for incoming traffic and one for outgoing traffic.

I really like to use nload to display information on my screen about the current download speed, the total incoming traffic, and the average download speed. The graphs reported by the nload tool are very easy to interpret and, most importantly, they are very helpful.

According to the manual pages it monitors all network devices by default, but you can easily specify the device you want to monitor and also switch between different network devices using the arrow keys. There are many options available, such as -t to determine the refresh interval of the display in milliseconds (the default value is 500), -m to show multiple devices at the same time (traffic graphs are not shown when this option is used), -u to set the type of unit used for the display of traffic numbers, and many others that we are going to explore and practice in this tutorial.

How to install nload on your linux machine

Ubuntu and Fedora users can easily install nload from the default repositories.

Install nload on Ubuntu by using the following command.

sudo apt-get install nload

Install nload on Fedora by using the following command.

sudo yum install nload

What about CentOS users? Just type the following command on your machine and you will get nload installed.

sudo yum install nload

The following command will help you to install nload on OpenBSD systems.

sudo pkg_add -i nload

A very effective way to install software on a linux machine is to compile it from source, as you can download and install the latest version, which usually means better performance, cooler features and fewer bugs.

How to install nload from source

The first thing you need to do before installing nload from source is to download it, and to do this I like to use the wget utility, which is available by default on many linux machines. This free utility lets linux users download files from the web in a non-interactive way and supports the following protocols:

  • HTTP
  • HTTPS
  • FTP

Change directory to /tmp by using the following command.

cd /tmp

Now type the following command in your terminal to download the latest version of nload on your linux machine.

wget http://www.roland-riegel.de/nload/nload-0.7.4.tar.gz

If you don't like to use the linux wget utility you can easily download it from the official source by just a mouse click.

The download will finish in no time as it is a small piece of software. The next step is to untar the file you downloaded with the help of the tar utility.

The tar archiving utility can be used to store and extract files from a tape or disk archive. There are many options available in this tool, but we need the following to perform our operation:

  1. -x to extract files from an archive
  2. -v to run in verbose mode
  3. -f to specify the files

For example:

tar xvf example.tar
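
As a quick self-contained check of those flags (using throwaway files in a scratch directory), you can pack a file, delete the original, and restore it with x, v and f:

```shell
# Round-trip test of tar's c (create) and xvf (extract) flags,
# done in a scratch directory so nothing on the system is touched.
mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
echo "hello" > note.txt
tar cf demo.tar note.txt     # create the archive
rm note.txt                  # remove the original
tar xvf demo.tar             # extract it back
cat note.txt                 # -> hello
```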

Now that you learned how to use the tar utility I am very sure you know how to untar .tar archives from the commandline.

tar xvf nload-0.7.4.tar.gz

Then use the cd command to change directory to nload*.

cd nload*

It looks like this on my system.

oltjano@baby:/tmp/nload-0.7.4$

Now run the following command to configure the package for your system.

./configure

A lot of output is going to be displayed on your screen. The following screenshot demonstrates what it will look like.

configuring packages for nload

Then compile nload with the following command.

make

compiling nload

And finally install nload on your linux machine with the following command.

sudo make install

installing nload from source

Now that the installation of nload is finished it is time for you to learn how to use it.

How to use nload

I like to explore so type the following command on your terminal.

nload

What do you see?

I get the following.

running nload

As you can see from the above screenshot I get information on:

Incoming Traffic

Current download speed

nload running on linux

Average download speed

nload running on linux

Minimum download speed

nload running on linux

Maximum download speed

nload running on linux

Total incoming traffic in bytes by default

Outgoing Traffic

The same goes for outgoing traffic.

Some useful options of nload

Use the option

-u

to set the type of unit used for the display of traffic numbers.

The following command will help you to use the MBit/s unit.

nload -u m

The following screenshot shows the result of the above command.

nload running on linux

Try the following command and see the results.

nload -u g

nload running on linux

There is also the option -U. According to the manual pages it is the same as the option -u, but only for amounts of data. I tested this option and, to be honest, it is very helpful when you want to check the total amount of traffic, be it incoming or outgoing.

nload -U G

nload running on linux

As you can see from the above screenshot the command nload -U G helps to display the total amount of data (incoming or outgoing) in Gbyte.

Another useful option I like to use with nload is the option -t. This option sets the refresh interval of the display in milliseconds, which is 500 by default.

I like to experiment a little by using the following command.

nload -t 130

So what the above command does is set the display to refresh every 130 milliseconds. It is recommended not to specify refresh intervals shorter than about 100 milliseconds, as nload's calculations become inaccurate.

Another option is -a. It is used when you want to set the length, in seconds, of the time window for the average calculation, which is 300 seconds by default.
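
What nload computes on every refresh tick can be sketched by hand: sample the kernel's per-interface byte counter twice and divide by the interval. (lo is used below only because it always exists; substitute your real NIC.)

```shell
# Hand-rolled version of nload's incoming-rate calculation: two samples
# of the interface's rx_bytes counter, one second apart.
dev=lo   # assumption: loopback, so the path exists on any Linux box
rx1=$(cat /sys/class/net/$dev/statistics/rx_bytes)
sleep 1
rx2=$(cat /sys/class/net/$dev/statistics/rx_bytes)
echo "$dev incoming: $(( rx2 - rx1 )) bytes/s"
```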

What if you want to monitor a specific network device? It is very easy to do that, just specify the device or the list of devices you want to monitor like shown below.

nload wlan0

nload monitoring wlan0 on linux

The following syntax can help to monitor specific multiple devices.

nload [options] device1 device2 devicen

For example use the following command to monitor eth0 and wlan0.

nload wlan0 eth0

And if you run the command nload without any option it will monitor all auto-detected devices, you can display graphs for each one of them by using the right and left arrow keys.
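
The auto-detected devices nload cycles through are the ones the kernel registers under /sys/class/net; you can list that same set yourself with a short loop:

```shell
# Print each network interface the kernel knows about, with its
# operational state -- the same set nload auto-detects.
for dev in /sys/class/net/*; do
    printf '%-10s %s\n' "${dev##*/}" "$(cat "$dev"/operstate)"
done
```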

The post How to Monitor Network Usage with nload in Linux appeared first on LinOxide.

pyDash - A Python App For Monitoring Your Linux Server


The python programming language is very useful to system administrators as it offers rapid development and one can easily write scripts in a very short time to automate daily tasks. There are many python tools for linux system admins out there, one of them is pyDash which is a small web-based monitoring dashboard for linux in python and django.

I really like using pyDash as it gives me information about my linux system such as cpu usage, memory usage, internet traffic, ip addresses, disk usage, processes currently running, users and general info like the name and version of Operating System being used. In the general info tab you can also learn about the CPUs and uptime.

In short words the pyDash app helps the linux user to monitor servers. According to the official author on his github page the app supports the following OSes:

  • Centos
  • Fedora
  • Ubuntu
  • Debian
  • Raspbian
  • Pidora
  • Arch Linux

A very cool feature pyDash has is the ability to serve data remotely in JSON format, which can be easily retrieved as long as the user agent has been authenticated by the web application.

How to install pyDash using django development server

What is django

Django is a free, open source web application development framework built in the Python programming language by some developers during their work at a local newspaper. It focuses on rapid development and pragmatic design, and follows the DRY (Don't Repeat Yourself) philosophy.

I am not going to explain how django works in this tutorial but only teach you how to install it for running pyDash on your local linux machine.

Note: Make sure you have git installed on your system. You can find tutorials online on how to install it for your linux distro.

Now open your terminal and run the following command to clone the pyDash repository from github on your local machine.

git clone of pydash

Now that you have finished cloning the repo we need to install the following tools:

  • pip
  • virtualenv

What is pip

pip is a commandline tool which is used to easily install and manage python packages in your machine.

Ubuntu, Debian

Ubuntu and Debian users can install pip using the following command.

sudo apt-get install python-pip

RHEL, CentOS, Fedora

RHEL, CentOS, Fedora users can install pip on their machine using the following command.

sudo yum -y install python-pip

What is virtualenv

virtualenv is the perfect tool when it comes to solving dependency problems in your python projects. This tool creates virtual environments on your machine, allowing the user to keep the dependencies required by different projects in separate places.

For example project x uses django 1.6.x but your boss is asking you to work on project y where django 1.7.x needs to be used. What is your solution to this problem? Which version of django are you going to keep?

virtualenv is the solution. Now that you have pip installed on your machine you can easily install it by typing the following command on your terminal.

pip install virtualenv

I get the following output when running the above command on my machine because I have already installed virtualenv on my machine.

Requirement already satisfied (use --upgrade to upgrade): virtualenv in /usr/local/lib/python2.7/dist-packages
Cleaning up...

Finally you have finished installing the new tools on your system. Feel free to take a deep breath, because we still have a long way to go to achieve what we want.

Change directory to pydash using the cd command.

cd ../../pydash

My pydash is located on my Desktop folder so I type the following command to cd there.

cd /home/oltjano/Desktop/pydash

The next step consists in creating a virtual environment for our project with the help of the virtualenv.

Use the following command to do this. You can name the virtual environment you are creating anything you want but I personally like to name it pydashvenv.

virtualenv pydashvenv

Then we need to activate our virtual environment using the following command.

source pydashvenv/bin/activate

If after running the above command the output on your console looks like the following it means the virtual environment is activated and it is ready for you to work on it.

(pydashvenv)oltjano@baby:~/Desktop/pydash$

Now install the requirements of your project by using the following command. It looks for a file called requirements.txt in your project; this is the file where the developer defines the packages required to run the project.

Install requirements for the project

pip install -r requirements.txt

If you do cat requirements.txt you will see the following output.

django==1.6.8

So it is very easy to understand that pip installed django 1.6.8 on the virtual environment you created with virtualenv. Yes you can install other packages but we don't need them for this project.

If you want to verify that django 1.6.8 is installed on your machine then fire up a python interpreter and run the following commands.

import django
print django.get_version()

Everything should be ok if the following is displayed on your console.

1.6.8

Configure and run this django project

On your pydash directory do cd pydash and open settings.py file and look for a string called SECRET_KEY like shown in the following screenshot.

Make sure you change the secret key and please keep it secret as it should be.

Run the following django command.

python manage.py syncdb

Make sure to select yes when it asks whether you want to create a superuser. Then run the app with the following command.

python manage.py runserver

Go and visit

http://127.0.0.1:8000/login/

Type the username and password of the superuser you created.

Then the following will appear.

pydash running

The post pyDash - A Python App For Monitoring Your Linux Server appeared first on LinOxide.

How to Monitor Network Traffic in Linux With nethogs


I love monitoring the network traffic on my linux machine, especially when I want to know the speed at which the data is currently being transferred. Is any process overusing network bandwidth on my Ubuntu system? What is a nice tool to solve this problem?

Have you ever used nethogs? If not it is ok because I will explain to you how to use it in this tutorial.

What is nethogs

nethogs is a very helpful tool when it comes to find out which PID is causing the trouble with your network traffic as it groups bandwidth by process instead of breaking the traffic down per protocol or per subnet, like most tools do. It is feature rich, supports both IPv4 and IPv6 and in my opinion is the best utility when you want to identify programs that are consuming all your bandwidth on your linux machine.

nethogs has some cool features

Some important features of nethogs are listed below.

  1. Shows TCP download- and upload-speed per process
  2. Supports both Ethernet and PPP
  3. Supports both IPv4 and IPv6

Install nethogs

Before using nethogs you need to install libncurses5-dev and libpcap0.8-dev. The following command can be used to install libpcap and ncurses on Debian based machines such as Ubuntu.

sudo apt-get install libncurses5-dev libpcap0.8-dev

Then use the apt package manager to install nethogs as shown below.

sudo apt-get install nethogs

Fedora users can type the following commands on their terminal.

sudo yum install ncurses ncurses-devel

sudo yum install libpcap libcap-devel

Then use the following command to install nethogs on a RHEL or CentOS or Fedora Linux.

yum install nethogs

Why do we need to install the libpcap and ncurses libraries on our machine? The reason is that nethogs needs user-level network packet capture information and statistics, and an API library like libpcap for capturing network traffic.

How to use nethogs

Run nethogs with the following command on your terminal.

nethogs

What do you see?

I get the following output when running the command nethogs on my terminal.

You need to be root to run NetHogs!

Now that you have finished installing nethogs on your machine it is time for some practical commands and cool tips.

Run nethogs again by typing the command nethogs on your terminal.

nethogs

Note: If you get the following error while running nethogs on your linux system, it usually means that you are trying to monitor an interface which has no IP address assigned or which is probably not connected.

ioctl failed while establishing local IP for selected device eth0. You may specify the device on the command line.

To solve this problem, run the command ip addr on your terminal to find out which interfaces have an IP address.

sudo ip addr

The following output is displayed on my screen when running the above command.

1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 50:46:5d:2c:82:bf brd ff:ff:ff:ff:ff:ff

3: wlan0: mtu 1500 qdisc mq state UP qlen 1000
link/ether dc:85:de:42:40:d3 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/24 brd 192.168.0.255 scope global wlan0
inet6 fe80::de85:deff:fe42:40d3/64 scope link
valid_lft forever preferred_lft forever

4: vmnet1: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:50:56:c0:00:01 brd ff:ff:ff:ff:ff:ff
inet 172.16.98.1/24 brd 172.16.98.255 scope global vmnet1
inet6 fe80::250:56ff:fec0:1/64 scope link
valid_lft forever preferred_lft forever

5: vmnet8: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:50:56:c0:00:08 brd ff:ff:ff:ff:ff:ff
inet 172.16.183.1/24 brd 172.16.183.255 scope global vmnet8
inet6 fe80::250:56ff:fec0:8/64 scope link
valid_lft forever preferred_lft forever

Now if I want to monitor wlan0 I just run the command nethogs wlan0 on my terminal.

nethogs wlan0

The following screenshot shows the output of the above command.

how to monitor wlan0 with nethogs

As you can see from the above screenshot nethogs gives a very clear report on the program that is consuming my network bandwidth. At the moment I took the screenshot google chrome was playing a song on youtube.

Everyone with a little technical background can easily see from the above screenshot that nethogs gives us the process id of the program using bandwidth, the program itself, the device being monitored, and the data sent and received.

There are many useful options that one can use with nethogs. For example you can use the option -d to set the delay for refresh rate.

For example, if you would like to set 3 seconds as your refresh rate, then type the following command on your terminal.

nethogs -d 3 wlan0

The option -p enables sniffing in promiscuous mode, but according to the manual pages of nethogs it is not recommended.

nethogs -p wlan0

Are you curious to know the version of the nethogs tool you are using on your machine? Then use the option -V.

nethogs -V

I get the following output when trying to get the version of nethogs.

version 0.8.0

And if you like to monitor a specific device then use the following command.

sudo nethogs eth0

You can also monitor the network bandwidth of multiple network interfaces. For example try to run the following command on your terminal.

sudo nethogs eth0 eth1

I like to use nethogs in trace mode as it outputs the connections one by one.

nethogs -t wlan0

nethogs running in tracemode

The post How to Monitor Network Traffic in Linux With nethogs appeared first on LinOxide.

Bringing a Bunch of Best Known Linux Network Tools


It is very useful to use command line tools to monitor the network on your system, and there are tons of them available to the linux user, such as nethogs, ntopng, nload, iftop, iptraf, bmon, slurm, tcptrack, cbm, netwatch, collectl, trafshow, cacti, etherape, ipband, jnettop, netspeed and speedometer.

Since there are many linux gurus and developers out there it is obvious that other network monitoring tools exist but I am not going to cover all of them in this tutorial.

Each one of the above tools has its own specifics, but in the end they all monitor network traffic, and there is more than one way to do the job. For example, nethogs can be used to show bandwidth per process in case you want to know which application is consuming your entire network resources, iftop can be used to show bandwidth per socket connection, and tools like nload help you get information about the overall bandwidth.

1) nethogs

nethogs is a free tool that is very handy when it comes to find out which PID is causing the trouble with your network traffic as it groups bandwidth by process instead of breaking the traffic down per protocol or per subnet, like most tools do. It is feature rich, supports both IPv4 and IPv6 and in my opinion is the best utility when you want to identify programs that are consuming all your bandwidth on your linux machine.

A linux user can use nethogs to show TCP download and upload speed per process, to monitor a specific device with the command nethogs eth0 (where eth0 is the name of the device you want information from), and to see the speed at which data is currently being transferred.

To me nethogs is very easy to use, maybe because I like it so much that I use it all the time to monitor network bandwidth on my Ubuntu 12.04 LTS machine.

For example, to sniff in promiscuous mode the option -p is used, as shown in the following command.

nethogs -p wlan0

If you would like to learn more about nethogs and explore it in depth, then don't hesitate to read our full tutorial on this network bandwidth monitoring tool.

2) nload

nload is a console application which can be used to monitor network traffic and bandwidth usage in real time and it also visualizes the traffic by providing two easy to understand graphs. This cool network monitoring tool can also be used to switch between devices while monitoring and this can be done by pressing the left and right arrow keys.

network monitoring tools in linux

As you can see from the above screenshot graphs provided by the nload tool are very easy to understand, provide useful information and also display additional info like total amount of transferred data and min/max network usage.

And what is even cooler is the fact that you can run the tool nload with the help of the following command which seems to be very short and easy to remember.

nload

I am very sure that our detailed tutorial on how to use nload will help new linux users and even experienced ones that are looking for more information on it.

3) slurm

slurm is another network load monitoring tool for Linux which shows the results in a nice ASCII graph. It also supports several interactive keys, such as c to switch to classic mode, s to switch to split graph mode, r to redraw the screen, L to enable the TX/RX LEDs, m to switch between classic, split and large views, and q to quit slurm.

linux network load monitoring tools

There are also some other keys available in the network load monitoring tool slurm and you can easily study them in the manual page by using the following command.

man slurm

slurm is available in the official repositories of Ubuntu and Debian, so users of these distros can easily install it using the apt-get command shown below.

sudo apt-get install slurm

We have covered slurm usage on a tutorial so please visit it and do not forget to share the knowledge with other linux friends.

4) iftop

iftop is a very useful tool when you want to display bandwidth usage on an interface by host. According to the manual page, iftop listens to network traffic on a named interface, or on the first interface it can find which looks like an external interface if none is specified, and displays a table of current bandwidth usage by pairs of hosts.

Ubuntu and Debian users can easily install iftop on their machines by using the following command on a terminal.

sudo apt-get install iftop

Use the following command to install iftop on your machine using yum

yum -y install iftop

5) collectl

collectl can be used to collect data that describes the current system status and it supports the following modes:

  • Record Mode
  • Playback Mode

Record Mode allows you to take data from a live system and either display it on a terminal or write it to one or more files or a socket.

Playback Mode

According to the manual pages in this mode data is read from one or more data files that were generated in Record Mode.

Ubuntu and Debian users can use their default package manager to install collectl on their machines. The following command will do the job for them.

sudo apt-get install collectl

CentOS, Fedora and RHEL users can use the following command, as these distros have collectl in their official repos too.

yum install collectl

6) Netstat

Netstat is a command line tool for monitoring incoming and outgoing network packet statistics as well as interface statistics. It displays network connections for the Transmission Control Protocol (both incoming and outgoing), routing tables, and a number of network interface (network interface controller or software-defined network interface) and network protocol statistics.
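Under the hood, netstat on Linux reads this information from the /proc filesystem. As a rough illustration of where the TCP connection data comes from (a sketch of the idea, not netstat's actual implementation), the following counts sockets in /proc/net/tcp by their hexadecimal state code:

```shell
#!/bin/sh
# Count TCP sockets by state, reading the same kernel table netstat uses.
# Field 4 of each entry in /proc/net/tcp is the socket state as a hex code
# (01 = ESTABLISHED, 0A = LISTEN, 06 = TIME_WAIT, ...); the first line is a header.
awk 'NR > 1 { states[$4]++ } END { for (s in states) print s, states[s] }' /proc/net/tcp
```

Each output line is a state code followed by the number of sockets currently in that state.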

Ubuntu and Debian users can use the default package manager to install netstat on their box. Netstat is included in the net-tools package, which can be installed by running the below command in a shell or terminal:

sudo apt-get install net-tools

CentOS, Fedora and RHEL users can use the default package manager to install netstat. Netstat is included in the net-tools package, which can be installed by running the below command in a shell or terminal:

yum install net-tools

Simply run the following to monitor the network packet statistics with Netstat:

netstat

Netstat

For more information or manual about netstat, we can simply type man netstat in a shell or terminal:

man netstat

man netstat

7) Netload

The netload command just displays a small report on the current traffic load and the total number of bytes transferred since program start. There are no other features. It is part of the netdiag package.

We can install Netload using yum in Fedora as it is in the default repository. But if you're running CentOS or RHEL, we'll need to install the rpmforge repository.

# yum install netdiag

Netload is available in the default repository as a part of netdiag so, we can easily install netdiag using apt manager using the command below.

$ sudo apt-get install netdiag

To run netload, we must choose a working network interface name, such as eth0, eth1, wlan0 or mon0, and run the following command accordingly in a shell or terminal.

$ netload wlan2

Note: Please replace wlan2 with the network interface name you want to use. To find your network interface names, run ip link show in a terminal or shell.
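If the ip command is not available, the same interface names can also be read straight from sysfs; a minimal sketch:

```shell
# Every network interface known to the kernel appears as a directory
# under /sys/class/net, so listing interface names needs no extra tools.
for iface in /sys/class/net/*; do
    basename "$iface"
done
```

The loopback interface lo will always be in the list, along with your wired and wireless devices.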

8) Nagios

Nagios is a leading open source monitoring system that enables network/system administrators to identify and resolve server-related problems before they affect major business processes. With Nagios, administrators can monitor remote Linux and Windows hosts, switches, routers and printers from a single window. It shows critical warnings and indicates if something went wrong in your network or servers, which helps you begin remediation before outages affect users.

Nagios has a web interface with a graphical monitor of activities. One can log in to the web interface by browsing to http://localhost/nagios/ or http://localhost/nagios3/ (replace localhost with your IP address if on a remote machine). After entering the username and password, we'll see information like that shown below.

Nagios3 on Chromium

9) EtherApe

EtherApe is a graphical network monitor for Unix modeled after etherman. It features link layer, IP and TCP modes, supports Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP and WLAN interfaces, plus several encapsulation formats. Hosts and links change in size with traffic, and protocols are color coded. It can filter the traffic to be shown, and can read packets from a file as well as live from the network.

It is easy to install EtherApe on the CentOS, Fedora and RHEL distributions of Linux because it is available in their official repositories. We can use yum to install it with the command shown below:

 yum install etherape

We can install EtherApe on Ubuntu, Debian and their derivatives using apt manager with the below command.

sudo apt-get install etherape

After EtherApe is installed on the system, we'll need to run etherape with root permissions:

sudo etherape

Then the etherape GUI will start. In the menu we can select the Mode (IP, Link Layer, TCP) and the Interface under Capture. After everything is set, we'll need to click the Start button. Then we'll see something like this.

EtherApe

 

10) tcpflow

tcpflow is a command line utility that captures data transmitted as part of TCP connections (flows), and stores the data in a way that is convenient for protocol analysis or debugging. It reconstructs the actual data streams and stores each flow in a separate file for later analysis. It understands TCP sequence numbers and will correctly reconstruct data streams regardless of retransmissions or out-of-order delivery .

Installing tcpflow on an Ubuntu or Debian system is easy via apt, as it is available by default in the official repository.

$ sudo apt-get install tcpflow

We can install tcpflow in Fedora, CentOS, RHEL and their derivatives from repository using yum manager as shown below.

# yum install tcpflow

If it is not available in the repository or can't be installed via yum, we need to install it manually from http://pkgs.repoforge.org/tcpflow/ as shown below.

If you are running 64 bit PC:

# yum install --nogpgcheck http://pkgs.repoforge.org/tcpflow/tcpflow-0.21-1.2.el6.rf.x86_64.rpm

If you are running 32 bit PC:

# yum install --nogpgcheck http://pkgs.repoforge.org/tcpflow/tcpflow-0.21-1.2.el6.rf.i686.rpm

We can use tcpflow to capture all or some TCP traffic and write it to easy-to-read files. The command below does this, but we'll need to run it in an empty directory as it creates files named in the format x.x.x.x.y-a.a.a.a.z. When done, just press Ctrl-C to stop it.

 $ sudo tcpflow -i eth0 port 8000

Note: Please replace eth0 with the interface you are trying to capture on.

11) IPTraf

IPTraf is a console-based network statistics utility for Linux. It gathers a variety of figures such as TCP connection packet and byte counts, interface statistics and activity indicators, TCP/UDP traffic breakdowns, and LAN station packet and byte counts.

IPTraf is available in the default repository so, we can easily install IPTraf using apt manager using the command below.

$ sudo apt-get install iptraf

IPTraf is available in the default repository so, we can easily install IPTraf using yum manager using the command below.

# yum install iptraf

We need to run IPTraf with administrative permissions and a valid network interface name. Here we have wlan2, so we'll be using wlan2 as the interface name.

$ sudo iptraf

IPTraf

To start the general interface statistics, enter:

# iptraf -g

To see the detailed statistics facility on an interface called wlan2:

# iptraf -d wlan2

To see the TCP and UDP service monitor on an interface called wlan2:

# iptraf -s wlan2

To display the packet size counts on an interface called wlan2:

# iptraf -z wlan2

Note: Please replace wlan2 with your interface name. You can check your interfaces by running the command ip link show.

12) Speedometer

Speedometer is a small and simple tool that just draws out good looking graphs of incoming and outgoing traffic through a given interface.

Speedometer is available in the default repository so, we can easily install Speedometer using yum manager using the command below.

# yum install speedometer

Speedometer is available in the default repository so, we can easily install Speedometer using apt manager using the command below.

$ sudo apt-get install speedometer

Speedometer can simply be run by executing the following command in a shell or a terminal.

 $ speedometer -r wlan2 -t wlan2

Speedometer

Note: Please replace wlan2 with the network interface name you would like to use.

13) Netwatch

Netwatch is part of the netdiag collection of tools. It displays the connections between the local host and other remote hosts, and the speed at which data is transferring on each connection.

We can install Netwatch using yum in Fedora as it is in the default repository. But if you're running CentOS or RHEL, we'll need to install the rpmforge repository.

# yum install netwatch

Netwatch is available in the default repository as a part of netdiag so, we can easily install netdiag using apt manager using the command below.

$ sudo apt-get install netdiag

To run netwatch, we'll need to execute the following command in a terminal or shell.

$ sudo netwatch -e wlan2 -nt

Netwatch

Note: Please replace wlan2 with the network interface name you want to use. To find your network interface names, run ip link show in a terminal or shell.

14) Trafshow

Like netwatch and pktstat, trafshow reports the currently active connections, their protocol and the data transfer speed on each connection. It can filter connections using pcap-type filters.

We can install Trafshow using yum in Fedora as it is in the default repository. But if you're running CentOS or RHEL, we'll need to install the rpmforge repository.

# yum install trafshow

Trafshow is available in the default repository so, we can easily install it using apt manager using the command below.

$ sudo apt-get install trafshow

To monitor using trafshow, we'll need to run the following command in a shell or terminal.

$ sudo trafshow -i wlan2

Trafshow

To monitor TCP connections specifically, add tcp as shown below.

 $ sudo trafshow -i wlan2 tcp

Trafshow tcp

Note: Please replace wlan2 with the network interface name you want to use. To find your network interface names, run ip link show in a terminal or shell.

15) Vnstat

Vnstat is a bit different from most of the other tools. It runs a background service/daemon that keeps recording the amount of data transferred at all times. It can then be used to generate reports of the network usage history.
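The counters vnstat records come from the kernel itself. As a rough sketch of the underlying idea (not vnstat's actual code), sampling the cumulative byte counters in /proc/net/dev twice and diffing them yields a transfer rate; the interface name below is an assumption, so adjust it to your system:

```shell
#!/bin/sh
# Sample the kernel's cumulative RX byte counter for one interface twice,
# one second apart; the difference is the receive rate in bytes/s.
IFACE=lo    # assumed interface; replace with eth0, wlan0, etc.
read_rx() {
    # The counter can be fused to the name ("eth0:12345"), so split on ':' too.
    grep -E "^ *$IFACE:" /proc/net/dev | tr ':' ' ' | awk '{print $2}'
}
rx1=$(read_rx)
sleep 1
rx2=$(read_rx)
echo "RX on $IFACE: $((rx2 - rx1)) bytes/s"
```

Tools like vnstat simply do this sampling continuously and persist the totals to disk.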

We'll need to enable the EPEL repository, then run yum to install vnstat.

# yum install vnstat

Vnstat is available in the default repository. So, we can run apt manager to install it using the following command.

$ sudo apt-get install vnstat

Running vnstat without any options simply shows the total amount of data transferred since the day the daemon started running.

$ vnstat

vnstat

To monitor bandwidth usage in real time, use the -l option (live mode). It then shows the total bandwidth used by incoming and outgoing data, but in a very precise manner, without any details about host connections or processes.

 

 $ vnstat -l

Vnstat live mode

 

When done, press Ctrl-C to stop it, which will produce the following type of output.

Vnstat Live Result

16) tcptrack

tcptrack displays the status of TCP connections that it sees on a given network interface. tcptrack monitors their state and displays information such as state, source/destination addresses and bandwidth usage in a sorted, updated list very much like the top command.

As tcptrack is in the repository, we can simply install it on Debian and Ubuntu using apt. To do so, we'll need to execute the following command in a shell or terminal:

$ sudo apt-get install tcptrack

We can install it using yum in Fedora as it is in the default repository. But if you're running CentOS or RHEL, we'll need to install the rpmforge repository. To do so, we'll need to run the following commands.

# wget http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

# rpm -Uvh rpmforge-release*rpm

# yum install tcptrack

Note: Here we have downloaded the current latest version of rpmforge-release, i.e. 0.5.3-1. You can always get the latest version from the rpmforge repository and substitute it in the above command.

tcptrack needs to be run with root permissions or as superuser. We'll need to execute tcptrack with the name of the network interface whose TCP connections we want to monitor. Here we have wlan2, so we will use that:

sudo tcptrack -i wlan2

tcptrack

If you want to monitor specific ports, then run:

# tcptrack -i wlan2 port 80

tcptrack port 80

Please replace 80 with the port number you want to monitor. Note: Please replace wlan2 with the network interface name you want to use. To find your network interface names, run ip link show in a terminal or shell.

17) CBM

The CBM or Color Bandwidth Meter displays the current traffic on all network devices. The program is so simple that it should be self-explanatory. Source code and newer versions of CBM are available at http://www.isotton.com/utils/cbm/.

As CBM is in the repository, we can simply install it on Debian and Ubuntu using apt. To do so, we'll need to execute the following command in a shell or terminal:

$ sudo apt-get install cbm

We simply need to run cbm in a shell or terminal as shown below:

$ cbm

Color Bandwidth Meter

18) bmon

bmon, or Bandwidth Monitor, is a tool intended for debugging and real-time bandwidth monitoring. It is capable of retrieving statistics from various input modules, and provides various output methods, including a curses-based interface, lightweight HTML output and formattable ASCII output.

bmon is available in the repository, so we can install it in Debian, Ubuntu from their repository using apt manager. To do so, we'll need to run the following command in a shell or terminal.

$ sudo apt-get install bmon

We can run bmon and monitor our bandwidth status using the command below.

$ bmon

bmon

19) tcpdump

TCPDump is a tool for network monitoring and data acquisition. It can save lots of time and can be used for debugging network or server related problems. It prints out a description of the contents of packets on a network interface that match the boolean expression.

tcpdump is available in the default repository of Debian and Ubuntu, so we can simply use apt with sudo privileges to install it. To do so, we'll need to run the following command in a shell or terminal.

$ sudo apt-get install tcpdump

tcpdump is also available in the repository of Fedora, CentOS, RHEL so, we can install it via yum manager as:

# yum install tcpdump

tcpdump needs to be run with root permissions or as superuser. We'll need to execute tcpdump with the name of the network interface we want to monitor. Here we have wlan2, so we will use it:

$ sudo tcpdump -i wlan2

tcpdump

If you want to monitor a specific port only, you can run the command as follows. Here is an example for port 80 (webserver).

$ sudo tcpdump -i wlan2 'port 80'

tcpdump port

20) ntopng

ntopng is the next generation version of the original ntop. It is a network probe that shows network usage in a way similar to what top does for processes. ntopng is based on libpcap and is written portably so that it can run on virtually every Unix platform, Mac OS X and Win32.

To install ntopng on a Debian or Ubuntu system, we'll first need to install the dependency packages required to compile ntopng. You can install them all by running the below command in a shell or terminal.

$ sudo apt-get install libpcap-dev libglib2.0-dev libgeoip-dev redis-server wget libxml2-dev build-essential checkinstall

Now, we'll need to manually compile ntopng for our system as:

$ wget -O ntopng-1.1_6932.tgz http://sourceforge.net/projects/ntop/files/ntopng/ntopng-1.1_6932.tgz/download
$ tar zxfv ntopng-1.1_6932.tgz
$ cd ntopng-1.1_6932
$ ./configure
$ make
$ sudo make install

Now you should have ntopng installed on your Debian or Ubuntu system.

We have already covered a tutorial on ntopng usage; it is available through both a command line and a web interface, so go ahead and explore it.

Conclusion

In this first part we covered some network load monitoring tools for Linux that are very helpful to a sysadmin and even a novice user. Each of the tools covered in this article has its own specifics and different options, but in the end they all help you to monitor your network traffic.

The post Bringing a Bunch of Best Known Linux Network Tools appeared first on LinOxide.

Install OpenNMS - One Place to Monitor All Network Devices On Ubuntu


Hi everyone, this tutorial is all about how to set up OpenNMS on Ubuntu Server. Here we will be running Ubuntu Server 14.04 LTS "Trusty". This tutorial also works for older versions of Ubuntu (13.10, 13.04, 12.10) and for its derivatives such as Linux Mint.

OpenNMS is an award winning network management application platform with a long track record of providing solutions for enterprises and carriers. It is a free, open source, enterprise-grade network monitoring system that can monitor tens of thousands of devices from a single instance. OpenNMS will discover and monitor services or nodes automatically in your network, or you can assign a particular service for OpenNMS to monitor.

So, here are the steps below on how to install OpenNMS in our Ubuntu Server. You can connect to your Ubuntu Server via SSH remote access if you don't have control over the Server physically.

1. Updating the Repository Index

To add OpenNMS to our machine's package index, we'll need to add the repository URL lines to a file called "opennms.list" within the /etc/apt/sources.list.d directory. To do this, we'll enter the following commands in a shell or terminal.

$ cat << EOF | sudo tee /etc/apt/sources.list.d/opennms.list
deb http://debian.opennms.org stable main
deb-src http://debian.opennms.org stable main
EOF

Then, we need to add OpenNMS key.

$ wget -O - http://debian.opennms.org/OPENNMS-GPG-KEY | sudo apt-key add -

Now, let's update our package index with OpenNMS.

$ sudo apt-get update

updating repository index

2. Configuring the Database

Installing PostgreSQL

Before installing OpenNMS, we will want to install PostgreSQL and do a few things to make sure PostgreSQL is working fine.

$ sudo apt-get update
$ sudo apt-get install postgresql

installing postgresql

Checking the version of PostgreSQL

Now that PostgreSQL has been installed, we'll want to check which version is installed. Ubuntu 14.04 provides PostgreSQL 9.3 in its default repository; any recent 9.x release will work.

$ psql --version

Then store the version in an environment variable so it can be reused in configuration paths later:

$ export PGVERSION=9.3
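The version string printed by psql can be reduced to the major.minor form used in PostgreSQL's configuration paths. Here is a sketch of the extraction, using a stand-in string since the exact output depends on the installed version:

```shell
#!/bin/sh
# Assumption: psql --version prints a line like "psql (PostgreSQL) 9.3.24".
# In real use, replace the stand-in with: version_line=$(psql --version)
version_line="psql (PostgreSQL) 9.3.24"
PGVERSION=$(printf '%s\n' "$version_line" | awk '{print $NF}' | cut -d. -f1-2)
echo "$PGVERSION"    # prints the major.minor version, e.g. 9.3
export PGVERSION     # usable in paths like /etc/postgresql/$PGVERSION/main/
```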

Allowing User Access to the Database

To allow connections as the postgres user to authenticate without a password, we must change options in the pg_hba.conf file. On Debian based systems, this will be located at /etc/postgresql/$PGVERSION/main/pg_hba.conf, where “$PGVERSION” is the environment variable we set earlier containing the version of your PostgreSQL database.

Edit your /etc/postgresql/$PGVERSION/main/pg_hba.conf file with root permissions. It should have entries similar to the following at the bottom.

$ sudo nano /etc/postgresql/$PGVERSION/main/pg_hba.conf

Find the following lines:

local   all         all                               peer
host    all         all         127.0.0.1/32          ident
host    all         all         ::1/128               ident

And, replace peer and ident to trust which will finally look like the following:

local   all         all                               trust
host    all         all         127.0.0.1/32          trust
host    all         all         ::1/128               trust

postgresql configuration

Once you have finished making changes, restart the database (as root):

$ sudo service postgresql restart

3. Installing JDK 7

To install JDK, we'll execute the following commands in a shell or terminal.

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java7-installer

add java repo

Important note: As OpenNMS doesn't support Java 8 yet, it is strongly recommended to use Java 7. While future versions of OpenNMS will support Java 8, the current stable 1.12 series of releases is not supported on Java 8.

4. Verification of a local mail transfer agent

OpenNMS sends out e-mail by default through a local mail transfer agent (MTA) listening on port 25. We'll need to confirm that an MTA (e.g. exim or postfix) is installed. One way to check for this is to telnet to port 25 on the server and ensure an SMTP banner is displayed.

If an MTA is not installed, we can install one with the following command:

$ sudo apt-get install default-mta

installing mta

For Debian, the default MTA is exim. Accept the default debconf configuration responses when configuring exim.

5. Installing OpenNMS

Now that we have installed all the prerequisites, we'll finally install OpenNMS on our Ubuntu Server 14.04 LTS. To do so, we'll need to execute the following command.

$ sudo apt-get install opennms

install opennms

The installer may also tell you that the IPLIKE installation has failed.

You can install IPLIKE manually using this command:

$ sudo /usr/sbin/install_iplike.sh

6. Configuring Java

If we want to do manual upgrades instead of automatic updates and want to disable updates for OpenNMS, we'll want to edit the file /etc/apt/sources.list.d/opennms.list and comment out all lines in it, or simply delete that file with the command below.

$ sudo rm -rf /etc/apt/sources.list.d/opennms.list

removing opennms sources

Then, update the local repository index using the command:

$ sudo apt-get update

Then we want to point OpenNMS to the Java version we want to use. If we installed the recommended Sun/Oracle JDK, all we need to do is point it to /usr/java/latest:

$ sudo /usr/share/opennms/bin/runjava -s

Creating Database for OpenNMS

$ sudo /usr/share/opennms/bin/install -dis

creating opennms database

Here,

-d – to update the database.
-i – to insert any default data that belongs in the database.
-s – to create or update the stored procedures OpenNMS uses for certain kinds of data access.

Finally, start OpenNMS service:

$ sudo service opennms start

7. OpenNMS Management Interface

Finally, OpenNMS has been successfully installed and is running. Now open up your browser and point it to http://ip-address:8980/opennms. The following screen should appear. Enter the username and password; the defaults are admin/admin.

Conclusion

Finally, we installed and configured OpenNMS on our Ubuntu Server 14.04 LTS "Trusty". OpenNMS is really a great tool for network monitoring. It is free and open source software and an enterprise-grade network monitoring system that can monitor tens of thousands of devices from a single instance. Enjoy OpenNMS. If you have any questions, comments or feedback, please do leave them below. Your comments will help us improve our content. Thank you!

The post Install OpenNMS - One Place to Monitor All Network Devices On Ubuntu appeared first on LinOxide.


How to Install Puppet Master and Client in Ubuntu 14.04


Hi there, greetings. Today we'll be learning how to install and configure both the Puppet master and the Puppet client on the latest stable release of Ubuntu, i.e. Ubuntu 14.04 LTS "Trusty".

Puppet is a configuration management system that allows you to define the state of your IT infrastructure, then automatically enforces the correct state. Whether you're managing just a few servers or thousands of physical and virtual machines, Puppet automates tasks that sysadmins often do manually, freeing up time and mental space so sysadmins can work on the projects that deliver greater business value. It ensures consistency, reliability and stability. It also facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code.

1. Configuring hosts

We have 2 machines:

Master puppet with IP 192.168.58.153 and hostname : puppetmaster
Puppet client with IP 192.168.58.150 and hostname : puppetclient

Now add these 2 lines to /etc/hosts on both machines

nano /etc/hosts

192.168.58.150 puppetclient.example.com puppetclient

192.168.58.153 puppetmaster.example.com puppetmaster

adding puppet to hosts

In addition, both client and server must have their time synchronized. Run the following on both the client and server machines:

sudo ntpdate pool.ntp.org && sudo apt-get update && sudo apt-get -y install ntp && sudo service ntp restart

2. Installing Puppet packages (Client and Master server)

$ sudo apt-get update

Client

$ sudo apt-get install puppet

install puppet client

Master server

$ sudo apt-get install puppet puppetmaster

install puppetmaster

Now define the manifest on the server.

$ sudo nano /etc/puppet/manifests/site.pp

package { 'apache2':
  ensure => installed
}

service { 'apache2':
  ensure  => true,
  enable  => true,
  require => Package['apache2']
}

package { 'vim':
  ensure => installed
}

# Create "/tmp/testfile" if it doesn't exist.
class test_class {
  file { "/tmp/testfile":
    ensure => present,
    mode   => 600,
    owner  => root,
    group  => root
  }
}

# Tell Puppet on which client to run the class
node puppetclient {
  include test_class
}

puppetmaster configuration

From this configuration the puppet master will deploy the installation of apache and will create /tmp/testfile with the above ownership.

Now start the Puppet master:

sudo /etc/init.d/puppetmaster start

Define the Server in the Puppet Client :

Edit /etc/puppet/puppet.conf and add:

sudo nano /etc/puppet/puppet.conf

[main]

server = puppetmaster.example.com

# Make sure all log messages are sent to the right directory
# This directory must be writable by the puppet user
logdir = /var/log/puppet
vardir = /var/lib/puppet
rundir = /var/run

Now, run the command below on the client to start the deployment.

# puppet agent --server puppetmaster.example.com --waitforcert 60 --test

Get back to the server and check which client certificates are waiting to be signed:

# puppet cert list

Then sign the client's certificate:

# puppet cert sign puppetclient.example.com

Now back on the client, the agent run will complete.

Check from the client that the test file has been created with the same mode 600 and ownership defined on the master.

$ sudo ls -ltr /tmp/testfile

Check if Apache is running with:

$ ps -ef | grep apache2

Reload the puppet client:

# puppet agent --onetime --verbose

Check now if Apache is installed and running:

ps -ef | grep apache2

Conclusion

We have seen how to connect a client machine to the Puppet master and deploy configuration to it centrally. In the same way we can connect any number of machines and administer them centrally through the Puppet master server. Congratulations! Now we have a fully functional Puppet instance on our Ubuntu 14.04.

The post How to Install Puppet Master and Client in Ubuntu 14.04 appeared first on LinOxide.

Install NagiosQL - GUI interface to Configure Nagios Core


Nagios is an open source monitoring tool for network devices. It uses the SNMP protocol to monitor network devices. Nagios Core is configured from the CLI, which is not easy for new users. NagiosQL is a plugin which provides a GUI for configuring Nagios Core. In this article our focus is the installation of NagiosQL; we assume that Nagios Core and net-snmp are already installed on the monitoring server.

The prerequisites for NagiosQL are:

  • Web server (Apache2 and www-data user/group)
  • MySQL (NagiosQL stores all its configuration in the database)
  • Nagios Core (installed from source)
  • PHP, latest version (with all necessary modules)

We have a VM (3 GB RAM and 80 GB disk space) with Ubuntu 14.04 LTS installed on it. The Nagios Core and net-snmp packages are already installed on it. In this article we will cover the installation of NagiosQL, its integration with Nagios Core, and the status of the different objects of the VM in Nagios Core.

Install NagiosQL

First of all, download the package from the nagiosql.org website.

NagiosQL Download Page

Use the following command to download it in a terminal to the /home/test/Download path:

$ sudo wget http://kaz.dl.sourceforge.net/project/nagiosql/nagiosql/NagiosQL%203.2.0/nagiosql_320.tar.gz

nagiosQL downloading process

Now copy this .tar.gz file into the /var/www directory (we assume that the Apache web server is already installed on the machine) and extract it with the following commands.

$ cp nagiosql_320.tar.gz /var/www

$ sudo tar -xvzf nagiosql_320.tar.gz

Extraction of Package

After extraction of the compressed package, a new directory is created under the /var/www folder, as shown below. Change the ownership of this newly extracted folder using the following command:

$ sudo chown -R www-data:www-data nagiosql32

Inside naqiosQL

Type the following address in the web browser, and a web page similar to the one shown below will appear.

http://localhost/nagiosql32/

NagiosQL interface

To install NagiosQL, click on the "Start Installation" button at the bottom of the NagiosQL main page, which is shown in the following figure.

Start Installation

The next page checks the requirements of the NagiosQL plugin. Usually errors are shown on this page regarding the permissions of the Nagios Core files, the time zone setting in the php.ini file, etc.

The NagiosQL configuration tool requires certain permissions to change the Nagios Core configuration files from the web interface. The following commands will give the NagiosQL plugin the proper permissions for a successful installation.

The Apache user name is www-data.

The Nagios main configuration files are under /usr/local/nagios.

#chgrp www-data /usr/local/nagios/etc/

#chgrp www-data /usr/local/nagios/etc/nagios.cfg

#chgrp www-data /usr/local/nagios/etc/cgi.cfg

#chmod 775 /usr/local/nagios/etc/

#chmod 664 /usr/local/nagios/etc/nagios.cfg

#chmod 664 /usr/local/nagios/etc/cgi.cfg

The Nagios binary must be executable by the Apache user

#chown nagios:www-data /usr/local/bin/nagios

#chmod 750 /usr/local/bin/nagios

Time zone setting in PHP.ini

The time zone error is shown in the above figure. It can be fixed by changing the following line in /etc/php5/apache2/php.ini: remove the comment ";" from the date.timezone option, set it to your time zone (e.g. date.timezone = America/Chicago) and save the file.

After fixing the issues on this page, a green arrow button will appear at the bottom right corner. Click Next to move on to the next stage.

Requirements complete

The next stage is the creation of the database for the NagiosQL plugin, which it uses to store the Nagios Core configuration. On this page, set the login details for the database, enter the credentials for the NagiosQL admin user, set the configuration path for Nagios Core, and create the directory for the NagiosQL configuration.

database setting

The following window will appear after the database has been created successfully.

successful installation

Click on the Finish button and log in to the NagiosQL site by pointing your browser to http://server-ip-address/nagiosql32/ (server-ip-address is the address of the server). Log in with the credentials you entered during the installation process to get access to the NagiosQL web interface, which is shown below.

NagiosQL Main Page

The NagiosQL administration interface shown below appears after a successful login and is used for further configuration of Nagios Core.

NagiosQL interface

Here is an introduction to some of the NagiosQL menus for your reference.

Supervision

This menu provides the configuration of Hosts and Services for Nagios Core, as shown in the following figure.

Supervision Menu

Alerting

In this menu, the user can configure the system administrator's contact information and the time periods for alerts.

Alerts

Commands

This menu provides the format and parameters of the different commands used by the Nagios Core monitoring software.

commands

Tool

Data import, backup files, Nagios Core configuration, CGI configuration, and syntax checking are available in this menu.

tools

Administration

This menu provides the settings for the NagiosQL plugin. The user can change passwords, add users for server maintenance, view logs, adjust settings, etc.

Administration

In the config target sub-menu, we have to set the NagiosQL files as the Nagios Core configuration, as shown below. We also set the paths of the Nagios command file, binary file, process file, and main configuration file (nagios.cfg).

config target sub menu

After changing the paths in the NagiosQL front end, verify the configuration files from the command line and restart the Nagios daemon.

main configuration verification

restart nagios
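The two screenshots above boil down to commands like the following; the paths assume the /usr/local layout used in the permission commands earlier, so adjust them to your own install:

```shell
# validate the configuration written by NagiosQL; -v reports any errors
/usr/local/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

# restart the daemon only once the check passes
sudo /etc/init.d/nagios restart
```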

Conclusion

In this article, we explained the installation of NagiosQL, a graphical configuration tool for the well-known monitoring tool Nagios Core. Nagios can use the SNMP protocol (all versions are supported) to monitor network devices such as servers, routers, and switches. The NagiosQL web interface provides an easy way to handle the complex configuration of Nagios, and it uses a database on the back end for permanent storage of the configuration.

The post Install NagiosQL - GUI interface to Configure Nagios Core appeared first on LinOxide.

Install Vector an Opensource Performance Monitoring Tool from Netflix


vector

Today we will present Vector, an open source performance monitoring framework which exposes hand picked system and application metrics to your web browser. Having the right metrics available on-demand and at a high resolution is key to understand how a system behaves and correctly troubleshoot performance issues. It's released under Apache License, Version 2.0.

At the time of writing this article the first version of Vector was just released, as such, you can expect to find bugs and issues.

Installing PCP

Before installing Vector you will first have to install Performance Co-Pilot (PCP), an open source toolkit designed for monitoring and managing system-level performance. It offers support for a wide variety of operating systems, including Linux, Mac OS X, FreeBSD, IRIX, Solaris and Windows, and is available in all popular distributions.

You can install it in Debian/Ubuntu with:

$ sudo apt-get install pcp

And in Fedora/CentOS with:

$ sudo yum install pcp

You can also install it on OS X; for more information, check the PCP website.

Installing Vector

First we will need to install npm in order to install Bower, which Vector uses for its installation. You can do this on Ubuntu with the package manager, using the following command:

$ sudo apt-get install npm

Next, install Bower, an open source package manager for web projects. You will also need Node.js for it to work; you can install both using the following commands:

$ sudo apt-get install nodejs-legacy
$ sudo npm install -g bower

Now we can download Vector. You can do so in any user directory you wish, using git as shown here:

$ git clone https://github.com/Netflix/vector.git
$ cd vector

We will now use the Bower package manager to install its dependencies:

$ bower install

Next, you will need a web server to serve the files in the app directory. The Vector team suggests gulp for this; you can install it using the npm package manager you installed earlier. To install gulp and run Vector, use the following commands from the vector folder:

$ npm install --global gulp
$ npm install
$ gulp

You should get the following output:

vector

You can now access your Vector installation at http://localhost:8080 in your favorite web browser.

vector-ui

At the moment, Vector comes with the following widgets and dashboards, which can be easily extended. Here is a short list of the metrics available by default.

CPU

  • Load Average
  • Runnable
  • CPU Utilization
  • Per-CPU Utilization
  • Context Switches

Memory

  • Memory Utilization
  • Page Faults

Disk

  • Disk IOPS
  • Disk Throughput
  • Disk Utilization
  • Disk Latency

Network

  • Network Drops
  • TCP Retransmits
  • TCP Connections
  • Network Throughput
  • Network Packets

Conclusion

Vector works on top of PCP, which is really lightweight. It enables system admins to analyse system- and application-level stats in near real time. Good luck and enjoy your metrics.

The post Install Vector an Opensource Performance Monitoring Tool from Netflix appeared first on LinOxide.

Command Line Tool to Monitor Linux Containers Performance


ctop is a new command line based tool to monitor processes at the container level. Containers provide an operating-system-level virtualization environment by making use of the cgroups resource management functionality. This tool collects data related to memory, CPU, block I/O, and metadata like owner, uptime, etc. from cgroups and presents it in a human-readable format so that one can quickly assess the overall health of the system. Based on the data collected, it tries to guess the underlying container technology. ctop is useful for detecting who is using large amounts of memory in low-memory situations.
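The cgroup files ctop reads are simple whitespace-separated key/value text. As a rough illustration (not ctop's actual code, and with made-up sample values), parsing a memory.stat-style blob looks like this:

```python
# Parse a cgroup v1 memory.stat-style blob into a dict of counters.
# This mimics the kind of raw per-container data ctop aggregates.
sample = """\
cache 1048576
rss 4194304
mapped_file 0
pgfault 2048
"""

def parse_stat(text):
    # each line is "<key> <value>"; values are byte or event counts
    return {key: int(val) for key, val in
            (line.split() for line in text.strip().splitlines())}

stats = parse_stat(sample)
print(stats["rss"] // (1024 * 1024))  # resident set size in MiB; prints 4
```

On a real system these blobs live under the cgroup mountpoint (e.g. /sys/fs/cgroup/memory/&lt;group&gt;/memory.stat on cgroup v1).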

Capabilities

Some of the capabilities of ctop are:

  • Collect metrics for cpu, memory and blkio

  • Gather information regarding owner, container technology, task count

  • Sort the information using any column

  • Display the information using tree view

  • Fold/unfold cgroup tree

  • Select and follow a cgroup/container

  • Select a timeframe for refreshing the displayed data

  • Pause the refreshing of data

  • Detect containers that are based on systemd, Docker and LXC

  • Advanced features for Docker and LXC based containers

    • open / attach a shell for further diagnosis

    • stop / kill container types

Installation

ctop is written in Python and has no external dependencies other than Python version 2.6 or greater (with built-in curses support). Installation using Python's pip is the recommended method: install pip if you have not already, then install ctop with it.

Note: The examples shown in this article are from an Ubuntu (14.10) system

$ sudo apt-get install python-pip

Installing ctop using pip:

poornima@poornima-Lenovo:~$ sudo pip install ctop

[sudo] password for poornima:

Downloading/unpacking ctop

Downloading ctop-0.4.0.tar.gz

Running setup.py (path:/tmp/pip_build_root/ctop/setup.py) egg_info for package ctop

Installing collected packages: ctop

Running setup.py install for ctop

changing mode of build/scripts-2.7/ctop from 644 to 755

changing mode of /usr/local/bin/ctop to 755

Successfully installed ctop

Cleaning up...

If using pip is not an option, you can also download it directly from GitHub using wget:

poornima@poornima-Lenovo:~$ wget https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py -O ctop

--2015-04-29 19:32:53-- https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py

Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.78.133

Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.78.133|:443... connected.

HTTP request sent, awaiting response... 200 OK Length: 27314 (27K) [text/plain]

Saving to: ctop

100%[======================================>] 27,314 --.-K/s in 0s

2015-04-29 19:32:59 (61.0 MB/s) - ctop saved [27314/27314]

poornima@poornima-Lenovo:~$ chmod +x ctop

You might get an error message while launching ctop if the cgroup-bin package is not installed. It can be resolved by installing the required package.

poornima@poornima-Lenovo:~$ ./ctop

[ERROR] Failed to locate cgroup mountpoints.

poornima@poornima-Lenovo:~$ sudo apt-get install cgroup-bin

Here is a sample output screen of ctop:

ctop output

ctop screen

Usage options

ctop [--tree] [--refresh=<seconds>] [--columns=<columns>] [--sort-col=<sort-col>] [--follow=<name>] [--fold=<name>, ...]
ctop (-h | --help)

Once you are inside the ctop screen, use the up (↑) and down (↓) arrow keys to navigate between containers. Clicking on any container will select that particular container. Pressing q or Ctrl+C quits ctop.

Let us now take a look at how to use each of the options listed above.

-h / --help  - Show the help screen

poornima@poornima-Lenovo:~$ ctop -h
Usage: ctop [options]

Options:
-h, --help show this help message and exit
--tree show tree view by default
--refresh=REFRESH Refresh display every <seconds>
--follow=FOLLOW Follow cgroup path
--columns=COLUMNS List of optional columns to display. Always includes
'name'
--sort-col=SORT_COL Select column to sort by initially. Can be changed
dynamically.

--tree - Display tree view of the containers

By default, list view is displayed

Once you are inside the ctop window, you can use the F5 button to toggle tree / list view.

--fold=<name> - Fold the <name> cgroup path in the tree view.

   This option needs to be used in combination with --tree.

Eg:   ctop --tree --fold=/user.slice

ctop --fold output

Output of 'ctop --fold'

Inside the ctop window, use the + / - keys to toggle child cgroup folding.

Note: At the time of writing this article, pip repository did not have the latest version of ctop which supports '--fold' option via command line.

--follow=<name> - Follow/highlight the cgroup path.

Eg: ctop --follow=/user.slice/user-1000.slice

As you can see in the screen below, the cgroup with the given path "/user.slice/user-1000.slice" gets highlighted and makes it easier for the user to follow it even when the display position gets changed.

'ctop --follow' output

Output of 'ctop --follow'

You can also use the 'f' button to allow the highlighted line to follow the selected container. By default, follow is off.

--refresh=<seconds> - Refresh the display at the given rate. Default: 1 second.

This is useful in changing the refresh rate of the display as per user requirement.  Use the 'p' button to pause the refresh and select the text.

--columns=<columns> - Limit the display to the selected <columns>. 'name' should be the first entry, followed by the other columns. By default, the columns include owner, processes, memory, cpu-sys, cpu-user, blkio, cpu-time.

Eg: ctop --columns=name,owner,type,memory

'ctop --column' output

Output of 'ctop --column'

--sort-col=<sort-col> - Column by which the displayed data should be sorted. By default it is sorted by cpu-user.

Eg: ctop --sort-col=blkio

For supported container types like Docker and LXC, the following options are also available:

press 'a' - attach to console output

press 'e' - open a shell in the container context

press 's' - stop the container (SIGTERM)

press 'k' - kill the container (SIGKILL)

ctop is currently in active development by Jean-Tiare Le Bigot. Hopefully we will see more features in this tool, like those in our native top command :-).

The post Command Line Tool to Monitor Linux Containers Performance appeared first on LinOxide.

How to Install PandoraFMS and Setup Whatsapp Alerts


Pandora FMS is a monitoring software chosen by several companies all around the world to manage their IT infrastructures. Besides ensuring high performance and maximum flexibility, it has a large amount of features making Pandora FMS one of the most complete solutions in the market.

We will use this monitoring software to monitor our server, and add an alert feature that notifies us via WhatsApp if anything bad happens to the server.

We will install Pandora FMS first. In this tutorial we install it on Ubuntu 14.04, but before that you must install MySQL for the database.

Install MySQL

Before we install the Pandora FMS package we need to install MySQL, the database Pandora FMS will use to store its data. It's easy; just run this command as root:

$apt-get install mysql-server

Fill in the MySQL password (we will use it later) when asked during the installation process.

Install Pandora FMS

First we need to add the Artica repository in order to install the Pandora FMS package; you can edit the sources file to add it.

$vim /etc/apt/sources.list

Add deb http://www.artica.es/debian/squeeze/ on the last line, save, and update the sources.

$apt-get update

After the update we can install the Pandora FMS packages.

$apt-get install pandorafms-console pandorafms-server pandorafms-agent-unix

Next, edit /etc/apache2/sites-available/000-default.conf and add

Alias /pandora_console /var/www/pandora_console/

below the DocumentRoot line.
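In context, the relevant part of the vhost ends up looking roughly like this (a sketch; your 000-default.conf may differ):

```apache
<VirtualHost *:80>
    DocumentRoot /var/www
    # expose the Pandora console under /pandora_console
    Alias /pandora_console /var/www/pandora_console/
</VirtualHost>
```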

Open http://ip/pandora_console/install.php in your browser.

Go through the installation process and fill in your database information when asked.

Now you must enter the same database information you provided during installation in /etc/pandora/pandora_server.conf.

Remove or rename /var/www/pandora_console/install.php; you can now log in to http://ip/pandora_console with username admin and password pandora.

Pandora FMS can be started, stopped, and restarted via its init script.

$/etc/init.d/pandora_server start

Pandora FMS Server 5.1SP2 Build 150223 Copyright (c) 2004-2014 ArticaST
This program is OpenSource, licensed under the terms of GPL License version 2.
You can download latest versions and documentation at http://www.pandorafms.org

[*] Backgrounding Pandora FMS Server process.

Pandora Server is now running with PID 13164

Setup Whatsapp Alert In Pandora FMS

I searched and found some tutorials, including on the Pandora FMS blog itself, but none of them worked, so I used my own quick-and-dirty way to get it working.

First we need to clone WhatsAPI from GitHub; thanks to venomous0x for publishing it.

$git clone https://github.com/venomous0x/WhatsAPI

Next, go to the test directory in the WhatsAPI folder; you will see the whatsapp.php file there, which is what we need later to send alerts.

Before that we must register our number with WhatsApp. You can use yowsup to register; we can clone or download it from GitHub.

$wget https://github.com/tgalal/yowsup/archive/master.zip

$unzip master.zip

#install yowsup dependencies

$apt-get install python python-dateutil python-argparse

#go to src folder

$cd yowsup-master/src

#create config from default config

$cp config.example yowsup-cli.config

Fill in cc, phone, id, and password with your own values: cc is your country code, phone is your phone number including the country code, id is your device ID, and password is the password you want to set for your WhatsApp account.

#give yowsup-cli file permission to execute.

$chmod +x yowsup-cli

#request whatsapp code registration

$./yowsup-cli --request-code sms --config yowsup-cli.config

status: sent

retry_after: 3605

length: 6

method: sms

#register with registration code

$./yowsup-cli --register <registration-code> --config yowsup-cli.config

status: ok

kind: free

pw: <your password with base64 encode>

price: 0,99

price_expiration: 1662803446

currency: USD

cost : 0.99

expiration: 1691344106

login: <your phone number>

type: new

#Fill yowsup-cli.config with what you get in register output.

$cat yowsup-cli.config

cc=<code area>

phone=<number phone>

id=

password=<your password>

Test sending a message before using WhatsAPI from venomous0x:

$./yowsup-cli --send <destination phone number> "Test" --wait --config yowsup-cli.config

Connecting to c.whatsapp.net

Authed <your phone number>

Sent message

Got sent receipt

Get messages

$./yowsup-cli --listen --autoack --keepalive --config yowsup-cli.config

Connecting to c.whatsapp.net

Authed <your phone number>

62111222333@s.whatsapp.net [05-03-2015 11.48]: I have received test from you

Interactive Mode: Send and Get messages

$./yowsup-cli --interactive 62111222333 --wait --autoack --keepalive --config yowsup-cli.config

Connecting to c.whatsapp.net

Authed <your phone number>

Starting Interactive chat with 62111222333

Enter Message or command: (/available, /lastseen, /unavailable)

Hi, whatsapp bro

<your number phone> [05-03-2015 11:54: HI, whatsapp bro

Enter Message or command: (/available, /lastseen, /unavailable)
62111222333@s.whatsapp.net [05-03-2015 11:55]:What are you doing?

Enter Message or command: (/available, /lastseen, /unavailable)

Chat with you

<your phone number> [05-03-2015 11:56]: Chat with you

Enter Message or command: (/available, /lastseen, /unavailable)

/unavailable

If everything is well done, we continue to next step with WhatsAPI.

Fill in the information we got from yowsup into WhatsAPI.

Edit the whatsapp.php file in the whatsapi/test/ folder.

Fill in $nickname on line 19 with whatever nick you want to display, or save the phone number in your contacts under a name like "alert bot server".

Fill in the $sender variable with the phone number of your alert bot server, leave the $imei variable empty, and fill in the $password variable with the base64 password from yowsup. Save, and test sending a WhatsApp message with it:
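Put together, the edited section of whatsapp.php looks something like this (the values shown are illustrative placeholders, not real credentials):

```php
// whatsapi/test/whatsapp.php -- values to fill in (placeholders)
$nickname = 'AlertBot';        // display name for the sender
$sender   = '62111222333';     // your bot's phone number, with country code
$imei     = '';                // leave empty
$password = 'BASE64PASSWORD='; // the base64 "pw" value from yowsup registration
```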

$php5 /path/whatsapi/test/whatsapp.php -s <destination number> "test message"

If that also works, we can go to the final step: configuration in the Pandora FMS panel.

Log in to your Pandora FMS web panel at http://ip/pandora_console/ with the default login admin:pandora.

Go to Manage alerts -> Commands to create a command for the alert, and click the Create button at the bottom right.

Fill in the Name field with whatever name you want, and fill in the Command field with the command for sending a message to a WhatsApp number via WhatsAPI, as before:

php5 <path>/whatsapp.php -s <destination number> "_field3_"

Then click the Create button. You will see your command in the alert commands list. Next, we create an action for the alert.
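The "_field3_" token in the command is a Pandora FMS macro that gets replaced with the message text from the alert template when the alert fires. Conceptually (this is not Pandora's actual code, and the message is made up), the substitution works like this:

```python
# Sketch of the alert-macro substitution Pandora FMS performs on the
# configured command. Command path and message are illustrative placeholders.
command = 'php5 /path/whatsapi/test/whatsapp.php -s 62111222333 "_field3_"'
macros = {"_field3_": "CRITICAL: host web01 is DOWN"}

# replace each macro token with its runtime value
for macro, value in macros.items():
    command = command.replace(macro, value)

print(command)
```

The resulting string is what gets executed by the server, which is why Field 3 of the template (set below) ends up as the body of the WhatsApp message.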

Go to Manage alerts -> Actions and click the Create button at the bottom right. Fill in the Name field with whatever you want, and leave the Group option set to All so the action can be used for all servers. Next, choose the command you created in the Command option; if the Command preview box shows your command, you have done it right. Click the Create button and you are done creating the alert action. You will see your action name in the list of alert actions. Next, we create a template message for the alert.

Go to Manage alerts -> Templates. There are three types of alerts; the first and the last are the ones we will use. Click Critical condition -> next to step 2 -> choose the action you created before in Default action -> next to the last step -> enable alert recovery and fill in Field 3 with whatever message you want sent to you if something bad happens. Follow the same instructions for the Warning condition too.

At this point we are done creating the alert system integrated with WhatsApp.

To test, just create an agent, then create a module and choose as the default action the action you created before, to send an alert to the WhatsApp number you already set.

This is an example of my bot's alert notification sent to my WhatsApp:

Server alert via whatsapp

Example for the Server alert via whatsapp

 

Happy Monitoring !

The post How to Install PandoraFMS and Setup Whatsapp Alerts appeared first on LinOxide.
