WebMail Lite – Manage and Download Mails From Gmail, Yahoo, Outlook and Others

WebMail Lite is a web application that can be used to manage and download mail from your own local mail server or from a public mail service, such as Gmail, Yahoo!, Outlook or others. The WebMail Lite application acts as a client interface for IMAP and SMTP services, allowing any configured email account to sync and handle inbox messages locally.

Requirements

  1. LAMP Stack Installed in CentOS/RHEL
  2. LAMP Stack Installed in Ubuntu
  3. LAMP Stack Installed in Debian

In this topic we’ll learn how to install and configure the latest version of the WebMail Lite PHP application on Debian, Ubuntu and CentOS servers.

Step 1: Initial Settings for WebMail Lite

1. Before starting to install the WebMail Lite application on your server, first make sure that the following PHP modules and extensions are installed and enabled in your LAMP stack, by issuing the following commands.

------------ On CentOS and RHEL ------------
# yum install epel-release
# yum install php-xml php-mcrypt php-mbstring php-curl

------------ On Debian and Ubuntu ------------
# apt install php7.0-xml php7.0-mcrypt php7.0-mbstring php7.0-curl

2. Next, go ahead and install the unzip utility on your system, which we’ll use to extract the contents of the WebMail Lite zip archive.

# yum install zip unzip [On CentOS/RHEL]
# apt install zip unzip [On Debian/Ubuntu]


3. On the next step, modify the default PHP configuration file to change the following PHP variables. Also, make sure to update the PHP timezone setting to reflect your server’s physical location.

# vi /etc/php.ini [On CentOS/RHEL]
# nano /etc/php/7.0/apache2/php.ini [On Debian/Ubuntu]

Search for, edit and update the following variables in the PHP configuration file.

file_uploads = On
allow_url_fopen = On
upload_max_filesize = 64M
date.timezone = Europe/Bucharest

Replace the date.timezone value accordingly. To get a list of all time zones available in PHP, consult the official PHP Timezone docs.
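If you prefer to apply these settings non-interactively, the same edits can be sketched with sed. This is an illustrative snippet, not part of the official setup; the path and timezone are the examples used above, so adjust both for your system:

```shell
# Apply the PHP settings from above in one shot; back up php.ini first.
PHP_INI=/etc/php.ini            # Debian/Ubuntu: /etc/php/7.0/apache2/php.ini
cp "$PHP_INI" "$PHP_INI.bak"
sed -i \
  -e 's/^;\{0,1\}file_uploads =.*/file_uploads = On/' \
  -e 's/^;\{0,1\}allow_url_fopen =.*/allow_url_fopen = On/' \
  -e 's/^;\{0,1\}upload_max_filesize =.*/upload_max_filesize = 64M/' \
  -e 's|^;\{0,1\}date.timezone =.*|date.timezone = Europe/Bucharest|' \
  "$PHP_INI"
# Verify the result.
grep -E '^(file_uploads|allow_url_fopen|upload_max_filesize|date.timezone)' "$PHP_INI"
```

The `;\{0,1\}` prefix also matches directives that are still commented out in the stock php.ini.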

4. After you’ve finished editing the PHP configuration file according to the settings explained above, restart Apache HTTP daemon to reflect changes by issuing the following commands.

# systemctl restart httpd [On CentOS/RHEL]
# systemctl restart apache2 [On Debian/Ubuntu]

Step 2: Create WebMail Lite Database

5. The WebMail Lite webmail client uses an RDBMS, such as MySQL, as a backend to store user configurations, contacts and other required settings.

On your installed LAMP stack, log in to the MariaDB/MySQL database and execute the commands below to create a new database that will be used by the WebMail application. Also, set up a user and a password to manage the WebMail Lite database.

# mysql -u root -p
MariaDB [(none)]> create database mail;
MariaDB [(none)]> grant all privileges on mail.* to 'webmail'@'localhost' identified by 'password1';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit
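The same database setup can also be scripted with a shell heredoc instead of typing the statements interactively. A sketch; 'password1' is the example password from above, so substitute a strong one of your own:

```shell
# Create the WebMail Lite database and its user in one shot.
mysql -u root -p <<'SQL'
CREATE DATABASE mail;
GRANT ALL PRIVILEGES ON mail.* TO 'webmail'@'localhost' IDENTIFIED BY 'password1';
FLUSH PRIVILEGES;
SQL
```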
Create WebMail Lite Database

Step 3: Download WebMail Lite

6. To install the WebMail Lite application, first visit the WebMail Lite download page, or grab the latest zip archive by issuing the following command.

# wget https://afterlogic.org/download/webmail_php.zip 

7. Next, extract the WebMail Lite zip archive in your current working directory and copy all the extracted files from the webmail directory to your web server document root by issuing the below commands. Also, execute the ls command to list all files copied to the /var/www/html directory.

# unzip webmail_php.zip
# rm -rf /var/www/html/index.html
# cp -rf webmail/* /var/www/html/
# ls -l /var/www/html/
List WebMail Lite Files

8. Also, make sure you grant the Apache runtime user write permissions on your web server document root by issuing the below command. Again, run the ls command to list the permissions in the /var/www/html/ directory.

# chown -R apache:apache /var/www/html/ [On CentOS/RHEL]
# chown -R www-data:www-data /var/www/html/ [On Debian/Ubuntu]
# ls -al /var/www/html/
List Permissions of WebMail Lite

Step 4: Install WebMail Lite

9. To install WebMail Lite, open a browser and navigate to your server’s IP address or domain name via the HTTP protocol. Append the /install string to your URL, as shown in the below example.

http://yourdomain.tld/install

10. At the initial installation screen, a series of server compatibility tests and pre-installation checks will be performed by the WebMail Lite installer script, in order to detect whether all required PHP extensions and settings are installed and properly configured for WebMail Lite.

It will also check if the web server runtime user can write in webroot data folder and write the configuration file. If all requirements are in order, hit on Next button to continue.

WebMail Pre Installation Checks

11. On the next screen read and accept the license agreement by hitting on I Agree button.

Accept WebMail License Agreement

12. Next, add WebMail Lite MySQL database host address and database credentials and hit on Test database button to test database connection. Check Create database Tables and hit on Next button to continue.

WebMail Database Settings

13. Next, set a password for the mailadm user and hit the Next button to continue. The mailadm user is the most privileged account, used for administering the WebMail Lite application.

Set WebMail Admin Password

14. On the next screen, you can check the connection to a mail server via the IMAP and SMTP protocols. In case you’ve already configured a mail server on your premises, enter the IP address of the mail server in the server host field and test the SMTP connection.

If the mail server runs locally, use the 127.0.0.1 IP address to test the mail server connection. When you finish, hit the Next button to continue installing the application.

Check WebMail Email Connection

After the installation of WebMail Lite finishes, hit the Exit button to complete the process.

WebMail Installation Completed

15. Afterwards, navigate to the following address in order to access WebMail Lite Admin Panel and setup your mail server settings.

https://yourdomain.tld/adminpanel 

To log in to WebMail Lite admin panel, use the mailadm user and the password configured during the installation process.

WebMail Admin Login

16. To configure mail services for your domain, navigate to Domains -> Default settings and add your mail server IP address in the incoming mail field and in the outgoing mail field.

Also, check use incoming mail’s login/password of the user in order to authenticate to the SMTP mail server. Replace the IP addresses and port numbers according to your own mail server settings. Hit the Save button to apply the new settings.

In case you want to use WebMail Lite application to manage a Gmail account, use the settings as illustrated in the below screenshot.

WebMail Email Settings

17. To log in to the WebMail Lite application, navigate to your domain name via the HTTP protocol and enter your email account credentials. In the below screenshot, for demonstration purposes, we’ll log in to the WebMail Lite application with a Gmail account.

http://yourdomain.tld 
WebMail User Login

18. After logging in to WebMail Lite you should be able to read all your account mail messages or compose and send new messages, as illustrated in the following screenshot.

WebMail User Inbox

Congratulations! You have successfully installed and configured the WebMail Lite application on your premises. To secure visitors’ connections to the WebMail Lite application, enable the Apache HTTP server SSL configuration with a free certificate obtained from the Let’s Encrypt CA.

How to Install WordPress with LSCache, OpenLiteSpeed and CyberPanel

OpenLiteSpeed is a high-performance, event-driven, open source web server developed and maintained by LiteSpeed Technologies. In this article, we will see how we can use CyberPanel to get up and running with LSCache and WordPress on OpenLiteSpeed in a few clicks.

What is LSCache?

LSCache is a full-page cache built directly into the OpenLiteSpeed web server. It is similar to Varnish but more efficient, because the reverse proxy layer is removed from the picture when LSCache is used.

LSCache WordPress Plugin!

LiteSpeed has also developed a WordPress plugin that communicates with OpenLiteSpeed web server to cache the dynamic content which greatly reduces the load time, increases performance and puts less load on your server.

LiteSpeed’s plugin provides powerful cache-management tools that, due to LSCache’s tight integration into the server, are impossible for other plugins to replicate. These include tag-based smart purging of the cache, and the ability to cache multiple versions of generated content based on criteria such as mobile vs. desktop, geography, and currency.


LSCache has the ability to cache personalized copies of a page, which means that caching can be extended to include logged-in users. Pages that are publicly uncacheable may be cached privately.

In addition to LSCache’s advanced cache-management capabilities, the WordPress plugin also provides additional optimization functionality such as CSS/JS minification and combination, HTTP/2 Push, lazy load for images and iframes, and database optimization.

What is CyberPanel?

CyberPanel is a control panel on top of OpenLiteSpeed; you can use it to create websites and install WordPress with one click.

It also features:

  • FTP
  • DNS
  • Email
  • Multiple PHPs

In this article, we will see how we can efficiently make use of all of these technologies to get up and running in no time.

Step 1: Install CyberPanel – ControlPanel

1. The first step is to install CyberPanel. You can use the following commands to install CyberPanel on your CentOS 7 VPS or dedicated server.

# wget http://cyberpanel.net/install.tar.gz
# tar zxf install.tar.gz
# cd install
# chmod +x install.py
# python install.py [IP Address]

After successful CyberPanel installation, you will get login credentials as shown below.

###################################################################
CyberPanel Successfully Installed
Visit: https://192.168.0.104:8090
Username: admin
Password: 1234567
###################################################################

2. Now log in to CyberPanel using the above credentials.

CyberPanel Login

Cyber Panel Dashboard

Step 2: Install WordPress in CyberPanel

3. To set up WordPress with LSCache, first we need to create a website by going to the Main > Websites > Create Website section and filling out all the details as shown.

Create Website in CyberPanel

4. Now go to the Main > Websites > List Websites section and click on the Launch icon to launch the website panel, so that WordPress can be installed.

List Websites

Once the website panel is launched you will have the following options on your screen:

Website Information

5. On this window, open File Manager and delete everything from the public_html folder. Now scroll down to the bottom and you will see a tab which says WordPress with LS Cache.

Install WordPress with LSCache

6. In the path box do not enter anything if you want WordPress to be installed in the website document root. If you enter any path it will be relative to the website home directory.

For example, if you enter wordpress, your WordPress installation directory will be tecmint.com/wordpress.

7. Once you click on “Install WordPress“, CyberPanel will download WordPress and LSCache, create the database, and set up a WordPress site. Once CyberPanel is finished installing WordPress, you will need to visit your website domain to configure your website.

WordPress Installation Completed

In this example we’ve used tecmint.com, so we will visit this domain to configure our site. These are very basic settings and you can follow the onscreen instructions to complete your configurations.

Step 3: Activate LiteSpeed Cache Plugin

8. Once WordPress is installed, you can log in to the dashboard at https://tecmint.com/wp-admin. It will ask for the username/password combination that you set up during the WordPress configuration.

Activate LSCache on WordPress

The LSCache plugin is already installed, so you just need to go into Installed Plugins in your WordPress dashboard and activate it.

9. Now verify LSCache by going to example.com and inspecting the response headers, which will look something like this:

Check LSCache Headers

You can see that this page is now served from cache and the request didn’t hit the backend at all.
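The same check can be done from the command line with curl (a sketch; replace example.com with your own domain). LSCache marks cached responses with an X-LiteSpeed-Cache header:

```shell
# Request the page twice: the first request warms the cache, and the
# second response should carry an "X-LiteSpeed-Cache: hit" header.
curl -sI http://example.com/ >/dev/null
curl -sI http://example.com/ | grep -i 'x-litespeed-cache'
```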

Step 4: Advanced LiteSpeed Cache Options

  • Purge Cache – If for some reason you want to purge the cache, you can do so via the LSCache plugin. On this page you have numerous ways to purge the cache.
LSCache Purge

Minify, Combine, and HTTP/2 Push

  • Minification – When code is minified, all unnecessary whitespace characters, newline characters, and comments are removed. This shrinks the size of the source code.
  • Combination – When a website includes several JavaScript (or CSS) files, those files may be combined into one. This reduces the number of requests made by the browser and, if there was duplicate code, it is removed.
  • HTTP/2 Push – This functionality allows the server to anticipate the browser’s needs and act upon them. One example: when serving index.html, HTTP/2 can reasonably assume that the browser also wants the included CSS and JS files, and will push them, too, without being asked.

All of the above measures give OpenLiteSpeed the ability to serve content faster. These settings can be found in the LiteSpeed Cache settings page under the Optimize tab, and they are all disabled by default. Press the ON button next to each setting that you’d like to enable.

It is possible to exclude some CSS, JS, and HTML from being minified or combined. Enter the URLs to these resources in the appropriate boxes, one per line, to exclude them.

Step 5: Change Default PHP and Install Extensions

10. If, for some reason, you need to change the PHP version for your WordPress website you can do so via CyberPanel:

Change PHP Version

11. Some additional WordPress plugins may require you to install additional PHP extensions, or you may want to add Redis to WordPress. You can install missing extensions via CyberPanel from the Server > PHP > Install Extensions tab.

First select the PHP version from the drop down for which you want to install the extension. In the search box, enter the extension name, and finally click Install to install the missing extension.

Install PHP Extensions

For more information read CyberPanel and OpenLiteSpeed Documentation.

Getting started with .NET for Linux

When you know a software developer’s preferred operating system, you can often guess what programming language(s) they use. If they use Windows, the language list includes C#, JavaScript, and TypeScript. A few legacy devs may be using Visual Basic, and the bleeding-edge coders are dabbling in F#. Even though you can use Windows to develop in just about any language, most stick with the usuals.

If they use Linux, you get a list of open source projects: Go, Python, Ruby, Rails, Grails, Node.js, Haskell, Elixir, etc. It seems that as each new language—Kotlin, anyone?—is introduced, Linux picks up a new set of developers.

So leave it to Microsoft (Microsoft?!?) to throw a wrench into this theory by making the .NET framework, rebranded as .NET Core, open source and available to run on any platform. Windows, Linux, MacOS, and even a television OS: Samsung’s Tizen. Add in Microsoft’s other .NET flavors, including Xamarin, and you can add the iOS and Android operating systems to the list. (Seriously? I can write a Visual Basic app to run on my TV? What strangeness is this?)

Given this situation, it’s about time Linux developers get comfortable with .NET Core and start experimenting, perhaps even building production applications. Pretty soon you’ll meet that person: “I use Linux … I write C# apps.” Brace yourself: .NET is coming.

How to install .NET Core on Linux

The list of Linux distributions on which you can run .NET Core includes Red Hat Enterprise Linux (RHEL), Ubuntu, Debian, Fedora, CentOS, Oracle, and SUSE.

Each distribution has its own installation instructions. For example, consider Fedora 26:

Step 1: Add the dotnet product feed.


        sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
        sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod\nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/dotnetdev.repo'

Step 2: Install the .NET Core SDK.


        sudo dnf update
        sudo dnf install libunwind libicu compat-openssl10
        sudo dnf install dotnet-sdk-2.0.0

Creating the Hello World console app

Now that you have .NET Core installed, you can create the ubiquitous “Hello World” console application before learning more about .NET Core. After all, you’re a developer: You want to create and run some code now. Fair enough; this is easy. Create a directory, move into it, create the code, and run it:


mkdir helloworld && cd helloworld
dotnet new console
dotnet run

You’ll see the following output:


$ dotnet run
Hello World!

What just happened?

Let’s take what just happened and break it down. We know what the mkdir and cd did, but after that?

dotnet new console

As you no doubt have guessed, this created the “Hello World!” console app. The key things to note are: The project name matches the directory name (i.e., “helloworld”); the code was built using a template (console application); and the project’s dependencies were automatically retrieved by the dotnet restore command, which pulls from nuget.org.

If you view the directory, you’ll see these files were created:


Program.cs
helloworld.csproj

Program.cs is the C# console app code. Go ahead and take a look inside (you already did … I know … because you’re a developer), and you’ll see what’s going on. It’s all very simple.

The helloworld.csproj file is the MSBuild-compatible project file. In this case there’s not much to it. When you create a web service or website, the project file will take on a new level of significance.

dotnet run

This command did two things: It built the code, and it ran the newly built code. Whenever you invoke dotnet run, it will check to see if the *.csproj file has been altered and will run the dotnet restore command. It will also check to see if any source code has been altered and will, behind the scenes, run the dotnet build command which—you guessed it—builds the executable. Finally, it will run the executable.

Sort of.

Where is my executable?

Oh, it’s right there. Just run which dotnet and you’ll see (on RHEL): 

/opt/rh/rh-dotnet20/root/usr/bin/dotnet

That’s your executable.

Sort of.

When you create a dotnet application, you’re creating an assembly … a library … yes, you’re creating a DLL. If you want to see what is created by the dotnet build command, take a peek at bin/Debug/netcoreapp2.0/. You’ll see helloworld.dll, some JSON configuration files, and a helloworld.pdb (debug database) file. You can look at the JSON files to get some idea as to what they do (you already did … I know … because you’re a developer).

When you run dotnet run, the process that runs is dotnet. That process, in turn, invokes your DLL file and it becomes your application.

It’s portable

Here’s where .NET Core really starts to depart from the Windows-only .NET Framework: The DLL you just created will run on any system that has .NET Core installed, whether it be Linux, Windows, or MacOS. It’s portable. In fact, it is literally called a “portable application.”

Forever alone

What if you want to distribute an application and don’t want to ask the user to install .NET Core on their machine? (Asking that is sort of rude, right?) Again, .NET Core has the answer: the standalone application.

Creating a standalone application means you can distribute the application to any system and it will run, without the need to have .NET Core installed. This means a faster and easier installation. It also means you can have multiple applications running different versions of .NET Core on the same system. It also seems like it would be useful for, say, running a microservice inside a Linux container. Hmmm…

What’s the catch?

Okay, there is a catch. For now. When you create a standalone application using the dotnet publish command, your DLL is placed into the target directory along with all of the .NET bits necessary to run your DLL. That is, you may see 50 files in the directory. This is going to change soon. An already-running-in-the-lab initiative, .NET Native, will soon be introduced with a future release of .NET Core. This will build one executable with all the bits included. It’s just like when you are compiling in the Go language, where you specify the target platform and you get one executable; .NET will do that as well.

You do need to build once for each target, which only makes sense. You simply include a runtime identifier and build the code, like this example, which builds the release version for RHEL 7.x on a 64-bit processor:

dotnet publish -c Release -r rhel.7-x64
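If you target several platforms, the publish step can simply be repeated per runtime identifier. A sketch, where the RID list is illustrative; pick the RIDs that match your actual targets:

```shell
# Publish a standalone release build for each target runtime identifier (RID).
for rid in rhel.7-x64 ubuntu.16.04-x64 osx.10.12-x64; do
    dotnet publish -c Release -r "$rid"
done
```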

Web services, websites, and more

So much more is included with the .NET Core templates, including support for F# and Visual Basic. To get a starting list of available templates that are built into .NET Core, use the command dotnet new --help.

Hint: .NET Core templates can be created by third parties. To get an idea of some of these third-party templates, check out these templates, then let your mind start to wander…

Like most command-line utilities, contextual help is always at hand by using the --help command switch. Now that you’ve been introduced to .NET Core on Linux, the help function and a good web search engine are all you need to get rolling.

Other resources

Ready to learn more about .NET Core on Linux? Check out these resources:

How the OpenType font system works

Digital typography is something that we use every day, but few of us understand how digital fonts work. This article gives a basic, quick, dirty, oversimplified (but hopefully useful) tour of OpenType: what it is and how you can use its powers with free, libre, and open source software (FLOSS). All the fonts mentioned here are FLOSS, too.

What is OpenType?

On the most basic level, a digital font is a “container” for different glyphs plus extra information about how to use them. Each glyph is represented by a series of points and rules to connect those points. I’ll not delve into the different ways to define those “connections” or how we arrived there (the history of software development can be messy), but basically there are two kinds of rules: parabolic segments (quadratic Bézier curves) or cubic functions (cubic Bézier curves).

The TTF file format, generally known as TrueType Font, can only use quadratic Bézier curves, whereas the OTF file format, known as OpenType Font, supports both.

Here is where we need to be careful about what we are talking about: The term “OpenType” refers not only to the file format, but also to the advanced properties of a typeface as a whole (i.e., the “extra information” mentioned earlier).

In fact, in addition to the OpenType file format, there are also substitution tables that, for example, tell the software using that font to substitute two characters with the corresponding typographical ligature; that the shape of a character needs to change according to the characters that surround it (its “contextual alternate”); or that when you write in Greek, a σ at the end of a word must be substituted with a ς. This is what the term “smart fonts” means.

And, to make things more confusing, including OpenType tables on TrueType fonts is possible, such as what happens on Junicode.

A quick example

Let’s see a quick example of smart fonts in use. Here is an example of Cormorant with (top) and without (bottom) OpenType features enabled:

Each OpenType property has its own “tag” that is used to activate those “specialties.” Some of these tags are enabled by default (like liga for normal ligatures or clig for contextual ligatures), whereas others must be enabled by hand.

A partial list of OpenType tags and names can be found in Dario Taraborelli’s Accessing OpenType font features in LaTeX.

Querying fonts

Finding out the characteristics of an OpenType font is simple. All you need is the otfinfo command, which is included in the package lcdf typetools (on my openSUSE system, it’s installed as texlive-lcdftypetools). Using it is quite simple: On the command line, issue something like:

otfinfo [option] /path/to/the/font

The option -s provides the languages supported by the font, whereas -f tells us which OpenType options are available. Font license information is displayed with the -i option.

If the path to the font contains a space, escape that space with a backslash. For example, to know what Sukhumala Regular.otf offers when installed in the folder ~/.fonts/s/, simply write in the terminal:

otfinfo -f ~/.fonts/s/Sukhumala\ Regular.otf
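To survey every font under ~/.fonts at once, a small loop around otfinfo works. A sketch; it assumes otfinfo is on your PATH and your fonts live under ~/.fonts:

```shell
# Print the OpenType feature tags of each OTF/TTF font found.
find ~/.fonts -type f \( -name '*.otf' -o -name '*.ttf' \) |
while read -r font; do
    printf '== %s\n' "$font"
    otfinfo -f "$font"
done
```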

Using OpenType tables on LibreOffice Writer

LibreOffice version 5.3 offers good support for OpenType. It is not exactly “user-friendly,” but it’s not that difficult to understand, and it provides so much typographical power that it shouldn’t be ignored.

To simultaneously activate “stylistic sets” 1 and 11 on Vollkorn (see screenshot below), in the font name box, write:

Vollkorn:ss01&ss11

The colon starts the “tag section” on the extended font name and the ampersand allows us to use several tags.

But there is more. You can also disable any default option. For example, the Sukhumala font has some strange contextual ligatures that turn aa into ā, ii into ī, and uu into ū. To disable contextual ligatures on Sukhumala, add a dash in front of the corresponding OpenType tag clig:

Sukhumala:-clig

And that’s it. As I said before, it’s not exactly user friendly, especially considering that the font name box is rather small, but it works!

And don’t forget to use all of this within styles: Direct formatting is the enemy of good formatting. I mean, unless you are preparing a quick screenshot for a short article about typography. In that case it’s OK. But only in that case.

There’s more

One interesting OpenType tag that, sadly, does not work on LibreOffice yet is “size.” The size feature enables the automated selection of optical sizes, which is a font family that offers different designs for different point sizes. Few fonts offer this option (some GUST fonts like Latin Modern or Antykwa Półtawskiego; an interesting project in its initial stages of development called Coelacanth; or, to a lesser extent, EB Garamond), but they are all great. Right now, the only way to enjoy this property is through a more advanced layout system such as XeTeX. Using OpenType on XeTeX is a really big topic; the fontspec manual (the package that handles font selection and configuration on both XeTeX and LuaTeX) has more than 120 pages, so… not today.

And yes, version 1.5.3 of Scribus added support for OpenType (in addition to footnotes and other stuff), but that’s something I still need to explore.

How to Install Nagios 4 in Ubuntu and Debian

In this topic we’ll learn how to install and configure the latest official version of Nagios Core from sources in Debian and Ubuntu servers.

Nagios Core is a free, open source network monitoring application designed to monitor network applications, devices and their related services in a network.

Nagios can remotely monitor specific operating system parameters via agents deployed on nodes, and send alerts via mail or SMS to notify administrators in case critical network services, such as SMTP, HTTP, SSH or FTP, fail.

Requirements

  1. Debian 9 Minimal Installation
  2. Ubuntu 16.04 Minimal Installation

Step 1: Install Pre-requirements for Nagios

1. Before installing Nagios Core from sources on Ubuntu or Debian, first install the following LAMP stack components on your system, without the MySQL RDBMS component, by issuing the below command.

# apt install apache2 libapache2-mod-php7.0 php7.0


2. On the next step, install the following system dependencies and utilities required to compile and install Nagios Core from sources, by issuing the following command.

# apt install wget unzip zip autoconf gcc libc6 make apache2-utils libgd-dev

Step 2: Install Nagios 4 Core in Ubuntu and Debian

3. On the first step, create the nagios system user and group, and add the Apache www-data user to the nagios group, by issuing the below commands.

# useradd nagios
# usermod -a -G nagios www-data

4. After all the dependencies, packages and system requirements for compiling Nagios from sources are present on your system, go to the Nagios webpage and grab the latest stable Nagios Core source archive by issuing the following command.

# wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.4.tar.gz

5. Next, extract Nagios tarball and enter the extracted nagios directory, with the following commands. Issue ls command to list nagios directory content.

# tar xzf nagios-4.3.4.tar.gz
# cd nagios-4.3.4/
# ls
List Nagios Content

6. Now, start to compile Nagios from sources by issuing the below command. Make sure you configure Nagios with the Apache sites-enabled directory configuration, as shown.

# ./configure --with-httpd-conf=/etc/apache2/sites-enabled

7. In the next step, build Nagios files by issuing the following command.

# make all

8. Now, install Nagios binary files, CGI scripts and HTML files by issuing the following command.

# make install

9. Next, install Nagios daemon init and external command mode configuration files and make sure you enable nagios daemon system-wide by issuing the following commands.

# make install-init
# make install-commandmode
# systemctl enable nagios.service

10. Next, run the following command to install the sample configuration files needed by Nagios to run properly.

# make install-config

11. Also, install the Nagios configuration file for the Apache web server, which can be found in the /etc/apache2/sites-enabled/ directory, by executing the below command.

# make install-webconf

12. Next, create the nagiosadmin account and a password for it, required by the Apache server for logging in to the Nagios web panel, by issuing the following command.

# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

13. To allow Apache HTTP server to execute Nagios cgi scripts and to access Nagios admin panel via HTTP, first enable cgi module in Apache and then restart Apache service and start and enable Nagios daemon system-wide by issuing the following commands.

# a2enmod cgi
# systemctl restart apache2
# systemctl start nagios
# systemctl enable nagios

14. Finally, log in to the Nagios web interface by pointing a browser to your server’s IP address or domain name at the following URL via the HTTP protocol. Log in to Nagios with the nagiosadmin user and the password set up with the htpasswd command.

http://IP-Address/nagios
OR
http://DOMAIN/nagios
Nagios Admin Login

Nagios Core Dashboard

15. To view your hosts status, navigate to Current Status -> Hosts menu where you will notice that some errors are displayed for localhost host, as illustrated in the below screenshot. The error appears because Nagios has no plugins installed to check hosts and services status.

Check Host Status

Step 3: Install Nagios Plugins in Ubuntu and Debian

16. To compile and install Nagios Plugins from sources in Debian or Ubuntu, first install the following dependencies in your system by issuing the below command.

# apt install libmcrypt-dev make libssl-dev bc gawk dc build-essential snmp libnet-snmp-perl gettext libldap2-dev smbclient fping libmysqlclient-dev qmail-tools libpqxx3-dev libdbi-dev 

17. Next, visit Nagios Plugins repositories page and download the latest source code tarball by issuing the following command.

# wget https://github.com/nagios-plugins/nagios-plugins/archive/release-2.2.1.tar.gz 

18. Go ahead and extract the Nagios Plugins source code tarball and change path to the extracted nagios-plugins directory by executing the following commands.

# tar xfz release-2.2.1.tar.gz
# cd nagios-plugins-release-2.2.1/

19. Now, start to compile and install Nagios Plugins from sources, by executing the following series of commands in your server console.

# ./tools/setup
# ./configure
# make
# make install

20. The compiled and installed Nagios plugins can be located in /usr/local/nagios/libexec/ directory. List this directory to view all available plugins in your system.

# ls /usr/local/nagios/libexec/
Nagios Plugins Directory

21. Finally, restart Nagios daemon in order to apply the installed plugins, by issuing the below command.

# systemctl restart nagios.service

22. Next, log in to the Nagios web panel, go to the Current Status -> Services menu, and you should notice that all host services are now checked by the Nagios plugins.

The color code indicates the current service status: green for OK, yellow for Warning and red for Critical.

Check Host Services

23. Finally, to access the Nagios admin web interface via the HTTPS protocol, issue the following commands to enable the Apache SSL configuration and restart the Apache daemon to apply the changes.

# a2enmod ssl
# a2ensite default-ssl.conf
# systemctl restart apache2

24. After you’ve enabled the Apache SSL configuration, open the /etc/apache2/sites-enabled/000-default.conf file for editing and add the following block of code after the DocumentRoot statement, as shown in the excerpt below.

RewriteEngine on
RewriteCond %{HTTPS} off
RewriteRule ^(.*) https://%{HTTP_HOST}/$1
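The three rewrite rules belong inside the port-80 virtual host. A sketch of where they land in a default Debian 000-default.conf follows; the surrounding directives are the stock Debian defaults (an assumption about your file), and note that mod_rewrite must be enabled (a2enmod rewrite) for RewriteEngine to work:

```apache
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html
        # Redirect every plain-HTTP request to HTTPS (rules from step 24)
        RewriteEngine on
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*) https://%{HTTP_HOST}/$1
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```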
Configure Apache for Nagios

25. You need to restart Apache daemon to apply the configured rules, by issuing the below command.

# systemctl restart apache2.service 

26. Finally, refresh the browser to be redirected to the Nagios admin panel via the HTTPS protocol. Accept the warning message displayed by the browser and log in to Nagios again with your credentials.

Nagios HTTPS Dashboard

Congratulations! You have successfully installed and configured the Nagios Core monitoring system from sources on an Ubuntu or Debian 9 server.

How to Install Cacti with Cacti-Spine in Debian and Ubuntu

In this tutorial we’ll learn how to install and configure the Cacti network monitoring tool on the latest version of Debian and on Ubuntu 16.04 LTS. Cacti will be built and installed from source files during this guide.

Cacti is an open source monitoring tool created for monitoring networks, especially network devices such as switches, routers and servers, via the SNMP protocol. Cacti interacts with end-users and can be administered via a web interface.

Requirements

  1. LAMP Stack Installed in Debian 9
  2. LAMP Stack Installed in Ubuntu 16.04 LTS

Step 1: Install and Configure Prerequisites for Cacti

1. In Debian 9, open the sources list file for editing and add the contrib and non-free repositories by changing the following lines:

# nano /etc/apt/sources.list

Add following lines to sources.list file.

deb http://ftp.ro.debian.org/debian/ stretch main contrib non-free
deb-src http://ftp.ro.debian.org/debian/ stretch main
deb http://security.debian.org/debian-security stretch/updates main contrib non-free
deb-src http://security.debian.org/debian-security stretch/updates main
Add Repositories to Debian


2. Afterwards, make sure to update the system by issuing the below command.

# apt update
# apt upgrade

3. Make sure the following PHP extensions are present in your LAMP stack.

# apt install php7.0-snmp php7.0-xml php7.0-mbstring php7.0-json php7.0-gd php7.0-gmp php7.0-zip php7.0-ldap php7.0-mcrypt

4. Next, set the PHP time zone to match your server’s physical location by appending it to the PHP configuration file with the below command.

# echo "date.timezone = Europe/Bucharest" >> /etc/php/7.0/apache2/php.ini 

5. Next, log in to MariaDB or MySQL database from your LAMP stack installation and create a database for installing Cacti by issuing the following commands.

Replace the cacti database name, user and password to match your own configuration, and choose a strong password for the cacti database user.

# mysql -u root -p
mysql> create database cacti;
mysql> grant all on cacti.* to 'cactiuser'@'localhost' identified by 'password1';
mysql> flush privileges;
mysql> exit
Create Cacti Database

6. Also, issue the below commands to load the time zone tables into MySQL and to grant the cacti user SELECT permission on the MySQL time zone data.

# mysql -u root -p mysql < /usr/share/mysql/mysql_test_data_timezone.sql
# mysql -u root -p -e "grant select on mysql.time_zone_name to 'cactiuser'@'localhost'"

7. Next, open MySQL server configuration file and add the following lines at the end of the file.

# nano /etc/mysql/mariadb.conf.d/50-server.cnf [For MariaDB]
# nano /etc/mysql/mysql.conf.d/mysqld.cnf [For MySQL] 

Add the following lines to the end of the 50-server.cnf or mysqld.cnf file.

max_heap_table_size = 98M
tmp_table_size = 64M
join_buffer_size = 64M
innodb_buffer_pool_size = 485M
innodb_doublewrite = off
innodb_flush_log_at_timeout = 3
innodb_read_io_threads = 32
innodb_write_io_threads = 16

For MariaDB database also add the following line to the end of the 50-server.cnf file:

innodb_additional_mem_pool_size = 80M
Configure MySQL for Cacti

8. Finally, restart MySQL and Apache services to apply all settings and verify both services status by issuing the following commands.

# systemctl restart mysql apache2
# systemctl status mysql apache2

Step 2: Download and Prepare Cacti Installation

9. Start installing Cacti from sources by downloading and extracting the latest Cacti archive and copying all the extracted files to the Apache web document root, by issuing the following commands.

# wget https://www.cacti.net/downloads/cacti-latest.tar.gz
# tar xfz cacti-latest.tar.gz
# cp -rf cacti-1.1.27/* /var/www/html/

10. Remove the index.html file from the /var/www/html directory, create the Cacti log file and grant Apache write permissions on the web root path.

# rm /var/www/html/index.html
# touch /var/www/html/log/cacti.log
# chown -R www-data:www-data /var/www/html/

11. Next, edit cacti configuration file and modify the following lines as shown in the below example.

# nano /var/www/html/include/config.php

Cacti config.php file sample. Replace cacti database name, user and password accordingly.

$database_type = 'mysql';
$database_default = 'cacti';
$database_hostname = 'localhost';
$database_username = 'cactiuser';
$database_password = 'password1';
$database_port = '3306';
$database_ssl = false;
$url_path = '/';
Cacti Configuration Settings

12. Next, populate cacti database with the cacti.sql script from /var/www/html/ directory by issuing the below command.

# mysql -u cactiuser cacti -p < /var/www/html/cacti.sql 

13. Now install some additional resources: the Cacti engine collects device data via the SNMP protocol and displays graphs using RRDtool. Install all of them by issuing the following command.

# apt install snmp snmpd snmp-mibs-downloader rrdtool

14. Verify that the SNMP service is up and running by restarting the snmpd daemon with the below commands. Also check the snmpd daemon status and its open ports.

# systemctl restart snmpd.service
# systemctl status snmpd.service
# ss -tulpn| grep snmp

Step 3: Download and Install Cacti-Spine

15. Cacti-Spine is a replacement for the default cmd.php poller, written in C, which provides much faster execution times. To compile the Cacti-Spine poller from sources, install the below required dependencies in your system.

---------------- On Debian 9 ----------------
# apt install build-essential dos2unix dh-autoreconf help2man libssl-dev libmysql++-dev librrds-perl libsnmp-dev libmariadb-dev libmariadbclient-dev
---------------- On Ubuntu ----------------
# apt install build-essential dos2unix dh-autoreconf help2man libssl-dev libmysql++-dev librrds-perl libsnmp-dev libmysqlclient-dev libmysqld-dev

16. After you’ve installed the above dependencies, download the latest version of the Cacti-Spine archive and extract the tarball by issuing the following commands.

# wget https://www.cacti.net/downloads/spine/cacti-spine-latest.tar.gz
# tar xfz cacti-spine-latest.tar.gz
# cd cacti-spine-1.1.27/

17. Compile and install Cacti-Spine from sources by issuing the following commands.

# ./bootstrap
# ./configure
# make
# make install

18. Next, make sure the spine binary is owned by the root account and set the setuid bit on the spine utility by running the following commands.

# chown root:root /usr/local/spine/bin/spine
# chmod +s /usr/local/spine/bin/spine
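The +s mode sets both the set-user-ID and set-group-ID bits, so spine runs with root privileges regardless of who invokes it. A sketch of the permission change on a scratch file rather than the real spine binary (assumes GNU coreutils stat):

```shell
# Demonstrate what 'chmod +s' does to a 755 file: the x bits become s.
f=$(mktemp)
chmod 755 "$f"
chmod +s "$f"
stat -c '%A' "$f"   # -> -rwsr-sr-x
rm -f "$f"
```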

19. Now, edit Cacti Spine configuration file and add the cacti database name, user and password to the Spine conf file as illustrated in the below example.

# nano /usr/local/spine/etc/spine.conf

Add following configuration to spine.conf file.

DB_Host localhost
DB_Database cacti
DB_User cactiuser
DB_Pass password1
DB_Port 3306
DB_PreG 0

Step 4: Cacti Installation Wizard Setup

20. To install Cacti, open a browser and navigate to your system IP address or domain name at the following URL.

http://your_IP/install

First, check the Accept License Agreement box and hit the Next button to continue.

Cacti License Agreement

21. Next, check that the system requirements are met and hit the Next button to continue.

Cacti Pre-Installation Checks

22. In the next window, select New Primary Server and hit on Next button to continue.

Select Cacti Installation Type

23. Next, verify the critical binary locations and versions and change the Spine binary path to /usr/local/spine/bin/spine. When you finish, hit the Next button to continue.

Verify Cacti Binary Locations

24. Next, check if all web server directory permissions are in place (write permissions are set) and hit on Next button to continue.

Cacti Directory Permission Checks

25. On the next step check all the templates and hit on Finish button in order to finish the installation process.

Cacti Template Setup

26. Log in to Cacti web interface with the default credentials shown below and change the admin password, as illustrated in the following screenshots.

Username: admin
Password: admin
Cacti Admin Login

Change Cacti Admin Password

27. Next, go to Console -> Configuration -> Settings -> Poller and change the Poller Type from cmd.php to Spine binary and scroll down to Save button to save the configuration.

Cacti Poller Settings

28. Then, go to Console -> Configuration -> Settings -> Paths and add the following path to Cacti-Spine configuration file:

/usr/local/spine/etc/spine.conf 

Hit on Save button to apply configuration.

Add Cacti Spine Configuration

29. The final step, which enables the Cacti poller to start collecting data from monitored devices, is to add a new crontab task that queries each device via SNMP every 5 minutes.

The crontab job must be owned by www-data account.

# crontab -u www-data -e

Add Cron file entry:

*/5 * * * * /usr/bin/php /var/www/html/poller.php
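For reference, the five scheduling fields of that crontab entry break down as follows (a commented copy of the same line):

```
# field:   minute  hour  day-of-month  month  day-of-week
# value:   */5     *     *             *      *            -> run every 5 minutes
*/5 * * * * /usr/bin/php /var/www/html/poller.php
```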

30. Wait a few minutes for Cacti to collect data, then go to Graphs -> Default Tree, where you should see the graphs for your monitored devices.

Cacti Monitoring Graphs

That’s all! You have successfully installed and configured Cacti with the Cacti-Spine poller, from sources, on the latest release of Debian 9 and on Ubuntu 16.04 LTS.

How to Install PostgreSQL 10 Using Source Code in Linux

PostgreSQL, also called Postgres, is a powerful open source object-relational database system. It is an enterprise-level database with features such as write-ahead logging for fault tolerance, asynchronous replication, Multi-Version Concurrency Control (MVCC), online/hot backups, point-in-time recovery, a query planner/optimizer, tablespaces, nested transactions (savepoints) and more.

The latest version, PostgreSQL 10, was released on 5th Oct 2017 by the PostgreSQL Global Development Group.

PostgreSQL Features

Features of New version are as follows:

  • Logical Replication: This feature enables replication of individual database objects (rows, tables, or selected databases) across standby servers, providing more control over data replication. It is implemented using a publisher-subscriber model.
  • Quorum Commit for Synchronous Replication: A DBA can now specify the number of standbys that must acknowledge changes to the database before the data is considered safely written.
  • SCRAM-SHA-256 authentication: Improves security over the existing MD5-based password authentication and storage.
  • Improved parallel query execution.
  • Declarative table partitioning.
  • Full text search support for JSON and JSONB.

In this article, we will explain how to install PostgreSQL 10 from source code on Linux systems. Those looking for an easy installation from the distribution's package manager can follow the guides below.

  1. How to Install PostgreSQL 10 on CentOS/RHEL and Fedora
  2. How to Install PostgreSQL 10 on Debian and Ubuntu

Install PostgreSQL Using Source Code


As Postgres is an open source database, it can be built from source code according to one's needs. We can customize the build and installation process by supplying one or more command line options for various additional features.

The major advantage of a source code installation is that the build can be highly customized.

1. First install required prerequisites such as gcc, readline-devel and zlib-devel using package manager as shown.

# yum install gcc zlib-devel readline-devel [On RHEL/CentOS]
# apt install gcc zlib1g-dev libreadline6-dev [On Debian/Ubuntu]

2. Download the source code tar file from the official postgres website using the following wget command directly on system.

# wget https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.bz2

3. Use the tar command to extract the downloaded tarball. A new directory named postgresql-10.0 will be created.

# tar -xvf postgresql-10.0.tar.bz2
# ll
Sample Output
total 19236
-rw-------. 1 root root 933 Mar 18 2015 anaconda-ks.cfg
-rw-r--r--. 1 root root 8823 Mar 18 2015 install.log
-rw-r--r--. 1 root root 3384 Mar 18 2015 install.log.syslog
drwxrwxrwx 6 1107 1107 4096 Oct 3 2017 postgresql-10.0
-rw-r--r-- 1 root root 19639147 Oct 3 2017 postgresql-10.0.tar.bz2

4. The next step of the installation procedure is to configure the downloaded source code by choosing the options according to your needs.

# cd postgresql-10.0

Use ./configure --help to get help about the various options.

Sample Output
# ./configure --help
Defaults for the options are specified in brackets.
Configuration:
-h, --help display this help and exit
--help=short display options specific to this package
--help=recursive display the short help of all the included packages
-V, --version display version information and exit
-q, --quiet, --silent do not print `checking ...' messages
--cache-file=FILE cache test results in FILE [disabled]
-C, --config-cache alias for `--cache-file=config.cache'
-n, --no-create do not create output files
--srcdir=DIR find the sources in DIR [configure dir or `..']
Installation directories:
--prefix=PREFIX install architecture-independent files in PREFIX
[/usr/local/pgsql]
--exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
[PREFIX]

5. Now create a directory where you want to install the postgres files and run configure with the --prefix option.

# mkdir /opt/PostgreSQL-10/
# ./configure --prefix=/opt/PostgreSQL-10
Sample Output
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking which template to use... linux
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc supports -Wdeclaration-after-statement... yes
checking whether gcc supports -Wendif-labels... yes
checking whether gcc supports -Wmissing-format-attribute... yes
checking whether gcc supports -Wformat-security... yes
checking whether gcc supports -fno-strict-aliasing... yes
checking whether gcc supports -fwrapv... yes
checking whether gcc supports -fexcess-precision=standard... no
....

6. After configuring, start building PostgreSQL using the following make command.

# make

After the build process finishes, install PostgreSQL using the following command.

# make install

PostgreSQL 10 has now been installed in the /opt/PostgreSQL-10 directory.

7. Now create a postgres user and a directory to be used as the data directory for initializing the database cluster. The data directory must be owned by the postgres user with permissions of 700; also set the path to the PostgreSQL binaries for convenience.

# useradd postgres
# passwd postgres
# mkdir /pgdatabase/data
# chown -R postgres. /pgdatabase/data
# echo 'export PATH=$PATH:/opt/PostgreSQL-10/bin' > /etc/profile.d/postgres.sh
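The profile snippet simply appends the PostgreSQL bin directory to PATH so that binaries such as initdb and pg_ctl resolve without full paths. A quick sketch of the effect, runnable in any shell:

```shell
# Append the Postgres bin directory to PATH, then show the last PATH entry.
export PATH=$PATH:/opt/PostgreSQL-10/bin
echo "$PATH" | tr ':' '\n' | tail -n 1   # -> /opt/PostgreSQL-10/bin
```

Note that /etc/profile.d scripts only apply to new login shells; run `source /etc/profile.d/postgres.sh` to pick the change up in the current session.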

8. Now initialize the database cluster as the postgres user with the following command; do this before using any other postgres commands.

# su postgres
$ initdb -D /pgdatabase/data/ -U postgres -W

Here -D is the location of the database cluster, that is, the data directory where we want to initialize the cluster; -U sets the database superuser name, and -W prompts for the superuser password.

For more information and options, refer to initdb --help.

9. After initializing the database, start the database cluster. If you need to change the port or listen address of the server, edit the postgresql.conf file in the data directory of the database server.

Configure PostgreSQL Port

$ pg_ctl -D /pgdatabase/data/ -l /pglog/db_logs/start.log start

10. After starting database, verify the status of postgres server process by using following commands.

$ ps -ef |grep -i postgres
$ netstat -apn |grep -i 51751
Verify PostgreSQL Database

We can see that the database cluster is running fine, and the startup logs can be found at the location specified with the -l option when starting the database cluster.

11. Now connect to database cluster and create database by using following commands.

$ psql -p 51751
postgres=# create database test;
postgres=# \l
postgres=# \q

Use \l to list all databases in the cluster and \q to quit the postgres console.
Connect PostgreSQL Database

That’s it! In our upcoming articles, we will cover configuration, replication setup and installation of the pgAdmin tool; till then, stay tuned to Tecmint.

4 Tools to Manage EXT2, EXT3 and EXT4 Health in Linux

A filesystem is a data structure that helps to control how data is stored and retrieved on a computer system. A filesystem can also be considered as a physical (or extended) partition on a disk. If not well maintained and regularly monitored, it can become damaged or corrupted in the long run, in so many different ways.

There are several factors that can cause a filesystem to become unhealthy: system crashes, hardware or software malfunctions, buggy drivers and programs, incorrect tuning, overloading it with excessive data, plus other minor glitches.

Any of these issues can cause Linux to fail to mount (or unmount) a filesystem gracefully, thus bringing about system failure.

Read Also: 7 Ways to Determine the File System Type in Linux (Ext2, Ext3 or Ext4)


In addition, running your system with an impaired filesystem may give rise to other runtime errors in operating system components or in user applications, which could escalate to severe data loss. To avoid suffering filesystem corruption or damage, you need to keep an eye on its health.

In this article, we will cover tools to monitor and maintain the health of ext2, ext3 and ext4 filesystems. All the tools described here require root privileges, so use the sudo command to run them.

How to View EXT2/EXT3/EXT4 Filesystem Information

dumpe2fs is a command line tool used to dump ext2/ext3/ext4 filesystem information, meaning it displays the super block and block group information for the filesystem on the specified device.

Before running dumpe2fs, run the df -hT command to find the filesystem device names.

$ sudo dumpe2fs /dev/sda10
Sample Output
dumpe2fs 1.42.13 (17-May-2015)
Filesystem volume name:
Last mounted on: /
Filesystem UUID: bb29dda3-bdaa-4b39-86cf-4a6dc9634a1b
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 21544960
Block count: 86154752
Reserved block count: 4307737
Free blocks: 22387732
Free inodes: 21026406
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1003
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sun Jul 31 16:19:36 2016
Last mount time: Mon Nov 6 10:25:28 2017
Last write time: Mon Nov 6 10:25:19 2017
Mount count: 432
Maximum mount count: -1
Last checked: Sun Jul 31 16:19:36 2016
Check interval: 0 ()
Lifetime writes: 2834 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 6947324
Default directory hash: half_md4
Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
Journal backup: inode blocks
Journal features: journal_incompat_revoke
Journal size: 128M
Journal length: 32768
Journal sequence: 0x00580f0c
Journal start: 12055
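A quick consistency check on the output above: the reported 128M journal size is just the journal length (in filesystem blocks) multiplied by the block size, as shell arithmetic confirms:

```shell
block_size=4096        # "Block size:" from the dumpe2fs output
journal_length=32768   # "Journal length:" in filesystem blocks
echo "$(( journal_length * block_size / 1024 / 1024 ))M"   # -> 128M
```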

You can pass the -b flag to display any blocks reserved as bad in the filesystem (no output implies no bad blocks):

$ dumpe2fs -b /dev/sda10

Checking EXT2/EXT3/EXT4 Filesystems For Errors

e2fsck is used to examine ext2/ext3/ext4 filesystems for errors, while fsck checks and can optionally repair a Linux filesystem; fsck is basically a front-end for a range of filesystem checkers (fsck.fstype, for example fsck.ext3, fsck.xfs etc.) offered under Linux.

Remember that Linux runs e2fsck/fsck automatically at system boot on partitions that are labeled for checking in the /etc/fstab configuration file. This is normally done after a filesystem has not been unmounted cleanly.
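As a front-end, fsck resolves the checker to run from the filesystem type: it looks for an executable named fsck.<type> on its search path. A minimal shell sketch of that naming rule (nothing is actually checked here; the device path is the example used throughout this section):

```shell
# fsck maps a filesystem type to a helper binary named fsck.<type>;
# this sketch only prints the name it would dispatch to.
fstype=ext4
device=/dev/sda10   # example device from this article
checker="fsck.${fstype}"
echo "would run: ${checker} ${device}"   # -> would run: fsck.ext4 /dev/sda10
```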

Attention: Do not run e2fsck or fsck on mounted filesystems, always unmount a partition first before you can run these tools on it, as shown below.

$ sudo umount /dev/sda10
$ sudo fsck /dev/sda10

Alternatively, enable verbose output with the -V switch and use the -t to specify a filesystem type like this:

$ sudo fsck -Vt ext4 /dev/sda10

Tuning EXT2/EXT3/EXT4 Filesystems

We mentioned at the start that one of the causes of filesystem damage is incorrect tuning. You can use the tune2fs utility to change the tunable parameters of ext2/ext3/ext4 filesystems, as explained below.

To see the contents of the filesystem superblock, including the current values of the parameters, use the -l option as shown.

$ sudo tune2fs -l /dev/sda10
Sample Output
tune2fs 1.42.13 (17-May-2015)
Filesystem volume name:
Last mounted on: /
Filesystem UUID: bb29dda3-bdaa-4b39-86cf-4a6dc9634a1b
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 21544960
Block count: 86154752
Reserved block count: 4307737
Free blocks: 22387732
Free inodes: 21026406
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1003
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sun Jul 31 16:19:36 2016
Last mount time: Mon Nov 6 10:25:28 2017
Last write time: Mon Nov 6 10:25:19 2017
Mount count: 432
Maximum mount count: -1
Last checked: Sun Jul 31 16:19:36 2016
Check interval: 0 ()
Lifetime writes: 2834 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 6947324
Default directory hash: half_md4
Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
Journal backup: inode blocks
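One detail worth noticing in this output: the reserved block count corresponds to the ext filesystem default of keeping about 5% of all blocks reserved for root. The figures above confirm it:

```shell
total=86154752      # "Block count:" from the tune2fs output
reserved=4307737    # "Reserved block count:"
awk -v r="$reserved" -v t="$total" 'BEGIN { printf "%.1f%%\n", 100 * r / t }'   # -> 5.0%
```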

Next, using the -c flag, you can set the number of mounts after which the filesystem will be checked by e2fsck. This command instructs the system to run e2fsck against /dev/sda10 after every 4 mounts.

$ sudo tune2fs -c 4 /dev/sda10
tune2fs 1.42.13 (17-May-2015)
Setting maximal mount count to 4

You can as well define the time between two filesystem checks with the -i option. The following command sets an interval of 2 days between filesystem checks.

$ sudo tune2fs -i 2d /dev/sda10
tune2fs 1.42.13 (17-May-2015)
Setting interval between checks to 172800 seconds
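The 172800 seconds reported by tune2fs is simply the 2d argument converted to seconds:

```shell
# 2 days expressed in seconds, matching tune2fs's report above.
echo $(( 2 * 24 * 60 * 60 ))   # -> 172800
```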

Now if you run the command below, you will see that the filesystem check interval for /dev/sda10 is set.

$ sudo tune2fs -l /dev/sda10
Sample Output
Filesystem created: Sun Jul 31 16:19:36 2016
Last mount time: Mon Nov 6 10:25:28 2017
Last write time: Mon Nov 6 13:49:50 2017
Mount count: 432
Maximum mount count: 4
Last checked: Sun Jul 31 16:19:36 2016
Check interval: 172800 (2 days)
Next check after: Tue Aug 2 16:19:36 2016
Lifetime writes: 2834 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 6947324
Default directory hash: half_md4
Directory Hash Seed: 9da5dafb-bded-494d-ba7f-5c0ff3d9b805
Journal backup: inode blocks

To change the default journaling parameters, use the -J option. This option also has sub-options: size=journal-size (sets the journal’s size), device=external-journal (specifies the device on which it’s stored) and location=journal-location (defines the location of the journal).

Note that only one of the size or device options can be set for a filesystem:

$ sudo tune2fs -J size=4MB /dev/sda10

Last but not least, the volume label of a filesystem can be set using the -L option as below.

$ sudo tune2fs -L "ROOT" /dev/sda10

Debug EXT2/EXT3/EXT4 Filesystems

debugfs is a simple, interactive command line ext2/ext3/ext4 filesystem debugger. It allows you to modify filesystem parameters interactively. To view the available sub-commands (requests), type "?".

$ sudo debugfs /dev/sda10

By default, the filesystem is opened in read-only mode; use the -w flag to open it in read-write mode. To open it in catastrophic mode, use the -c option.

Sample Output
debugfs 1.42.13 (17-May-2015)
debugfs: ?
Available debugfs requests:
show_debugfs_params, params
Show debugfs parameters
open_filesys, open Open a filesystem
close_filesys, close Close the filesystem
freefrag, e2freefrag Report free space fragmentation
feature, features Set/print superblock features
dirty_filesys, dirty Mark the filesystem as dirty
init_filesys Initialize a filesystem (DESTROYS DATA)
show_super_stats, stats Show superblock statistics
ncheck Do inode->name translation
icheck Do block->inode translation
change_root_directory, chroot
....

To show free space fragmentation, use the freefrag request, like so.

debugfs: freefrag
Sample Output
Device: /dev/sda10
Blocksize: 4096 bytes
Total blocks: 86154752
Free blocks: 22387732 (26.0%)
Min. free extent: 4 KB Max. free extent: 2064256 KB
Avg. free extent: 2664 KB
Num. free extent: 33625
HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range : Free extents Free Blocks Percent
4K... 8K- : 4883 4883 0.02%
8K... 16K- : 4029 9357 0.04%
16K... 32K- : 3172 15824 0.07%
32K... 64K- : 2523 27916 0.12%
64K... 128K- : 2041 45142 0.20%
128K... 256K- : 2088 95442 0.43%
256K... 512K- : 2462 218526 0.98%
512K... 1024K- : 3175 571055 2.55%
1M... 2M- : 4551 1609188 7.19%
2M... 4M- : 2870 1942177 8.68%
4M... 8M- : 1065 1448374 6.47%
8M... 16M- : 364 891633 3.98%
16M... 32M- : 194 984448 4.40%
32M... 64M- : 86 873181 3.90%
64M... 128M- : 77 1733629 7.74%
128M... 256M- : 11 490445 2.19%
256M... 512M- : 10 889448 3.97%
512M... 1024M- : 2 343904 1.54%
1G... 2G- : 22 10217801 45.64%
debugfs: 
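The free-space percentage in the freefrag report can be reproduced directly from the block counts it prints:

```shell
free_blocks=22387732   # "Free blocks:" from the freefrag output
total_blocks=86154752  # "Total blocks:"
awk -v f="$free_blocks" -v t="$total_blocks" 'BEGIN { printf "%.1f%%\n", 100 * f / t }'   # -> 26.0%
```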

You can explore so many other requests such as creating or removing files or directories, changing the current working directory and much more, by simply reading the brief description provided. To quit debugfs, use the q request.

That’s all for now! We have a collection of related articles under different categories below, which you will find useful.

Filesystem Usage Information:

  1. 12 Useful “df” Commands to Check Disk Space in Linux
  2. Pydf an Alternative “df” Command to Check Disk Usage in Different Colours
  3. 10 Useful du (Disk Usage) Commands to Find Disk Usage of Files and Directories

Check Disk or Partition Health:

  1. 3 Useful GUI and Terminal Based Linux Disk Scanning Tools
  2. How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux
  3. How to Repair and Defragment Linux System Partitions and Directories

Maintaining a healthy filesystem always improves the overall performance of your Linux system. If you have any questions or additional thoughts to share use the comment form below.


How to Check and Install Updates On CentOS and RHEL

Installing updates for software packages, or for the kernel itself, is a highly recommended and beneficial task for system administrators, especially when it comes to security updates or patches. When security vulnerabilities are discovered, the affected software must be updated to lessen any potential security risk to the whole system.

If you have not configured your system to install security patches or updates automatically, then you need to do it manually. In this article, we will show you how to check and install software updates on CentOS and RHEL distributions.

To check for any updates available for your installed packages, use the YUM package manager with the check-update subcommand; this lists all available package updates from all enabled repositories.

# yum check-update

Check All Software Package Updates

Loaded plugins: changelog, fastestmirror
base                                     | 3.6 kB  00:00:00
epel/x86_64/metalink                     |  22 kB  00:00:00
epel                                     | 4.3 kB  00:00:00
extras                                   | 3.4 kB  00:00:00
mariadb                                  | 2.9 kB  00:00:00
updates                                  | 3.4 kB  00:00:00
(1/2): epel/x86_64/updateinfo            | 842 kB  00:00:15
(2/2): epel/x86_64/primary_db            | 6.1 MB  00:00:00
Loading mirror speeds from cached hostfile
* base: mirrors.linode.com
* epel: mirror.vorboss.net
* extras: mirrors.linode.com
* updates: mirrors.linode.com
MariaDB-client.x86_64          10.1.28-1.el7.centos     mariadb
MariaDB-common.x86_64          10.1.28-1.el7.centos     mariadb
MariaDB-server.x86_64          10.1.28-1.el7.centos     mariadb
MariaDB-shared.x86_64          10.1.28-1.el7.centos     mariadb
NetworkManager.x86_64          1:1.8.0-11.el7_4         updates
NetworkManager-adsl.x86_64     1:1.8.0-11.el7_4         updates
....
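In a script or cron job, check-update is handy because of its exit status: yum returns 100 when updates are available, 0 when the system is up to date, and 1 on error. A minimal sketch (the report_status helper is our own, not part of yum):

```shell
#!/bin/sh
# Cron-friendly sketch: `yum check-update` exits 0 when nothing is
# pending, 100 when updates are available, and 1 on error.
# report_status() is our own helper, not part of yum itself.
report_status() {
    case "$1" in
        0)   echo "system up to date" ;;
        100) echo "updates available" ;;
        *)   echo "yum error" ;;
    esac
}

yum -q check-update >/dev/null 2>&1
report_status "$?"
```

From here you could mail the result to the admin or trigger an automatic `yum update`.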

To update a single package to the latest available version, run the command below. In this example, yum will attempt to update the httpd package.

# yum update httpd

Update Apache Package

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.linode.com
* epel: mirror.vorboss.net
* extras: mirrors.linode.com
* updates: mirrors.linode.com
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-45.el7.centos.4 will be updated
--> Processing Dependency: httpd = 2.4.6-45.el7.centos.4 for package: 1:mod_ssl-2.4.6-45.el7.centos.4.x86_64
---> Package httpd.x86_64 0:2.4.6-67.el7.centos.6 will be an update
--> Processing Dependency: httpd-tools = 2.4.6-67.el7.centos.6 for package: httpd-2.4.6-67.el7.centos.6.x86_64
--> Running transaction check
---> Package httpd-tools.x86_64 0:2.4.6-45.el7.centos.4 will be updated
---> Package httpd-tools.x86_64 0:2.4.6-67.el7.centos.6 will be an update
---> Package mod_ssl.x86_64 1:2.4.6-45.el7.centos.4 will be updated
---> Package mod_ssl.x86_64 1:2.4.6-67.el7.centos.6 will be an update
....
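If an updated package misbehaves, yum's transaction history lets you inspect and revert that single update. A hedged sketch (transaction ID 42 is a placeholder; DRYRUN=1, the default here, only echoes each command instead of running it):

```shell
#!/bin/sh
# Sketch: reverting a single-package update through yum's transaction
# history. DRYRUN=1 (the default here) only echoes each command; set
# DRYRUN=0 to really run them. Transaction ID 42 is a placeholder.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run rpm -q httpd             # confirm the currently installed version
run yum history list httpd   # find the ID of the update transaction
run yum history undo 42 -y   # revert that transaction (use the real ID)
```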


To update a package group, use the groupupdate subcommand. The command that follows will update the “Development Tools” group (the C and C++ compilers plus related utilities).

# yum groupupdate "Development Tools"

Update Group Packages

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.linode.com
* epel: mirror.vorboss.net
* extras: mirrors.linode.com
* updates: mirrors.linode.com
...
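Group names can be discovered before updating, and groups are addressed either via the groupupdate subcommand or the "@" prefix; a plain `yum update NAME` treats NAME as a package name. A sketch (DRYRUN=1, the default here, only echoes each command):

```shell
#!/bin/sh
# Sketch: working with yum package groups. DRYRUN=1 (the default here)
# only echoes each command; set DRYRUN=0 to really run them.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run yum grouplist                        # list installed and available groups
run yum groupupdate "Development Tools"  # update all packages in the group
run yum update "@Development Tools"      # equivalent "@" prefix form
```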

To upgrade all of your system’s software packages, as well as their dependencies, to the latest versions, use this command:

# yum update

Update Software Packages

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.linode.com
* epel: mirror.vorboss.net
* extras: mirrors.linode.com
* updates: mirrors.linode.com
Resolving Dependencies
--> Running transaction check
---> Package MariaDB-client.x86_64 0:10.1.23-1.el7.centos will be updated
---> Package MariaDB-client.x86_64 0:10.1.28-1.el7.centos will be an update
---> Package MariaDB-common.x86_64 0:10.1.23-1.el7.centos will be updated
---> Package MariaDB-common.x86_64 0:10.1.28-1.el7.centos will be an update
---> Package MariaDB-server.x86_64 0:10.1.23-1.el7.centos will be updated
---> Package MariaDB-server.x86_64 0:10.1.28-1.el7.centos will be an update
---> Package MariaDB-shared.x86_64 0:10.1.23-1.el7.centos will be updated
---> Package MariaDB-shared.x86_64 0:10.1.28-1.el7.centos will be an update
---> Package NetworkManager.x86_64 1:1.4.0-19.el7_3 will be obsoleted
---> Package NetworkManager.x86_64 1:1.8.0-11.el7_4 will be obsoleting
....
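Since the article singles out security patches, note that yum can also apply security errata alone, assuming the security-plugin functionality is available (built into yum on CentOS/RHEL 7, provided by the separate yum-plugin-security package on older releases). A sketch (DRYRUN=1, the default here, only echoes each command):

```shell
#!/bin/sh
# Sketch: applying only security errata. Assumes yum's security plugin
# functionality (built in on CentOS/RHEL 7). DRYRUN=1 (the default
# here) only echoes each command; set DRYRUN=0 to really run them.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

run yum updateinfo list security  # list pending security errata
run yum update --security -y      # install security updates only
```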

That’s it! You might also like to read the following related articles.

  1. How to Install or Upgrade to Latest Kernel Version in CentOS 7
  2. How to Delete Old Unused Kernels in CentOS, RHEL and Fedora
  3. How to Install Security Updates Automatically on Debian and Ubuntu

Always keep your Linux system up to date with the latest security and general package updates. If you have any questions, use the comment form below.